Results 1 - 20 of 54
1.
J Acoust Soc Am ; 155(4): 2482-2491, 2024 Apr 01.
Article in English | MEDLINE | ID: mdl-38587430

ABSTRACT

Despite a vast literature on how speech intelligibility is affected by hearing loss and advanced age, remarkably little is known about the perception of talker-related information in these populations. Here, we assessed the ability of listeners to detect whether a change in talker occurred while listening to and identifying sentence-length sequences of words. Participants were recruited in four groups that differed in their age (younger/older) and hearing status (normal/impaired). The task was conducted in quiet or in a background of same-sex two-talker speech babble. We found that age and hearing loss had detrimental effects on talker change detection, in addition to their expected effects on word recognition. We also found subtle differences in the effects of age and hearing loss for trials in which the talker changed vs trials in which the talker did not change. These findings suggest that part of the difficulty encountered by older listeners, and by listeners with hearing loss, when communicating in group situations, may be due to a reduced ability to identify and discriminate between the participants in the conversation.


Subject(s)
Deafness , Hearing Loss , Humans , Hearing Loss/diagnosis , Speech Intelligibility
2.
J Acoust Soc Am ; 154(5): 3328-3343, 2023 11 01.
Article in English | MEDLINE | ID: mdl-37983296

ABSTRACT

This study investigated word recognition for sentences temporally filtered within and across acoustic-phonetic segments providing primarily vocalic or consonantal cues. Amplitude modulation was filtered at syllabic (0-8 Hz) or slow phonemic (8-16 Hz) rates. Sentence-level modulation properties were also varied by amplifying or attenuating segments. Participants were older adults with normal or impaired hearing. Older adult speech recognition was compared to groups of younger normal-hearing adults who heard speech unmodified or spectrally shaped with and without threshold matching noise that matched audibility to hearing-impaired thresholds. Participants also completed cognitive and speech recognition measures. Overall, results confirm the primary contribution of syllabic speech modulations to recognition and demonstrate the importance of these modulations across vowel and consonant segments. Group differences demonstrated a hearing loss-related impairment in processing modulation-filtered speech, particularly at 8-16 Hz. This impairment could not be fully explained by age or poorer audibility. Principal components analysis identified a single factor score that summarized speech recognition across modulation-filtered conditions; analysis of individual differences explained 81% of the variance in this summary factor among the older adults with hearing loss. These results suggest that a combination of cognitive abilities and speech glimpsing abilities contribute to speech recognition in this group.
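As a rough illustration of the modulation-rate filtering described in this abstract, the sketch below extracts a signal's temporal envelope, keeps only its syllabic (0-8 Hz) or slow phonemic (8-16 Hz) fluctuations, and re-imposes the filtered envelope on the remaining fine structure. It is a minimal sketch under assumed parameters (filter order, Hilbert-envelope extraction, a synthetic stand-in for speech), not the processing chain used in the study.

```python
# Hedged sketch: modulation-rate filtering of a temporal envelope.
import numpy as np
from scipy.signal import hilbert, butter, sosfiltfilt

def modulation_filter(speech, fs, band):
    """Keep only envelope fluctuations inside `band` (Hz); return a new waveform."""
    env = np.abs(hilbert(speech))                  # temporal amplitude envelope
    fine = speech / np.maximum(env, 1e-12)         # fine-structure carrier
    lo, hi = band
    if lo <= 0:                                    # 0-8 Hz: low-pass the envelope
        sos = butter(4, hi, btype="lowpass", fs=fs, output="sos")
    else:                                          # 8-16 Hz: band-pass the envelope
        sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
    env_filt = np.maximum(sosfiltfilt(sos, env), 0.0)
    return env_filt * fine                         # re-impose filtered envelope

# Example with a synthetic, 4-Hz-modulated noise standing in for speech:
fs = 16000
t = np.arange(fs) / fs
speech = np.random.default_rng(0).standard_normal(fs) * (1 + 0.5 * np.sin(2 * np.pi * 4 * t))
syllabic = modulation_filter(speech, fs, (0, 8))
slow_phonemic = modulation_filter(speech, fs, (8, 16))
```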


Subject(s)
Hearing Loss , Speech Perception , Humans , Aged , Speech , Age Factors , Hearing Loss/diagnosis , Hearing Loss/psychology , Cognition
3.
JASA Express Lett ; 3(4)2023 04 01.
Article in English | MEDLINE | ID: mdl-37096892

ABSTRACT

This study examined the recognition of spectrally shaped syllables and sentences in speech-modulated noise by younger and older adults. The effect of spectral shaping and speech level on temporal amplitude modulation cues was explored through speech vocoding. Subclinical differences in hearing thresholds in older adults were controlled using threshold matching noise. Older, compared to younger, adults had poorer recognition but similar improvements as the bandwidth of the shaping function increased. Spectral shaping may enhance the sensation level of glimpsed speech, which improves speech recognition in noise, even with mild elevations in hearing thresholds.
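Vocoding, mentioned above as the tool for exploring temporal amplitude modulation cues, can be sketched generically: split the signal into frequency bands, extract and smooth each band's envelope, and use the envelopes to modulate band-limited noise. The band count, spacing, and envelope cutoff below are illustrative assumptions, not the study's parameters.

```python
# Hedged sketch of a generic noise vocoder (not the authors' processing).
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def noise_vocode(speech, fs, n_bands=8, f_lo=100.0, f_hi=7000.0, env_cut=50.0):
    edges = np.geomspace(f_lo, f_hi, n_bands + 1)          # log-spaced analysis bands
    out = np.zeros_like(speech, dtype=float)
    rng = np.random.default_rng(0)
    env_sos = butter(4, env_cut, btype="lowpass", fs=fs, output="sos")
    for lo, hi in zip(edges[:-1], edges[1:]):
        band_sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
        band = sosfiltfilt(band_sos, speech)                # band-limited speech
        env = sosfiltfilt(env_sos, np.abs(hilbert(band)))   # smoothed band envelope
        carrier = sosfiltfilt(band_sos, rng.standard_normal(len(speech)))
        out += np.maximum(env, 0.0) * carrier               # envelope-modulated noise band
    return out / np.max(np.abs(out)) * np.max(np.abs(speech))  # rough level match

fs = 16000
speech = np.random.default_rng(3).standard_normal(fs)       # stand-in for a speech waveform
vocoded = noise_vocode(speech, fs)
```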


Subject(s)
Hearing Loss, Sensorineural , Speech Perception , Humans , Aged , Speech , Auditory Threshold , Noise , Hearing
4.
Neuroimage ; 253: 119042, 2022 06.
Article in English | MEDLINE | ID: mdl-35259524

ABSTRACT

Extensive increases in cingulo-opercular frontal activity are typically observed during speech recognition in noise tasks. This elevated activity has been linked to a word recognition benefit on the next trial, termed "adaptive control," but how this effect might be implemented has been unclear. The established link between perceptual decision making and cingulo-opercular function may provide an explanation for how those regions benefit subsequent word recognition. In this case, processes that support recognition, such as raising or lowering the decision criteria for more accurate or faster recognition, may be adjusted to optimize performance on the next trial. The current neuroimaging study tested the hypothesis that pre-stimulus cingulo-opercular activity reflects criterion adjustments that determine how much information to collect for word recognition on subsequent trials. Participants included middle-aged and older adults (N = 30; age = 58.3 ± 8.8 years; m ± sd) with normal hearing or mild sensorineural hearing loss. During a sparse fMRI experiment, words were presented in multitalker babble at +3 dB or +10 dB signal-to-noise ratio (SNR), and participants were instructed to repeat each word aloud. Word recognition was significantly poorer with increasing participant age and lower SNR compared to higher SNR conditions. A perceptual decision-making model was used to characterize processing differences based on task response latency distributions. The model showed that significantly less sensory evidence was collected (i.e., lower criteria) for lower compared to higher SNR trials. Replicating earlier observations, pre-stimulus cingulo-opercular activity was significantly predictive of correct recognition on a subsequent trial. Individual differences showed that participants with higher criteria also benefitted the most from pre-stimulus activity. Moreover, trial-level criteria changes were significantly linked to higher versus lower pre-stimulus activity. These results suggest cingulo-opercular cortex contributes to criteria adjustments to optimize speech recognition task performance.
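To make the criterion idea concrete, the toy simulation below uses a generic sequential-sampling (diffusion-style) process in which a response is issued once accumulated evidence reaches a boundary; lowering that boundary (a lower criterion) yields faster but less accurate responses. This is only a hedged illustration of the modeling logic; the drift, boundary, and noise values are invented, and the study's actual model may differ.

```python
# Hedged sketch: decision criterion as an evidence-accumulation boundary.
import numpy as np

def simulate_trials(drift, boundary, n_trials=2000, dt=0.001, noise_sd=1.0, seed=0):
    rng = np.random.default_rng(seed)
    rts, correct = [], []
    for _ in range(n_trials):
        evidence, t = 0.0, 0.0
        while abs(evidence) < boundary:               # accumulate until the criterion is hit
            evidence += drift * dt + noise_sd * np.sqrt(dt) * rng.standard_normal()
            t += dt
        rts.append(t)
        correct.append(evidence > 0)                  # upper boundary = correct response
    return np.mean(rts), np.mean(correct)

for boundary in (0.8, 1.6):                           # lower vs. higher criterion
    mean_rt, p_correct = simulate_trials(drift=1.0, boundary=boundary)
    print(f"boundary={boundary}: mean RT={mean_rt:.2f} s, accuracy={p_correct:.2f}")
```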


Subject(s)
Speech Perception , Aged , Humans , Middle Aged , Noise , Perceptual Masking , Recognition, Psychology/physiology , Signal-To-Noise Ratio , Speech , Speech Perception/physiology
5.
Brain Struct Funct ; 227(1): 203-218, 2022 Jan.
Article in English | MEDLINE | ID: mdl-34632538

ABSTRACT

Older adults with hearing loss experience significant difficulties understanding speech in noise, perhaps due in part to limited benefit from supporting executive functions that enable the use of environmental cues signaling changes in listening conditions. Here we examined the degree to which 41 older adults (60.56-86.25 years) exhibited cortical responses to informative listening difficulty cues that communicated the listening difficulty for each trial compared to neutral cues that were uninformative of listening difficulty. Word recognition was significantly higher for informative compared to uninformative cues in a +10 dB signal-to-noise ratio (SNR) condition, and response latencies were significantly shorter for informative cues in the +10 dB SNR and the more-challenging +2 dB SNR conditions. Informative cues were associated with elevated blood oxygenation level-dependent contrast in visual and parietal cortex. A cue-SNR interaction effect was observed in the cingulo-opercular (CO) network, such that activity only differed between SNR conditions when an informative cue was presented. That is, participants used the informative cues to prepare for changes in listening difficulty from one trial to the next. This cue-SNR interaction effect was driven by older adults with more low-frequency hearing loss and was not observed for those with more high-frequency hearing loss, poorer set-shifting task performance, and lower frontal operculum gray matter volume. These results suggest that proactive strategies for engaging CO adaptive control may be important for older adults with high-frequency hearing loss to optimize speech recognition in changing and challenging listening conditions.


Subject(s)
Hearing Loss , Speech Perception , Aged , Aged, 80 and over , Cognition , Cues , Deafness , Hearing Loss, High-Frequency , Humans , Middle Aged , Speech
6.
J Neurosci ; 41(50): 10293-10304, 2021 12 15.
Article in English | MEDLINE | ID: mdl-34753738

ABSTRACT

A common complaint of older adults is difficulty understanding speech, particularly in challenging listening conditions. Accumulating evidence suggests that these difficulties may reflect a loss and/or dysfunction of auditory nerve (AN) fibers. We used a novel approach to study age-related changes in AN structure and several measures of AN function, including neural synchrony, in 58 older adults and 42 younger adults. AN activity was measured in response to an auditory click (compound action potential; CAP), presented at stimulus levels ranging from 70 to 110 dB pSPL. Poorer AN function was observed for older than younger adults across CAP measures at higher but not lower stimulus levels. Associations across metrics and stimulus levels were consistent with age-related AN disengagement and AN dyssynchrony. High-resolution T2-weighted structural imaging revealed age-related differences in the density of cranial nerve VIII, with lower density in older adults with poorer neural synchrony. Individual differences in neural synchrony were the strongest predictor of speech recognition, such that poorer synchrony predicted poorer recognition of time-compressed speech and poorer speech recognition in noise for both younger and older adults. These results have broad clinical implications and are consistent with an interpretation that age-related atrophy at the level of the AN contributes to poorer neural synchrony and may explain some of the perceptual difficulties of older adults. SIGNIFICANCE STATEMENT: Differences in auditory nerve (AN) pathophysiology may contribute to the large variations in hearing and communication abilities of older adults. However, current diagnostics focus largely on the increase in detection thresholds, which is likely because of the absence of indirect measures of AN function in standard clinical test batteries. Using novel metrics of AN function, combined with estimates of AN structure and auditory function, we identified age-related differences across measures that we interpret to represent age-related reductions in AN engagement and poorer neural synchrony. Structure-function associations are consistent with an explanation of AN deficits that arise from age-related atrophy of the AN. Associations between neural synchrony and speech recognition suggest that individual and age-related deficits in neural synchrony contribute to speech recognition deficits.


Subject(s)
Cochlear Nerve/physiopathology , Presbycusis/physiopathology , Age Factors , Aged , Aged, 80 and over , Audiometry , Auditory Threshold/physiology , Electroencephalography , Female , Humans , Magnetic Resonance Imaging , Male , Middle Aged
7.
J Acoust Soc Am ; 150(3): 1979, 2021 09.
Article in English | MEDLINE | ID: mdl-34598610

ABSTRACT

This study investigated how acoustic and lexical word-level factors and listener-level factors of auditory thresholds and cognitive-linguistic processing contribute to the microstructure of sentence recognition in unmodulated and speech-modulated noise. The modulation depth of the modulated masker was changed by expanding and compressing the temporal envelope to control glimpsing opportunities. Younger adults with normal hearing (YNH) and older adults with normal and impaired hearing were tested. A second group of YNH was tested under acoustically identical conditions to the hearing-impaired group, who received spectral shaping. For all of the groups, speech recognition declined and masking release increased for later keywords in the sentence, which is consistent with the decrease in signal-to-noise ratio across word positions. The acoustic glimpse proportion and lexical word frequency of individual keywords predicted recognition under different noise conditions. For the older adults, better auditory thresholds and better working memory abilities facilitated sentence recognition. Vocabulary knowledge contributed more to sentence recognition for younger than for older adults. These results demonstrate that acoustic and lexical factors contribute to the recognition of individual words within a sentence, but their relative contributions vary based on the noise modulation characteristics. Taken together, acoustic, lexical, and listener factors contribute to how individuals recognize keywords during sentences.
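The acoustic glimpse proportion used as a predictor above can be approximated, in the spirit of glimpsing models, as the fraction of spectro-temporal cells in which the target speech exceeds the masker by a local SNR criterion. The STFT settings and the 0 dB criterion in this sketch are assumptions rather than the study's exact analysis.

```python
# Hedged sketch of a glimpse-proportion metric.
import numpy as np
from scipy.signal import stft

def glimpse_proportion(speech, masker, fs, local_snr_db=0.0):
    _, _, S = stft(speech, fs=fs, nperseg=512)
    _, _, M = stft(masker, fs=fs, nperseg=512)
    speech_db = 20 * np.log10(np.abs(S) + 1e-12)
    masker_db = 20 * np.log10(np.abs(M) + 1e-12)
    glimpsed = (speech_db - masker_db) > local_snr_db   # cells where the target is audible
    return glimpsed.mean()

# Example: steady noise masking a modulated, speech-like signal
fs = 16000
rng = np.random.default_rng(1)
speech = rng.standard_normal(fs) * (1 + np.sin(2 * np.pi * 3 * np.arange(fs) / fs))
masker = rng.standard_normal(fs)
print(f"glimpse proportion: {glimpse_proportion(speech, masker, fs):.2f}")
```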


Subject(s)
Speech Perception , Acoustics , Aged , Auditory Threshold , Hearing , Humans , Noise/adverse effects
8.
J Speech Lang Hear Res ; 63(12): 4289-4299, 2020 12 14.
Article in English | MEDLINE | ID: mdl-33197359

ABSTRACT

Purpose: This study investigated methods used to simulate factors associated with reduced audibility, increased speech levels, and spectral shaping for aided older adults with hearing loss. Simulations provided to younger normal-hearing adults were used to investigate the effect of sensation level, speech presentation level, and spectral shape in comparison to older adults with hearing loss. Method: Measures were assessed in quiet, steady-state noise, and speech-modulated noise. Older adults with hearing loss listened to speech that was spectrally shaped according to their hearing thresholds. Younger adults with normal hearing listened to speech that simulated the hearing-impaired group's (a) reduced audibility, (b) increased speech levels, and (c) spectral shaping. Group comparisons were made based on speech recognition performance and masking release. Additionally, younger adults completed measures of listening effort and perceived speech quality to assess if differences across simulations in these outcome measures were similar to those for speech recognition. Results: Across the various simulations employed, testing in the presence of a threshold matching noise best matched differences in speech recognition and masking release between younger and older adults. This result remained consistent across the other two outcome measures. Conclusions: A combination of audibility, speech level, and spectral shape factors is required to simulate differences between listeners with normal and impaired hearing in recognition, listening effort, and perceived speech quality. The use of spectrally shaped and amplified speech in the presence of threshold matching noise best provided this simulated control. Supplemental Material: https://doi.org/10.23641/asha.13224632.
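One ingredient of these simulations, threshold-matching noise, can be sketched as white noise spectrally shaped so that its long-term spectrum follows a target audiogram, raising normal-hearing listeners' effective thresholds toward hearing-impaired values. The audiogram, the dB-to-gain mapping, and the filter choices below are hypothetical and only illustrate the idea; they are not the calibration used in the study.

```python
# Hedged sketch: noise spectrally shaped toward a (hypothetical) audiogram.
import numpy as np
from scipy.signal import firwin2, lfilter

fs = 16000
audio_freqs = [250, 500, 1000, 2000, 4000, 6000]     # audiometric frequencies (Hz)
thresholds_db = [20, 25, 35, 50, 60, 70]             # hypothetical sloping loss (dB HL)

# FIR filter whose magnitude response follows the threshold contour.
freq_pts = [0] + audio_freqs + [fs / 2]
gain_db = [thresholds_db[0]] + thresholds_db + [thresholds_db[-1]]
gain_lin = 10 ** (np.array(gain_db) / 20.0)
gain_lin /= gain_lin.max()                           # relative shape; absolute level set later
fir = firwin2(513, freq_pts, gain_lin, fs=fs)

rng = np.random.default_rng(2)
white = rng.standard_normal(5 * fs)
matching_noise = lfilter(fir, [1.0], white)          # noise shaped toward the audiogram
# In practice the noise would then be scaled so that masked thresholds in
# normal-hearing listeners approximate `thresholds_db`.
```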


Subject(s)
Hearing Loss, Sensorineural , Speech Perception , Aged , Auditory Threshold , Hearing , Humans , Noise , Perceptual Masking , Speech
9.
Trends Hear ; 24: 2331216520915110, 2020.
Article in English | MEDLINE | ID: mdl-32372720

ABSTRACT

Focused attention on expected voice features, such as fundamental frequency (F0) and spectral envelope, may facilitate segregation and selection of a target talker in competing talker backgrounds. Age-related declines in attention may limit these abilities in older adults, resulting in poorer speech understanding in complex environments. To test this hypothesis, younger and older adults with normal hearing listened to sentences with a single competing talker. For most trials, listener attention was directed to the target by a cue phrase that matched the target talker's F0 and spectral envelope. For a small percentage of randomly occurring probe trials, the target's voice unexpectedly differed from the cue phrase in terms of F0 and spectral envelope. Overall, keyword recognition for the target talker was poorer for older adults than younger adults. Keyword recognition was poorer on probe trials than standard trials for both groups, and incorrect responses on probe trials contained keywords from the single-talker masker. No interaction was observed between age-group and the decline in keyword recognition on probe trials. Thus, reduced performance by older adults overall could not be attributed to declines in attention to an expected voice. Rather, other cognitive abilities, such as speed of processing and linguistic closure, were predictive of keyword recognition for younger and older adults. Moreover, the effects of age interacted with the sex of the target talker, such that older adults had greater difficulty understanding target keywords from female talkers than male talkers.


Subject(s)
Motivation , Speech Perception , Aged , Auditory Perception , Female , Hearing Tests , Humans , Male , Recognition, Psychology
10.
J Acoust Soc Am ; 147(1): 273, 2020 01.
Article in English | MEDLINE | ID: mdl-32006979

ABSTRACT

Masked sentence perception by hearing-aid users is strongly correlated with three variables: (1) the ability to hear phonetic details as estimated by the identification of syllable constituents in quiet or in noise; (2) the ability to use situational context that is extrinsic to the speech signal; and (3) the ability to use inherent context provided by the speech signal itself. This approach is called "the syllable-constituent, contextual theory of speech perception" and is supported by the performance of 57 hearing-aid users in the identification of 109 syllable constituents presented in a background of 12-talker babble and the identification of words in naturally spoken sentences presented in the same babble. A simple mathematical model, inspired in large part by Boothroyd and Nittrouer [(1988). J. Acoust. Soc. Am. 84, 101-114] and Fletcher [Allen (1996) J. Acoust. Soc. Am. 99, 1825-1834], predicts sentence perception from listeners' abilities to recognize isolated syllable constituents and to benefit from context. When the identification accuracy of syllable constituents is greater than about 55%, individual differences in context utilization play a minor role in determining the sentence scores. As syllable-constituent scores fall below 55%, individual differences in context utilization play an increasingly greater role in determining sentence scores. Implications for hearing-aid design goals and fitting procedures are discussed.
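A common form for this kind of context model, following Boothroyd and Nittrouer (1988), predicts sentence-word recognition from constituent recognition as p_sentence = 1 - (1 - p_constituent)^k, where k >= 1 indexes how effectively a listener exploits context. Whether the paper's model takes exactly this form is an assumption; the sketch below simply shows that k matters much more when constituent scores are low, consistent with the roughly 55% crossover described above.

```python
# Hedged sketch of a k-factor context model (illustrative values only).
import numpy as np

def predicted_sentence_score(p_constituent, k):
    """Predicted proportion of sentence words correct given constituent accuracy."""
    return 1.0 - (1.0 - np.asarray(p_constituent)) ** k

p_con = np.array([0.30, 0.55, 0.80])          # syllable-constituent accuracy in babble
for k in (1.5, 3.0):                          # weaker vs. stronger context use
    print(k, np.round(predicted_sentence_score(p_con, k), 2))
```

With these illustrative numbers, the gap between the two k values is about 0.25 at a constituent score of 0.30 but under 0.10 at 0.80, which mirrors the claim that context utilization matters increasingly as constituent scores fall.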


Subject(s)
Noise , Persons With Hearing Impairments/psychology , Phonetics , Speech Perception , Acoustic Stimulation , Aged , Aged, 80 and over , Female , Hearing Aids , Humans , Male , Middle Aged , Perceptual Masking , Recognition, Psychology
11.
J Acoust Soc Am ; 145(3): EL173, 2019 03.
Article in English | MEDLINE | ID: mdl-31067962

ABSTRACT

Envelope and periodicity cues may provide redundant, additive, or synergistic benefits to speech recognition. The contributions of these cues may change under different listening conditions and may differ for younger and older adults. To address these questions, younger and older adults with normal hearing listened to interrupted sentences containing different combinations of envelope and periodicity cues in quiet and with a competing talker. Envelope and periodicity cues improved speech recognition for both groups, and their benefits were additive when both cues were available. Envelope cues were particularly important for older adults and for sentences with a competing talker.


Subject(s)
Aging/physiology , Cues , Speech Perception , Adolescent , Adult , Aged , Aged, 80 and over , Female , Humans , Male , Middle Aged , Periodicity , Signal-To-Noise Ratio
12.
J Acoust Soc Am ; 144(1): 267, 2018 07.
Article in English | MEDLINE | ID: mdl-30075693

ABSTRACT

In realistic listening environments, speech perception requires grouping together audible fragments of speech, filling in missing information, and segregating the glimpsed target from the background. The purpose of this study was to determine the extent to which age-related difficulties with these tasks can be explained by declines in glimpsing, phonemic restoration, and/or speech segregation. Younger and older adults with normal hearing listened to sentences interrupted with silence or envelope-modulated noise, presented either in quiet or with a competing talker. Older adults were poorer than younger adults at recognizing keywords based on short glimpses but benefited more when envelope-modulated noise filled silent intervals. Recognition declined with a competing talker but this effect did not interact with age. Results of cognitive tasks indicated that faster processing speed and better visual-linguistic closure were predictive of better speech understanding. Taken together, these results suggest that age-related declines in speech recognition may be partially explained by difficulty grouping short glimpses of speech into a coherent message.


Subject(s)
Age Factors , Hearing/physiology , Speech Perception/physiology , Speech/physiology , Acoustic Stimulation/methods , Aged , Aged, 80 and over , Auditory Perception/physiology , Cognition/physiology , Comprehension/physiology , Female , Hearing Tests , Humans , Male , Middle Aged , Noise , Perceptual Masking/physiology
13.
J Acoust Soc Am ; 143(4): 2232, 2018 04.
Article in English | MEDLINE | ID: mdl-29716275

ABSTRACT

This study tests the hypothesis that amplitude modulation (AM) detection will be better under conditions where basilar membrane (BM) response growth is expected to be linear rather than compressive. This hypothesis was tested by (1) comparing AM detection for a tonal carrier as a function of carrier level for subjects with and without cochlear hearing impairment (HI), and by (2) comparing AM detection for carriers presented with and without an ipsilateral notched-noise precursor, under the assumption that the precursor linearizes BM responses. Average AM detection thresholds were approximately 5 dB better for subjects with HI than for subjects with normal hearing (NH) at moderate-level carriers. Average AM detection for low-to-moderate level carriers was approximately 2 dB better with the precursor than without the precursor for subjects with NH, whereas precursor effects were absent or smaller for subjects with HI. Although effect sizes were small and individual differences were noted, group differences are consistent with better AM detection for conditions where BM responses are less compressive due to cochlear hearing loss or due to a reduction in cochlear gain. These findings suggest the auditory system may quickly adjust to the local soundscape to increase effective AM depth and improve signal-to-noise ratios.
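For readers unfamiliar with the stimulus arithmetic, the sketch below builds a sinusoidally amplitude-modulated tonal carrier and expresses modulation depth m in the usual 20*log10(m) dB form, the scale on which the roughly 5 dB and 2 dB group differences above are reported. The carrier and modulation frequencies are arbitrary choices for illustration.

```python
# Hedged sketch: sinusoidal AM of a tonal carrier and depth in dB.
import numpy as np

fs, dur = 44100, 0.5
t = np.arange(int(fs * dur)) / fs
fc, fm = 2000.0, 8.0                         # carrier and modulation rates (Hz), arbitrary
depth_db = -20.0                             # 20*log10(m); m = 0.1 here
m = 10 ** (depth_db / 20.0)
am_tone = (1 + m * np.sin(2 * np.pi * fm * t)) * np.sin(2 * np.pi * fc * t)

# A 5-dB better detection threshold corresponds to detecting a modulation depth
# about 10**(5/20) (roughly 1.78 times) smaller.
print(round(10 ** (5 / 20), 2))
```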


Subject(s)
Acoustic Stimulation/methods , Auditory Perception/physiology , Auditory Threshold/physiology , Hearing Loss/physiopathology , Hearing/physiology , Noise , Adult , Aged , Aged, 80 and over , Case-Control Studies , Female , Humans , Male , Middle Aged , Young Adult
14.
J Acoust Soc Am ; 143(2): 1085, 2018 02.
Article in English | MEDLINE | ID: mdl-29495693

ABSTRACT

The ability to identify who is talking is an important aspect of communication in social situations and, while empirical data are limited, it is possible that a disruption to this ability contributes to the difficulties experienced by listeners with hearing loss. In this study, talker identification was examined under both quiet and masked conditions. Subjects were grouped by hearing status (normal hearing/sensorineural hearing loss) and age (younger/older adults). Listeners first learned to identify the voices of four same-sex talkers in quiet, and then talker identification was assessed (1) in quiet, (2) in speech-shaped, steady-state noise, and (3) in the presence of a single, unfamiliar same-sex talker. Both younger and older adults with hearing loss, as well as older adults with normal hearing, generally performed more poorly than younger adults with normal hearing, although large individual differences were observed in all conditions. Regression analyses indicated that both age and hearing loss were predictors of performance in quiet, and there was some evidence for an additional contribution of hearing loss in the presence of masking. These findings suggest that both hearing loss and age may affect the ability to identify talkers in "cocktail party" situations.


Subject(s)
Aging/psychology , Hearing Loss/psychology , Noise/adverse effects , Perceptual Masking , Persons With Hearing Impairments/psychology , Recognition, Psychology , Speech Acoustics , Speech Perception , Voice Quality , Acoustic Stimulation , Adult , Age Factors , Aged , Aged, 80 and over , Audiometry, Speech , Boston , Female , Hearing , Hearing Loss/diagnosis , Hearing Loss/physiopathology , Humans , Male , Middle Aged , South Carolina , Young Adult
15.
Laryngoscope ; 128(9): 2133-2138, 2018 09.
Article in English | MEDLINE | ID: mdl-29481695

ABSTRACT

OBJECTIVE: Identify factors associated with benefit of middle ear implants (MEIs) as compared to conventional hearing aids (HAs). STUDY DESIGN: Independent review of audiological data from a multicenter prospective U.S. Food and Drug Administration (FDA) clinical trial. Preoperative and postoperative earphone, unaided/aided/implanted pure-tone thresholds, and word recognition scores were evaluated. RESULTS: Ninety-one subjects were included in this study. Mean word recognition was better with MEIs than with HAs (81.8% ± 12.0% vs. 77.6% ± 14.6%, P = 0.035). Word recognition with MEIs showed a low positive correlation with word recognition measured with earphones (r = 0.25, P = 0.016) and a moderate positive correlation with aided word recognition (r = 0.42, P < 0.001). Earphone word recognition alone was not predictive of MEI benefit over HA benefit (r = 0.09, P = 0.41), unlike differences between scores with earphone and HAs (earphone-aided differences [EAD]) (r = 0.62, P < 0.011). As compared to those with -EADs, subjects with +EADs showed greater improvement in word recognition from unaided to implanted and from HAs to implanted (P < 0.0001). Using the 95% CI for word recognition scores, 16 subjects showed significantly higher scores with the MEI than with HAs. Of those, 14 had +EAD. CONCLUSION: Word recognition benefit derived from conventional HAs and MEIs from this large, multi-center FDA trial provides further evidence of the importance of aided word recognition in clinical decision making, such as determining candidacy for and success with MEIs. LEVEL OF EVIDENCE: 2b. Laryngoscope, 128:2133-2138, 2018.
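The earphone-aided difference (EAD) at the center of this abstract is simply the earphone word-recognition score minus the aided score, with positive values meaning earphones outperformed the hearing aids. The numbers below are made-up illustrative values showing only the computation and the kind of correlation reported (r = 0.62); they are not data from the trial.

```python
# Hedged sketch: computing EAD and its association with implant benefit.
import numpy as np

earphone = np.array([72, 88, 64, 90, 58])    # % correct with earphones (hypothetical)
aided = np.array([60, 86, 70, 78, 50])       # % correct with hearing aids (hypothetical)
implanted = np.array([78, 90, 72, 92, 66])   # % correct with the MEI (hypothetical)

ead = earphone - aided                       # positive values: earphone > aided (+EAD)
mei_benefit = implanted - aided              # improvement from HA to implant
r = np.corrcoef(ead, mei_benefit)[0, 1]      # the study reports r = 0.62 for this association
print(ead, mei_benefit, round(r, 2))
```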


Subject(s)
Ear, Middle/surgery , Hearing Aids/statistics & numerical data , Hearing Loss, Sensorineural/therapy , Ossicular Prosthesis/statistics & numerical data , Adult , Aged , Aged, 80 and over , Audiometry, Pure-Tone , Auditory Threshold , Ear, Middle/physiopathology , Female , Hearing Loss, Sensorineural/physiopathology , Humans , Male , Middle Aged , Prospective Studies , Speech Perception , Treatment Outcome , Young Adult
16.
Neuroimage ; 157: 381-387, 2017 08 15.
Article in English | MEDLINE | ID: mdl-28624645

ABSTRACT

Correctly understood speech in difficult listening conditions is often difficult to remember. A long-standing hypothesis for this observation is that the engagement of cognitive resources to aid speech understanding can limit resources available for memory encoding. This hypothesis is consistent with evidence that speech presented in difficult conditions typically elicits greater activity throughout cingulo-opercular regions of frontal cortex that are proposed to optimize task performance through adaptive control of behavior and tonic attention. However, successful memory encoding of items for delayed recognition memory tasks is consistently associated with increased cingulo-opercular activity when perceptual difficulty is minimized. The current study used a delayed recognition memory task to test competing predictions that memory encoding for words is enhanced or limited by the engagement of cingulo-opercular activity during challenging listening conditions. An fMRI experiment was conducted with twenty healthy adult participants who performed a word identification in noise task that was immediately followed by a delayed recognition memory task. Consistent with previous findings, word identification trials in the poorer signal-to-noise ratio condition were associated with increased cingulo-opercular activity and poorer recognition memory scores on average. However, cingulo-opercular activity decreased for correctly identified words in noise that were not recognized in the delayed memory test. These results suggest that memory encoding in difficult listening conditions is poorer when elevated cingulo-opercular activity is not sustained. Although increased attention to speech when presented in difficult conditions may detract from more active forms of memory maintenance (e.g., sub-vocal rehearsal), we conclude that task performance monitoring and/or elevated tonic attention supports incidental memory encoding in challenging listening conditions.


Subject(s)
Attention/physiology , Brain Mapping/methods , Frontal Lobe/physiology , Recognition, Psychology/physiology , Speech Perception/physiology , Adult , Female , Frontal Lobe/diagnostic imaging , Humans , Magnetic Resonance Imaging , Male , Young Adult
17.
J Acoust Soc Am ; 141(4): 2933, 2017 04.
Article in English | MEDLINE | ID: mdl-28464618

ABSTRACT

The abilities of 59 adult hearing-aid users to hear phonetic details were assessed by measuring their abilities to identify syllable constituents in quiet and in differing levels of noise (12-talker babble) while wearing their aids. The set of sounds consisted of 109 frequently occurring syllable constituents (45 onsets, 28 nuclei, and 36 codas) spoken in varied phonetic contexts by eight talkers. In nominal quiet, a speech-to-noise ratio (SNR) of 40 dB, scores of individual listeners ranged from about 23% to 85% correct. Averaged over the range of SNRs commonly encountered in noisy situations, scores of individual listeners ranged from about 10% to 71% correct. The scores in quiet and in noise were very strongly correlated, R = 0.96. This high correlation implies that common factors play primary roles in the perception of phonetic details in quiet and in noise. Otherwise said, hearing-aid users' problems perceiving phonetic details in noise appear to be tied to their problems perceiving phonetic details in quiet and vice versa.


Subject(s)
Correction of Hearing Impairment/instrumentation , Hearing Aids , Hearing Loss, Sensorineural/rehabilitation , Noise/adverse effects , Perceptual Masking , Persons With Hearing Impairments/rehabilitation , Speech Acoustics , Speech Perception , Voice Quality , Acoustic Stimulation , Adult , Aged , Aged, 80 and over , Audiometry, Pure-Tone , Audiometry, Speech , Auditory Threshold , Electric Stimulation , Female , Hearing , Hearing Loss, Sensorineural/diagnosis , Hearing Loss, Sensorineural/physiopathology , Hearing Loss, Sensorineural/psychology , Humans , Male , Middle Aged , Persons With Hearing Impairments/psychology , Phonetics , Speech Intelligibility
18.
J Acoust Soc Am ; 141(2): 1133, 2017 02.
Article in English | MEDLINE | ID: mdl-28253707

ABSTRACT

Fluctuating noise, common in everyday environments, has the potential to mask acoustic cues important for speech recognition. This study examined the extent to which acoustic cues for perception of vowels and stop consonants differ in their susceptibility to simultaneous and forward masking. Younger normal-hearing, older normal-hearing, and older hearing-impaired adults identified initial and final consonants or vowels in noise-masked syllables that had been spectrally shaped. The amount of shaping was determined by subjects' audiometric thresholds. A second group of younger adults with normal hearing was tested with spectral shaping determined by the mean audiogram of the hearing-impaired group. Stimulus timing ensured that the final 10, 40, or 100 ms of the syllable occurred after the masker offset. Results demonstrated that participants benefited from short temporal delays between the noise and speech for vowel identification, but required longer delays for stop consonant identification. Older adults with normal and impaired hearing, with sufficient audibility, required longer delays to obtain performance equivalent to that of the younger adults. Overall, these results demonstrate that in forward masking conditions, younger listeners can successfully identify vowels during short temporal intervals (i.e., one unmasked pitch period), with longer durations required for consonants and for older adults.


Subject(s)
Aging/psychology , Cues , Noise/adverse effects , Perceptual Masking , Persons With Hearing Impairments/psychology , Speech Acoustics , Speech Perception , Acoustic Stimulation , Adolescent , Adult , Age Factors , Aged , Aged, 80 and over , Audiometry, Pure-Tone , Audiometry, Speech , Auditory Threshold , Female , Humans , Male , Middle Aged , Time Factors , Young Adult
19.
J Acoust Soc Am ; 140(4): 2481, 2016 Oct.
Article in English | MEDLINE | ID: mdl-27794300

ABSTRACT

The detection of a brief, sinusoidal probe in a long broadband, simultaneous masker improves as the probe is delayed from the masker's onset. This improvement ("overshoot") may be mediated by a reduction in cochlear amplifier gain over the time course of the masker via the medial olivocochlear (MOC) reflex. Overshoot was measured in younger adults with normal hearing and in older adults with normal and impaired hearing to test the hypothesis that aging and cochlear hearing loss result in abnormal overshoot, consistent with changes in certain structures along the MOC pathway. Overshoot decreased with increasing quiet probe thresholds and was only minimally influenced by increasing age. Marked individual differences in overshoot were observed due to differences in masking thresholds for probes presented near the masker's onset. Model simulations support the interpretation that reduced overshoot in hearing-impaired listeners is due to limited cochlear amplifier gain and therefore less gain to adjust over the time course of the masker. The similar overshoot observed in younger and older adults with normal hearing suggests that aging alone does not substantially alter the mechanisms underlying overshoot.


Subject(s)
Hearing Loss , Auditory Threshold , Hearing , Humans , Perceptual Masking
20.
Otol Neurotol ; 37(10): 1475-1481, 2016 12.
Article in English | MEDLINE | ID: mdl-27631832

ABSTRACT

OBJECTIVE: To compare word recognition scores for adults with hearing loss measured using earphones and in the sound field without and with hearing aids (HA). STUDY DESIGN: Independent review of presurgical audiological data from an active middle ear implant (MEI) FDA clinical trial. SETTING: Multicenter prospective FDA clinical trial. PATIENTS: Ninety-four adult HA users. INTERVENTIONS/MAIN OUTCOMES MEASURED: Preoperative earphone, aided word recognition scores, and speech intelligibility index. RESULTS: We performed an independent review of presurgical audiological data from an MEI FDA trial and compared unaided and aided word recognition scores with participants' HAs fit according to the NAL-R algorithm. For 52 participants (55.3%), differences in scores between earphone and aided conditions were >10%; for 33 participants (35.1%), earphone scores were higher by 10% or more than aided scores. These participants had significantly higher pure-tone thresholds at 250, 500, and 1000 Hz, higher pure-tone averages, higher speech recognition thresholds (and higher earphone speech levels [p = 0.002]). No significant correlation was observed between word recognition scores measured with earphones and with hearing aids (r = 0.14; p = 0.16), whereas a moderately high positive correlation was observed between unaided and aided word recognition (r = 0.68; p < 0.001). CONCLUSION: Results of these analyses do not support the common clinical practice of using word recognition scores measured with earphones to predict aided word recognition or hearing aid benefit. Rather, these results provide evidence supporting the measurement of aided word recognition in patients who are considering hearing aids.


Subject(s)
Hearing Loss/physiopathology , Ossicular Prosthesis , Speech Intelligibility/physiology , Speech Perception/physiology , Adult , Aged , Aged, 80 and over , Female , Hearing Loss/rehabilitation , Humans , Male , Middle Aged , Prospective Studies , Young Adult