1.
Am J Audiol ; : 1-11, 2023 Nov 08.
Article in English | MEDLINE | ID: mdl-37939343

ABSTRACT

PURPOSE: Standard clinical audiologic assessment is limited in its ability to capture variance in self-reported hearing difficulty. Additionally, the costs associated with clinical testing in audiology create financial barriers for hearing health care in developing countries like Mexico. This study used an open-source Spanish-language tool called PART (Portable Automated Rapid Testing) to test the hypothesis that a battery of assessments of auditory processing can complement standard clinical audiological assessment to better capture the variance of self-reported hearing difficulty. METHOD: Forty-three adults between 40 and 69 years of age were tested in Mexico City using a traditional clinical pure-tone audiogram, cognitive screening, and a battery of PART-based auditory processing assessments, including a speech-on-speech competition spatial release from masking task. Results were compared to self-reported hearing difficulty, assessed with a Spanish-language adaptation of the Hearing Handicap Inventory for the Elderly-Screening Version (HHIE-S). RESULTS: Several measures from the PART battery exhibited stronger correlations with self-reported hearing difficulties than the pure-tone audiogram. The spatial release from masking task best captured variance in HHIE-S scores and remained significant after controlling for the effects of age, audibility, and cognitive score. CONCLUSIONS: The spatial release from masking task can complement traditional clinical measures to better account for patients' self-reported hearing difficulty. Open-source access to this test in PART supports its implementation for Spanish speakers in clinical settings around the world at low cost. SUPPLEMENTAL MATERIAL: https://doi.org/10.23641/asha.24470140.
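To make the "controlling for age, audibility, and cognitive score" step concrete, the sketch below shows one generic way such a control analysis could be run: a residualized (partial) correlation between a spatial-release-from-masking score and HHIE-S self-report, with covariates regressed out of both. The variable names and synthetic data are assumptions for illustration, not the study's data or analysis code.

```python
"""Illustrative partial-correlation sketch (not the study's actual analysis).

Correlates a spatial-release-from-masking (SRM) score with HHIE-S self-report
while controlling for age, audibility (pure-tone average), and cognitive score
by residualizing both variables on the covariates.
"""
import numpy as np

rng = np.random.default_rng(0)
n = 43  # sample size reported in the abstract

# Hypothetical data; replace with real measurements.
age = rng.uniform(40, 69, n)
pta = rng.normal(20, 10, n)          # pure-tone average, dB HL
cognition = rng.normal(25, 3, n)     # cognitive screening score
srm = rng.normal(8, 3, n)            # spatial release from masking, dB
hhie_s = rng.normal(12, 8, n)        # self-reported hearing difficulty

def residualize(y, covariates):
    """Return residuals of y after least-squares regression on the covariates."""
    X = np.column_stack([np.ones(len(y))] + list(covariates))
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return y - X @ beta

covs = [age, pta, cognition]
partial_r = np.corrcoef(residualize(srm, covs), residualize(hhie_s, covs))[0, 1]
print(f"Partial correlation (SRM vs. HHIE-S | age, PTA, cognition): {partial_r:.2f}")
```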

2.
J Assoc Res Otolaryngol ; 24(4): 429-439, 2023 08.
Article in English | MEDLINE | ID: mdl-37438572

ABSTRACT

PURPOSE: Speech is characterized by dynamic acoustic cues that must be encoded by the auditory periphery, auditory nerve, and brainstem before they can be represented in the auditory cortex. The fidelity of these cues in the brainstem can be assessed with the frequency-following response (FFR). Data obtained from older adults, with normal or impaired hearing, were compared with previous results obtained from normal-hearing younger adults to evaluate the effects of age and hearing loss on the fidelity of FFRs to tone glides. METHOD: A signal detection approach was used to model a threshold criterion to distinguish the FFR from baseline neural activity. The response strength and temporal coherence of the FFR to tone glides varying in direction (rising or falling) and extent (1/3, 2/3, or 1 octave) were assessed by signal-to-noise ratio (SNR) and stimulus-response correlation coefficient (SRCC) in older adults with normal hearing and with hearing loss. RESULTS: Significant group mean differences in both SNR and SRCC were noted, with poorer responses more frequently observed with increased age and hearing loss, but with considerable response variability among individuals within each group and substantial overlap among group distributions. CONCLUSION: The overall distribution of FFRs across listeners and stimulus conditions suggests that observed group differences associated with age and hearing loss are influenced by a decreased likelihood of older and hearing-impaired individuals having a detectable FFR and by lower average FFR fidelity among those older and hearing-impaired individuals who do have a detectable response.
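To make the two fidelity metrics concrete, here is a hedged sketch of how an FFR signal-to-noise ratio and a stimulus-to-response correlation coefficient are often computed: RMS of the averaged response window relative to RMS of a pre-stimulus baseline, and the maximum Pearson correlation between the stimulus waveform and the lag-shifted response. The analysis windows, lag range, and toy signals are assumptions, not the paper's exact parameters.

```python
"""Hedged sketch of two FFR fidelity metrics (generic, assumed definitions).

snr_db: RMS of the averaged response window vs. RMS of a pre-stimulus baseline.
srcc:   maximum Pearson correlation between stimulus and lag-shifted response,
        searched over a small range of candidate neural lags.
"""
import numpy as np

def snr_db(response, baseline):
    """Response-window RMS relative to baseline RMS, in dB."""
    rms = lambda x: np.sqrt(np.mean(np.square(x)))
    return 20.0 * np.log10(rms(response) / rms(baseline))

def stimulus_response_correlation(stimulus, response, fs, max_lag_ms=15.0):
    """Maximum Pearson r between stimulus and response over candidate lags."""
    max_lag = int(fs * max_lag_ms / 1000.0)
    best = 0.0
    for lag in range(max_lag + 1):
        r = response[lag:lag + len(stimulus)]
        if len(r) < len(stimulus):
            break
        best = max(best, np.corrcoef(stimulus, r)[0, 1])
    return best

# Toy example: a 1 kHz tone and a delayed, noisy copy standing in for the FFR.
fs = 20000
t = np.arange(int(0.1 * fs)) / fs
stim = np.sin(2 * np.pi * 1000 * t)
resp = np.concatenate([np.zeros(int(0.007 * fs)),            # ~7 ms neural lag
                       0.1 * stim + 0.05 * np.random.randn(len(stim))])
baseline = 0.05 * np.random.randn(int(0.05 * fs))            # pre-stimulus noise

print(f"SNR: {snr_db(resp[int(0.007 * fs):], baseline):.1f} dB")
print(f"SRCC: {stimulus_response_correlation(stim, resp, fs):.2f}")
```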


Subject(s)
Deafness; Hearing Loss, Sensorineural; Hearing Loss; Speech Perception; Humans; Aged; Speech Perception/physiology; Acoustic Stimulation/methods; Hearing/physiology
3.
Hear Res ; 434: 108771, 2023 07.
Article in English | MEDLINE | ID: mdl-37119674

ABSTRACT

Difficulty understanding speech in fluctuating backgrounds is common among older adults. Whereas younger adults are adept at interpreting speech based on brief moments when the signal-to-noise ratio is favorable, older adults use these glimpses of speech less effectively. Age-related declines in auditory brainstem function may degrade the fidelity of speech cues in fluctuating noise for older adults, such that brief glimpses of speech interrupted by noise segments are not faithfully represented in the neural code that reaches the cortex. This hypothesis was tested using electrophysiological recordings of the envelope following response (EFR) elicited by glimpses of speech-like stimuli varying in duration (42, 70, 210 ms) and interrupted by silence or intervening noise. Responses from adults aged 23-73 years indicated that both age and hearing sensitivity were associated with EFR temporal coherence and response magnitude. Age was better than hearing sensitivity for predicting temporal coherence, whereas hearing sensitivity was better than age for predicting response magnitude. Poorer-fidelity EFRs were observed with shorter glimpses and with the addition of intervening noise. However, losses of fidelity with glimpse duration and noise were not associated with participant age or hearing sensitivity. These results suggest that the EFR is sensitive to factors commonly associated with glimpsing but does not entirely account for age-related changes in speech recognition in fluctuating backgrounds.
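A minimal sketch of how interrupted, glimpse-style stimuli of the kind described above could be constructed: a carrier is cut into fixed-duration glimpses and the gaps are filled with either silence or noise. The carrier, gap durations, and noise level below are illustrative assumptions, not the study's stimulus parameters.

```python
"""Hedged sketch: build 'glimpsed' stimuli with silence or noise in the gaps.

The carrier is an amplitude-modulated tone standing in for a speech-like
stimulus; glimpse/gap durations and noise level are assumptions.
"""
import numpy as np

def glimpse_stimulus(carrier, fs, glimpse_ms, gap_ms, filler="silence",
                     noise_rms=0.05, rng=None):
    """Keep alternating glimpses of the carrier; fill gaps with silence or noise."""
    rng = rng or np.random.default_rng(0)
    out = carrier.copy()
    glimpse_n = int(fs * glimpse_ms / 1000)
    gap_n = int(fs * gap_ms / 1000)
    i = glimpse_n
    while i < len(out):
        stop = min(i + gap_n, len(out))
        if filler == "silence":
            out[i:stop] = 0.0
        else:  # "noise"
            out[i:stop] = noise_rms * rng.standard_normal(stop - i)
        i = stop + glimpse_n
    return out

fs = 16000
t = np.arange(fs) / fs                          # 1 s of carrier
carrier = np.sin(2 * np.pi * 400 * t) * (0.5 + 0.5 * np.sin(2 * np.pi * 4 * t))

for dur in (42, 70, 210):                       # glimpse durations from the abstract (ms)
    silent_gaps = glimpse_stimulus(carrier, fs, dur, dur, filler="silence")
    noisy_gaps = glimpse_stimulus(carrier, fs, dur, dur, filler="noise")
    print(dur, silent_gaps.shape, noisy_gaps.shape)
```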


Subject(s)
Speech Perception; Speech; Humans; Aged; Speech Perception/physiology; Noise/adverse effects; Hearing/physiology; Brain Stem; Acoustic Stimulation/methods
4.
Am J Audiol ; 32(1): 210-219, 2023 Mar.
Article in English | MEDLINE | ID: mdl-36763846

ABSTRACT

PURPOSE: Difficulty understanding speech in noise is a common communication problem. Clinical tests of speech in noise differ considerably from real-world listening and offer patients limited intrinsic motivation to perform well. In order to design a test that captures motivational aspects of real-world communication, this study investigated effects of gamification, or the inclusion of game elements, on a laboratory spatial release from masking test. METHOD: Fifty-four younger adults with normal hearing completed a traditional laboratory and a gamified test of spatial release from masking in counterbalanced order. Masker level adapted based on performance, with the traditional test ending after 10 reversals and the gamified test ending when participants solved a visual puzzle. Target-to-masker ratio thresholds (TMRs) with colocated maskers, separated maskers, and estimates of spatial release were calculated after the 10th reversal for both tests and from the last six reversals of the adaptive track from the gamified test. RESULTS: Thresholds calculated from the 10th reversal indicated no significant differences between the traditional and gamified tests. A learning effect was observed with spatially separated maskers, such that TMRs were better for the second test than the first, regardless of test order. Thresholds calculated from the last six reversals of the gamified test indicated better TMRs in the separated condition compared to the traditional test. CONCLUSIONS: Adding gamified elements to a traditional test of spatial release from masking did not negatively affect test validity or estimates of spatial release. Participants were willing to continue playing the gamified test for an average of 30.2 reversals of the adaptive track. For some listeners, performance in the separated condition continued to improve after the 10th reversal, leading to better TMRs and greater spatial release from masking at the end of the gamified test compared to the traditional test. SUPPLEMENTAL MATERIAL: https://doi.org/10.23641/asha.22028789.
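As a concrete illustration of the threshold logic described above, the sketch below estimates a target-to-masker ratio threshold from the reversals of an adaptive track and computes spatial release from masking as the colocated threshold minus the separated threshold. The 1-down/1-up rule, step size, and simulated listener are assumptions for illustration; they are not the study's actual procedure.

```python
"""Hedged sketch: threshold from adaptive-track reversals and spatial release.

The 1-down/1-up rule, step size, and simulated listener below are illustrative
assumptions; the actual test procedure may differ.
"""
import numpy as np

def run_adaptive_track(true_threshold, n_reversals=10, start_tmr=10.0,
                       step_db=2.0, rng=None):
    """Simulate an adaptive track; return trial-by-trial TMRs and reversal values."""
    rng = rng or np.random.default_rng(1)
    tmr, direction = start_tmr, -1
    tmrs, reversals = [], []
    while len(reversals) < n_reversals:
        tmrs.append(tmr)
        # Simulated listener: correct more often when TMR exceeds the true threshold.
        p_correct = 1.0 / (1.0 + np.exp(-(tmr - true_threshold)))
        correct = rng.random() < p_correct
        new_direction = -1 if correct else +1      # 1-down/1-up rule (illustrative)
        if new_direction != direction:
            reversals.append(tmr)
            direction = new_direction
        tmr += new_direction * step_db
    return np.array(tmrs), np.array(reversals)

def threshold_from_reversals(reversals, last_n=6):
    """Average the last N reversal TMRs (a common threshold estimate)."""
    return float(np.mean(reversals[-last_n:]))

# Illustrative 'true' thresholds for colocated and spatially separated maskers.
_, rev_colocated = run_adaptive_track(true_threshold=2.0)
_, rev_separated = run_adaptive_track(true_threshold=-8.0)

colocated = threshold_from_reversals(rev_colocated)
separated = threshold_from_reversals(rev_separated)
print(f"Colocated TMR: {colocated:.1f} dB, separated TMR: {separated:.1f} dB")
print(f"Spatial release from masking: {colocated - separated:.1f} dB")
```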


Subject(s)
Gamification; Speech Perception; Adult; Humans; Perceptual Masking; Auditory Perception; Noise; Hearing Tests
5.
Neurosci Lett ; 788: 136856, 2022 09 25.
Article in English | MEDLINE | ID: mdl-36029915

ABSTRACT

We developed and tested a series of novel and increasingly complex multi-token electrophysiology paradigms for evoking the auditory P3 response. The primary goal was to evaluate the degree to which more complex discrimination tasks and listening environments, which are more likely to engage the types of neural processing used in real-world speech-in-noise situations, could still evoke a robust P3 response. If so, this opens the possibility of such a paradigm making up part of the toolkit for a brain-behavioral approach to improve understanding of speech processing. Fourteen normal-hearing adults were tested using four different auditory paradigms consisting of 5 tokens, 20 tokens, 160 tokens, or 160 tokens with background babble. Stimuli were naturally produced consonant-vowel tokens varying in consonant (/d/, /b/, /g/, /v/, and /ð/; all conditions), vowel (/ɑ/, /u/, /i/, and /ɜr/; 20- and 160-token conditions), and talker (4 female, 4 male; 160-token conditions only). All four conditions evoked robust neural responses, and all peaks showed visible differences across conditions. However, the more exogenous auditory evoked potentials (N1 and P2) were affected primarily not by overall complexity but by the presence of background noise specifically, which was associated with longer latencies and smaller amplitudes. The more endogenous P3 peak, as well as the paradigm behavioral measures, revealed a more graded effect of overall paradigm complexity rather than being dominated by the background noise. Our conclusion was that all four complex auditory paradigms, including the most complex (160 distinct consonant-vowel tokens presented in background babble), are viable means of eliciting N1-P2 and N2b-P3 auditory evoked responses and may therefore be useful in brain-behavioral approaches to understanding speech perception in noise.
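A sketch of how a multi-token oddball trial sequence for such a paradigm might be assembled: tokens are drawn from the consonant x vowel x talker sets listed above, one consonant category serves as the target, and targets occur on a minority of trials. The choice of target category, target probability, and trial count are assumptions for illustration.

```python
"""Hedged sketch: assemble a multi-token auditory oddball (P3) trial sequence.

Token inventory follows the abstract (consonants x vowels x talkers); the
target category, target probability, and trial count are assumptions.
"""
import random

consonants = ["d", "b", "g", "v", "dh"]
vowels = ["a", "u", "i", "er"]
talkers = [f"F{i}" for i in range(1, 5)] + [f"M{i}" for i in range(1, 5)]

# 160-token condition: every consonant-vowel-talker combination.
tokens = [(c, v, t) for c in consonants for v in vowels for t in talkers]
assert len(tokens) == 160

def make_sequence(tokens, target_consonant="d", p_target=0.2, n_trials=200, seed=0):
    """Return a list of (token, is_target) trials with rare targets."""
    rng = random.Random(seed)
    targets = [tok for tok in tokens if tok[0] == target_consonant]
    standards = [tok for tok in tokens if tok[0] != target_consonant]
    sequence = []
    for _ in range(n_trials):
        if rng.random() < p_target:
            sequence.append((rng.choice(targets), True))
        else:
            sequence.append((rng.choice(standards), False))
    return sequence

seq = make_sequence(tokens)
print(f"{sum(is_t for _, is_t in seq)} targets out of {len(seq)} trials")
```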


Subject(s)
Auditory Cortex; Speech Perception; Acoustic Stimulation; Auditory Cortex/physiology; Evoked Potentials; Evoked Potentials, Auditory/physiology; Female; Humans; Male; Noise; Speech Perception/physiology
6.
Trends Hear ; 24: 2331216520915110, 2020.
Article in English | MEDLINE | ID: mdl-32372720

ABSTRACT

Focused attention on expected voice features, such as fundamental frequency (F0) and spectral envelope, may facilitate segregation and selection of a target talker in competing talker backgrounds. Age-related declines in attention may limit these abilities in older adults, resulting in poorer speech understanding in complex environments. To test this hypothesis, younger and older adults with normal hearing listened to sentences with a single competing talker. For most trials, listener attention was directed to the target by a cue phrase that matched the target talker's F0 and spectral envelope. For a small percentage of randomly occurring probe trials, the target's voice unexpectedly differed from the cue phrase in terms of F0 and spectral envelope. Overall, keyword recognition for the target talker was poorer for older adults than younger adults. Keyword recognition was poorer on probe trials than standard trials for both groups, and incorrect responses on probe trials contained keywords from the single-talker masker. No interaction was observed between age group and the decline in keyword recognition on probe trials. Thus, reduced performance by older adults overall could not be attributed to declines in attention to an expected voice. Rather, other cognitive abilities, such as speed of processing and linguistic closure, were predictive of keyword recognition for younger and older adults. Moreover, the effects of age interacted with the sex of the target talker, such that older adults had greater difficulty understanding target keywords from female talkers than male talkers.


Subject(s)
Motivation; Speech Perception; Aged; Auditory Perception; Female; Hearing Tests; Humans; Male; Recognition, Psychology
7.
J Acoust Soc Am ; 145(3): EL173, 2019 03.
Article in English | MEDLINE | ID: mdl-31067962

ABSTRACT

Envelope and periodicity cues may provide redundant, additive, or synergistic benefits to speech recognition. The contributions of these cues may change under different listening conditions and may differ for younger and older adults. To address these questions, younger and older adults with normal hearing listened to interrupted sentences containing different combinations of envelope and periodicity cues in quiet and with a competing talker. Envelope and periodicity cues improved speech recognition for both groups, and their benefits were additive when both cues were available. Envelope cues were particularly important for older adults and for sentences with a competing talker.


Subject(s)
Aging/physiology; Cues; Speech Perception; Adolescent; Adult; Aged; Aged, 80 and over; Female; Humans; Male; Middle Aged; Periodicity; Signal-To-Noise Ratio
8.
Hear Res ; 375: 25-33, 2019 04.
Article in English | MEDLINE | ID: mdl-30772133

ABSTRACT

The spectral (frequency) and amplitude cues in speech change rapidly over time. Study of the neural encoding of these dynamic features may help to improve diagnosis and treatment of speech-perception difficulties. This study uses tone glides as a simple approximation of dynamic speech sounds to better our understanding of the underlying neural representation of speech. The frequency following response (FFR) was recorded from 10 young normal-hearing adults using six signals varying in glide direction (rising and falling) and extent of frequency change (1/3, 2/3, and 1 octave). In addition, the FFR was simultaneously recorded using two different electrode montages (vertical and horizontal). These factors were analyzed across three time windows using a measure of response strength (signal-to-noise ratio) and a measure of temporal coherence (stimulus-to-response correlation coefficient). Results demonstrated effects of extent, montage, and a montage-by-window interaction. SNR and stimulus-to-response correlation measures differed in their sensitivity to these factors. These results suggest that the FFR reflects dynamic acoustic characteristics of simple tonal stimuli very well. Additional research is needed to determine how neural encoding may differ for more natural dynamic speech signals and populations with impaired auditory processing.
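A small sketch of how tone-glide stimuli like those described above can be synthesized: an exponential (constant octaves-per-second) frequency sweep over a given extent and direction, with raised-cosine onset and offset ramps. Start frequency, duration, sampling rate, and ramp length are assumptions, not the study's parameters.

```python
"""Hedged sketch: synthesize rising/falling tone glides of a given octave extent.

Start frequency, duration, sample rate, and ramp length are assumptions.
"""
import numpy as np

def tone_glide(f_start, octaves, direction, duration=0.15, fs=24000, ramp_ms=10):
    """Exponential glide spanning `octaves`; direction is 'rising' or 'falling'."""
    sign = 1.0 if direction == "rising" else -1.0
    t = np.arange(int(duration * fs)) / fs
    f_end = f_start * 2.0 ** (sign * octaves)
    # Instantaneous phase of an exponential sweep from f_start to f_end.
    k = (f_end / f_start) ** (1.0 / duration)
    phase = 2 * np.pi * f_start * (k ** t - 1.0) / np.log(k)
    x = np.sin(phase)
    # Raised-cosine onset/offset ramps.
    n_ramp = int(fs * ramp_ms / 1000)
    ramp = 0.5 * (1 - np.cos(np.pi * np.arange(n_ramp) / n_ramp))
    x[:n_ramp] *= ramp
    x[-n_ramp:] *= ramp[::-1]
    return x

for extent in (1 / 3, 2 / 3, 1.0):
    for direction in ("rising", "falling"):
        glide = tone_glide(f_start=400.0, octaves=extent, direction=direction)
        print(f"{direction} {extent:.2f}-octave glide: {len(glide)} samples")
```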


Subject(s)
Acoustic Stimulation/methods; Speech Perception/physiology; Adult; Electrodes; Electroencephalography/instrumentation; Electroencephalography/statistics & numerical data; Evoked Potentials, Auditory/physiology; Female; Humans; Male; Phonetics; Psychoacoustics; Signal-To-Noise Ratio; Young Adult
9.
J Acoust Soc Am ; 144(1): 267, 2018 07.
Article in English | MEDLINE | ID: mdl-30075693

ABSTRACT

In realistic listening environments, speech perception requires grouping together audible fragments of speech, filling in missing information, and segregating the glimpsed target from the background. The purpose of this study was to determine the extent to which age-related difficulties with these tasks can be explained by declines in glimpsing, phonemic restoration, and/or speech segregation. Younger and older adults with normal hearing listened to sentences interrupted with silence or envelope-modulated noise, presented either in quiet or with a competing talker. Older adults were poorer than younger adults at recognizing keywords based on short glimpses but benefited more when envelope-modulated noise filled silent intervals. Recognition declined with a competing talker, but this effect did not interact with age. Results of cognitive tasks indicated that faster processing speed and better visual-linguistic closure were predictive of better speech understanding. Taken together, these results suggest that age-related declines in speech recognition may be partially explained by difficulty grouping short glimpses of speech into a coherent message.


Subject(s)
Age Factors; Hearing/physiology; Speech Perception/physiology; Speech/physiology; Acoustic Stimulation/methods; Aged; Aged, 80 and over; Auditory Perception/physiology; Cognition/physiology; Comprehension/physiology; Female; Hearing Tests; Humans; Male; Middle Aged; Noise; Perceptual Masking/physiology
10.
J Acoust Soc Am ; 141(2): 1133, 2017 02.
Article in English | MEDLINE | ID: mdl-28253707

ABSTRACT

Fluctuating noise, common in everyday environments, has the potential to mask acoustic cues important for speech recognition. This study examined the extent to which acoustic cues for perception of vowels and stop consonants differ in their susceptibility to simultaneous and forward masking. Younger normal-hearing, older normal-hearing, and older hearing-impaired adults identified initial and final consonants or vowels in noise-masked syllables that had been spectrally shaped. The amount of shaping was determined by subjects' audiometric thresholds. A second group of younger adults with normal hearing was tested with spectral shaping determined by the mean audiogram of the hearing-impaired group. Stimulus timing ensured that the final 10, 40, or 100 ms of the syllable occurred after the masker offset. Results demonstrated that participants benefited from short temporal delays between the noise and speech for vowel identification, but required longer delays for stop consonant identification. Older adults with normal and impaired hearing, with sufficient audibility, required longer delays to obtain performance equivalent to that of the younger adults. Overall, these results demonstrate that in forward masking conditions, younger listeners can successfully identify vowels during short temporal intervals (i.e., one unmasked pitch period), with longer durations required for consonants and for older adults.
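A sketch of the stimulus timing described above: the syllable is aligned with the noise so that only its final portion (e.g., 10, 40, or 100 ms) extends past the masker offset, leaving that tail subject to forward rather than simultaneous masking. The signal contents and levels are placeholders; only the timing arithmetic is the point.

```python
"""Hedged sketch: align a syllable with a masker so its final N ms is unmasked.

Only the timing arithmetic is intended to reflect the design; signals and
levels are placeholders.
"""
import numpy as np

def place_syllable_after_masker(syllable, masker, fs, tail_ms):
    """Mix syllable and masker so the syllable's last `tail_ms` follows masker offset."""
    tail_n = int(fs * tail_ms / 1000)
    onset = len(masker) - (len(syllable) - tail_n)   # syllable start sample
    if onset < 0:
        raise ValueError("Masker too short for this syllable/tail combination.")
    total = len(masker) + tail_n
    mix = np.zeros(total)
    mix[:len(masker)] += masker
    mix[onset:onset + len(syllable)] += syllable
    return mix, onset

fs = 16000
syllable = 0.1 * np.random.randn(int(0.3 * fs))      # 300 ms placeholder "syllable"
masker = 0.2 * np.random.randn(int(0.5 * fs))        # 500 ms noise masker

for tail_ms in (10, 40, 100):
    mix, onset = place_syllable_after_masker(syllable, masker, fs, tail_ms)
    print(f"{tail_ms} ms tail: syllable onset at {onset / fs * 1000:.0f} ms, "
          f"total duration {len(mix) / fs * 1000:.0f} ms")
```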


Subject(s)
Aging/psychology; Cues; Noise/adverse effects; Perceptual Masking; Persons With Hearing Impairments/psychology; Speech Acoustics; Speech Perception; Acoustic Stimulation; Adolescent; Adult; Age Factors; Aged; Aged, 80 and over; Audiometry, Pure-Tone; Audiometry, Speech; Auditory Threshold; Female; Humans; Male; Middle Aged; Time Factors; Young Adult
11.
J Speech Lang Hear Res ; 59(5): 1198-1207, 2016 10 01.
Article in English | MEDLINE | ID: mdl-27603264

ABSTRACT

Purpose: This study investigated how listeners process acoustic cues preserved during sentences interrupted by nonsimultaneous noise that was amplitude modulated by a competing talker. Method: Younger adults with normal hearing and older adults with normal or impaired hearing listened to sentences with consonants or vowels replaced by noise that was amplitude modulated by a competing talker. Sentences were spectrally shaped according to individual audiograms, or, for a younger spectrally shaped control group, according to the mean audiogram of the listeners with hearing impairment. The modulation spectrum of the noise was low-pass filtered at different modulation cutoff frequencies. The effect of noise level was also examined. Results: Performance declined when nonsimultaneous masker modulation included faster rates and was maximized when masker modulation matched the preserved primary speech modulation. Vowels resulted in better performance than consonants at slower modulation cutoff rates, likely due to suprasegmental features. Poorer overall performance was observed with increased age or hearing loss and for listeners who received spectrally shaped speech. Conclusions: Nonsimultaneous amplitude modulations from a competing talker significantly interacted with the preserved speech segment, and additional effects of listener age and hearing loss were observed. Importantly, listeners may obtain benefit from nonsimultaneous competing modulations when they match the preserved modulations of the sentence.
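A sketch of the masker construction described above: extract the broadband amplitude envelope of a competing-talker signal, low-pass filter it at a chosen modulation cutoff frequency, and impose it on noise. The envelope-extraction method, filter order, cutoff values, and placeholder "talker" signal are assumptions for illustration.

```python
"""Hedged sketch: noise amplitude-modulated by a low-pass-filtered talker envelope.

Hilbert-transform envelope extraction, a 4th-order Butterworth modulation
low-pass, and the cutoff values are illustrative assumptions.
"""
import numpy as np
from scipy.signal import hilbert, butter, sosfiltfilt

def modulated_noise(talker, fs, mod_cutoff_hz, rng=None):
    """Return noise carrying the talker's envelope, low-pass filtered in modulation."""
    rng = rng or np.random.default_rng(2)
    envelope = np.abs(hilbert(talker))                  # broadband amplitude envelope
    sos = butter(4, mod_cutoff_hz, btype="low", fs=fs, output="sos")
    envelope = np.maximum(sosfiltfilt(sos, envelope), 0.0)
    noise = rng.standard_normal(len(talker))
    return envelope * noise

fs = 16000
t = np.arange(2 * fs) / fs
# Placeholder "competing talker": a tone with speech-like slow amplitude fluctuation.
talker = np.sin(2 * np.pi * 150 * t) * (0.5 + 0.5 * np.sin(2 * np.pi * 3 * t))

for cutoff in (2, 8, 32):                               # example modulation cutoffs (Hz)
    masker = modulated_noise(talker, fs, cutoff)
    print(f"{cutoff} Hz cutoff: masker RMS = {np.sqrt(np.mean(masker ** 2)):.3f}")
```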


Subject(s)
Aging/psychology; Hearing Loss/psychology; Noise; Perceptual Masking; Speech Perception; Acoustic Stimulation/methods; Adolescent; Aged; Aged, 80 and over; Analysis of Variance; Hearing Tests; Humans; Middle Aged; Young Adult
12.
J Acoust Soc Am ; 137(6): 3487-501, 2015 Jun.
Article in English | MEDLINE | ID: mdl-26093436

ABSTRACT

This study investigated how single-talker modulated noise impacts consonant and vowel cues to sentence intelligibility. Younger normal-hearing, older normal-hearing, and older hearing-impaired listeners completed speech recognition tests. All listeners received spectrally shaped speech matched to their individual audiometric thresholds to ensure sufficient audibility, with the exception of a second younger listener group who received spectral shaping that matched the mean audiogram of the hearing-impaired listeners. Results demonstrated minimal declines in intelligibility for older listeners with normal hearing and more evident declines for older hearing-impaired listeners, possibly related to impaired temporal processing. A correlational analysis suggests a common underlying ability to process information during vowels that is predictive of speech-in-modulated-noise abilities, whereas the ability to use consonant cues appears specific to the particular characteristics of the noise and interruption. Performance declines for older listeners were mostly confined to consonant conditions. Spectral shaping accounted for the primary contributions of audibility. However, comparison with the younger spectral controls, who received identical spectral shaping, suggests that this procedure may reduce wideband temporal modulation cues because frequency-specific amplification affected high-frequency consonants more than low-frequency vowels. These spectral changes may impact speech intelligibility in certain modulation masking conditions.
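To make "spectrally shaped speech matched to audiometric thresholds" concrete, here is a hedged sketch of one generic approach: interpolate a frequency-specific gain function from audiogram thresholds and apply it in the frequency domain. The gain rule (a fixed fraction of the dB HL threshold at each audiometric frequency) and the example audiogram are assumptions, not the shaping prescription used in the study.

```python
"""Hedged sketch: frequency-specific gain derived from an audiogram, applied via FFT.

The gain rule (half of the dB HL threshold at each audiometric frequency) and the
example audiogram are illustrative assumptions, not the study's prescription.
"""
import numpy as np

def spectrally_shape(signal, fs, audiogram_freqs, audiogram_thresholds_db,
                     gain_fraction=0.5):
    """Apply interpolated, frequency-specific gain (in dB) to a signal."""
    spectrum = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    gains_db = gain_fraction * np.interp(freqs, audiogram_freqs,
                                         audiogram_thresholds_db)
    spectrum *= 10.0 ** (gains_db / 20.0)
    return np.fft.irfft(spectrum, n=len(signal))

# Example sloping audiogram (dB HL) at standard audiometric frequencies.
audiogram_freqs = [250, 500, 1000, 2000, 4000, 8000]
audiogram_thresholds_db = [15, 15, 20, 35, 55, 65]

fs = 16000
speech = np.random.randn(fs)                    # placeholder for a speech signal
shaped = spectrally_shape(speech, fs, audiogram_freqs, audiogram_thresholds_db)
print(f"RMS before: {np.sqrt(np.mean(speech ** 2)):.2f}, "
      f"after shaping: {np.sqrt(np.mean(shaped ** 2)):.2f}")
```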


Subject(s)
Aging/psychology; Noise/adverse effects; Perceptual Masking; Persons With Hearing Impairments/psychology; Presbycusis/psychology; Speech Acoustics; Speech Intelligibility; Speech Perception; Voice Quality; Acoustic Stimulation; Acoustics; Adolescent; Adult; Age Factors; Aged; Aged, 80 and over; Audiometry, Pure-Tone; Audiometry, Speech; Auditory Threshold; Cues; Humans; Middle Aged; Presbycusis/diagnosis; Signal Processing, Computer-Assisted; Sound Spectrography; Time Factors; Young Adult
13.
J Acoust Soc Am ; 134(4): EL352-8, 2013 Oct.
Article in English | MEDLINE | ID: mdl-24116542

ABSTRACT

Perceived listening effort was assessed for a monaural irregular-rhythm detection task while competing signals were presented to the contralateral ear. When speech was the competing signal, listeners reported greater listening effort compared to either contralateral steady-state noise or no competing signal. Behavioral thresholds for irregular-rhythm detection were unaffected by competing speech, indicating that listeners compensated for this competing signal with effortful listening. These results suggest that perceived listening effort may be associated with suppression of task-irrelevant information, even for conditions where informational masking and competition for linguistic processing resources would not be expected.
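A sketch of how regular and irregular rhythm stimuli for such a detection task might be generated: a train of brief tone bursts at a fixed inter-onset interval, with the "irregular" version jittering the burst onsets. Burst parameters, interval, and jitter size are assumptions for illustration, not the study's stimuli.

```python
"""Hedged sketch: regular vs. irregular tone-burst rhythms for a detection task.

Burst frequency/duration, inter-onset interval, and jitter are assumptions.
"""
import numpy as np

def rhythm(n_bursts=8, ioi_ms=250, jitter_ms=0, fs=16000,
           burst_ms=20, freq=1000, rng=None):
    """Tone-burst train; a nonzero jitter_ms randomly perturbs each onset."""
    rng = rng or np.random.default_rng(3)
    burst_n = int(fs * burst_ms / 1000)
    t = np.arange(burst_n) / fs
    burst = np.sin(2 * np.pi * freq * t) * np.hanning(burst_n)
    total_n = int(fs * (n_bursts * ioi_ms + jitter_ms + burst_ms) / 1000)
    out = np.zeros(total_n)
    for i in range(n_bursts):
        onset_ms = i * ioi_ms + rng.uniform(-jitter_ms, jitter_ms)
        start = max(0, int(fs * onset_ms / 1000))
        out[start:start + burst_n] += burst
    return out

regular = rhythm(jitter_ms=0)
irregular = rhythm(jitter_ms=60)    # the listener's task: detect this irregularity
print(len(regular), len(irregular))
```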


Subject(s)
Noise/adverse effects; Perceptual Masking; Pitch Perception; Speech Perception; Speech; Acoustic Stimulation; Adolescent; Adult; Auditory Threshold; Female; Humans; Male; Pattern Recognition, Physiological; Signal Detection, Psychological; Young Adult