Results 1 - 5 of 5
1.
Int J Audiol; 62(11): 1067-1075, 2023 Nov.
Article in English | MEDLINE | ID: mdl-36285707

ABSTRACT

OBJECTIVE: Working memory refers to a cognitive system that holds a limited amount of information in a temporarily heightened state of availability for use in ongoing cognitive tasks. Research suggests a link between working memory and speech recognition. In this study, we investigated this relationship using two working memory tests that differed in how the link between working memory and attention was operationalised: the auditory visual divided attention test (AVDAT) and the widely used reading span test.
DESIGN: The relationship between speech-in-noise recognition and working memory was examined for two working memory tests that differed in methodological and theoretical respects, using a within-subject design.
STUDY SAMPLE: Nineteen hearing-impaired older listeners participated.
RESULTS: We found a strong link between the reading span test and speech-in-noise recognition and a less robust link between the AVDAT and speech-in-noise recognition. The new AVDAT measure also provided evidence for a role of selective attention in speech-in-noise recognition.
CONCLUSION: Our findings suggest that the strength of the relationship between speech-in-noise recognition and working memory may depend on how closely the demands and stimuli of the working memory test match those of the speech-in-noise task (see the sketch following this entry).


Subject(s)
Hearing Loss; Speech Perception; Humans; Memory, Short-Term; Speech; Noise/adverse effects; Hearing Loss/diagnosis; Hearing
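The "link" reported in entry 1 is, at its simplest, a correlation between a working memory score and a speech-in-noise recognition score. The sketch below shows one way such a correlation could be computed; the sample size matches the abstract, but the simulated data, variable names, and the choice of a Pearson correlation are illustrative assumptions, not the authors' actual analysis.

    # Hypothetical sketch only: the data and the choice of a Pearson correlation
    # are assumptions for illustration, not the study's data or analysis.
    import numpy as np
    from scipy.stats import pearsonr

    rng = np.random.default_rng(0)
    n_listeners = 19                                  # sample size reported in the abstract
    reading_span = rng.normal(50, 10, n_listeners)    # placeholder reading span scores (% correct)
    speech_in_noise = 0.6 * reading_span + rng.normal(0, 8, n_listeners)  # placeholder recognition scores

    r, p = pearsonr(reading_span, speech_in_noise)
    print(f"reading span vs. speech-in-noise: r = {r:.2f}, p = {p:.3f}")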
2.
Int J Audiol; 60(2): 140-150, 2021 Feb.
Article in English | MEDLINE | ID: mdl-32972283

ABSTRACT

OBJECTIVE: The goal of this study was to assess recognition of foreign-accented speech of varying intelligibility and linguistic complexity in older adults. It is important to understand the factors that influence recognition of this commonly encountered type of speech in a population that remains understudied in this regard.
DESIGN: A repeated-measures design was used. Listeners repeated back linguistically simple and complex sentences heard in noise. The sentences were produced by three talkers of varying intelligibility: one native American English talker, one foreign-accented talker of high intelligibility, and one foreign-accented talker of low intelligibility. Percentage word recognition in sentences was measured.
STUDY SAMPLE: Twenty-five older listeners with a range of hearing thresholds participated.
RESULTS: We found a robust interaction between talker intelligibility and linguistic complexity. Recognition accuracy was higher for simple than for complex sentences, but only for the native and the high-intelligibility foreign-accented talkers. This pattern remained after effects of working memory capacity and hearing acuity were taken into consideration.
CONCLUSION: Older listeners exhibit qualitatively different speech processing strategies for low- versus high-intelligibility foreign-accented talkers. Differences in recognition accuracy for words presented in simple versus complex sentence contexts emerged only for speech above a threshold of intelligibility.


Subject(s)
Hearing Loss; Speech Perception; Aged; Humans; Linguistics; Noise/adverse effects; Speech; Speech Intelligibility
3.
J Acoust Soc Am; 147(6): 3765, 2020 Jun.
Article in English | MEDLINE | ID: mdl-32611135

ABSTRACT

Foreign-accented speech recognition is typically tested with linguistically simple materials, which offer a limited window into realistic speech processing. The present study examined the relationship between linguistic structure and talker intelligibility in several sentence-in-noise recognition experiments. Listeners transcribed simple/short and more complex/longer sentences embedded in noise. The sentences were spoken by three talkers of varying intelligibility: one native English speaker and two non-native English speakers of high and low intelligibility. The effect of linguistic structure on sentence recognition accuracy was modulated by talker intelligibility: accuracy decreased with increasing complexity for the native and high-intelligibility foreign-accented talkers, whereas no such effect was found for the low-intelligibility foreign-accented talker. This pattern emerged across conditions: at low and high signal-to-noise ratios, with mixed and blocked stimulus presentation, and when a major cue to prosodic structure, the natural pitch contour of the sentences, was removed. Moreover, the pattern generalized to a different set of three talkers matched in intelligibility to the original talkers. Taken together, the results suggest that listeners employ qualitatively different speech processing strategies for low- versus high-intelligibility foreign-accented talkers, with sentence-level linguistic factors emerging only for speech above a threshold of intelligibility. Findings are discussed in the context of alternative accounts (see the analysis sketch following this entry).


Subject(s)
Speech Perception; Speech; Linguistics; Noise/adverse effects; Recognition, Psychology; Speech Intelligibility
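Entries 2 and 3 both hinge on an interaction between talker intelligibility and sentence complexity. The sketch below illustrates, under stated assumptions, how such an interaction might be tested with a linear mixed-effects model (listener as a random effect) using statsmodels; the simulated data, column names, and model form are placeholders, not the authors' analysis.

    # Hedged sketch: simulated data and an assumed model form, for illustration only.
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(1)
    rows = []
    for listener in [f"L{i:02d}" for i in range(20)]:
        for talker in ["native", "high_intel", "low_intel"]:
            for complexity in ["simple", "complex"]:
                base = {"native": 80, "high_intel": 70, "low_intel": 40}[talker]
                # Simulate the reported pattern: a complexity penalty only for
                # the two more intelligible talkers.
                penalty = 10 if (complexity == "complex" and talker != "low_intel") else 0
                rows.append({"listener": listener, "talker": talker, "complexity": complexity,
                             "accuracy": base - penalty + rng.normal(0, 5)})
    df = pd.DataFrame(rows)

    # Random intercept per listener; the talker:complexity terms carry the interaction.
    model = smf.mixedlm("accuracy ~ talker * complexity", df, groups=df["listener"])
    print(model.fit().summary())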
4.
Ear Hear; 40(6): 1280-1292, 2019.
Article in English | MEDLINE | ID: mdl-30998547

ABSTRACT

OBJECTIVES: Previous work has suggested that individual characteristics, including amount of hearing loss, age, and working memory ability, may affect response to hearing aid signal processing. The present study extends work that used metrics to quantify cumulative signal modification under simulated conditions to real hearing aids worn in everyday listening environments. Specifically, the goal was to determine whether individual factors such as working memory, age, and degree of hearing loss help explain how listeners respond to the signal modification caused by signal processing in real hearing aids, worn in the listener's everyday environment over a period of time.
DESIGN: Participants were older adults (age range 54-90 years) with symmetrical mild-to-moderate sensorineural hearing loss. We contrasted two distinct hearing aid fittings, one designated as mild signal processing and one as strong signal processing. Forty-nine older adults were enrolled in the study, and 35 participants had valid outcome data for both hearing aid fittings. The two settings differed in their wide dynamic range compression and frequency compression features (see the sketch following this entry). The order of fittings was randomly assigned for each participant, and each fitting was worn in the listener's everyday environments for approximately 5 weeks before outcome measurement. The trial was double blind, with neither the participant nor the tester aware of the specific fitting at the time of outcome testing. Baseline measures included a full audiometric evaluation as well as measures of working memory and of spectral and temporal resolution. The outcome measure was aided speech recognition in noise.
RESULTS: The two hearing aid fittings produced different amounts of signal modification, with significantly less modification for the mild signal processing fitting. The effect of signal processing on speech intelligibility depended on an individual's age, working memory capacity, and degree of hearing loss. Speech recognition with the strong signal processing fitting decreased with increasing age. Working memory interacted with signal processing: individuals with lower working memory showed low speech intelligibility in noise in both processing conditions, whereas individuals with higher working memory showed better speech intelligibility in noise with the mild signal processing fitting. Amount of hearing loss also interacted with signal processing, but the effects were small. Individual spectral and temporal resolution did not contribute significantly to the variance in speech intelligibility scores.
CONCLUSIONS: When the consequences of a specific set of hearing aid signal processing characteristics were quantified in terms of overall signal modification, there was a relationship between participant characteristics and recognition of speech at different levels of signal modification. Because the hearing aid fittings were constrained to specific parameters representing the extremes of the signal modification that might occur in clinical fittings, future work should examine similar relationships with more diverse signal processing parameters.


Subject(s)
Hearing Aids; Hearing Loss, Sensorineural/rehabilitation; Signal Processing, Computer-Assisted; Age Factors; Aged; Aged, 80 and over; Cross-Over Studies; Data Compression; Double-Blind Method; Female; Humans; Male; Memory, Short-Term; Middle Aged; Noise; Severity of Illness Index; Speech Perception
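Entry 4 contrasts fittings that differ in wide dynamic range compression (WDRC), among other features. As background, the sketch below implements a minimal static WDRC input/output rule; the threshold, ratio, and gain values are arbitrary placeholders and do not describe the hearing aids or fittings used in the study.

    # Minimal static WDRC gain rule (illustrative values only, not the study's fittings).
    import numpy as np

    def wdrc_gain_db(input_level_db, threshold_db=50.0, ratio=2.0, linear_gain_db=20.0):
        """Gain in dB applied to a signal at a given input level in dB SPL.

        Below the compression threshold the gain is constant; above it, each
        1 dB increase in input yields only 1/ratio dB increase in output.
        """
        level = np.asarray(input_level_db, dtype=float)
        excess = np.maximum(level - threshold_db, 0.0)
        return linear_gain_db - excess * (1.0 - 1.0 / ratio)

    for level in (40, 50, 60, 70, 80):
        print(f"input {level} dB SPL -> gain {float(wdrc_gain_db(level)):.1f} dB")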
5.
Atten Percept Psychophys; 80(1): 222-241, 2018 Jan.
Article in English | MEDLINE | ID: mdl-28975549

ABSTRACT

Recent evidence has shown that nonlinguistic sounds co-occurring with spoken words may be retained in memory and affect later retrieval of the words. This sound-specificity effect shares many characteristics with the classic voice-specificity effect. In this study, we argue that the sound-specificity effect is conditional upon the context in which the word and sound coexist. Specifically, we argue that, besides co-occurrence, integrality between words and sounds is a crucial factor in the emergence of the effect. In two recognition-memory experiments, we compared the emergence of voice- and sound-specificity effects. In Experiment 1, we examined two conditions in which integrality was high: the classic voice-specificity effect (Exp. 1a) was compared with a condition in which the intensity envelope of a background sound was modulated to follow the intensity envelope of the accompanying spoken word (Exp. 1b). Results revealed a robust voice-specificity effect and, critically, a comparable sound-specificity effect: a change in the paired sound from exposure to test led to a decrease in word-recognition performance (see the sketch following this entry). In the second experiment, we sought to disentangle the contribution of integrality from a mere co-occurrence context effect by removing the intensity modulation. The absence of integrality led to the disappearance of the sound-specificity effect. Taken together, the results suggest that the assimilation of background sounds into memory cannot be reduced to a simple context effect. Rather, it is conditioned by the extent to which words and sounds are perceived as integral rather than as distinct auditory objects.


Subject(s)
Sound; Speech Perception/physiology; Verbal Behavior/physiology; Voice; Adolescent; Adult; Female; Humans; Male; Memory; Young Adult
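Entry 5 measures word recognition in an old/new memory paradigm. One conventional way to score such data is d-prime computed from hit and false-alarm rates; the abstract does not state which measure the authors used, so the sketch below is a general illustration with made-up rates and trial counts.

    # Hedged sketch: a standard d-prime computation, not necessarily the authors' measure.
    from scipy.stats import norm

    def d_prime(hit_rate, fa_rate, n_old, n_new):
        # Log-linear correction keeps rates of exactly 0 or 1 off the edges of the z scale.
        h = (hit_rate * n_old + 0.5) / (n_old + 1)
        f = (fa_rate * n_new + 0.5) / (n_new + 1)
        return norm.ppf(h) - norm.ppf(f)

    # Made-up rates comparing same-sound and changed-sound test conditions.
    print("same sound:    d' =", round(d_prime(0.80, 0.20, 60, 60), 2))
    print("changed sound: d' =", round(d_prime(0.72, 0.20, 60, 60), 2))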