Results 1 - 14 of 14
1.
Front Psychol ; 13: 935475, 2022.
Article in English | MEDLINE | ID: mdl-35992450

ABSTRACT

Word-in-noise identification is facilitated by acoustic differences between the target and competing sounds and by a temporal separation between the onset of the masker and that of the target. Younger and older adults are able to take advantage of an onset delay when the masker is dissimilar (Noise) to the target word, but only younger adults are able to do so when the masker is similar (Babble). We examined the neural underpinning of this age difference using cortical evoked responses to words masked by either Babble or Noise when the masker preceded the target word by 100 or 600 ms in younger and older adults, after adjusting the signal-to-noise ratios (SNRs) to equate behavioural performance across age groups and conditions. For the 100 ms onset delay, the word in noise elicited an acoustic change complex (ACC) response that was comparable in younger and older adults. For the 600 ms onset delay, the ACC was modulated by both masker type and age. In older adults, the ACC to a word in babble was not affected by the increase in onset delay, whereas younger adults showed a benefit from longer delays. Hence, the age difference in sensitivity to temporal delay is indexed by early activity in the auditory cortex. These results are consistent with the hypothesis that an increase in onset delay improves stream segregation in younger adults in both noise and babble, but only in noise for older adults, and that this change in stream segregation is evident in early cortical processes.

2.
Front Psychol ; 13: 838576, 2022.
Article in English | MEDLINE | ID: mdl-35369266

ABSTRACT

One aspect of auditory scenes that has received very little attention is the level of diffuseness of sound sources. This aspect is of increasing importance due to the growing use of amplification systems. When an auditory stimulus is amplified and presented over multiple, spatially separated loudspeakers, the signal's timbre is altered due to comb filtering. In a previous study, we examined how increasing the diffuseness of the sound sources might affect listeners' ability to recognize speech presented in different types of background noise. Listeners performed similarly when the target and the masker were presented over the same number of loudspeakers. However, performance improved when the target was presented over a single loudspeaker (compact) and the masker over three spatially separated loudspeakers (diffuse), but worsened when the target was diffuse and the masker was compact. In the current study, we extended this research to examine whether the effect of timbre changes with age and linguistic experience. Twenty-four older adults whose first language was English (Old-EFLs) and 24 younger adults whose second language was English (Young-ESLs) were asked to repeat nonsense sentences masked by either Noise, Babble, or Speech, and their results were compared with those of the Young-EFLs tested previously. Participants were divided into two experimental groups: (1) a Compact-Target group, in which the target sentences were presented over a single loudspeaker while the masker was presented over either three loudspeakers or a single loudspeaker; (2) a Diffuse-Target group, in which the target sentences were diffuse while the masker was either compact or diffuse. The results indicate that target timbre has a negligible effect on thresholds when it matches the masker timbre, in all three groups. When there is a timbre contrast between target and masker, thresholds are significantly lower when the target is compact than when it is diffuse for all three listening groups in a Noise background. However, while this difference is maintained for the Young- and Old-EFLs when the masker is Babble or Speech, speech reception thresholds in the Young-ESL group tend to be equivalent for all four combinations of target and masker timbre.

3.
Atten Percept Psychophys ; 82(3): 1443-1458, 2020 Jun.
Article in English | MEDLINE | ID: mdl-31410762

ABSTRACT

When amplification is used, sound sources are often presented over multiple loudspeakers, which can alter their timbre and introduce comb-filtering effects. Increasing the diffuseness of a sound by presenting it over spatially separated loudspeakers might affect listeners' ability to form a coherent auditory image of it, alter its perceived spatial position, and even affect the extent to which it competes for the listener's attention. In addition, it can lead to comb-filtering effects that alter the spectral profiles of the sounds arriving at the ears. It is important to understand how these changes affect speech perception. In this study, young adults were asked to repeat nonsense sentences presented in either noise, babble, or speech. Participants were divided into two groups: (1) a Compact-Target Timbre group, in which the target sentences were presented over a single loudspeaker (compact target) while the masker was presented either over three loudspeakers (diffuse) or over a single loudspeaker (compact); (2) a Diffuse-Target Timbre group, in which the target sentences were diffuse while the masker was either compact or diffuse. Timbre had no significant effect in the absence of a timbre contrast between target and masker. However, when there was a timbre contrast, the signal-to-noise ratios needed for 50% correct recognition of the target speech were higher (worse) when the masker was compact, and lower (better) when the target was compact. These results were consistent with the expected effects of comb filtering and could also reflect a tendency for attention to be drawn towards compact sound sources.


Subject(s)
Speech Perception; Humans; Noise; Perceptual Masking; Speech
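
The comb-filtering effect invoked in the two abstracts above can be illustrated with a short numerical sketch. The sketch below is not taken from the cited studies; the sample rate, the delay, and the use of white noise in place of speech are illustrative assumptions.

```python
# A minimal sketch (not from the cited studies) of the comb-filtering effect:
# the same signal arriving from two loudspeakers at slightly different distances
# sums with a delayed copy of itself, carving periodic notches into the spectrum.
import numpy as np

fs = 44100                                   # sample rate in Hz (assumed)
delay_samples = 44                           # ~1 ms extra path from the second speaker (assumed)

rng = np.random.default_rng(0)
signal = rng.standard_normal(fs)             # 1 s of white noise as a stand-in for speech

delayed = np.concatenate([np.zeros(delay_samples), signal])[: len(signal)]
mixture = signal + delayed                   # direct copy plus delayed copy at one ear

freqs = np.fft.rfftfreq(len(signal), d=1 / fs)
spec_single = 20 * np.log10(np.abs(np.fft.rfft(signal)) + 1e-12)
spec_mix = 20 * np.log10(np.abs(np.fft.rfft(mixture)) + 1e-12)

first_notch = fs / (2 * delay_samples)       # deepest attenuation at odd multiples of this frequency
idx = int(np.argmin(np.abs(freqs - first_notch)))
print(f"Near {first_notch:.0f} Hz: single source {spec_single[idx]:.1f} dB, "
      f"two sources {spec_mix[idx]:.1f} dB (comb-filter notch)")
```

Running the sketch shows a deep attenuation near the first notch frequency when the delayed copy is added, which is the spectral alteration the abstracts attribute to diffuse presentation.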
4.
J Speech Lang Hear Res ; 63(1): 345-356, 2020 01 22.
Article in English | MEDLINE | ID: mdl-31851858

ABSTRACT

Purpose: This study tested the effects of background speech babble on novel word learning in preschool children with a multisession paradigm. Method: Eight 3-year-old children were exposed to a total of 8 novel word-object pairs across 2 story books presented digitally. Each story contained 4 novel consonant-vowel-consonant nonwords. Children were exposed to both stories, one in quiet and one in the presence of 4-talker babble presented at a 0-dB signal-to-noise ratio. After each story, children's learning was tested with a referent selection task and a verbal recall (naming) task. Children were exposed to and tested on the novel word-object pairs on 5 separate days within a 2-week span. Results: A significant main effect of session was found for both referent selection and verbal recall. There was also a significant main effect of exposure condition on referent selection performance, with more referents correctly selected for word-object pairs presented in quiet than for pairs presented in speech babble. Finally, children's verbal recall of novel words was statistically better than baseline performance (i.e., 0%) on Sessions 3-5 for words exposed in quiet, but only on Session 5 for words exposed in speech babble. Conclusions: These findings suggest that background speech babble at a 0-dB signal-to-noise ratio disrupts novel word learning in preschool-age children. As a result, children may need more time and more exposures to a novel word before they can recognize or verbally recall it.


Subject(s)
Perceptual Masking; Phonetics; Signal-To-Noise Ratio; Speech Perception; Verbal Learning; Child, Preschool; Female; Humans; Language Tests; Male; Mental Recall
5.
Int J Audiol ; 59(3): 195-207, 2020 03.
Article in English | MEDLINE | ID: mdl-31663391

ABSTRACT

Objective: To understand communication difficulties related to tinnitus by identifying tinnitus-related differences in the perception of spoken emotions, focusing on the roles of semantics (words), prosody (tone of speech), and their interaction. Study sample and design: Twenty-two people with tinnitus (PwT) and 24 people without tinnitus (PnT) listened to spoken sentences made of different combinations of four discrete emotions (anger, happiness, sadness, neutral) presented in the prosody and semantics (Test for Rating Emotions in Speech). In separate blocks, listeners were asked to attend to the sentence as a whole, integrating both speech channels (gauging integration), or to focus on one channel only (gauging identification and selective attention). Their task was to rate how much they agreed that the sentence conveyed each of the predefined emotions. Results: Both groups identified emotions similarly and performed with similar failures of selective attention. Group differences were found in the integration of channels: PnT showed a bias towards prosody, whereas PwT weighed both channels equally. Conclusions: Tinnitus appears to impact the integration of the prosodic and semantic channels. Three possible sources are suggested: (a) sensory: tinnitus may reduce prosodic cues; (b) cognitive: a tinnitus-related reduction in cognitive processing.


Subject(s)
Emotions; Semantics; Speech Perception; Tinnitus/psychology; Adult; Attention; Comprehension; Cues; Female; Humans; Language; Male; Middle Aged; Speech; Task Performance and Analysis
6.
Atten Percept Psychophys ; 80(1): 242-261, 2018 Jan.
Article in English | MEDLINE | ID: mdl-29039045

ABSTRACT

We examined how the type of masker presented in the background affected the extent to which visual information enhanced speech recognition, and whether the effect was dependent on or independent of age and linguistic competence. In the present study, young speakers of English as a first language (YEL1) and English as a second language (YEL2), as well as older speakers of English as a first language (OEL1), were asked to complete an audio (A) and an audiovisual (AV) speech recognition task in which they listened to anomalous target sentences presented against a background of one of three masker types (noise, babble, and competing speech). All three main effects were found to be statistically significant (group, masker type, A vs. AV presentation type). Interesting two-way interactions were found between masker type and group and between masker type and presentation type; however, no interactions were found between group (age and/or linguistic competence) and presentation type (A vs. AV). The results of this study, while they shed light on the effect of masker type on the AV advantage, suggest that age and linguistic competence have no significant effects on the extent to which a listener is able to use visual information to improve speech recognition in background noise.


Subject(s)
Acoustic Stimulation/methods; Linguistics; Perceptual Masking/physiology; Photic Stimulation/methods; Speech Perception/physiology; Age Factors; Aged; Audiovisual Aids; Female; Humans; Language; Male; Noise; Young Adult
7.
Hear Res ; 341: 9-18, 2016 11.
Article in English | MEDLINE | ID: mdl-27496539

ABSTRACT

Background noise has a greater adverse effect on word recognition when people are listening in their second language (L2) than in their first language (L1). The present study investigates the extent to which linguistic experience affects the ability of L2 listeners to benefit from a delay between the onset of a masker and the onset of a word. In a previous study (Ben-David, Tse & Schneider, 2012), word recognition thresholds for young L1 listeners were found to improve as the delay between the onset of a masker (either a stationary noise or a babble of voices) and the onset of a word increased. The investigators interpreted this result as reflecting the ability of L1 listeners to rapidly segregate the target words from a masker. Given that stream segregation depends, in part, on top-down knowledge-driven processes, we might expect stream segregation to be more "sluggish" for L2 listeners than for L1 listeners, especially when the masker consists of a babble of L2 voices. In the present study, we compared the ability of native English speakers and of listeners with either recent or long-term immersion in English as an L2 to benefit from a delay between masker onset and word onset for English words. Results show that thresholds were higher for the two L2 groups than for the L1 group. However, the rate at which word recognition improved with word-onset delay was unaffected by linguistic status, both when words were presented in noise and when they were presented in babble. Hence, for young listeners, stream segregation appears to be independent of linguistic status, suggesting that bottom-up sensory mechanisms play a large role in stream segregation in this paradigm. The implications of the failure of older L1 listeners (in Ben-David et al.) to benefit from a word-onset delay when the masker is a babble of voices are discussed.


Subject(s)
Language; Perceptual Masking; Speech Intelligibility; Speech Perception; Adolescent; Adult; Age Factors; Aged; Auditory Perception; Auditory Threshold; Female; Humans; Linguistics; Male; Noise; Psychometrics; Reproducibility of Results; Speech; Speech Reception Threshold Test; Time Factors; Young Adult
8.
Front Psychol ; 7: 618, 2016.
Article in English | MEDLINE | ID: mdl-27242569

ABSTRACT

The short-term memory performance of a group of younger adults for whom English was a second language (young EL2 listeners) was compared to that of younger and older adults for whom English was their first language (EL1 listeners). To-be-remembered words were presented in noise and in quiet. When words were presented in noise, the listening situation was adjusted to ensure that the likelihood of recognizing the individual words was comparable for all groups. Previous studies that used the same paradigm found the memory performance of older EL1 adults on this paired-associate task to be poorer than that of their younger EL1 counterparts, both in quiet and in a background of babble. The purpose of the present study was to investigate whether the less well-established semantic and linguistic skills of EL2 listeners would also lead to memory deficits, even after equating for word recognition as was done for the younger and older EL1 listeners. No significant differences in memory performance were found between young EL1 and EL2 listeners after equating for word recognition, indicating that the EL2 listeners' poorer semantic and linguistic skills had little effect on their ability to memorize and recall paired associates. This result is consistent with the hypothesis that age-related declines in memory are primarily due to age-related declines in the higher-order processes supporting stream segregation and episodic memory. Such declines are likely to increase the load on higher-order (and possibly limited) cognitive processes supporting memory. The problems that these results pose for the comprehension of spoken language in these three groups are discussed.

9.
Hear Res ; 331: 119-30, 2016 Jan.
Article in English | MEDLINE | ID: mdl-26560239

ABSTRACT

To recognize speech in a noisy auditory scene, listeners need to perceptually segregate the target talker's voice from other competing sounds (stream segregation). A number of studies have suggested that the attentional demands placed on listeners increase as the acoustic properties and informational content of the competing sounds become more similar to those of the target voice. Hence, we would expect attentional demands to be considerably greater when speech is masked by speech than when it is masked by steady-state noise. To investigate the role of attentional mechanisms in the unmasking of speech sounds, event-related potentials (ERPs) were recorded to a syllable masked by noise or competing speech under both active (the participant was asked to respond when the syllable was presented) and passive (no response was required) listening conditions. The results showed that the long-latency auditory response to a syllable (/bi/), presented at different signal-to-masker ratios (SMRs), was similar in the passive and active listening conditions when the masker was a steady-state noise. In contrast, when the masker was two-talker speech, a switch from the passive listening condition to the active one significantly enhanced the ERPs to the syllable. These results support the hypothesis that the need to engage attentional mechanisms in aid of scene analysis increases as the similarity (both acoustic and informational) between the target speech and the competing background sounds increases.


Subject(s)
Attention; Auditory Perception; Evoked Potentials; Speech Perception; Speech/physiology; Acoustic Stimulation; Adult; Algorithms; Female; Humans; Language; Male; Noise; Perceptual Masking; Phonetics; Psychoacoustics; Young Adult
10.
Exp Aging Res ; 42(1): 31-49, 2016.
Article in English | MEDLINE | ID: mdl-26683040

ABSTRACT

BACKGROUND/STUDY CONTEXT: Comprehending spoken discourse in noisy situations is likely to be more challenging to older adults than to younger adults due to potential declines in the auditory, cognitive, or linguistic processes supporting speech comprehension. These challenges might force older listeners to reorganize the ways in which they perceive and process speech, thereby altering the balance between the contributions of bottom-up versus top-down processes to speech comprehension. METHODS: The authors review studies that investigated the effect of age on listeners' ability to follow and comprehend lectures (monologues), and two-talker conversations (dialogues), and the extent to which individual differences in lexical knowledge and reading comprehension skill relate to individual differences in speech comprehension. Comprehension was evaluated after each lecture or conversation by asking listeners to answer multiple-choice questions regarding its content. RESULTS: Once individual differences in speech recognition for words presented in babble were compensated for, age differences in speech comprehension were minimized if not eliminated. However, younger listeners benefited more from spatial separation than did older listeners. Vocabulary knowledge predicted the comprehension scores of both younger and older listeners when listening was difficult, but not when it was easy. However, the contribution of reading comprehension to listening comprehension appeared to be independent of listening difficulty in younger adults but not in older adults. CONCLUSION: The evidence suggests (1) that most of the difficulties experienced by older adults are due to age-related auditory declines, and (2) that these declines, along with listening difficulty, modulate the degree to which selective linguistic and cognitive abilities are engaged to support listening comprehension in difficult listening situations. When older listeners experience speech recognition difficulties, their attentional resources are more likely to be deployed to facilitate lexical access, making it difficult for them to fully engage higher-order cognitive abilities in support of listening comprehension.


Subject(s)
Comprehension; Hearing; Language; Adult; Age Factors; Aged; Aged, 80 and over; Female; Humans; Male; Young Adult
11.
J Speech Lang Hear Res ; 58(5): 1570-91, 2015 Oct.
Article in English | MEDLINE | ID: mdl-26161679

ABSTRACT

PURPOSE: We investigated how age and linguistic status affected listeners' ability to follow and comprehend 3-talker conversations, and the extent to which individual differences in language proficiency predict speech comprehension under difficult listening conditions. METHOD: Younger and older EL1s as well as young EL2s listened to 3-talker conversations, with or without spatial separation between talkers, either in quiet or against a moderate- or high-level 12-talker babble background, and were asked to answer questions regarding their contents. RESULTS: After compensating for individual differences in speech recognition, no significant differences in conversation comprehension were found among the groups. As expected, conversation comprehension decreased as the babble level increased. Individual differences in reading comprehension skill contributed positively to performance in younger EL1s and, to a lesser degree, in young EL2s, but not in older EL1s. Vocabulary knowledge was significantly and positively related to performance only at the intermediate babble level. CONCLUSION: The results indicate that the manner in which spoken language comprehension is achieved is modulated by the listeners' age and linguistic status.


Subject(s)
Comprehension/physiology; Hearing/physiology; Perceptual Masking/physiology; Vocabulary; Age Factors; Aged; Audiometry, Pure-Tone; Auditory Threshold; Humans; Language Tests; Speech Perception/physiology; Young Adult
12.
Front Psychol ; 6: 474, 2015.
Article in English | MEDLINE | ID: mdl-25954230

ABSTRACT

A number of statistical textbooks recommend using an analysis of covariance (ANCOVA) to control for the effects of extraneous factors that might influence the dependent measure of interest. However, it is not generally recognized that serious problems of interpretation can arise when the design contains comparisons of participants sampled from different populations (classification designs). Designs that include a comparison of younger and older adults, or a comparison of musicians and non-musicians, are examples of classification designs. In such cases, estimates of differences among groups can be contaminated by differences in the covariate population means across groups. A second problem of interpretation arises if the experimenter fails to center the covariate measures (subtracting the mean covariate score from each covariate score) whenever the design contains within-subject factors. Unless the covariate measures are centered, estimates of within-subject factors are distorted, and significant increases in Type I error rates and/or losses in power can occur when evaluating the effects of within-subject factors. This paper (1) alerts potential users of ANCOVA to the need to center the covariate measures when the design contains within-subject factors, and (2) indicates how they can avoid biases when one cannot assume that the expected value of the covariate measure is the same for all of the groups in a classification design.
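
As a concrete illustration of the centering step described in this abstract, the following sketch shows grand-mean and within-group centering of a covariate in a design with one classification factor and one within-subject factor. The data layout and column names are hypothetical, not taken from the paper.

```python
# Minimal sketch (assumed data layout, hypothetical column names): centering a
# covariate before running a mixed-design ANCOVA.
import pandas as pd

# Long-format data: one row per participant x within-subject condition
df = pd.DataFrame({
    "participant": [1, 1, 2, 2, 3, 3, 4, 4],
    "group":       ["younger"] * 4 + ["older"] * 4,        # classification factor
    "condition":   ["quiet", "noise"] * 4,                  # within-subject factor
    "score":       [0.92, 0.75, 0.88, 0.70, 0.81, 0.60, 0.78, 0.55],
    "covariate":   [28, 28, 31, 31, 64, 64, 70, 70],        # e.g. age or a hearing measure
})

# Grand-mean centering: subtract the overall covariate mean so that estimates of
# within-subject effects are not distorted by the covariate's location.
df["covariate_c"] = df["covariate"] - df["covariate"].mean()

# In a classification design the groups may also differ in their covariate means;
# centering within each group isolates within-group covariate variation.
df["covariate_cwg"] = df["covariate"] - df.groupby("group")["covariate"].transform("mean")

print(df[["group", "condition", "covariate_c", "covariate_cwg"]])
```

Which centering is appropriate depends on the question being asked; the paper's point is that leaving the covariate uncentered in a design with within-subject factors can inflate Type I error rates or reduce power.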

13.
Front Syst Neurosci ; 8: 21, 2014.
Article in English | MEDLINE | ID: mdl-24578684

ABSTRACT

Multi-talker conversations challenge the perceptual and cognitive capabilities of older adults and of those listening in their second language (L2). In older adults, these difficulties could reflect declines in the auditory, cognitive, or linguistic processes supporting speech comprehension. The tendency of L2 listeners to invoke some of the semantic and syntactic processes of their first language (L1) may interfere with speech comprehension in L2. These challenges might also force them to reorganize the ways in which they perceive and process speech, thereby altering the balance between the contributions of bottom-up vs. top-down processes to speech comprehension. Younger and older L1s as well as young L2s listened to conversations played against a babble background, with or without spatial separation between the talkers and the masker, with the spatial positions of the stimuli specified either by loudspeaker placements (real location) or through use of the precedence effect (virtual location). After listening to a conversation, the participants were asked to answer questions regarding its content. Individual hearing differences were compensated for by creating the same degree of difficulty in identifying individual words in babble. Once compensation was applied, the number of questions correctly answered increased when a real or virtual spatial separation was introduced between the babble and the talkers. There was no evidence that performance differed between real and virtual locations. The contribution of vocabulary knowledge to dialog comprehension was found to be larger in the virtual conditions than in the real ones, whereas the contribution of reading comprehension skill did not depend on the listening environment but instead differed as a function of age and language proficiency. The results indicate that the acoustic scene and the cognitive and linguistic competencies of listeners modulate how and when top-down resources are engaged in aid of speech comprehension.

14.
Am J Audiol ; 22(2): 343-6, 2013 Dec.
Article in English | MEDLINE | ID: mdl-24018576

ABSTRACT

PURPOSE: In this article, the authors aimed to measure the course of improvement in a gap-detection (GD) task following multisession training in older compared with young adults. METHOD: Participants with normal hearing (N = 30) were divided into 4 groups: 2 groups of older and young adults who received multisession training over 10 days (9 adults/group; mean ages = 64.7 years and 24.1 years, respectively) and 2 control groups of older and young adults (6 adults/group; mean ages = 65.4 years and 26.3 years, respectively). Stimuli consisted of silent gaps marked by 2 noise bands centered at 1000 Hz. GD thresholds (GDTs) were measured with an adaptive procedure on each testing day, at 24 hr post-training, and at 1 month post-training. RESULTS: Initial GDTs of the older group were significantly poorer than those of the young adults. However, by the fourth training day, the mean GDTs of the 2 groups were similar, and both groups showed the same rate of improvement in the following sessions. Data from the controls confirmed that the better GDTs of the trained groups resulted from their training. Retention of learning was demonstrated for both age groups. CONCLUSIONS: The data from this study support the notion that some aspects of auditory learning and temporal resolution may be preserved in the elderly.


Subject(s)
Auditory Perception/physiology; Correction of Hearing Impairment/methods; Discrimination, Psychological/physiology; Neuronal Plasticity/physiology; Adolescent; Adult; Age Factors; Aged; Aged, 80 and over; Female; Healthy Volunteers; Humans; Middle Aged; Young Adult
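
The abstract above reports GDTs measured with an adaptive procedure but does not specify the rule. The sketch below shows one common choice, a 2-down/1-up staircase that converges on roughly 70.7% correct, run against a toy simulated listener; the starting gap, step size, number of reversals, and listener model are all illustrative assumptions rather than the study's method.

```python
# A minimal sketch of one common adaptive procedure: a 2-down/1-up staircase.
# All parameters and the toy listener below are illustrative assumptions.
import random

def simulated_listener(gap_ms, true_threshold_ms=6.0):
    """Toy listener with a crude step-like psychometric function."""
    p_correct = 1.0 if gap_ms >= true_threshold_ms else 0.5   # chance-level performance below threshold
    return random.random() < p_correct

def staircase(start_gap_ms=20.0, step_ms=2.0, n_reversals=8):
    gap, correct_run, direction = start_gap_ms, 0, -1
    reversals = []
    while len(reversals) < n_reversals:
        if simulated_listener(gap):
            correct_run += 1
            if correct_run == 2:                      # two correct in a row -> harder (shorter gap)
                correct_run = 0
                if direction == +1:                   # track was getting easier -> log a reversal
                    reversals.append(gap)
                direction = -1
                gap = max(gap - step_ms, 0.5)
        else:                                         # one error -> easier (longer gap)
            correct_run = 0
            if direction == -1:                       # track was getting harder -> log a reversal
                reversals.append(gap)
            direction = +1
            gap += step_ms
    return sum(reversals[-6:]) / len(reversals[-6:])  # threshold = mean of the last reversals

print(f"Estimated gap-detection threshold: {staircase():.1f} ms")
```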