Results 1 - 20 of 40
1.
Q J Exp Psychol (Hove) ; : 17470218241242260, 2024 Apr 23.
Article in English | MEDLINE | ID: mdl-38485525

ABSTRACT

Knowledge of the underlying mechanisms of effortful listening could help to reduce cases of social withdrawal and mitigate fatigue, especially in older adults. However, the relationship between transient effort and longer term fatigue is likely to be more complex than originally thought. Here, we manipulated the presence/absence of monetary reward to examine the role of motivation and mood state in governing changes in perceived effort and fatigue from listening. In an online study, 185 participants were randomly assigned to either a "reward" (n = 91) or "no-reward" (n = 94) group and completed a dichotic listening task along with a series of questionnaires assessing changes over time in perceived effort, mood, and fatigue. Effort ratings were higher overall in the reward group, yet fatigue ratings in that group showed a shallower linear increase over time. Mediation analysis revealed an indirect effect of reward on fatigue ratings via perceived mood state; reward induced a more positive mood state which was associated with reduced fatigue. These results suggest that: (1) listening conditions rated as more "effortful" may be less fatiguing if the effort is deemed worthwhile, and (2) alterations to one's mood state represent a potential mechanism by which fatigue may be elicited during unrewarding listening situations.

2.
J Speech Lang Hear Res ; 66(2): 444-460, 2023 02 13.
Article in English | MEDLINE | ID: mdl-36657070

ABSTRACT

PURPOSE: Listening-related fatigue is a potential negative consequence of challenges experienced during everyday listening and may disproportionately affect older adults. Contrary to expectation, we recently found that increased reports of listening-related fatigue were associated with better performance on a dichotic listening task. However, this link was found only in individuals who reported heightened sensitivity to a variety of physical, social, and emotional stimuli (i.e., increased "sensory-processing sensitivity" [SPS]). This study examined whether perceived effort may underlie the link between performance and fatigue. METHOD: Two hundred six young adults, aged 18-30 years (Experiment 1), and 122 older adults, aged 60-80 years (Experiment 2), performed a dichotic listening task and were administered a series of questionnaires including the NASA Task Load Index of perceived effort, the Vanderbilt Fatigue Scale (measuring daily life listening-related fatigue), and the Highly Sensitive Person Scale (measuring SPS). Both experiments were completed online. RESULTS: SPS predicted listening-related fatigue, but perceived effort during the listening task was not associated with SPS or listening-related fatigue in either age group. We were also unable to replicate the interaction between dichotic listening performance and SPS in either group. Exploratory analyses revealed contrasting effects of age; older adults found the dichotic listening task more effortful but indicated lower overall fatigue. CONCLUSIONS: These findings suggest that SPS is a better predictor of listening-related fatigue than performance or effort ratings on a dichotic listening task. SPS may be an important factor in determining an individual's likelihood of experiencing listening-related fatigue irrespective of hearing or cognitive ability. SUPPLEMENTAL MATERIAL: https://doi.org/10.23641/asha.21893013.


Subject(s)
Auditory Perception , Speech Perception , Aged , Humans , Young Adult , Fatigue , Hearing , Hearing Tests , Surveys and Questionnaires , Adolescent , Adult , Aged, 80 and over
3.
J Acoust Soc Am ; 152(2): 954, 2022 08.
Article in English | MEDLINE | ID: mdl-36050191

ABSTRACT

Recognizing speech in a noisy background is harder when the background is time-forward than time-reversed speech, a masker direction effect, and harder when the masker is in a known rather than an unknown language, indicating linguistic interference. We examined the masker direction effect when the masker was a known vs unknown language and calculated performance over 50 trials to assess differential masker adaptation. In experiment 1, native English listeners transcribing English sentences showed a larger masker direction effect with English than Mandarin maskers. In experiment 2, Mandarin non-native speakers of English transcribing Mandarin sentences showed a mirror pattern. Both experiments thus support the target-masker linguistic similarity hypothesis, where interference is maximal when target and masker languages are the same. In experiment 3, Mandarin non-native speakers of English transcribing English sentences showed comparable results for English and Mandarin maskers. Non-native listening is therefore consistent with the known-language interference hypothesis, where interference is maximal when the masker language is known to the listener, whether or not it matches the target language. A trial-by-trial analysis showed that the masker direction effect increased over time during native listening but not during non-native listening. The results indicate different target-to-masker streaming strategies during native and non-native speech-in-speech listening.


Subject(s)
Speech Perception , Speech , Language , Perceptual Masking , Phonetics
4.
Psychol Sci ; 32(12): 1937-1951, 2021 12.
Article in English | MEDLINE | ID: mdl-34751602

ABSTRACT

Listening-related fatigue is a potentially serious negative consequence of an aging auditory and cognitive system. However, the impact of age on listening-related fatigue and the factors underpinning any such effect remain unexplored. Using data from a large sample of adults (N = 281), we conducted a conditional process analysis to examine potential mediators and moderators of age-related changes in listening-related fatigue. Mediation analyses revealed opposing effects of age on listening-related fatigue: Older adults with greater perceived hearing impairment tended to report increased listening-related fatigue. However, aging was otherwise associated with decreased listening-related fatigue via reductions in both mood disturbance and sensory-processing sensitivity. Results suggested that the effect of auditory attention ability on listening-related fatigue was moderated by sensory-processing sensitivity; for individuals with high sensory-processing sensitivity, better auditory attention ability was associated with increased fatigue. These findings shed light on the perceptual, cognitive, and psychological factors underlying age-related changes in listening-related fatigue.


Subject(s)
Longevity , Speech Perception , Aged , Aging/psychology , Auditory Perception , Fatigue/epidemiology , Humans
5.
Psychol Aging ; 36(4): 504-519, 2021 Jun.
Article in English | MEDLINE | ID: mdl-34014746

ABSTRACT

Listening to speech in adverse conditions can be challenging and effortful, especially for older adults. This study examined age-related differences in effortful listening by recording changes in the task-evoked pupil response (TEPR; a physiological marker of listening effort) both at the level of sentence processing and over the entire course of a listening task. A total of 65 (32 young adults, 33 older adults) participants performed a speech recognition task in the presence of a competing talker, while moment-to-moment changes in pupil size were continuously monitored. Participants were also administered the Vanderbilt Fatigue Scale, a questionnaire assessing daily life listening-related fatigue within four domains (social, cognitive, emotional, physical). Normalized TEPRs were overall larger and more steeply rising and falling around the peak in the older versus the young adult group during sentence processing. Additionally, mean TEPRs over the course of the listening task were more stable in the older versus the young adult group, consistent with a more sustained recruitment of compensatory attentional resources to maintain task performance. No age-related differences were found in terms of total daily life listening-related fatigue; however, older adults reported higher scores than young adults within the social domain. Overall, this study provides evidence for qualitatively distinct patterns of physiological arousal between young and older adults consistent with age-related upregulation in resource allocation during listening. A more detailed understanding of age-related changes in the subjective and physiological mechanisms that underlie effortful listening will ultimately help to address complex communication needs in aging listeners. (PsycInfo Database Record (c) 2021 APA, all rights reserved).


Subject(s)
Auditory Perception/physiology , Adolescent , Adult , Aged , Aged, 80 and over , Aging , Female , Humans , Male , Middle Aged , Speech Perception/physiology , Young Adult
6.
Psychophysiology ; 58(1): e13703, 2021 01.
Article in English | MEDLINE | ID: mdl-33031584

ABSTRACT

Effort during listening is commonly measured using the task-evoked pupil response (TEPR); a pupillometric marker of physiological arousal. However, studies to date report no association between TEPR and perceived effort. One possible reason for this is the way in which self-report effort measures are typically administered, namely as a single data point collected at the end of a testing session. Another possible reason is that TEPR might relate more closely to the experience of tiredness from listening than to effort per se. To examine these possibilities, we conducted two preregistered experiments that recorded subjective ratings of effort and tiredness from listening at multiple time points and examined their covariance with TEPR over the course of listening tasks varying in levels of acoustic and attentional demand. In both experiments, we showed a within-subject association between TEPR and tiredness from listening, but no association between TEPR and effort. The data also suggest that the effect of task difficulty on the experience of tiredness from listening may go undetected using the traditional approach of collecting a single data point at the end of a listening block. Finally, this study demonstrates the utility of a novel correlation analysis technique ("rmcorr"), which can be used to overcome statistical power constraints commonly found in the literature. Teasing apart the subjective and physiological mechanisms that underpin effortful listening is a crucial step toward addressing these difficulties in older and/or hearing-impaired individuals.


Subject(s)
Attention/physiology , Pupil/physiology , Recognition, Psychology/physiology , Speech Perception/physiology , Adolescent , Adult , Female , Humans , Male , Speech Acoustics , Young Adult
7.
J Acoust Soc Am ; 147(6): EL484, 2020 06.
Article in English | MEDLINE | ID: mdl-32611187

ABSTRACT

Event durations are perceived to be shorter under divided attention. "Time shrinkage" is thought to be due to rapid attentional switches between tasks, leading to a loss of input samples, and hence, an under-estimation of duration. However, few studies have considered whether this phenomenon applies to durations relevant to time-based phonetic categorization. In this study, participants categorized auditory stimuli varying in voice onset time (VOT) as /ɡ/ or /k/. They did so under focused attention (auditory task alone) or while performing a low-level visual task at the same time (divided attention). Under divided attention, there was increased response imprecision but no bias toward hearing /ɡ/, the shorter-VOT sound. It is concluded that sample loss under divided attention does not apply to the perception of phonetic contrasts within the VOT range.


Subject(s)
Speech Perception , Voice , Attention , Humans , Phonetics , Time Factors
8.
Ear Hear ; 41(4): 907-917, 2020.
Article in English | MEDLINE | ID: mdl-31702598

ABSTRACT

OBJECTIVES: Cognitive load (CL) impairs listeners' ability to comprehend sentences, recognize words, and identify speech sounds. Recent findings suggest that this effect originates in a disruption of low-level perception of acoustic details. Here, we attempted to quantify such a disruption by measuring the effect of CL (a two-back task) on pure-tone audiometry (PTA) thresholds. We also asked whether the effect of CL on PTA was greater in older adults, on account of their reduced ability to divide cognitive resources between simultaneous tasks. To specify the mechanisms and representations underlying the interface between auditory and cognitive processes, we contrasted CL requiring visual encoding with CL requiring auditory encoding. Finally, the link between the cost of performing PTA under CL, working memory, and speech-in-noise (SiN) perception was investigated and compared between younger and older participants. DESIGN: Younger and older adults (44 in each group) did a PTA test at 0.5, 1, 2, and 4 kHz pure tones under CL and no CL. CL consisted of a visual two-back task running throughout the PTA test. The two-back task involved either visual encoding of the stimuli (meaningless images) or subvocal auditory encoding (a rhyme task on written nonwords). Participants also underwent a battery of SiN tests and a working memory test (letter number sequencing). RESULTS: Younger adults showed elevated PTA thresholds under CL, but only when CL involved subvocal auditory encoding. CL had no effect when it involved purely visual encoding. In contrast, older adults showed elevated thresholds under both types of CL. When present, the PTA CL cost was broadly comparable in younger and older adults (approximately 2 dB HL). The magnitude of PTA CL cost did not correlate significantly with SiN perception or working memory in either age group. In contrast, PTA alone showed strong links to both SiN and letter number sequencing in older adults. CONCLUSIONS: The results show that CL can exert its effect at the level of hearing sensitivity. However, in younger adults, this effect is only found when CL involves auditory mental representations. When CL involves visual representations, it has virtually no impact on hearing thresholds. In older adults, interference is found in both conditions. The results suggest that hearing progresses from engaging primarily modality-specific cognition in early adulthood to engaging cognition in a more undifferentiated way in older age. Moreover, hearing thresholds measured under CL did not predict SiN perception more accurately than standard PTA thresholds.


Subject(s)
Speech Perception , Adult , Aged , Audiometry, Pure-Tone , Auditory Threshold , Cognition , Humans , Noise , Speech
9.
J Acoust Soc Am ; 146(2): 1077, 2019 08.
Article in English | MEDLINE | ID: mdl-31472597

ABSTRACT

Dual-tasking negatively impacts on speech perception by raising cognitive load (CL). Previous research has shown that CL increases reliance on lexical knowledge and decreases reliance on phonetic detail. Less is known about the effect of CL on the perception of acoustic dimensions below the phonetic level. This study tested the effect of CL on the ability to discriminate differences in duration, intensity, and fundamental frequency of a synthesized vowel. A psychophysical adaptive procedure was used to obtain just noticeable differences (JNDs) on each dimension under load and no load. Load was imposed by N-back tasks at two levels of difficulty (one-back, two-back) and under two types of load (images, nonwords). Compared to a control condition with no CL, all N-back conditions increased JNDs across the three dimensions. JNDs were also higher under two-back than one-back load. Nonword load was marginally more detrimental than image load for intensity and fundamental frequency discrimination. Overall, the decreased auditory acuity demonstrates that the effect of CL on the listening experience can be traced to distortions in the perception of core auditory dimensions.

10.
J Exp Psychol Learn Mem Cogn ; 45(1): 139-146, 2019 Jan.
Article in English | MEDLINE | ID: mdl-29952630

ABSTRACT

The hypothesis that known words can serve as anchors for discovering new words in connected speech has computational and empirical support. However, evidence for how the bootstrapping effect of known words interacts with other mechanisms of lexical acquisition, such as statistical learning, is incomplete. In 3 experiments, we investigated the consequences of introducing a known word in an artificial language with no segmentation cues other than cross-syllable transitional probabilities. We started with an artificial language containing 4 trisyllabic novel words and observed standard above-chance performance in a subsequent recognition memory task. We then replaced 1 of the 4 novel words with a real word (tomorrow) and noted improved segmentation of the other 3 novel words. This improvement was maintained when the real word was a different length to the novel words (philosophy), ruling out an explanation based on metrical expectation. The improvement was also maintained when the word was added to the 4 original novel words rather than replacing 1 of them. Together, these results show that known words in an otherwise meaningless stream serve as anchors for discovering new words. In interpreting the results, we contrast a mechanism where the lexical boost is merely the consequence of attending to the edges of known words, with a mechanism where known words enhance sensitivity to transitional probabilities more generally. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
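The segmentation cue underlying the study above, cross-syllable transitional probability, is the conditional probability P(B | A) = count(AB) / count(A) for adjacent syllables: it is high inside a word and drops at word boundaries, where many different words can follow. A minimal sketch, using hypothetical nonsense words rather than the authors' actual stimuli:

```python
from collections import Counter

# Hypothetical stimuli: three trisyllabic nonsense words, analogous to the
# artificial-language designs described above (not the authors' items).
WORDS = {"bidaku": ["bi", "da", "ku"],
         "golatu": ["go", "la", "tu"],
         "padoti": ["pa", "do", "ti"]}

def transitional_probabilities(stream):
    """Forward transitional probability P(B | A) = count(A B) / count(A)
    for every adjacent syllable pair in a continuous stream."""
    pair_counts = Counter(zip(stream, stream[1:]))
    first_counts = Counter(stream[:-1])
    return {pair: n / first_counts[pair[0]] for pair, n in pair_counts.items()}

# Concatenate the words in a varied order with no pauses between them,
# so transitional probabilities are the only segmentation cue.
order = ["bidaku", "golatu", "bidaku", "padoti",
         "golatu", "padoti", "bidaku", "golatu"]
stream = [syll for word in order for syll in WORDS[word]]

tps = transitional_probabilities(stream)
print(tps[("bi", "da")])  # within-word transition: 1.0
print(tps[("ku", "go")])  # cross-boundary transition: below 1.0
```

A learner tracking these statistics can posit word boundaries at the dips in transitional probability, which is the mechanism the known-word "anchor" effect is hypothesized to interact with.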


Subject(s)
Psycholinguistics , Recognition, Psychology/physiology , Speech Perception/physiology , Adolescent , Adult , Female , Humans , Male , Young Adult
11.
Psychol Aging ; 33(7): 1035-1044, 2018 Nov.
Article in English | MEDLINE | ID: mdl-30247045

ABSTRACT

Statistical learning (SL) is a powerful learning mechanism that supports word segmentation and language acquisition in infants and young adults. However, little is known about how this ability changes over the life span and interacts with age-related cognitive decline. The aims of this study were to: (a) examine the effect of aging on speech segmentation by SL, and (b) explore core mechanisms underlying SL. Across four testing sessions, young, middle-aged, and older adults were exposed to continuous speech streams at two different speech rates, both with and without cognitive load. Learning was assessed using a two-alterative forced-choice task in which words from the stream were pitted against either part-words, which occurred across word boundaries in the stream, or nonwords, which never appeared in the stream. Participants also completed a battery of cognitive tests assessing working memory and executive functions. The results showed that speech segmentation by SL was remarkably resilient to aging, although age effects were visible in the more challenging conditions, namely, when words had to be discriminated from part-words, which required the formation of detailed phonological representations, and when SL was performed under cognitive load. Moreover, an analysis of the cognitive test data indicated that performance against part-words was predicted mostly by memory updating, whereas performance against nonwords was predicted mostly by working memory storage capacity. Taken together, the data show that SL relies on a combination of implicit and explicit skills, and that age effects on SL are likely to be linked to an age-related selective decline in memory updating. (PsycINFO Database Record (c) 2018 APA, all rights reserved).


Subject(s)
Linguistics/methods , Memory, Short-Term/physiology , Speech Perception/physiology , Adolescent , Adult , Aged , Aged, 80 and over , Aging , Female , Humans , Male , Middle Aged , Young Adult
12.
Neuroimage ; 178: 735-743, 2018 09.
Article in English | MEDLINE | ID: mdl-29902588

ABSTRACT

Perceiving speech while performing another task is a common challenge in everyday life. How the brain controls resource allocation during speech perception remains poorly understood. Using functional magnetic resonance imaging (fMRI), we investigated the effect of cognitive load on speech perception by examining brain responses of participants performing a phoneme discrimination task and a visual working memory task simultaneously. The visual task involved holding either a single meaningless image in working memory (low cognitive load) or four different images (high cognitive load). Performing the speech task under high load, compared to low load, resulted in decreased activity in pSTG/pMTG and increased activity in visual occipital cortex and two regions known to contribute to visual attention regulation-the superior parietal lobule (SPL) and the paracingulate and anterior cingulate gyrus (PaCG, ACG). Critically, activity in PaCG/ACG was correlated with performance in the visual task and with activity in pSTG/pMTG: Increased activity in PaCG/ACG was observed for individuals with poorer visual performance and with decreased activity in pSTG/pMTG. Moreover, activity in a pSTG/pMTG seed region showed psychophysiological interactions with areas of the PaCG/ACG, with stronger interaction in the high-load than the low-load condition. These findings show that the acoustic analysis of speech is affected by the demands of a concurrent visual task and that the PaCG/ACG plays a role in allocating cognitive resources to concurrent auditory and visual information.


Subject(s)
Attention/physiology , Cerebral Cortex/physiology , Speech Perception/physiology , Acoustic Stimulation , Adolescent , Adult , Brain Mapping/methods , Female , Humans , Magnetic Resonance Imaging/methods , Male , Memory, Short-Term/physiology , Photic Stimulation , Visual Perception/physiology , Young Adult
13.
Mem Cognit ; 46(3): 361-369, 2018 04.
Article in English | MEDLINE | ID: mdl-29110211

ABSTRACT

It is well established that digit span in native Chinese speakers is atypically high. This is commonly attributed to a capacity for more rapid subvocal rehearsal for that group. We explored this hypothesis by testing a group of English-speaking native Mandarin speakers on digit span and word span in both Mandarin and English, together with a measure of speed of articulation for each. When compared to the performance of native English speakers, the Mandarin group proved to be superior on both digit and word spans while predictably having lower spans in English. This suggests that the Mandarin advantage is not limited to digits. Speed of rehearsal correlated with span performance across materials. However, this correlation was more pronounced for English speakers than for any of the Chinese measures. Further analysis suggested that speed of rehearsal did not provide an adequate account of differences between Mandarin and English spans or for the advantage of digits over words. Possible alternative explanations are discussed.


Subject(s)
Memory, Short-Term/physiology , Mental Recall/physiology , Multilingualism , Psycholinguistics , Speech Perception/physiology , Speech/physiology , Adult , China , Female , Humans , Male , Young Adult
14.
Atten Percept Psychophys ; 80(1): 222-241, 2018 Jan.
Article in English | MEDLINE | ID: mdl-28975549

ABSTRACT

Recent evidence has shown that nonlinguistic sounds co-occurring with spoken words may be retained in memory and affect later retrieval of the words. This sound-specificity effect shares many characteristics with the classic voice-specificity effect. In this study, we argue that the sound-specificity effect is conditional upon the context in which the word and sound coexist. Specifically, we argue that, besides co-occurrence, integrality between words and sounds is a crucial factor in the emergence of the effect. In two recognition-memory experiments, we compared the emergence of voice and sound specificity effects. In Experiment 1, we examined two conditions where integrality is high. Namely, the classic voice-specificity effect (Exp. 1a) was compared with a condition in which the intensity envelope of a background sound was modulated along the intensity envelope of the accompanying spoken word (Exp. 1b). Results revealed a robust voice-specificity effect and, critically, a comparable sound-specificity effect: A change in the paired sound from exposure to test led to a decrease in word-recognition performance. In the second experiment, we sought to disentangle the contribution of integrality from a mere co-occurrence context effect by removing the intensity modulation. The absence of integrality led to the disappearance of the sound-specificity effect. Taken together, the results suggest that the assimilation of background sounds into memory cannot be reduced to a simple context effect. Rather, it is conditioned by the extent to which words and sounds are perceived as integral as opposed to distinct auditory objects.


Subject(s)
Sound , Speech Perception/physiology , Verbal Behavior/physiology , Voice , Adolescent , Adult , Female , Humans , Male , Memory , Young Adult
15.
Lang Speech ; 60(4): 562-570, 2017 12.
Article in English | MEDLINE | ID: mdl-29216812

ABSTRACT

This study used the perceptual-migration paradigm to explore whether Mandarin tones and syllable rhymes are processed separately during Mandarin speech perception. Following the logic of illusory conjunctions, we calculated the cross-ear migration of tones, rhymes, and their combination in Chinese and English listeners. For Chinese listeners, tones migrated more than rhymes. For English listeners, the opposite pattern was found. The results lend empirical support to autosegmental theory, which claims separability and mobility between tonal and segmental representations. They also provide evidence that such representations and their involvement in perception are deeply shaped by a listener's linguistic experience.


Subject(s)
Phonetics , Pitch Discrimination , Speech Acoustics , Speech Perception , Voice Quality , Acoustic Stimulation , China , Dichotic Listening Tests , Humans , Pattern Recognition, Physiological
16.
J Neurosci ; 37(32): 7727-7736, 2017 08 09.
Article in English | MEDLINE | ID: mdl-28694336

ABSTRACT

Verbal communication in noisy backgrounds is challenging. Understanding speech in background noise that fluctuates in intensity over time is particularly difficult for hearing-impaired listeners with a sensorineural hearing loss (SNHL). The reduction in fast-acting cochlear compression associated with SNHL exaggerates the perceived fluctuations in intensity in amplitude-modulated sounds. SNHL-induced changes in the coding of amplitude-modulated sounds may have a detrimental effect on the ability of SNHL listeners to understand speech in the presence of modulated background noise. To date, direct evidence for a link between magnified envelope coding and deficits in speech identification in modulated noise has been absent. Here, magnetoencephalography was used to quantify the effects of SNHL on phase locking to the temporal envelope of modulated noise (envelope coding) in human auditory cortex. Our results show that SNHL enhances the amplitude of envelope coding in posteromedial auditory cortex, whereas it enhances the fidelity of envelope coding in posteromedial and posterolateral auditory cortex. This dissociation was more evident in the right hemisphere, demonstrating functional lateralization in enhanced envelope coding in SNHL listeners. However, enhanced envelope coding was not perceptually beneficial. Our results also show that both hearing thresholds and, to a lesser extent, magnified cortical envelope coding in left posteromedial auditory cortex predict speech identification in modulated background noise. We propose a framework in which magnified envelope coding in posteromedial auditory cortex disrupts the segregation of speech from background noise, leading to deficits in speech perception in modulated background noise. SIGNIFICANCE STATEMENT: People with hearing loss struggle to follow conversations in noisy environments. Background noise that fluctuates in intensity over time poses a particular challenge. Using magnetoencephalography, we demonstrate anatomically distinct cortical representations of modulated noise in normal-hearing and hearing-impaired listeners. This work provides the first link among hearing thresholds, the amplitude of cortical representations of modulated sounds, and the ability to understand speech in modulated background noise. In light of previous work, we propose that magnified cortical representations of modulated sounds disrupt the separation of speech from modulated background noise in auditory cortex.


Subject(s)
Auditory Cortex/physiology , Auditory Cortex/physiopathology , Hearing Loss, Sensorineural/physiopathology , Noise , Perceptual Masking/physiology , Speech Perception/physiology , Acoustic Stimulation/methods , Aged , Audiometry, Speech/methods , Auditory Perception/physiology , Female , Forecasting , Hearing Loss, Noise-Induced/physiopathology , Humans , Magnetoencephalography/methods , Male , Middle Aged , Noise/adverse effects
17.
J Speech Lang Hear Res ; 60(5): 1236-1245, 2017 05 24.
Article in English | MEDLINE | ID: mdl-28492912

ABSTRACT

Purpose: Background noise can interfere with our ability to understand speech. Working memory capacity (WMC) has been shown to contribute to the perception of speech in modulated noise maskers. WMC has been assessed with a variety of auditory and visual tests, often pertaining to different components of working memory. This study assessed the relationship between speech perception in modulated maskers and components of auditory verbal working memory (AVWM) over a range of signal-to-noise ratios. Method: Speech perception in noise and AVWM were measured in 30 listeners (age range 31-67 years) with normal hearing. AVWM was estimated using forward digit recall, backward digit recall, and nonword repetition. Results: After controlling for the effects of age and average pure-tone hearing threshold, speech perception in modulated maskers was related to individual differences in the phonological component of working memory (as assessed by nonword repetition) but only in the least favorable signal-to-noise ratio. The executive component of working memory (as assessed by backward digit) was not predictive of speech perception in any conditions. Conclusions: AVWM is predictive of the ability to benefit from temporal dips in modulated maskers: Listeners with greater phonological WMC are better able to correctly identify sentences in modulated noise backgrounds.


Subject(s)
Memory, Short-Term , Speech Perception , Adult , Aged , Audiometry, Pure-Tone , Auditory Threshold , Executive Function , Female , Humans , Linear Models , Male , Middle Aged , Neuropsychological Tests , Phonetics
18.
Atten Percept Psychophys ; 79(1): 344-351, 2017 Jan.
Article in English | MEDLINE | ID: mdl-27604285

ABSTRACT

Two experiments investigated the conditions under which cognitive load exerts an effect on the acuity of speech perception. These experiments extend earlier research by using a different speech perception task (four-interval oddity task) and by implementing cognitive load through a task often thought to be modular, namely, face processing. In the cognitive-load conditions, participants were required to remember two faces presented before the speech stimuli. In Experiment 1, performance in the speech-perception task under cognitive load was not impaired in comparison to a no-load baseline condition. In Experiment 2, we modified the load condition minimally such that it required encoding of the two faces simultaneously with the speech stimuli. As a reference condition, we also used a visual search task that in earlier experiments had led to poorer speech perception. Both concurrent tasks led to decrements in the speech task. The results suggest that speech perception is affected even by loads thought to be processed modularly, and that, critically, encoding in working memory might be the locus of interference.


Subject(s)
Executive Function/physiology , Memory, Short-Term/physiology , Speech Perception/physiology , Visual Perception/physiology , Adolescent , Adult , Female , Humans , Male , Young Adult
19.
Q J Exp Psychol (Hove) ; 69(12): 2390-2401, 2016 Dec.
Article in English | MEDLINE | ID: mdl-27167308

ABSTRACT

The purpose of this study was to examine the extent to which working memory resources are recruited during statistical learning (SL). Participants were asked to identify novel words in an artificial speech stream where the transitional probabilities between syllables provided the only segmentation cue. Experiments 1 and 2 demonstrated that segmentation performance improved when the speech rate was slowed down, suggesting that SL is supported by some form of active processing or maintenance mechanism that operates more effectively under slower presentation rates. In Experiment 3 we investigated the nature of this mechanism by asking participants to perform a two-back task while listening to the speech stream. Half of the participants performed a two-back rhyme task designed to engage phonological processing, whereas the other half performed a comparable two-back task on un-nameable visual shapes. It was hypothesized that if SL is dependent only upon domain-specific processes (i.e., phonological rehearsal), the rhyme task should impair speech segmentation performance more than the shape task. However, the two loads were equally disruptive to learning, as they both eradicated the benefit provided by the slow rate. These results suggest that SL is supported by working-memory processes that rely on domain-general resources.


Subject(s)
Memory, Short-Term/physiology , Probability Learning , Speech Perception/physiology , Speech/physiology , Verbal Behavior/physiology , Acoustic Stimulation , Female , Humans , Male , Phonetics
20.
J Acoust Soc Am ; 138(2): 1214-20, 2015 Aug.
Article in English | MEDLINE | ID: mdl-26328734

ABSTRACT

Prosody facilitates perceptual segmentation of the speech stream into a sequence of words and phrases. With regard to speech timing, vowel lengthening is well established as a cue to an upcoming boundary, but listeners' exploitation of consonant lengthening for segmentation has not been systematically tested in the absence of other boundary cues. In a series of artificial language learning experiments, the impact of durational variation in consonants and vowels on listeners' extraction of novel trisyllables was examined. Language streams with systematic lengthening of word-initial consonants were better recalled than both control streams without localized lengthening and streams where word-initial syllable lengthening was confined to the vocalic rhyme. Furthermore, where vowel-consonant sequences were lengthened word-medially, listeners failed to learn the languages effectively. Thus the structural interpretation of lengthening effects depends upon their localization, in this case, a distinction between lengthening of the onset consonant and the vocalic syllable rhyme. This functional division is considered in terms of speech-rate-sensitive predictive mechanisms and listeners' expectations regarding the occurrence of syllable perceptual centres.


Subject(s)
Cues , Phonetics , Adult , Female , Humans , Language , Learning , Male , Random Allocation , Time Factors