Results 1 - 20 of 97
1.
Trends Hear ; 28: 23312165241236041, 2024.
Article in English | MEDLINE | ID: mdl-38545654

ABSTRACT

Many older adults live with some form of hearing loss and have difficulty understanding speech in the presence of background sound. Experiences resulting from such difficulties include increased listening effort and fatigue. Social interactions may become less appealing in the context of such experiences, and age-related hearing loss is associated with an increased risk of social isolation and associated negative psychosocial health outcomes. However, the precise relationship between age-related hearing loss and social isolation is not well described. Here, we review the literature and synthesize existing work from different domains to propose a framework with three conceptual anchor stages to describe the relation between hearing loss and social isolation: within-situation disengagement from listening, social withdrawal, and social isolation. We describe the distinct characteristics of each stage and suggest potential interventions to mitigate negative impacts of hearing loss on social lives and health. We close by outlining potential implications for researchers and clinicians.


Subject(s)
Deafness , Presbycusis , Speech Perception , Humans , Aged , Presbycusis/diagnosis , Social Isolation , Speech
2.
Neuropsychologia ; 186: 108584, 2023 Jul 29.
Article in English | MEDLINE | ID: mdl-37169066

ABSTRACT

Listening environments contain background sounds that mask speech and lead to communication challenges. Sensitivity to slow acoustic fluctuations in speech can help segregate speech from background noise. Semantic context can also facilitate speech perception in noise, for example, by enabling prediction of upcoming words. However, not much is known about how different degrees of background masking affect the neural processing of acoustic and semantic features during naturalistic speech listening. In the current electroencephalography (EEG) study, participants listened to engaging, spoken stories masked at different levels of multi-talker babble to investigate how neural activity in response to acoustic and semantic features changes with acoustic challenges, and how such effects relate to speech intelligibility. The pattern of neural response amplitudes associated with both acoustic and semantic speech features across masking levels was U-shaped, such that amplitudes were largest for moderate masking levels. This U-shape may be due to increased attentional focus when speech comprehension is challenging, but manageable. The latency of the neural responses increased linearly with increasing background masking, and neural latency change associated with acoustic processing most closely mirrored the changes in speech intelligibility. Finally, tracking responses related to semantic dissimilarity remained robust until severe speech masking (-3 dB SNR). The current study reveals that neural responses to acoustic features are highly sensitive to background masking and decreasing speech intelligibility, whereas neural responses to semantic features are relatively robust, suggesting that individuals track the meaning of the story well even in moderate background sound.
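
As a concrete illustration of the masking manipulation, speech is typically mixed with babble by scaling the masker to achieve a target signal-to-noise ratio. A minimal Python/numpy sketch of this general technique follows (the function and variable names are ours for illustration, not the authors' stimulus code):

```python
import numpy as np

def mix_at_snr(speech, babble, snr_db):
    """Return speech mixed with babble scaled to a target SNR in dB.
    Assumes both are 1-D numpy arrays at the same sampling rate and
    that babble is at least as long as speech (illustrative helper)."""
    babble = babble[:len(speech)]
    rms_speech = np.sqrt(np.mean(speech ** 2))
    rms_babble = np.sqrt(np.mean(babble ** 2))
    # Gain that places the masker snr_db below (or above) the speech level
    gain = (rms_speech / rms_babble) * 10 ** (-snr_db / 20)
    return speech + gain * babble
```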


Subject(s)
Semantics , Speech Perception , Humans , Perceptual Masking/physiology , Noise , Acoustics , Speech Perception/physiology , Speech Intelligibility/physiology , Acoustic Stimulation
3.
Hear Res ; 429: 108704, 2023 03 01.
Article in English | MEDLINE | ID: mdl-36701896

ABSTRACT

Speech is more intelligible when it is spoken by familiar than unfamiliar people. If this benefit arises because key voice characteristics like perceptual correlates of fundamental frequency or vocal tract length (VTL) are more accurately represented for familiar voices, listeners may be able to discriminate smaller manipulations to such characteristics for familiar than unfamiliar voices. We measured participants' (N = 17) thresholds for discriminating pitch (correlate of fundamental frequency, or glottal pulse rate) and formant spacing (correlate of VTL; 'VTL-timbre') for voices that were familiar (participants' friends) and unfamiliar (other participants' friends). As expected, familiar voices were more intelligible. However, discrimination thresholds were no smaller for the same familiar voices. The size of the intelligibility benefit for a familiar over an unfamiliar voice did not relate to the difference in discrimination thresholds for the same voices. Also, the familiar-voice intelligibility benefit was just as large following perceptible manipulations to pitch and VTL-timbre. These results are more consistent with cognitive accounts of speech perception than traditional accounts that predict better discrimination.


Subject(s)
Speech Perception , Voice , Humans , Speech , Cognition , Heart Rate
4.
Neuroimage ; 268: 119883, 2023 03.
Article in English | MEDLINE | ID: mdl-36657693

ABSTRACT

Listening in everyday life requires attention to be deployed dynamically (when listening is expected to be difficult and when relevant information is expected to occur) to conserve mental resources. Conserving mental resources may be particularly important for older adults who often experience difficulties understanding speech. In the current study, we use electro- and magnetoencephalography to investigate the neural and behavioral mechanics of attention regulation during listening and how aging affects them. We first show in younger adults (17-31 years) that neural alpha oscillatory activity indicates when in time attention is deployed (Experiment 1) and that deployment depends on listening difficulty (Experiment 2). Experiment 3 investigated age-related changes in auditory attention regulation. Middle-aged and older adults (54-72 years) show successful attention regulation but appear to utilize timing information differently compared to younger adults (20-33 years). We show a notable age-group dissociation in recruited brain regions. In younger adults, superior parietal cortex underlies alpha power during attention regulation, whereas, in middle-aged and older adults, alpha power emerges from more ventro-lateral areas (posterior temporal cortex). This difference in the sources of alpha activity between age groups only occurred during task performance and was absent during rest (Experiment S1). In sum, our study suggests that middle-aged and older adults employ different neural control strategies compared to younger adults to regulate attention in time under listening challenges.


Subject(s)
Aging , Speech Perception , Middle Aged , Humans , Aged , Aging/physiology , Auditory Perception/physiology , Brain/physiology , Magnetoencephalography , Temporal Lobe , Speech Perception/physiology
5.
Hear Res ; 428: 108677, 2023 02.
Article in English | MEDLINE | ID: mdl-36580732

ABSTRACT

Perception of speech requires sensitivity to features, such as amplitude and frequency modulations, that are often temporally regular. Previous work suggests age-related changes in neural responses to temporally regular features, but little work has focused on age differences for different types of modulations. We recorded magnetoencephalography in younger (21-33 years) and older adults (53-73 years) to investigate age differences in neural responses to slow (2-6 Hz sinusoidal and non-sinusoidal) modulations in amplitude, frequency, or combined amplitude and frequency. Audiometric pure-tone average thresholds were elevated in older compared to younger adults, indicating subclinical hearing impairment in the recruited older-adult sample. Neural responses to sound onset (independent of temporal modulations) were increased in magnitude in older compared to younger adults, suggesting hyperresponsivity and a loss of inhibition in the aged auditory system. Analyses of neural activity to modulations revealed greater neural synchronization with amplitude, frequency, and combined amplitude-frequency modulations for older compared to younger adults. This potentiated response generalized across different degrees of temporal regularity (sinusoidal and non-sinusoidal), although neural synchronization was generally lower for non-sinusoidal modulation. Despite greater synchronization, sustained neural activity was reduced in older compared to younger adults for sounds modulated both sinusoidally and non-sinusoidally in frequency. Our results suggest age differences in the sensitivity of the auditory system to features present in speech and other natural sounds.
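
A sketch of how sinusoidal amplitude- and frequency-modulated tones in this range are commonly synthesized (all parameter values here are illustrative; the study's actual stimuli, including the non-sinusoidal variants, are described in the paper):

```python
import numpy as np

fs = 44100                    # sampling rate (Hz), illustrative
t = np.arange(0, 4, 1 / fs)  # 4-s stimulus
fc, fm = 1000, 4             # carrier; a 4-Hz modulator lies within the 2-6 Hz range

# Amplitude modulation: the envelope fluctuates sinusoidally at fm
am_tone = (1 + 0.9 * np.sin(2 * np.pi * fm * t)) * np.sin(2 * np.pi * fc * t)

# Frequency modulation: instantaneous frequency is fc + df * sin(2*pi*fm*t)
df = 200                     # frequency excursion (Hz)
fm_tone = np.sin(2 * np.pi * fc * t - (df / fm) * np.cos(2 * np.pi * fm * t))
```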


Subject(s)
Auditory Perception , Hearing Loss , Humans , Aged , Auditory Perception/physiology , Sound , Magnetoencephalography , Acoustic Stimulation/methods
6.
J Acoust Soc Am ; 152(1): 31, 2022 07.
Article in English | MEDLINE | ID: mdl-35931555

ABSTRACT

Pitch discrimination is better for complex tones than pure tones, but how pitch discrimination differs between natural and artificial sounds is not fully understood. This study compared pitch discrimination thresholds for flat-spectrum harmonic complex tones with those for natural sounds played by musical instruments of three different timbres (violin, trumpet, and flute). To investigate whether natural familiarity with sounds of particular timbres affects pitch discrimination thresholds, this study recruited non-musicians and musicians who were trained on one of the three instruments. We found that flautists and trumpeters could discriminate smaller differences in pitch for artificial flat-spectrum tones, despite their unfamiliar timbre, than for sounds played by musical instruments, which are regularly heard in everyday life (particularly by musicians who play those instruments). Furthermore, thresholds were no better for the instrument a musician was trained to play than for other instruments, suggesting that even extensive experience listening to and producing sounds of particular timbres does not reliably improve pitch discrimination thresholds for those timbres. The results show that timbre familiarity provides minimal improvements to auditory acuity, and physical acoustics (e.g., the presence of equal-amplitude harmonics) determine pitch discrimination thresholds more than does experience with natural sounds and timbre-specific training.
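
A flat-spectrum harmonic complex tone is simply a sum of equal-amplitude harmonics of a fundamental. A minimal sketch, with parameter values of our own choosing rather than the study's:

```python
import numpy as np

def harmonic_complex(f0, n_harmonics, dur, fs=44100):
    """Sum of equal-amplitude harmonics of f0: a 'flat-spectrum'
    complex tone (illustrative, not the study's stimulus code)."""
    t = np.arange(0, dur, 1 / fs)
    tone = sum(np.sin(2 * np.pi * f0 * k * t) for k in range(1, n_harmonics + 1))
    return tone / n_harmonics  # keep peak amplitude in a sane range

tone = harmonic_complex(f0=220, n_harmonics=10, dur=0.5)
```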


Subject(s)
Music , Pitch Discrimination , Auditory Perception , Discrimination, Psychological , Pitch Perception , Recognition, Psychology
7.
J Neurosci ; 42(23): 4619-4628, 2022 06 08.
Article in English | MEDLINE | ID: mdl-35508382

ABSTRACT

Speech is often degraded by environmental noise or hearing impairment. People can compensate for degradation, but this requires cognitive effort. Previous research has identified frontotemporal networks involved in effortful perception, but materials in these works were also less intelligible, and so it is not clear whether activity reflected effort or intelligibility differences. We used functional magnetic resonance imaging to assess the degree to which spoken sentences were processed under distraction and whether this depended on speech quality even when intelligibility of degraded speech was matched to that of clear speech (close to 100%). On each trial, male and female human participants either attended to a sentence or to a concurrent multiple object tracking (MOT) task that imposed parametric cognitive load. Activity in bilateral anterior insula reflected task demands; during the MOT task, activity increased as cognitive load increased, and during speech listening, activity increased as speech became more degraded. In marked contrast, activity in bilateral anterior temporal cortex was speech selective and gated by attention when speech was degraded. In this region, performance of the MOT task with a trivial load blocked processing of degraded speech, whereas processing of clear speech was unaffected. As load increased, responses to clear speech in these areas declined, consistent with reduced capacity to process it. This result dissociates cognitive control from speech processing; substantially less cognitive control is required to process clear speech than is required to understand even very mildly degraded, 100% intelligible speech. Perceptual and control systems clearly interact dynamically during real-world speech comprehension.

SIGNIFICANCE STATEMENT

Speech is often perfectly intelligible even when degraded, for example, by background sound, phone transmission, or hearing loss. How does degradation alter cognitive demands? Here, we use fMRI to demonstrate a novel and critical role for cognitive control in the processing of mildly degraded but perfectly intelligible speech. We compare speech that is matched for intelligibility but differs in putative control demands, dissociating cognitive control from speech processing. We also impose a parametric cognitive load during perception, dissociating processes that depend on tasks from those that depend on available capacity. Our findings distinguish between frontal and temporal contributions to speech perception and reveal a hidden cost to processing mildly degraded speech, underscoring the importance of cognitive control for everyday speech comprehension.


Subject(s)
Hearing Loss , Speech Perception , Cognition , Female , Humans , Male , Noise , Speech Intelligibility/physiology , Speech Perception/physiology , Temporal Lobe/physiology
8.
Sci Rep ; 12(1): 5898, 2022 04 07.
Article in English | MEDLINE | ID: mdl-35393472

ABSTRACT

Fluctuating background sounds facilitate speech intelligibility by providing speech 'glimpses' (masking release). Older adults benefit less from glimpses, but masking release is typically investigated using isolated sentences. Recent work indicates that using engaging, continuous speech materials (e.g., spoken stories) may qualitatively alter speech-in-noise listening. Moreover, neural sensitivity to different amplitude envelope profiles (ramped, damped) changes with age, but whether this affects speech listening is unknown. In three online experiments, we investigate how masking release in younger and older adults differs for masked sentences and stories, and how speech intelligibility varies with masker amplitude profile. Intelligibility was generally greater for damped than ramped maskers. Masking release was reduced in older relative to younger adults for disconnected sentences, and stories with a randomized sentence order. Critically, when listening to stories with an engaging and coherent narrative, older adults demonstrated equal or greater masking release compared to younger adults. Older adults thus appear to benefit from 'glimpses' as much as, or more than, younger adults when the speech they are listening to follows a coherent topical thread. Our results highlight the importance of cognitive and motivational factors for speech understanding, and suggest that previous work may have underestimated speech-listening abilities in older adults.


Subject(s)
Perceptual Masking , Speech Perception , Auditory Perception , Noise , Speech Intelligibility
9.
J Cogn Neurosci ; 34(6): 933-950, 2022 05 02.
Article in English | MEDLINE | ID: mdl-35258555

ABSTRACT

Older people with hearing problems often experience difficulties understanding speech in the presence of background sound. As a result, they may disengage in social situations, which has been associated with negative psychosocial health outcomes. Measuring listening (dis)engagement during challenging listening situations has received little attention thus far. We recruit young, normal-hearing human adults (both sexes) and investigate how speech intelligibility and engagement during naturalistic story listening are affected by the level of acoustic masking (12-talker babble) at different signal-to-noise ratios (SNRs). We observed that word-report scores were above 80% for all but the lowest SNR (-3 dB SNR) we tested, at which performance dropped to 54%. We then calculated intersubject correlation (ISC) using EEG data to identify dynamic spatial patterns of shared neural activity evoked by the stories. ISC has been used as a neural measure of participants' engagement with naturalistic materials. Our results show that ISC was stable across all but the lowest SNRs, despite reduced speech intelligibility. Comparing ISC and intelligibility demonstrated that word-report performance declined more strongly with decreasing SNR compared to ISC. Our measure of neural engagement suggests that individuals remain engaged in story listening despite missing words because of background noise. Our work provides a potentially fruitful approach to investigate listener engagement with naturalistic, spoken stories that may be used to investigate (dis)engagement in older adults with hearing impairment.
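
ISC quantifies how similarly different listeners' brains respond to the same material. Published ISC analyses, including this one, typically rely on correlated component analysis; the leave-one-out sketch below illustrates only the core idea (our simplification, not the paper's pipeline):

```python
import numpy as np

def intersubject_correlation(eeg):
    """Leave-one-out ISC: correlate each subject's signal with the mean
    of all other subjects, averaged over channels and subjects.
    eeg: array of shape (n_subjects, n_channels, n_samples)."""
    n_subjects, n_channels, _ = eeg.shape
    iscs = []
    for s in range(n_subjects):
        others = np.delete(eeg, s, axis=0).mean(axis=0)
        r = [np.corrcoef(eeg[s, ch], others[ch])[0, 1] for ch in range(n_channels)]
        iscs.append(np.mean(r))
    return np.mean(iscs)
```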


Subject(s)
Speech Perception , Acoustics , Aged , Auditory Perception , Female , Humans , Male , Noise , Speech Intelligibility , Speech Perception/physiology
10.
Neurobiol Aging ; 109: 1-10, 2022 01.
Article in English | MEDLINE | ID: mdl-34634748

ABSTRACT

Sensitivity to repetitions in sound amplitude and frequency is crucial for sound perception. As with other aspects of sound processing, sensitivity to such patterns may change with age, and may help explain some age-related changes in hearing such as segregating speech from background sound. We recorded magnetoencephalography to characterize differences in the processing of sound patterns between younger and older adults. We presented tone sequences that either contained a pattern (made of a repeated set of tones) or did not contain a pattern. We show that auditory cortex in older, compared to younger, adults is hyperresponsive to sound onsets, but that sustained neural activity in auditory cortex, indexing the processing of a sound pattern, is reduced. Hence, the sensitivity of neural populations in auditory cortex fundamentally differs between younger and older individuals, overresponding to sound onsets, while underresponding to patterns in sounds. This may help to explain some age-related changes in hearing such as increased sensitivity to distracting sounds and difficulties tracking speech in the presence of other sound.
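
The pattern manipulation can be illustrated with a toy sequence generator: in the pattern condition, a randomly drawn set of tone frequencies repeats in a fixed order, whereas in the no-pattern condition fresh random tones are drawn throughout (a simplified sketch; the study's tone pool, set size, and timing are described in the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
pool = np.logspace(np.log10(200), np.log10(2000), 26)  # candidate tone frequencies (Hz)

def make_sequence(n_cycles=5, set_size=10, pattern=True):
    """Return a sequence of tone frequencies, with or without a repeating pattern."""
    if pattern:
        tones = rng.choice(pool, set_size, replace=False)
        return np.tile(tones, n_cycles)           # same set, same order, repeated
    return rng.choice(pool, set_size * n_cycles)  # random draws throughout
```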


Subject(s)
Aging/pathology , Aging/physiology , Auditory Cortex/pathology , Auditory Cortex/physiology , Auditory Perception/physiology , Neurons/pathology , Sound , Acoustic Stimulation , Adult , Aged , Female , Hearing , Humans , Magnetoencephalography , Male , Middle Aged , Speech , Young Adult
11.
Cognition ; 218: 104949, 2022 01.
Article in English | MEDLINE | ID: mdl-34768123

ABSTRACT

Most listeners have an implicit understanding of the rules that govern how music unfolds over time. This knowledge is acquired in part through statistical learning, a robust learning mechanism that allows individuals to extract regularities from the environment. However, it is presently unclear how this prior musical knowledge might facilitate or interfere with the learning of novel tone sequences that do not conform to familiar musical rules. In the present experiment, participants listened to novel, statistically structured tone sequences composed of pitch intervals not typically found in Western music. Between participants, the tone sequences had the timbre of either artificial, computerized instruments or familiar instruments (piano or violin). Knowledge of the statistical regularities was measured with a two-alternative forced-choice recognition task requiring discrimination between novel sequences that followed versus violated the statistical structure, assessed at three time points (immediately post-training, as well as one day and one week post-training). Compared to artificial instruments, training on familiar instruments resulted in reduced accuracy. Moreover, sequences from familiar instruments, but not artificial instruments, were more likely to be judged as grammatical when they contained intervals that approximated those commonly used in Western music, even though this cue was non-informative. Overall, these results demonstrate that instrument familiarity can interfere with the learning of novel statistical regularities, presumably through biasing memory representations to be aligned with Western musical structures. These results demonstrate that real-world experience influences statistical learning in a non-linguistic domain, supporting the view that statistical learning involves the continuous updating of existing representations, rather than the establishment of entirely novel ones.


Subject(s)
Music , Acoustic Stimulation , Auditory Perception , Humans , Knowledge , Learning , Pitch Perception , Recognition, Psychology
12.
Sci Rep ; 11(1): 22581, 2021 11 19.
Article in English | MEDLINE | ID: mdl-34799632

ABSTRACT

Optimal perception requires adaptation to sounds in the environment. Adaptation involves representing the acoustic stimulation history in neural response patterns, for example, by altering response magnitude or latency as sound-level context changes. Neurons in the auditory brainstem of rodents are sensitive to acoustic stimulation history and sound-level context (often referred to as sensitivity to stimulus statistics), but the degree to which the human brainstem exhibits such neural adaptation is unclear. In six electroencephalography experiments with over 125 participants, we demonstrate that the response latency of the human brainstem is sensitive to the history of acoustic stimulation over a few tens of milliseconds. We further show that human brainstem responses adapt to sound-level context over at least the last 44 ms, but that neural sensitivity to sound-level context decreases when the time window over which acoustic stimuli need to be integrated becomes wider. Our study thus provides evidence of adaptation to sound-level context in the human brainstem and of the timescale over which sound-level information affects neural responses to sound. The research delivers an important link to studies on neural adaptation in non-human animals.


Subject(s)
Auditory Cortex/physiology , Brain Stem/physiology , Electroencephalography/methods , Neurons/metabolism , Acoustic Stimulation , Acoustics , Adolescent , Adult , Auditory Perception/physiology , Evoked Potentials, Auditory/physiology , Female , Humans , Male , Models, Neurological , Perception , Sound , Young Adult
13.
Neuroimage ; 238: 118238, 2021 09.
Article in English | MEDLINE | ID: mdl-34098064

ABSTRACT

Repeating structures forming regular patterns are common in sounds. Learning such patterns may enable accurate perceptual organization. In five experiments, we investigated the behavioral and neural signatures of rapid perceptual learning of regular sound patterns. We show that recurring (compared to novel) patterns are detected more quickly and increase sensitivity to pattern deviations and to the temporal order of pattern onset relative to a visual stimulus. Sustained neural activity reflected perceptual learning in two ways. Firstly, sustained activity increased earlier for recurring than novel patterns when participants attended to sounds, but not when they ignored them; this earlier increase mirrored the rapid perceptual learning we observed behaviorally. Secondly, the magnitude of sustained activity was generally lower for recurring than novel patterns, but only for trials later in the experiment, and independent of whether participants attended to or ignored sounds. The late manifestation of sustained activity reduction suggests that it is not directly related to rapid perceptual learning, but to a mechanism that does not require attention to sound. In sum, we demonstrate that the latency of sustained activity reflects rapid perceptual learning of auditory patterns, while the magnitude may reflect a result of learning, such as better prediction of learned auditory patterns.


Subject(s)
Auditory Cortex/physiology , Auditory Perception/physiology , Frontal Lobe/physiology , Pattern Recognition, Physiological/physiology , Acoustic Stimulation , Adult , Brain Mapping , Cues , Electroencephalography , Female , Humans , Male , Photic Stimulation , Reaction Time/physiology , Young Adult
14.
Psychol Sci ; 32(6): 903-915, 2021 06.
Article in English | MEDLINE | ID: mdl-33979256

ABSTRACT

When people listen to speech in noisy places, they can understand more words spoken by someone familiar, such as a friend or partner, than someone unfamiliar. Yet we know little about how voice familiarity develops over time. We exposed participants (N = 50) to three voices for different lengths of time (speaking 88, 166, or 478 sentences during familiarization and training). These previously heard voices were recognizable and more intelligible when presented with a competing talker than novel voices, even the voice previously heard for the shortest duration. However, recognition and intelligibility improved at different rates with longer exposures. Whereas recognition was similar for all previously heard voices, intelligibility was best for the voice that had been heard most extensively. The speech-intelligibility benefit for the most extensively heard voice (10%-15%) is as large as that reported for voices that are naturally very familiar (friends and spouses), demonstrating that the intelligibility of a voice can be improved substantially after only an hour of training.


Subject(s)
Speech Perception , Voice , Humans , Speech Intelligibility , Voice Recognition , Voice Training
15.
Neuroimage ; 237: 118107, 2021 08 15.
Article in English | MEDLINE | ID: mdl-33933598

ABSTRACT

When speech is masked by competing sound, people are better at understanding what is said if the talker is familiar compared to unfamiliar. The benefit is robust, but how does processing of familiar voices facilitate intelligibility? We combined high-resolution fMRI with representational similarity analysis to quantify the difference in distributed activity between clear and masked speech. We demonstrate that brain representations of spoken sentences are less affected by a competing sentence when they are spoken by a friend or partner than by someone unfamiliar, effectively showing a cortical signal-to-noise ratio (SNR) enhancement for familiar voices. This effect correlated with the familiar-voice intelligibility benefit. We functionally parcellated auditory cortex, and found that the most prominent familiar-voice advantage was manifest along the posterior superior and middle temporal gyri. Overall, our results demonstrate that experience-driven improvements in intelligibility are associated with enhanced multivariate pattern activity in posterior temporal cortex.
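
Representational similarity analysis compares conditions via the geometry of their activity patterns. A minimal sketch of the two core steps, building a representational dissimilarity matrix (RDM) and correlating two RDMs, follows (an illustration of the general method, not the paper's analysis code):

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform
from scipy.stats import spearmanr

def rdm(patterns):
    """Correlation-distance RDM from an (n_conditions, n_voxels) array."""
    return squareform(pdist(patterns, metric="correlation"))

def compare_rdms(rdm_a, rdm_b):
    """Spearman correlation between the upper triangles of two RDMs."""
    iu = np.triu_indices_from(rdm_a, k=1)
    return spearmanr(rdm_a[iu], rdm_b[iu]).correlation
```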


Subject(s)
Functional Neuroimaging , Recognition, Psychology/physiology , Social Perception , Speech Intelligibility/physiology , Speech Perception/physiology , Temporal Lobe/physiology , Adult , Aged , Female , Humans , Magnetic Resonance Imaging , Male , Middle Aged , Voice , Young Adult
16.
J Neurosci ; 41(23): 5045-5055, 2021 06 09.
Article in English | MEDLINE | ID: mdl-33903222

ABSTRACT

Many older listeners have difficulty understanding speech in noise, when cues to speech-sound identity are less redundant. The amplitude envelope of speech fluctuates dramatically over time, and features such as the rate of amplitude change at onsets (attack) and offsets (decay) signal critical information about the identity of speech sounds. Aging is also thought to be accompanied by increases in cortical excitability, which may differentially alter sensitivity to envelope dynamics. Here, we recorded electroencephalography in younger and older human adults (of both sexes) to investigate how aging affects neural synchronization to 4 Hz amplitude-modulated noises with different envelope shapes (ramped: slow attack and sharp decay; damped: sharp attack and slow decay). We observed that subcortical responses did not differ between age groups, whereas older compared with younger adults exhibited larger cortical responses to sound onsets, consistent with an increase in auditory cortical excitability. Neural activity in older adults synchronized more strongly to rapid-onset, slow-offset (damped) envelopes, was less sinusoidal, and was more peaked. Younger adults demonstrated the opposite pattern, showing stronger synchronization to slow-onset, rapid-offset (ramped) envelopes, as well as a more sinusoidal neural response shape. The current results suggest that age-related changes in the excitability of auditory cortex alter responses to envelope dynamics. This may be part of the reason why older adults experience difficulty understanding speech in noise.

SIGNIFICANCE STATEMENT

Many middle-aged and older adults report difficulty understanding speech when there is background noise, which can trigger social withdrawal and negative psychosocial health outcomes. The difficulty may be related to age-related changes in how the brain processes temporal sound features. We tested younger and older people on their sensitivity to different envelope shapes, using EEG. Our results demonstrate that aging is associated with heightened sensitivity to sounds with a sharp attack and gradual decay, and sharper neural responses that deviate from the sinusoidal features of the stimulus, perhaps reflecting increased excitability in the aged auditory cortex. Altered responses to temporal sound features may be part of the reason why older adults often experience difficulty understanding speech in social situations.
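
Ramped and damped stimuli differ only in the direction of the envelope's rise and fall within each modulation cycle. A toy construction with linear envelope ramps at the study's 4-Hz rate (durations and ramp shapes here are our own simplifications):

```python
import numpy as np

fs, rate, dur = 44100, 4, 2                    # Hz, Hz, seconds (illustrative)
carrier = np.random.default_rng(1).standard_normal(int(fs * dur))

cycle = np.linspace(0, 1, int(fs / rate))      # one modulation period
ramped_env = np.tile(cycle, rate * dur)        # slow attack, sharp decay
damped_env = np.tile(cycle[::-1], rate * dur)  # sharp attack, slow decay

ramped_noise = carrier[:ramped_env.size] * ramped_env
damped_noise = carrier[:damped_env.size] * damped_env
```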


Subject(s)
Aging/physiology , Auditory Cortex/physiology , Auditory Perception/physiology , Perceptual Masking/physiology , Adolescent , Adult , Aged , Aged, 80 and over , Female , Humans , Male , Middle Aged , Noise , Young Adult
17.
Cereb Cortex ; 31(6): 2952-2967, 2021 05 10.
Article in English | MEDLINE | ID: mdl-33511976

ABSTRACT

It is well established that movement planning recruits motor-related cortical brain areas in preparation for the forthcoming action. Given that an integral component to the control of action is the processing of sensory information throughout movement, we predicted that movement planning might also modulate early sensory cortical areas, readying them for sensory processing during the unfolding action. To test this hypothesis, we performed 2 human functional magnetic resonance imaging studies involving separate delayed movement tasks and focused on premovement neural activity in early auditory cortex, given the area's direct connections to the motor system and evidence that it is modulated by motor cortex during movement in rodents. We show that effector-specific information (i.e., movements of the left vs. right hand in Experiment 1 and movements of the hand vs. eye in Experiment 2) can be decoded, well before movement, from neural activity in early auditory cortex. We find that this motor-related information is encoded in a separate subregion of auditory cortex than sensory-related information and is present even when movements are cued visually instead of auditorily. These findings suggest that action planning, in addition to preparing the motor system for movement, involves selectively modulating primary sensory areas based on the intended action.


Subject(s)
Acoustic Stimulation/methods , Anticipation, Psychological/physiology , Auditory Cortex/diagnostic imaging , Auditory Cortex/physiology , Movement/physiology , Psychomotor Performance/physiology , Adult , Female , Humans , Magnetic Resonance Imaging/methods , Male , Young Adult
18.
Trends Hear ; 24: 2331216520967850, 2020.
Article in English | MEDLINE | ID: mdl-33143565

ABSTRACT

Comprehension of speech masked by background sound requires increased cognitive processing, which makes listening effortful. Research in hearing has focused on such challenging listening experiences, in part because they are thought to contribute to social withdrawal in people with hearing impairment. Research has focused less on positive listening experiences, such as enjoyment, despite their potential importance in motivating effortful listening. Moreover, the artificial speech materials (such as disconnected, brief sentences) commonly used to investigate speech intelligibility and listening effort may be ill-suited to capture positive experiences when listening is challenging. Here, we investigate how listening to naturalistic spoken stories under acoustic challenges influences the quality of listening experiences. We assess absorption (the feeling of being immersed/engaged in a story), enjoyment, and listening effort and show that (a) story absorption and enjoyment are only minimally affected by moderate speech masking although listening effort increases, (b) thematic knowledge increases absorption and enjoyment and reduces listening effort when listening to a story presented in multitalker babble, and (c) absorption and enjoyment increase and effort decreases over time as individuals listen to several stories successively in multitalker babble. Our research indicates that naturalistic, spoken stories can reveal several concurrent listening experiences and that expertise in a topic can increase engagement and reduce effort. Our work also demonstrates that, although listening effort may increase with speech masking, listeners may still find the experience both absorbing and enjoyable.


Subject(s)
Pleasure , Speech Perception , Auditory Perception , Hearing , Humans , Speech Intelligibility
19.
Hear Res ; 398: 108080, 2020 12.
Article in English | MEDLINE | ID: mdl-33038827

ABSTRACT

Hearing loss is associated with changes at the peripheral, subcortical, and cortical auditory stages. Research often focuses on these stages in isolation, but peripheral damage has cascading effects on central processing, and different stages are interconnected through extensive feedforward and feedback projections. Accordingly, assessment of the entire auditory system is needed to understand auditory pathology. Using a novel stimulus paired with electroencephalography in young, normal-hearing adults, we assess neural function at multiple stages of the auditory pathway simultaneously. We employ click trains that repeatedly accelerate then decelerate (3.5 Hz click-rate-modulation) introducing varying inter-click-intervals (4 to 40 ms). We measured the amplitude of cortical potentials, and the latencies and amplitudes of Waves III and V of the auditory brainstem response (ABR), to clicks as a function of preceding inter-click-interval. This allowed us to assess cortical processing of click-rate-modulation, as well as adaptation and neural recovery time in subcortical structures (probably cochlear nuclei and inferior colliculi). Subcortical adaptation to inter-click intervals was reflected in longer latencies. Cortical responses to the 3.5 Hz modulation included phase-locking, probably originating from auditory cortex, and sustained activity likely originating from higher-level cortices. We did not observe any correlations between subcortical and cortical responses. By recording neural responses from different stages of the auditory system simultaneously, we can study functional relationships among levels of the auditory system, which may provide a new and helpful window on hearing and hearing impairment.
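
A click train whose rate repeatedly accelerates and decelerates can be built by letting the inter-click interval vary sinusoidally over time. A simplified sketch using the nominal parameters from the abstract (3.5 Hz rate modulation, 4 to 40 ms intervals); every other detail is our own:

```python
import numpy as np

fs, dur = 44100, 2.0              # sampling rate (Hz) and duration (s), illustrative
mod_rate = 3.5                    # click-rate modulation frequency (Hz)
ici_min, ici_max = 0.004, 0.040   # inter-click intervals: 4 to 40 ms

click_times, t = [], 0.0
while t < dur:
    click_times.append(t)
    # Interval swings sinusoidally between ici_min and ici_max at mod_rate
    ici = ici_min + (ici_max - ici_min) * 0.5 * (1 + np.sin(2 * np.pi * mod_rate * t))
    t += ici

train = np.zeros(int(fs * dur))
for ct in click_times:
    train[int(ct * fs)] = 1.0     # unit impulse per click
```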


Subject(s)
Auditory Cortex , Hearing Loss , Acoustic Stimulation , Auditory Pathways , Evoked Potentials, Auditory, Brain Stem , Hearing , Humans
20.
Trends Hear ; 24: 2331216520964068, 2020.
Article in English | MEDLINE | ID: mdl-33124518

ABSTRACT

Speech comprehension is challenged by background noise, acoustic interference, and linguistic factors, such as the presence of words with more than one meaning (homonyms and homophones). Previous work suggests that homophony in spoken language increases cognitive demand. Here, we measured pupil dilation, a physiological index of cognitive demand, while listeners heard high-ambiguity sentences, containing words with more than one meaning, or well-matched low-ambiguity sentences without ambiguous words. This semantic-ambiguity manipulation was crossed with an acoustic manipulation in two experiments. In Experiment 1, sentences were masked with 30-talker babble at 0 and +6 dB signal-to-noise ratio (SNR), and in Experiment 2, sentences were heard with or without a pink noise masker at -2 dB SNR. Speech comprehension was measured by asking listeners to judge the semantic relatedness of a visual probe word to the previous sentence. In both experiments, comprehension was lower for high- than for low-ambiguity sentences when SNRs were low. Pupils dilated more when sentences included ambiguous words, even when no noise was added (Experiment 2). Pupils also dilated more when SNRs were low. The effect of masking was larger than the effect of ambiguity for performance and pupil responses. This work demonstrates that the presence of homophones, a condition that is ubiquitous in natural language, increases cognitive demand and reduces intelligibility of speech heard with a noisy background.


Subject(s)
Semantics , Speech Perception , Acoustic Stimulation , Acoustics , Humans , Pupil , Speech Intelligibility