1.
J Neurosci ; 44(15), 2024 Apr 10.
Article in English | MEDLINE | ID: mdl-38388426

ABSTRACT

Real-world listening settings often consist of multiple concurrent sound streams. To limit perceptual interference during selective listening, the auditory system segregates and filters the relevant sensory input. Previous work provided evidence that the auditory cortex is critically involved in this process and selectively gates attended input toward subsequent processing stages. We studied at which level of auditory cortex processing this filtering of attended information occurs using functional magnetic resonance imaging (fMRI) and a naturalistic selective listening task. Forty-five human listeners (of either sex) attended to one of two continuous speech streams, presented either concurrently or in isolation. Functional data were analyzed using an inter-subject analysis to assess stimulus-specific components of ongoing auditory cortex activity. Our results suggest that stimulus-related activity in the primary auditory cortex and the adjacent planum temporale is hardly affected by attention, whereas brain responses at higher stages of the auditory cortex processing hierarchy become progressively more selective for the attended input. Consistent with these findings, a complementary analysis of stimulus-driven functional connectivity further demonstrated that information on the to-be-ignored speech stream is shared between the primary auditory cortex and the planum temporale but largely fails to reach higher processing stages. Our findings suggest that the neural processing of ignored speech cannot be effectively suppressed at the level of early cortical processing of acoustic features but is gradually attenuated once the competing speech streams are fully segregated.


Subject(s)
Auditory Cortex , Speech Perception , Humans , Auditory Cortex/diagnostic imaging , Auditory Cortex/physiology , Speech Perception/physiology , Temporal Lobe , Magnetic Resonance Imaging , Attention/physiology , Auditory Perception/physiology , Acoustic Stimulation
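
Note: the inter-subject analysis is only named in the abstract above, not specified. As a rough illustration of the general idea, a leave-one-out inter-subject correlation on ROI-averaged time courses could be computed as in the sketch below; array names, sizes, and noise levels are hypothetical, not the study's pipeline.

```python
import numpy as np

def leave_one_out_isc(roi_timecourses):
    """Leave-one-out inter-subject correlation for one region of interest.

    roi_timecourses: (n_subjects, n_timepoints) array, each row the ROI-averaged
    BOLD time course of one listener hearing the same continuous speech.
    Returns one value per subject: correlation with the mean of all others.
    """
    n_subj = roi_timecourses.shape[0]
    isc = np.empty(n_subj)
    for s in range(n_subj):
        others = np.delete(roi_timecourses, s, axis=0).mean(axis=0)
        isc[s] = np.corrcoef(roi_timecourses[s], others)[0, 1]
    return isc

# Toy data: a shared stimulus-driven signal plus subject-specific noise.
rng = np.random.default_rng(0)
shared = rng.standard_normal(300)
data = shared + 0.8 * rng.standard_normal((45, 300))
print(leave_one_out_isc(data).mean())
```

Stimulus-specific activity then shows up as ISC values reliably above zero, and attention effects as differences in ISC between listening conditions.
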
2.
Article in English | MEDLINE | ID: mdl-37306610

ABSTRACT

Age differences in cognitive performance have been shown to be overestimated if age-related hearing loss is not taken into account. Here, we investigated the role of age-related hearing loss in age differences in functional brain organization by assessing its impact on previously reported age differences in neural differentiation. To this end, we analyzed the data of 36 younger adults, 21 older adults with clinically normal hearing, and 21 older adults with mild-to-moderate hearing loss who had taken part in a functional localizer task comprising visual (i.e., faces, scenes) and auditory stimuli (i.e., voices, music) while undergoing functional magnetic resonance imaging. Evidence for reduced neural distinctiveness in the auditory cortex was observed only in older adults with hearing loss relative to younger adults, whereas evidence for reduced neural distinctiveness in the visual cortex was observed both in older adults with normal hearing and in older adults with hearing loss relative to younger adults. These results indicate that age-related dedifferentiation in the auditory cortex is exacerbated by age-related hearing loss.
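
Note: neural distinctiveness is commonly operationalized as within-category minus between-category similarity of multivoxel activity patterns; whether this study used exactly that metric is not stated here, so the sketch below is only a generic illustration with made-up `voices`/`music` arrays.

```python
import numpy as np

def distinctiveness(voices, music):
    """Within- minus between-category mean pattern correlation in one ROI.

    voices, music: (n_trials, n_voxels) multivoxel activity patterns for the
    two auditory categories. Larger values = more distinct representations.
    """
    n_v, n_m = len(voices), len(music)
    corr = np.corrcoef(np.vstack([voices, music]))   # trial-by-trial correlations
    within_v = corr[:n_v, :n_v][~np.eye(n_v, dtype=bool)].mean()
    within_m = corr[n_v:, n_v:][~np.eye(n_m, dtype=bool)].mean()
    between = corr[:n_v, n_v:].mean()
    return 0.5 * (within_v + within_m) - between

rng = np.random.default_rng(1)
print(distinctiveness(rng.standard_normal((20, 100)),
                      rng.standard_normal((20, 100))))
```
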

4.
Neuroimage ; 233: 117915, 2021 06.
Article in English | MEDLINE | ID: mdl-33652144

ABSTRACT

A body of literature has demonstrated that the right auditory cortex (AC) plays a dominant role in fine pitch processing. However, it remains relatively unclear whether this asymmetry extends to perceptual learning of pitch. There is also a lack of causal evidence regarding the role of the right AC in pitch learning. We addressed these points with anodal transcranial direct current stimulation (tDCS), adapting a previous behavioral study in which anodal tDCS over the right AC was shown to block improvement of a microtonal pitch pattern learning task over 3 days. To address the physiological changes associated with tDCS, we recorded MEG data simultaneously with tDCS on the first day, and measured behavioral thresholds on the following two consecutive days. We tested three groups of participants who received anodal tDCS over their right or left AC, or sham tDCS, and measured the N1m auditory evoked response before, during, and after tDCS. Our data show that anodal tDCS of the right AC disrupted pitch discrimination learning up to two days after its application, whereas learning was unaffected by left-AC or sham tDCS. Although tDCS reduced the N1m amplitude ipsilateral to the stimulated hemisphere for both left and right stimulation, only right AC N1m amplitude reductions were associated with the degree to which pitch learning was disrupted. This brain-behavior relationship confirms a causal link between right AC physiological responses and fine pitch processing, and provides neurophysiological insight concerning the mechanisms of action of tDCS on the auditory system.


Subject(s)
Auditory Cortex/physiology , Evoked Potentials, Auditory/physiology , Learning/physiology , Magnetoencephalography/methods , Pitch Discrimination/physiology , Transcranial Direct Current Stimulation/methods , Adolescent , Adult , Female , Humans , Male , Random Allocation , Young Adult
5.
J Neurosci ; 41(12): 2713-2722, 2021 03 24.
Article in English | MEDLINE | ID: mdl-33536196

ABSTRACT

Musical training is associated with increased structural and functional connectivity between auditory sensory areas and higher-order brain networks involved in speech and motor processing. Whether such changed connectivity patterns facilitate the cortical propagation of speech information in musicians remains poorly understood. We here used magnetoencephalography (MEG) source imaging and a novel seed-based intersubject phase-locking approach to investigate the effects of musical training on the interregional synchronization of stimulus-driven neural responses during listening to naturalistic continuous speech presented in silence. MEG data were obtained from 20 young human subjects (both sexes) with different degrees of musical training. Our data show robust bilateral patterns of stimulus-driven interregional phase synchronization between auditory cortex and frontotemporal brain regions previously associated with speech processing. Stimulus-driven phase locking was maximal in the delta band, but was also observed in the theta and alpha bands. The individual duration of musical training was positively associated with the magnitude of stimulus-driven alpha-band phase locking between auditory cortex and parts of the dorsal and ventral auditory processing streams. These findings provide evidence for a positive relationship between musical training and the propagation of speech-related information between auditory sensory areas and higher-order processing networks, even when speech is presented in silence. We suggest that the increased synchronization of higher-order cortical regions to auditory cortex may contribute to the previously described musician advantage in processing speech in background noise. SIGNIFICANCE STATEMENT: Musical training has been associated with widespread structural and functional brain plasticity. It has been suggested that these changes benefit the production and perception of music but can also translate to other domains of auditory processing, such as speech. We developed a new magnetoencephalography intersubject analysis approach to study the cortical synchronization of stimulus-driven neural responses during the perception of continuous natural speech and its relationship to individual musical training. Our results provide evidence that musical training is associated with higher synchronization of stimulus-driven activity between brain regions involved in early auditory sensory and higher-order processing. We suggest that the increased synchronized propagation of speech information may contribute to the previously described musician advantage in processing speech in background noise.


Subject(s)
Acoustic Stimulation/methods , Auditory Cortex/physiology , Magnetoencephalography/methods , Music , Speech Perception/physiology , Adult , Auditory Cortex/diagnostic imaging , Female , Humans , Magnetic Resonance Imaging/methods , Male , Psychomotor Performance/physiology , Young Adult
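
Note: the seed-based intersubject phase-locking measure is described only in outline above. A generic delta-band phase-locking value between one subject's auditory-cortex seed and another subject's target-region time course could be computed as sketched below; filter settings, sampling rate, and variable names are assumptions for illustration, not the paper's pipeline.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def bandpass_phase(x, fs, lo, hi):
    """Instantaneous phase of x after zero-phase band-pass filtering."""
    b, a = butter(4, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    return np.angle(hilbert(filtfilt(b, a, x)))

def intersubject_plv(seed_subj_a, target_subj_b, fs, lo=1.0, hi=4.0):
    """Phase-locking value between the auditory-cortex seed of one subject and a
    target-region source time course of another subject (delta band by default).
    Both inputs: 1-D time courses recorded during the same continuous speech."""
    dphi = bandpass_phase(seed_subj_a, fs, lo, hi) - bandpass_phase(target_subj_b, fs, lo, hi)
    return np.abs(np.mean(np.exp(1j * dphi)))

# Toy example: two noisy copies of the same slow stimulus-driven signal.
fs = 100
t = np.arange(0, 60, 1 / fs)
stim_driven = np.sin(2 * np.pi * 2.0 * t)
rng = np.random.default_rng(2)
print(intersubject_plv(stim_driven + rng.standard_normal(t.size),
                       stim_driven + rng.standard_normal(t.size), fs))
```

Because the two signals come from different subjects, phase locking that survives this comparison can only reflect the shared stimulus, not intrinsic within-brain coupling.
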
6.
Neurobiol Lang (Camb) ; 1(3): 268-287, 2020.
Article in English | MEDLINE | ID: mdl-37215227

ABSTRACT

Hearing-in-noise perception is a challenging task that is critical to human function, but how the brain accomplishes it is not well understood. A candidate mechanism proposes that the neural representation of an attended auditory stream is enhanced relative to background sound via a combination of bottom-up and top-down mechanisms. To date, few studies have compared neural representation and its task-related enhancement across frequency bands that carry different auditory information, such as a sound's amplitude envelope (i.e., syllabic rate or rhythm; 1-9 Hz), and the fundamental frequency of periodic stimuli (i.e., pitch; >40 Hz). Furthermore, hearing-in-noise in the real world is frequently both messier and richer than the majority of tasks used in its study. In the present study, we use continuous sound excerpts that simultaneously offer predictive, visual, and spatial cues to help listeners separate the target from four acoustically similar simultaneously presented sound streams. We show that while both lower and higher frequency information about the entire sound stream is represented in the brain's response, the to-be-attended sound stream is strongly enhanced only in the slower, lower frequency sound representations. These results are consistent with the hypothesis that attended sound representations are strengthened progressively at higher level, later processing stages, and that the interaction of multiple brain systems can aid in this process. Our findings contribute to our understanding of auditory stream separation in difficult, naturalistic listening conditions and demonstrate that pitch and envelope information can be decoded from single-channel EEG data.
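
Note: as a toy illustration of the frequency-band contrast drawn above (slow envelope tracking versus faster, F0-related tracking), one could score how well band-limited single-channel EEG follows a given stimulus feature at plausible neural lags. Everything below (function names, bands, lags, toy signals) is an assumption for illustration only, not the study's analysis.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

def band(x, fs, lo, hi):
    """Zero-phase band-pass filter using second-order sections."""
    sos = butter(4, [lo, hi], btype="band", fs=fs, output="sos")
    return sosfiltfilt(sos, x)

def tracking_score(eeg, feature, fs, lo, hi, max_lag_ms=250):
    """Peak lagged correlation between band-limited single-channel EEG and a
    stimulus feature (amplitude envelope for 1-9 Hz, an F0 waveform for >40 Hz)."""
    e, f = band(eeg, fs, lo, hi), band(feature, fs, lo, hi)
    lags = range(int(max_lag_ms / 1000 * fs))
    return max(np.corrcoef(f[: len(f) - lag], e[lag:])[0, 1] for lag in lags)

# Toy check: EEG lagging a 4 Hz "envelope" by 100 ms should track it well.
fs = 1000
t = np.arange(0, 30, 1 / fs)
envelope = np.sin(2 * np.pi * 4 * t)
eeg = np.roll(envelope, int(0.1 * fs)) + np.random.default_rng(3).standard_normal(t.size)
print(tracking_score(eeg, envelope, fs, 1, 9))
```
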

7.
Neuroimage ; 196: 261-268, 2019 08 01.
Article in English | MEDLINE | ID: mdl-30978494

ABSTRACT

Recent studies provide evidence for changes in audiovisual perception as well as for adaptive cross-modal auditory cortex plasticity in older individuals with high-frequency hearing impairments (presbycusis). We here investigated whether these changes facilitate the use of visual information, leading to an increased audiovisual benefit of hearing-impaired individuals when listening to speech in noise. We used a naturalistic design in which older participants with a varying degree of high-frequency hearing loss attended to running auditory or audiovisual speech in noise and detected rare target words. Passages containing only visual speech served as a control condition. Simultaneously acquired scalp electroencephalography (EEG) data were used to study cortical speech tracking. Target word detection accuracy was significantly increased in the audiovisual as compared to the auditory listening condition. The degree of this audiovisual enhancement was positively related to individual high-frequency hearing loss and subjectively reported listening effort in challenging daily life situations, which served as a subjective marker of hearing problems. On the neural level, the early cortical tracking of the speech envelope was enhanced in the audiovisual condition. Similar to the behavioral findings, individual differences in the magnitude of the enhancement were positively associated with listening effort ratings. Our results therefore suggest that hearing-impaired older individuals make increased use of congruent visual information to compensate for the degraded auditory input.


Subject(s)
Cerebral Cortex/physiopathology , Noise , Presbycusis/physiopathology , Speech Perception/physiology , Visual Perception/physiology , Acoustic Stimulation , Aged , Auditory Threshold , Female , Humans , Male , Middle Aged , Persons With Hearing Impairments , Photic Stimulation
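
Note: scalp-EEG speech-tracking analyses like the one above generally start from the speech amplitude envelope resampled to the EEG rate. The sketch below shows one common way to obtain such an envelope; the cut-off frequency, sampling rates, and names are illustrative assumptions, not the paper's settings.

```python
import numpy as np
from scipy.signal import hilbert, butter, sosfiltfilt, resample_poly

def speech_envelope(audio, audio_fs, eeg_fs, lowpass_hz=8.0):
    """Broadband amplitude envelope of a speech waveform, low-pass filtered and
    resampled to the EEG sampling rate for envelope-tracking analyses."""
    env = np.abs(hilbert(audio))                      # instantaneous amplitude
    sos = butter(3, lowpass_hz, btype="low", fs=audio_fs, output="sos")
    env = sosfiltfilt(sos, env)                       # keep slow modulations only
    return resample_poly(env, int(eeg_fs), int(audio_fs))

# Toy usage: 5 s of noise standing in for speech at 44.1 kHz, envelope at 125 Hz.
audio = np.random.default_rng(4).standard_normal(5 * 44100)
print(speech_envelope(audio, 44100, 125).shape)
```

The resulting envelope can then be related to the EEG, for example with lagged correlations or a decoding model as in the next entries.
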
8.
Cereb Cortex ; 29(8): 3253-3265, 2019 07 22.
Article in English | MEDLINE | ID: mdl-30137239

ABSTRACT

Musical training has been demonstrated to benefit speech-in-noise perception. It is however unknown whether this effect translates to selective listening in cocktail party situations, and if so what its neural basis might be. We investigated this question using magnetoencephalography-based speech envelope reconstruction and a sustained selective listening task, in which participants with varying amounts of musical training attended to 1 of 2 speech streams while detecting rare target words. Cortical frequency-following responses (FFR) and auditory working memory were additionally measured to dissociate musical training-related effects on low-level auditory processing versus higher cognitive function. Results show that the duration of musical training is associated with a reduced distracting effect of competing speech on target detection accuracy. Remarkably, more musical training was related to a robust neural tracking of both the to-be-attended and the to-be-ignored speech stream, up until late cortical processing stages. Musical training-related increases in FFR power were associated with a robust speech tracking in auditory sensory areas, whereas training-related differences in auditory working memory were linked to an increased representation of the to-be-ignored stream beyond auditory cortex. Our findings suggest that musically trained persons can use additional information about the distracting stream to limit interference by competing speech.


Subject(s)
Auditory Cortex/physiology , Cognition/physiology , Memory, Short-Term/physiology , Music , Speech Perception/physiology , Acoustic Stimulation , Adult , Cerebral Cortex/physiology , Female , Humans , Magnetoencephalography , Male , Noise , Young Adult
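
Note: speech-envelope reconstruction of the kind referred to above is typically a regularized backward (decoding) model from time-lagged sensor data to the envelope, with accuracy defined as the correlation between reconstructed and actual envelope on held-out data. The sketch below is a generic ridge-regression version with toy data, not the paper's implementation; shapes, lags, and the regularization value are assumptions.

```python
import numpy as np
from sklearn.linear_model import Ridge

def lagged(sensors, max_lag):
    """Stack time-lagged copies of every sensor (0..max_lag samples) as features."""
    n_t, n_ch = sensors.shape
    X = np.zeros((n_t, n_ch * (max_lag + 1)))
    for lag in range(max_lag + 1):
        X[lag:, lag * n_ch:(lag + 1) * n_ch] = sensors[: n_t - lag]
    return X

def reconstruction_accuracy(meg_train, env_train, meg_test, env_test,
                            max_lag=50, alpha=1e3):
    """Fit a backward model on one segment, then correlate the reconstructed
    with the actual speech envelope on a held-out segment."""
    decoder = Ridge(alpha=alpha).fit(lagged(meg_train, max_lag), env_train)
    recon = decoder.predict(lagged(meg_test, max_lag))
    return np.corrcoef(recon, env_test)[0, 1]

# Toy data: an "envelope" projected onto 20 sensors plus noise.
rng = np.random.default_rng(5)
env = rng.standard_normal(4000)
meg = np.outer(env, rng.standard_normal(20)) + rng.standard_normal((4000, 20))
print(reconstruction_accuracy(meg[:3000], env[:3000], meg[3000:], env[3000:]))
```

Training separate decoders on the attended and the ignored stream, and comparing their held-out accuracies, is the usual way to quantify how strongly each stream is tracked.
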
9.
J Neurosci ; 37(47): 11505-11516, 2017 11 22.
Article in English | MEDLINE | ID: mdl-29061698

ABSTRACT

Listening selectively to one out of several competing speakers in a "cocktail party" situation is a highly demanding task. It relies on a widespread cortical network, including auditory sensory, but also frontal and parietal brain regions involved in controlling auditory attention. Previous work has shown that, during selective listening, ongoing neural activity in auditory sensory areas is dominated by the attended speech stream, whereas competing input is suppressed. The relationship between these attentional modulations in the sensory tracking of the attended speech stream and frontoparietal activity during selective listening is, however, not understood. We studied this question in young, healthy human participants (both sexes) using concurrent EEG-fMRI and a sustained selective listening task, in which one out of two competing speech streams had to be attended selectively. An EEG-based speech envelope reconstruction method was applied to assess the strength of the cortical tracking of the to-be-attended and the to-be-ignored stream during selective listening. Our results show that individual speech envelope reconstruction accuracies obtained for the to-be-attended speech stream were positively correlated with the amplitude of sustained BOLD responses in the right temporoparietal junction, a core region of the ventral attention network. This brain region further showed task-related functional connectivity to secondary auditory cortex and regions of the frontoparietal attention network, including the intraparietal sulcus and the inferior frontal gyrus. This suggests that the right temporoparietal junction is involved in controlling attention during selective listening, allowing for a better cortical tracking of the attended speech stream. SIGNIFICANCE STATEMENT: Listening selectively to one out of several simultaneously talking speakers in a "cocktail party" situation is a highly demanding task. It activates a widespread network of auditory sensory and hierarchically higher frontoparietal brain regions. However, how these different processing levels interact during selective listening is not understood. Here, we investigated this question using fMRI and concurrently acquired scalp EEG. We found that activation levels in the right temporoparietal junction correlate with the sensory representation of a selectively attended speech stream. In addition, this region showed significant functional connectivity to both auditory sensory and other frontoparietal brain areas during selective listening. This suggests that the right temporoparietal junction contributes to controlling selective auditory attention in "cocktail party" situations.


Subject(s)
Discrimination, Psychological , Parietal Lobe/physiology , Speech Perception , Temporal Lobe/physiology , Adult , Attention , Cognition , Electroencephalography , Female , Humans , Magnetic Resonance Imaging , Male , Parietal Lobe/diagnostic imaging , Temporal Lobe/diagnostic imaging
10.
Sci Rep ; 7(1): 10043, 2017 08 30.
Article in English | MEDLINE | ID: mdl-28855675

ABSTRACT

Previous studies have reported increased cross-modal auditory and visual cortical activation in cochlear implant (CI) users, suggesting cross-modal reorganization of both visual and auditory cortices in CI users as a consequence of sensory deprivation and restoration. How these processes affect the functional connectivity of the auditory and visual system in CI users is however unknown. We here investigated task-induced intra-modal functional connectivity between hemispheres for both visual and auditory cortices and cross-modal functional connectivity between visual and auditory cortices using functional near infrared spectroscopy in post-lingually deaf CI users and age-matched normal hearing controls. Compared to controls, CI users exhibited decreased intra-modal functional connectivity between hemispheres and increased cross-modal functional connectivity between visual and left auditory cortices for both visual and auditory stimulus processing. Importantly, the difference between cross-modal functional connectivity for visual and for auditory stimuli correlated with speech recognition outcome in CI users. Higher cross-modal connectivity for auditory than for visual stimuli was associated with better speech recognition abilities, pointing to a new pattern of functional reorganization that is related to successful hearing restoration with a CI.


Subject(s)
Auditory Perception , Cerebral Cortex/physiology , Cochlear Implants/adverse effects , Connectome , Visual Perception , Adult , Aged , Electroencephalography , Evoked Potentials , Female , Humans , Male , Middle Aged , Spectroscopy, Near-Infrared
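
Note: functional connectivity in fNIRS studies of this kind is often simply the (Fisher-z transformed) correlation between ROI-averaged time courses, with condition differences then related to behavior across subjects. The sketch below illustrates that logic only; the numbers, group size, and effect are invented, not the study's data.

```python
import numpy as np
from scipy.stats import pearsonr

def roi_connectivity(ts_a, ts_b):
    """Fisher-z transformed correlation between two ROI-averaged fNIRS time
    courses, so per-subject values can be averaged and compared across conditions."""
    r, _ = pearsonr(ts_a, ts_b)
    return np.arctanh(r)

# Toy group analysis: per-subject difference in cross-modal connectivity
# (auditory minus visual stimulation) related to speech recognition scores.
rng = np.random.default_rng(6)
conn_diff = rng.standard_normal(15)
speech_scores = 60 + 10 * conn_diff + 5 * rng.standard_normal(15)
print(pearsonr(conn_diff, speech_scores))
```
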
11.
Front Hum Neurosci ; 11: 294, 2017.
Article in English | MEDLINE | ID: mdl-28638329

ABSTRACT

Noise-vocoded speech is commonly used to simulate the sensation after cochlear implantation as it consists of spectrally degraded speech. High individual variability exists in learning to understand both noise-vocoded speech and speech perceived through a cochlear implant (CI). This variability is partly ascribed to differing cognitive abilities like working memory, verbal skills or attention. Although clinically highly relevant, no consensus has yet been reached on which cognitive factors predict the intelligibility of speech in noise-vocoded situations in healthy subjects or in patients after cochlear implantation. We aimed to establish a test battery that can be used to predict speech understanding in patients prior to receiving a CI. Young and old healthy listeners completed a noise-vocoded speech test in addition to cognitive tests tapping verbal memory, working memory, lexicon and retrieval skills as well as cognitive flexibility and attention. Partial-least-squares analysis revealed six variables that significantly predicted vocoded-speech performance. These were the ability to perceive visually degraded speech tested by the Text Reception Threshold, vocabulary size assessed with the Multiple Choice Word Test, working memory gauged with the Operation Span Test, verbal learning and recall of the Verbal Learning and Retention Test, and task-switching abilities tested by the Comprehensive Trail-Making Test. Thus, these cognitive abilities explain individual differences in noise-vocoded speech understanding and should be considered when aiming to predict hearing-aid outcome.
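
Note: the partial-least-squares analysis named above could be approximated with an off-the-shelf PLS regression relating the cognitive predictors to vocoded-speech scores. The predictor layout and the synthetic data below are placeholders, not the study's variables, weights, or results.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_score

# Hypothetical design: one row per listener, six cognitive predictors
# (e.g. text reception threshold, vocabulary, operation span, verbal learning,
# delayed recall, switching cost); outcome = vocoded-speech accuracy.
rng = np.random.default_rng(7)
X = rng.standard_normal((40, 6))
y = X @ np.array([0.5, 0.4, 0.6, 0.3, 0.3, -0.4]) + 0.5 * rng.standard_normal(40)

pls = PLSRegression(n_components=2)
print(cross_val_score(pls, X, y, cv=5, scoring="r2").mean())
```

Cross-validated prediction is shown here because PLS can otherwise overfit when the number of predictors is large relative to the sample.
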

12.
Cortex ; 86: 109-122, 2017 01.
Article in English | MEDLINE | ID: mdl-27930898

ABSTRACT

Previous work compellingly demonstrates a crossmodal plastic reorganization of auditory cortex in deaf individuals, leading to increased neural responses to non-auditory sensory input. Recent data indicate that crossmodal adaptive plasticity is not restricted to severe hearing impairments, but may also occur as a result of high-frequency hearing loss in older adults and affect audiovisual processing in these subjects. We here used functional magnetic resonance imaging (fMRI) to study the effect of hearing loss in older adults on auditory cortex response patterns as well as on functional connectivity between auditory and visual cortex during audiovisual processing. Older participants with a varying degree of high-frequency hearing loss performed an auditory stimulus categorization task, in which they had to categorize frequency-modulated (FM) tones presented alone or in the context of matching or non-matching visual motion. A motion-only condition served as a control for a visual takeover of auditory cortex. While the individual hearing status did not affect auditory cortex responses to auditory, visual, or audiovisual stimuli, we observed a significant hearing loss-related increase in functional connectivity between auditory cortex and the right motion-sensitive visual area MT+ when processing matching audiovisual input. Hearing loss also modulated resting state connectivity between right area MT+ and parts of the left auditory cortex, suggesting the existence of permanent, task-independent changes in coupling between visual and auditory sensory areas with an increasing degree of hearing loss. Our data thus indicate that hearing loss affects functional connectivity between sensory cortices in older adults.


Subject(s)
Auditory Cortex/physiopathology , Auditory Perception/physiology , Hearing Loss/physiopathology , Nerve Net/physiopathology , Visual Cortex/physiopathology , Acoustic Stimulation , Aged , Auditory Cortex/diagnostic imaging , Brain Mapping , Female , Hearing Loss/diagnostic imaging , Humans , Magnetic Resonance Imaging , Male , Middle Aged , Nerve Net/diagnostic imaging , Neural Pathways/diagnostic imaging , Neural Pathways/physiopathology , Neuronal Plasticity/physiology , Persons With Hearing Impairments , Photic Stimulation , Visual Cortex/diagnostic imaging , Visual Perception/physiology
13.
Front Hum Neurosci ; 10: 473, 2016.
Article in English | MEDLINE | ID: mdl-27708570

ABSTRACT

Prior research suggests that acoustical degradation impacts encoding of items into memory, especially in elderly subjects. We here aimed to investigate whether acoustically degraded items that are initially encoded into memory are more prone to forgetting as a function of age. Young and old participants were tested with a vocoded and unvocoded serial list learning task involving immediate and delayed free recall. We found that degraded auditory input increased forgetting of previously encoded items, especially in older participants. We further found that working memory capacity predicted forgetting of degraded information in young participants. In old participants, verbal IQ was the most important predictor for forgetting acoustically degraded information. Our data provide evidence that acoustically degraded information, even if encoded, is especially vulnerable to forgetting in old age.

14.
Hum Brain Mapp ; 37(10): 3400-16, 2016 10.
Article in English | MEDLINE | ID: mdl-27280466

ABSTRACT

The cortical processing of changes in auditory input involves auditory sensory regions as well as different frontoparietal brain networks. The spatiotemporal dynamics of the activation spread across these networks have, however, not been investigated in detail so far. We here approached this issue using concurrent functional magnetic resonance imaging (fMRI) and electroencephalography (EEG), providing us with simultaneous information on both the spatial and temporal patterns of change-related activity. We applied an auditory stimulus categorization task with switching categorization rules, allowing us to analyze change-related responses as a function of the changing sound feature (pitch or duration) and the task relevance of the change. Our data show the successive progression of change-related activity from regions involved in early change detection to the ventral and dorsal attention networks, and finally the central executive network. While early change detection was found to recruit feature-specific networks involving auditory sensory but also frontal and parietal brain regions, the later spread of activity across the frontoparietal attention and executive networks was largely independent of the changing sound feature, suggesting the existence of a general feature-independent processing pathway of change-related information. Task relevance did not modulate early auditory sensory processing, but was mainly found to affect processing in frontal brain regions.


Subject(s)
Attention/physiology , Auditory Perception/physiology , Brain/physiology , Executive Function/physiology , Acoustic Stimulation , Adolescent , Adult , Analysis of Variance , Brain/diagnostic imaging , Brain Mapping , Electroencephalography , Female , Humans , Judgment/physiology , Magnetic Resonance Imaging , Male , Multimodal Imaging , Neuropsychological Tests , Reaction Time , Time Perception/physiology , Young Adult
15.
Hear Res ; 316: 28-36, 2014 Oct.
Article in English | MEDLINE | ID: mdl-25080386

ABSTRACT

Recent electrophysiological studies have provided evidence that changes in multisensory processing in auditory cortex can be observed not only following extensive hearing loss but also in moderately hearing-impaired subjects. How the reduced auditory input affects audio-visual interactions is, however, largely unknown. Here we used a cross-modal distraction paradigm to investigate multisensory processing in elderly participants with an age-related high-frequency hearing loss as compared to young and elderly subjects with normal hearing. During the experiment, participants were simultaneously presented with independent streams of auditory and visual input and were asked to categorize either the auditory or visual information while ignoring the other modality. Unisensory sequences without any cross-modal input served as control conditions to ensure that all participants were able to perform the task. While all groups performed similarly in these unisensory conditions, hearing-impaired participants showed significantly increased error rates when confronted with distracting cross-modal stimulation. This effect could be observed in both the auditory and the visual task. Supporting these findings, an additional regression analysis indicated that the degree of high-frequency hearing loss significantly modulates cross-modal visual distractibility in the auditory task. These findings provide new evidence that even a moderate, sub-clinical hearing loss, a common phenomenon in the elderly population, affects the processing of audio-visual information.


Subject(s)
Aging , Presbycusis/physiopathology , Acoustic Stimulation , Adult , Aged , Auditory Cortex/pathology , Cognition Disorders , Electrophysiology , Female , Hearing , Hearing Loss, High-Frequency , Humans , Male , Middle Aged , Neurons/physiology , Regression Analysis , Reproducibility of Results , Vision, Ocular , Young Adult
16.
Front Hum Neurosci ; 7: 842, 2013.
Article in English | MEDLINE | ID: mdl-24367318

ABSTRACT

Prior studies suggest that reward modulates neural activity in sensory cortices, but less is known about punishment. We used functional magnetic resonance imaging and an auditory discrimination task, in which participants had to judge the duration of frequency-modulated tones. In one session, correct performance resulted in financial gains at the end of the trial; in a second session, incorrect performance resulted in financial loss. Incorrect performance in the reward condition as well as correct performance in the punishment condition resulted in a neutral outcome. The size of gains and losses was either low or high (10 or 50 Euro cents) depending on the direction of frequency modulation. We analyzed neural activity at the end of the trial, during reinforcement, and found increased neural activity in auditory cortex when gaining a financial reward as compared to gaining no reward, and when avoiding a financial loss as compared to receiving a financial loss. This was independent of the size of gains and losses. A similar pattern of neural activity for both gaining a reward and avoiding a loss was also seen in the right middle temporal gyrus, bilateral insula, and pre-supplementary motor area; here, however, neural activity was lower after correct than after incorrect responses. To summarize, this study shows that the activation of sensory cortices previously demonstrated for gaining a reward is also seen when avoiding a loss.

17.
Neuroreport ; 24(15): 841-5, 2013 Oct 23.
Article in English | MEDLINE | ID: mdl-23995293

ABSTRACT

Previous work compellingly shows the existence of functional and structural differences in human auditory cortex related to superior musical abilities observed in professional musicians. In this study, we investigated the relationship between musical abilities and auditory cortex activity in normal listeners who had not received a professional musical education. We used functional MRI to measure auditory cortex responses related to auditory stimulation per se and the processing of pitch and pitch changes, which represents a prerequisite for the perception of musical sequences. Pitch-evoked responses in the right lateral portion of Heschl's gyrus were correlated positively with the listeners' musical abilities, which were assessed using a musical aptitude test. In contrast, no significant relationship was found for noise stimuli, lacking any musical information, and for responses induced by pitch changes. Our results suggest that superior musical abilities in normal listeners are reflected by enhanced neural encoding of pitch information in the auditory system.


Subject(s)
Auditory Cortex/physiology , Evoked Potentials, Auditory , Music , Pitch Perception/physiology , Acoustic Stimulation , Adult , Brain Mapping , Female , Humans , Magnetic Resonance Imaging , Male , Young Adult
18.
J Neurophysiol ; 110(8): 1860-8, 2013 Oct.
Article in English | MEDLINE | ID: mdl-23904492

ABSTRACT

Associative learning studies have shown that the anticipation of reward and punishment shapes the representation of sensory stimuli, which is further modulated by dopamine. Less is known about whether and how reward delivery activates sensory cortices and the role of dopamine at that time point of learning. We used an appetitive instrumental learning task in which participants had to learn that a specific class of frequency-modulated tones predicted a monetary reward following fast and correct responses in a succeeding reaction time task. These fMRI data were previously analyzed regarding the effect of reward anticipation, but here we focused on neural activity to the reward outcome relative to the reward expectation and tested whether such activation in the reward reception phase is modulated by L-DOPA. We analyzed neural responses at the time point of reward outcome under three different conditions: 1) when a reward was expected and received, 2) when a reward was expected but not received, and 3) when a reward was not expected and not received. Neural activity in auditory cortex was enhanced during feedback delivery either when an expected reward was received or when the expectation of obtaining no reward was correct. This differential neural activity in auditory cortex was only seen in subjects who learned the reward association and not under dopaminergic modulation. Our data provide evidence that auditory cortices are active at the time point of reward outcome. However, responses are not dependent on the reward itself but on whether the outcome confirmed the subject's expectations.


Subject(s)
Anticipation, Psychological , Auditory Cortex/physiology , Feedback, Sensory , Reward , Adolescent , Adult , Auditory Cortex/drug effects , Female , Humans , Learning , Levodopa/pharmacology , Male
19.
Neuroimage ; 75: 155-164, 2013 Jul 15.
Article in English | MEDLINE | ID: mdl-23466938

ABSTRACT

Change deafness describes the failure to perceive even intense changes within complex auditory input, if the listener does not attend to the changing sound. Remarkably, previous psychophysical data provide evidence that this effect occurs independently of successful stimulus encoding, indicating that undetected changes are processed to some extent in auditory cortex. Here we investigated cortical representations of detected and undetected auditory changes using electroencephalographic (EEG) recordings and a change deafness paradigm. We applied a one-shot change detection task, in which participants listened successively to three complex auditory scenes, each of them consisting of six simultaneously presented auditory streams. Listeners had to decide whether all scenes were identical or whether the pitch of one stream was changed between the last two presentations. Our data show significantly increased middle-latency Nb responses for both detected and undetected changes as compared to no-change trials. In contrast, only successfully detected changes were associated with a later mismatch response in auditory cortex, followed by increased N2, P3a and P3b responses, originating from hierarchically higher non-sensory brain regions. These results strengthen the view that undetected changes are successfully encoded at sensory level in auditory cortex, but fail to trigger later change-related cortical responses that lead to conscious perception of change.


Subject(s)
Auditory Cortex/physiology , Auditory Perception/physiology , Acoustic Stimulation , Adult , Electroencephalography , Evoked Potentials, Auditory/physiology , Female , Humans , Male , Signal Processing, Computer-Assisted , Young Adult
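
Note: component amplitudes in ERP analyses of this kind are often summarized as the mean voltage in a latency window around the component before comparing trial types. The window, sampling rate, and arrays in the sketch below are illustrative assumptions, not the study's parameters or results.

```python
import numpy as np

def mean_amplitude(epochs, times, tmin, tmax):
    """Mean single-trial amplitude in a latency window.

    epochs: (n_trials, n_times) single-electrode epochs time-locked to the change;
    times: (n_times,) vector in seconds. Returns one value per trial."""
    window = (times >= tmin) & (times <= tmax)
    return epochs[:, window].mean(axis=1)

# Toy comparison of an early middle-latency window for detected vs. undetected
# change trials; a real analysis would average within subjects and test across them.
fs = 500
times = np.arange(-0.1, 0.5, 1 / fs)
rng = np.random.default_rng(8)
detected = rng.standard_normal((80, times.size))
undetected = rng.standard_normal((80, times.size))
print(mean_amplitude(detected, times, 0.04, 0.06).mean(),
      mean_amplitude(undetected, times, 0.04, 0.06).mean())
```
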
20.
Hum Brain Mapp ; 34(11): 2841-51, 2013 Nov.
Article in English | MEDLINE | ID: mdl-22610479

ABSTRACT

Animal experiments provide evidence that learning to associate an auditory stimulus with a reward causes representational changes in auditory cortex. However, most studies did not investigate the temporal formation of learning-dependent plasticity during the task but rather compared auditory cortex receptive fields before and after conditioning. We here present a functional magnetic resonance imaging study on learning-related plasticity in the human auditory cortex during operant appetitive conditioning. Participants had to learn to associate a specific category of frequency-modulated tones with a reward. Only participants who learned this association developed learning-dependent plasticity in left auditory cortex over the course of the experiment. No differential responses to reward predicting and nonreward predicting tones were found in auditory cortex in nonlearners. In addition, learners showed similar learning-induced differential responses to reward-predicting and nonreward-predicting tones in the ventral tegmental area and the nucleus accumbens, two core regions of the dopaminergic neurotransmitter system. This may indicate a dopaminergic influence on the formation of learning-dependent plasticity in auditory cortex, as it has been suggested by previous animal studies.


Subject(s)
Auditory Cortex/physiology , Conditioning, Operant/physiology , Learning/physiology , Neuronal Plasticity/physiology , Acoustic Stimulation , Adolescent , Adult , Brain Mapping , Dopamine/physiology , Echo-Planar Imaging , Female , Humans , Image Processing, Computer-Assisted , Learning Curve , Magnetic Resonance Imaging , Male , Nucleus Accumbens/physiology , Reward , Synaptic Transmission/physiology , Ventral Tegmental Area/physiology , Young Adult