Results 1 - 20 of 69
1.
J Acoust Soc Am ; 153(1): 286, 2023 01.
Article in English | MEDLINE | ID: mdl-36732241

ABSTRACT

Speech recognition in noisy environments can be challenging and requires listeners to accurately segregate a target speaker from irrelevant background noise. Stochastic figure-ground (SFG) tasks, in which temporally coherent inharmonic pure tones must be identified against a background, have been used to probe the non-linguistic auditory stream segregation processes important for speech-in-noise processing. However, little is known about the relationship between performance on SFG tasks and speech-in-noise tasks, or about the individual differences that may modulate such relationships. In this study, 37 younger normal-hearing adults performed an SFG task with target figure chords consisting of four, six, eight, or ten temporally coherent tones amongst a background of randomly varying tones. Stimuli were designed to be spectrally and temporally flat. An increased number of temporally coherent tones resulted in higher accuracy and faster reaction times (RTs). For ten target tones, faster RTs were associated with better scores on the Quick Speech-in-Noise task. Individual differences in working memory capacity and self-reported musicianship further modulated these relationships. Overall, the results demonstrate that the SFG task could serve as an assessment of auditory stream segregation accuracy and RT that is sensitive to individual differences in cognitive and auditory abilities, even among younger normal-hearing adults.


Subject(s)
Memory, Short-Term , Speech Perception , Adult , Humans , Speech , Individuality , Audiometry, Pure-Tone
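The SFG stimulus design described above (a "figure" of temporally coherent tones embedded in randomly varying background tones) can be sketched as a toy generator. This is an illustrative reconstruction, not the authors' stimulus code; all function and parameter names are assumptions.

```python
import numpy as np

def sfg_stimulus(n_coherent=10, n_background=20, n_chords=8,
                 chord_dur=0.05, fs=16000, fmin=200.0, fmax=7000.0,
                 seed=None):
    """Toy stochastic figure-ground (SFG) stimulus: a sequence of chords,
    each mixing random background tones with a 'figure' of tones whose
    frequencies repeat (cohere) across chords."""
    rng = np.random.default_rng(seed)
    n = int(chord_dur * fs)
    t = np.arange(n) / fs
    # Figure frequencies stay fixed across chords (temporal coherence)
    fig_freqs = np.exp(rng.uniform(np.log(fmin), np.log(fmax), n_coherent))
    chords = []
    for _ in range(n_chords):
        bg_freqs = np.exp(rng.uniform(np.log(fmin), np.log(fmax), n_background))
        freqs = np.concatenate([fig_freqs, bg_freqs])
        chord = np.sin(2 * np.pi * freqs[:, None] * t).sum(axis=0)
        chords.append(chord / len(freqs))  # keep chord amplitude comparable
    return np.concatenate(chords), fig_freqs
```

Detecting the figure then amounts to noticing the frequencies that recur from chord to chord, which is the non-linguistic segregation ability the task probes.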
2.
J Neurosci ; 41(35): 7449-7460, 2021 09 01.
Article in English | MEDLINE | ID: mdl-34341154

ABSTRACT

During music listening, humans routinely acquire the regularities of the acoustic sequences and use them to anticipate and interpret the ongoing melody. Specifically, in line with this predictive framework, it is thought that brain responses during such listening reflect a comparison between the bottom-up sensory responses and top-down prediction signals generated by an internal model that embodies the music exposure and expectations of the listener. To attain a clear view of these predictive responses, previous work has eliminated the sensory inputs by inserting artificial silences (or sound omissions) that leave behind only the corresponding predictions of the thwarted expectations. Here, we demonstrate a new alternative approach in which we decode the predictive electroencephalography (EEG) responses to the silent intervals that are naturally interspersed within the music. We did this as participants (experiment 1, 20 participants, 10 female; experiment 2, 21 participants, 6 female) listened to or imagined Bach piano melodies. Prediction signals were quantified and assessed via a computational model of the melodic structure of the music and were shown to exhibit the same response characteristics when measured during listening or imagining. These include an inverted polarity for both silence and imagined responses relative to listening, as well as response magnitude modulations that precisely reflect the expectations of notes and silences in both listening and imagery conditions. These findings therefore provide a unifying view that links results from many previous paradigms, including omission reactions and the expectation modulation of sensory responses, all in the context of naturalistic music listening.

SIGNIFICANCE STATEMENT

Music perception depends on our ability to learn and detect melodic structures. It has been suggested that our brain does so by actively predicting upcoming music notes, a process inducing instantaneous neural responses as the music confronts these expectations. Here, we studied this prediction process using EEG recorded while participants listened to and imagined Bach melodies. Specifically, we examined neural signals during the ubiquitous musical pauses (or silent intervals) in a music stream and analyzed them in contrast to the imagery responses. We find that imagined predictive responses are routinely co-opted during ongoing music listening. These conclusions are revealed by a new paradigm using listening and imagery of naturalistic melodies.


Subject(s)
Auditory Perception/physiology , Brain Mapping , Cerebral Cortex/physiology , Imagination/physiology , Motivation/physiology , Music/psychology , Acoustic Stimulation , Adult , Electroencephalography , Evoked Potentials/physiology , Evoked Potentials, Auditory/physiology , Female , Humans , Learning/physiology , Male , Markov Chains , Occupations , Young Adult
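The "computational model of the melodic structure" used to quantify prediction signals can be illustrated with a minimal first-order Markov model of note transitions (Markov chains appear in the subject terms). This sketch is my own simplification; the function name and add-alpha smoothing scheme are assumptions.

```python
import numpy as np

def markov_surprisal(seq, alpha=1.0, n_states=128):
    """First-order Markov model of a note sequence (e.g. MIDI pitches).
    Returns the surprisal -log2 P(note | previous note), estimated
    causally with add-alpha smoothing, updating after each note."""
    counts = np.full((n_states, n_states), alpha)
    surprisal = []
    for prev, cur in zip(seq[:-1], seq[1:]):
        p = counts[prev, cur] / counts[prev].sum()
        surprisal.append(-np.log2(p))
        counts[prev, cur] += 1  # learn the transition just observed
    return np.array(surprisal)
```

Regressing EEG against a surprisal time series of this kind is one standard way to expose expectation-related response modulations.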
3.
J Neurosci ; 41(35): 7435-7448, 2021 09 01.
Article in English | MEDLINE | ID: mdl-34341155

ABSTRACT

Musical imagery is the voluntary internal hearing of music in the mind without the need for physical action or external stimulation. Numerous studies have already revealed brain areas activated during imagery. However, it remains unclear to what extent imagined music responses preserve the detailed temporal dynamics of the acoustic stimulus envelope and, crucially, whether melodic expectations play any role in modulating responses to imagined music, as they prominently do during listening. These modulations are important as they reflect aspects of the human musical experience, such as its acquisition, engagement, and enjoyment. This study explored the nature of these modulations in imagined music based on EEG recordings from 21 professional musicians (6 female and 15 male). Regression analyses were conducted to demonstrate that imagined neural signals can be predicted accurately, similarly to the listening task, and were sufficiently robust to allow for accurate identification of the imagined musical piece from the EEG. In doing so, our results indicate that imagery and listening tasks elicited an overlapping but distinctive topography of neural responses to sound acoustics, which is in line with previous fMRI literature. Melodic expectation, however, evoked very similar frontal spatial activation in both conditions, suggesting that both are supported by the same underlying mechanisms. Finally, neural responses induced by imagery exhibited a specific transformation from the listening condition, which primarily included a relative delay and a polarity inversion of the response. This transformation demonstrates the top-down predictive nature of the expectation mechanisms arising during both listening and imagery.

SIGNIFICANCE STATEMENT

It is well known that the human brain is activated during musical imagery: the act of voluntarily hearing music in our mind without external stimulation. It is unclear, however, what the temporal dynamics of this activation are, as well as which musical features are encoded in the neural signals. This study uses an experimental paradigm with high temporal precision to record and analyze cortical activity during musical imagery, revealing that neural signals encode music acoustics and melodic expectations during both listening and imagery. Crucially, it also finds that a simple mapping based on a time shift and a polarity inversion robustly describes the relationship between listening and imagery signals.


Subject(s)
Auditory Cortex/physiology , Brain Mapping , Frontal Lobe/physiology , Imagination/physiology , Motivation/physiology , Music/psychology , Acoustic Stimulation , Adult , Electroencephalography , Electromyography , Evoked Potentials/physiology , Evoked Potentials, Auditory/physiology , Female , Humans , Male , Markov Chains , Occupations , Symbolism , Young Adult
4.
Front Neurosci ; 15: 673401, 2021.
Article in English | MEDLINE | ID: mdl-34421512

ABSTRACT

Music perception requires the human brain to process a variety of acoustic and music-related properties. Recent research has used encoding models to tease apart and study the various cortical contributors to music perception. To do so, such approaches study temporal response functions that summarise the neural activity over several minutes of data. Here we tested the possibility of assessing the neural processing of individual musical units (bars) with electroencephalography (EEG). We devised a decoding methodology based on a maximum correlation metric across EEG segments (maxCorr) and used it to decode melodies from EEG in an experiment where professional musicians listened to and imagined four Bach melodies multiple times. We demonstrate that accurate decoding of melodies in single subjects and at the level of individual musical units is possible, both from EEG signals recorded during listening and imagination. Furthermore, we find that greater decoding accuracies are measured with the maxCorr method than with an envelope reconstruction approach based on backward temporal response functions (bTRFenv). These results indicate that low-frequency neural signals encode information beyond note timing; in particular, cortical signals below 1 Hz are shown to encode pitch-related information. Along with the theoretical implications of these results, we discuss the potential applications of this decoding methodology in the context of novel brain-computer interface solutions.
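The maxCorr idea, classifying an EEG segment by its maximum correlation against candidate templates, can be sketched as follows. This is an illustrative reimplementation, not the authors' code; the function name and the use of averaged responses as templates are assumptions.

```python
import numpy as np

def maxcorr_decode(segment, templates):
    """Assign an EEG segment to the candidate melody whose template
    (e.g. the average response to that melody) it correlates with most
    strongly. Returns the winning index and all correlation scores."""
    seg = (segment - segment.mean()) / segment.std()
    scores = []
    for tpl in templates:
        t = (tpl - tpl.mean()) / tpl.std()
        scores.append(float(np.mean(seg * t)))  # Pearson correlation
    return int(np.argmax(scores)), scores
```

Scoring each bar-length segment independently is what allows decoding at the level of individual musical units rather than whole trials.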

5.
Trends Hear ; 25: 2331216520978029, 2021.
Article in English | MEDLINE | ID: mdl-33620023

ABSTRACT

Spectrotemporal modulations (STM) are essential features of speech signals that make them intelligible. While their encoding has been widely investigated in neurophysiology, we still lack a full understanding of how STMs are processed at the behavioral level and how cochlear hearing loss impacts this processing. Here, we introduce a novel methodological framework based on psychophysical reverse correlation deployed in the modulation space to characterize the mechanisms underlying STM detection in noise. We derive perceptual filters for young normal-hearing and older hearing-impaired individuals performing a detection task of an elementary target STM (a given product of temporal and spectral modulations) embedded in other masking STMs. Analyzed with computational tools, our data show that both groups rely on a comparable linear (band-pass)-nonlinear processing cascade, which can be well accounted for by a temporal modulation filter bank model combined with cross-correlation against the target representation. Our results also suggest that the modulation mistuning observed for the hearing-impaired group results primarily from broader cochlear filters. Yet, we find idiosyncratic behaviors that cannot be captured by cochlear tuning alone, highlighting the need to consider variability originating from additional mechanisms. Overall, this integrated experimental-computational approach offers a principled way to assess suprathreshold processing distortions in each individual and could thus be used to further investigate interindividual differences in speech intelligibility.


Subject(s)
Hearing Loss, Sensorineural , Speech Perception , Auditory Threshold , Hearing , Hearing Loss, Sensorineural/diagnosis , Humans , Noise/adverse effects , Perceptual Masking
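The psychophysical reverse-correlation analysis, deriving a perceptual filter from trial-by-trial masking noise and the listener's responses, can be sketched as a classification image. This is a minimal sketch assuming a linear observer; the names are illustrative and the paper's actual pipeline operates in the spectrotemporal modulation domain with computational model fitting.

```python
import numpy as np

def classification_image(masks, responses):
    """Reverse correlation: estimate a perceptual filter as the average
    masking noise on 'yes' trials minus that on 'no' trials.
    `masks` is (n_trials, n_features); `responses` is one 0/1 per trial."""
    masks = np.asarray(masks, float)
    responses = np.asarray(responses, bool)
    ci = masks[responses].mean(axis=0) - masks[~responses].mean(axis=0)
    return ci / np.abs(ci).max()  # normalize for display
```

Under a linear-observer assumption the classification image is proportional to the internal template, which is why it can reveal modulation mistuning in hearing-impaired listeners.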
6.
Neuroimage ; 227: 117586, 2021 02 15.
Article in English | MEDLINE | ID: mdl-33346131

ABSTRACT

Acquiring a new language requires individuals to simultaneously and gradually learn linguistic attributes on multiple levels. Here, we investigated how this learning process changes the neural encoding of natural speech by assessing the encoding of the linguistic feature hierarchy in second-language listeners. Electroencephalography (EEG) signals were recorded from native Mandarin speakers with varied English proficiency and from native English speakers while they listened to audio-stories in English. We measured the temporal response functions (TRFs) for acoustic, phonemic, phonotactic, and semantic features in individual participants and found a main effect of proficiency on linguistic encoding. This effect of second-language proficiency was particularly prominent on the neural encoding of phonemes, showing stronger encoding of "new" phonemic contrasts (i.e., English contrasts that do not exist in Mandarin) with increasing proficiency. Overall, we found that the nonnative listeners with higher proficiency levels had a linguistic feature representation more similar to that of native listeners, which enabled the accurate decoding of language proficiency. This result advances our understanding of the cortical processing of linguistic information in second-language learners and provides an objective measure of language proficiency.


Subject(s)
Brain/physiology , Comprehension/physiology , Multilingualism , Speech Perception/physiology , Adolescent , Adult , Electroencephalography , Female , Humans , Language , Male , Middle Aged , Phonetics , Young Adult
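Temporal response functions (TRFs) of the kind measured here are typically estimated with regularized lagged regression. A minimal forward-TRF sketch, assuming a single stimulus feature and simple ridge regularization (the study used multi-feature TRFs across the linguistic hierarchy):

```python
import numpy as np

def fit_trf(stim, eeg, n_lags=32, ridge=1.0):
    """Forward temporal response function: ridge regression from
    time-lagged stimulus values to the recorded signal.
    Returns one weight per lag (X[t, k] = stim[t - k])."""
    n = len(stim)
    X = np.zeros((n, n_lags))
    for k in range(n_lags):
        X[k:, k] = stim[:n - k]
    return np.linalg.solve(X.T @ X + ridge * np.eye(n_lags), X.T @ eeg)
```

Stacking lagged matrices for acoustic, phonemic, phonotactic, and semantic features side by side turns this into the multivariate TRF analysis the abstract describes.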
7.
Brain Struct Funct ; 225(5): 1643-1667, 2020 Jun.
Article in English | MEDLINE | ID: mdl-32458050

ABSTRACT

Recent studies of the neurobiology of the dorsal frontal cortex (FC) of the ferret have illuminated its key role in the attention network, top-down cognitive control of sensory processing, and goal-directed behavior. To elucidate the neuroanatomical regions of the dorsal FC, and to delineate the boundary between premotor cortex (PMC) and dorsal prefrontal cortex (dPFC), we placed retrograde tracers in adult ferret dorsal FC anterior to primary motor cortex and analyzed thalamo-cortical connectivity. Cyto- and myeloarchitectural differences across dorsal FC and the distinctive projection patterns from thalamic nuclei, especially from the subnuclei of the medial dorsal (MD) nucleus and the ventral thalamic nuclear group, make it possible to clearly differentiate three separate dorsal FC fields anterior to primary motor cortex: polar dPFC (dPFCpol), dPFC, and PMC. Based on this thalamic connectivity, there is a striking similarity between the ferret's dorsal FC fields and those of other species. This possible homology opens up new questions for future comparative neuroanatomical and functional studies.


Subject(s)
Motor Cortex/cytology , Neurons/cytology , Prefrontal Cortex/cytology , Thalamic Nuclei/cytology , Animals , Female , Ferrets , Male , Neural Pathways/cytology , Neuroanatomical Tract-Tracing Techniques
8.
Curr Biol ; 30(9): 1649-1663.e5, 2020 05 04.
Article in English | MEDLINE | ID: mdl-32220317

ABSTRACT

Categorical perception is a fundamental cognitive function enabling animals to flexibly assign sounds into behaviorally relevant categories. This study investigates the nature of acoustic category representations, their emergence in an ascending series of ferret auditory and frontal cortical fields, and the dynamics of this representation during passive listening to task-relevant stimuli and during active retrieval from memory while engaging in learned categorization tasks. Ferrets were trained on two auditory Go-NoGo categorization tasks to discriminate two non-compact sound categories (composed of tones or amplitude-modulated noise). Neuronal responses became progressively more categorical in higher cortical fields, especially during task performance. The dynamics of the categorical responses exhibited a cascading top-down modulation pattern that began earliest in the frontal cortex and subsequently flowed downstream to the secondary auditory cortex, followed by the primary auditory cortex. In a subpopulation of neurons, categorical responses persisted even during the passive listening condition, demonstrating memory for task categories and their enhanced categorical boundaries.


Subject(s)
Auditory Cortex/physiology , Auditory Perception/physiology , Frontal Lobe/physiology , Sound , Acoustic Stimulation , Animals , Behavior, Animal , Female , Ferrets , Learning , Monitoring, Physiologic
9.
J Neurosci ; 39(44): 8664-8678, 2019 10 30.
Article in English | MEDLINE | ID: mdl-31519821

ABSTRACT

Natural sounds such as vocalizations often have covarying acoustic attributes, resulting in redundancy in neural coding. The efficient coding hypothesis proposes that sensory systems are able to detect such covariation and adapt to reduce redundancy, leading to more efficient neural coding. Recent psychoacoustic studies have shown the auditory system can rapidly adapt to efficiently encode two covarying dimensions as a single dimension, following passive exposure to sounds in which temporal and spectral attributes covaried in a correlated fashion. However, these studies observed a cost to this adaptation, which was a loss of sensitivity to the orthogonal dimension. Here we explore the neural basis of this psychophysical phenomenon by recording single-unit responses from the primary auditory cortex in awake ferrets exposed passively to stimuli with two correlated attributes, similar in stimulus design to the psychoacoustic experiments in humans. We found: (1) the signal-to-noise ratio of spike-rate coding of cortical responses driven by sounds with correlated attributes remained unchanged along the exposure dimension, but was reduced along the orthogonal dimension; (2) performance of a decoder trained with spike data to discriminate stimuli along the orthogonal dimension was equally reduced; (3) correlations between neurons tuned to the two covarying attributes decreased after exposure; and (4) these exposure effects still occurred if sounds were correlated along two acoustic dimensions, but varied randomly along a third dimension. 
These neurophysiological results are consistent with the efficient coding hypothesis and may help deepen our understanding of how the auditory system encodes and represents acoustic regularities and covariance.

SIGNIFICANCE STATEMENT

The efficient coding (EC) hypothesis (Attneave, 1954; Barlow, 1961) proposes that the neural code in sensory systems efficiently encodes natural stimuli by minimizing the number of spikes to transmit a sensory signal. Results of recent psychoacoustic studies in humans are consistent with the EC hypothesis in that, following passive exposure to stimuli with correlated attributes, the auditory system rapidly adapts so as to more efficiently encode the two covarying dimensions as a single dimension. In the current neurophysiological experiments, using a stimulus design and experimental paradigm similar to those of the psychoacoustic studies of Stilp et al. (2010) and Stilp and Kluender (2011, 2012, 2016), we recorded responses from single neurons in the auditory cortex of the awake ferret, showing adaptive efficient neural coding of two correlated acoustic attributes.


Subject(s)
Adaptation, Physiological , Auditory Cortex/physiology , Auditory Perception/physiology , Neurons/physiology , Acoustic Stimulation , Action Potentials , Animals , Female , Ferrets , Models, Neurological , Psychoacoustics
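As a simple stand-in for the spike-rate signal-to-noise and decoder analyses, discriminability of two stimuli along a dimension can be summarized with d'. This is a minimal illustrative index, not the authors' decoder:

```python
import numpy as np

def dprime(rates_a, rates_b):
    """Discriminability (d') of two stimuli from trial-by-trial spike
    rates: difference of means over the pooled standard deviation."""
    a = np.asarray(rates_a, float)
    b = np.asarray(rates_b, float)
    pooled_sd = np.sqrt((a.var(ddof=1) + b.var(ddof=1)) / 2.0)
    return (a.mean() - b.mean()) / pooled_sd
```

A drop in |d'| along the orthogonal dimension after exposure, with |d'| preserved along the exposure dimension, is the pattern the recordings showed.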
10.
Front Comput Neurosci ; 13: 28, 2019.
Article in English | MEDLINE | ID: mdl-31178710

ABSTRACT

Previous studies have shown that the auditory cortex can enhance the perception of behaviorally important sounds in the presence of background noise, but the mechanisms by which it does this have not yet been elucidated. Rapid plasticity of spectrotemporal receptive fields (STRFs) in primary auditory cortex (A1) neurons is observed during behavioral tasks that require discrimination of particular sounds. This rapid task-related change is believed to be one of the processing strategies utilized by the auditory cortex to selectively attend to one stream of sound in the presence of mixed sounds. However, the mechanism by which the brain evokes this rapid plasticity in the auditory cortex remains unclear. This paper uses a neural network model to investigate how synaptic transmission within the cortical neuron network can change the receptive fields of individual neurons. A sound signal was used as input to a model of the cochlea and auditory periphery, which activated or inhibited integrate-and-fire neuron models representing networks in the primary auditory cortex. Each neuron in the network was tuned to a different frequency. All neurons were interconnected with excitatory or inhibitory synapses of varying strengths. Action potentials in one of the model neurons were used to calculate the receptive field using reverse correlation. The results were directly compared to previously recorded electrophysiological data from ferrets performing behavioral tasks that require discrimination of particular sounds. The neural network model could reproduce complex STRFs observed experimentally by optimizing the synaptic weights in the model. The model predicts that altering synaptic drive between cortical neurons and/or bottom-up synaptic drive from the cochlear model to the cortical neurons can account for rapid task-related changes observed experimentally in A1 neurons.
By identifying changes in the synaptic drive during behavioral tasks, the model provides insights into the neural mechanisms utilized by the auditory cortex to enhance the perception of behaviorally salient sounds.
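The reverse-correlation receptive-field estimate used above can be sketched as a spike-triggered average of the stimulus spectrogram. This is an illustrative version; the actual model drove spikes through a cochlear front end and an integrate-and-fire network rather than the toy setup assumed here.

```python
import numpy as np

def strf_reverse_correlation(spectrogram, spikes, n_lags=10):
    """Estimate a spectrotemporal receptive field (STRF) as a
    spike-triggered average of the preceding stimulus spectrogram.
    `spectrogram` is (n_freqs, n_times); `spikes` is spike counts per bin.
    Column k of the result is the mean stimulus k bins before a spike."""
    n_freqs, n_times = spectrogram.shape
    strf = np.zeros((n_freqs, n_lags))
    total = 0.0
    for t in range(n_lags - 1, n_times):
        if spikes[t] > 0:
            window = spectrogram[:, t - n_lags + 1:t + 1]
            strf += spikes[t] * window[:, ::-1]   # lag 0 = spike time
            total += spikes[t]
    return strf / max(total, 1.0)
```

For white (uncorrelated) stimuli the spike-triggered average recovers the linear receptive field directly; correlated stimuli require whitening by the stimulus autocovariance.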

11.
Nat Neurosci ; 22(3): 447-459, 2019 03.
Article in English | MEDLINE | ID: mdl-30692690

ABSTRACT

In higher sensory cortices, there is a gradual transformation from sensation to perception and action. In the auditory system, this transformation is revealed by responses in the rostral ventral posterior auditory field (VPr), a tertiary area in the ferret auditory cortex, which shows long-term learning in trained compared to naïve animals, arising from selectively enhanced responses to behaviorally relevant target stimuli. This enhanced representation is further amplified during active performance of spectral or temporal auditory discrimination tasks. VPr also shows sustained short-term memory activity after target stimulus offset, correlated with task response timing and action. These task-related changes in auditory filter properties enable VPr neurons to quickly and nimbly switch between different responses to the same acoustic stimuli, reflecting either spectrotemporal properties, timing, or behavioral meaning of the sound. Furthermore, they demonstrate an interaction between the dynamics of short-term attention and long-term learning, as incoming sound is selectively attended, recognized, and translated into action.


Subject(s)
Auditory Cortex/physiology , Auditory Perception/physiology , Discrimination, Psychological/physiology , Neurons/physiology , Acoustic Stimulation , Adaptation, Physiological , Animals , Behavior, Animal , Choice Behavior , Female , Ferrets
12.
Sci Rep ; 8(1): 16375, 2018 11 06.
Article in English | MEDLINE | ID: mdl-30401927

ABSTRACT

Rapid task-related plasticity is a neural correlate of selective attention in primary auditory cortex (A1). Top-down feedback from higher-order cortex may drive task-related plasticity in A1, characterized by enhanced neural representation of behaviorally meaningful sounds during auditory task performance. Since intracortical connectivity is greater within A1 layers 2/3 (L2/3) than in layers 4-6 (L4-6), we hypothesized that enhanced representation of behaviorally meaningful sounds might be greater in A1 L2/3 than L4-6. To test this hypothesis and study the laminar profile of task-related plasticity, we trained 2 ferrets to detect pure tones while we recorded laminar activity across a 1.8 mm depth in A1. In each experiment we analyzed high-gamma local field potentials (LFPs) and multi-unit spiking in response to identical acoustic stimuli during both passive listening and active task performance. We found that neural responses to auditory targets were enhanced during task performance, and target enhancement was greater in L2/3 than in L4-6. Spectrotemporal receptive fields (STRFs) computed from both high-gamma LFPs and multi-unit spiking showed similar increases in auditory target selectivity, also greatest in L2/3. Our results suggest that activity within intracortical networks plays a key role in the underlying neural mechanisms of selective attention.


Subject(s)
Auditory Cortex/physiology , Nerve Net/physiology , Neuronal Plasticity , Animals , Female , Ferrets
13.
J Acoust Soc Am ; 144(4): 2424, 2018 10.
Article in English | MEDLINE | ID: mdl-30404514

ABSTRACT

The brain decomposes mixtures of sounds, such as competing talkers, into perceptual streams that can be attended to individually. Attention can enhance the cortical representation of streams, but it is unknown what acoustic features the enhancement reflects, or where in the auditory pathways attentional enhancement is first observed. Here, behavioral measures of streaming were combined with simultaneous low- and high-frequency envelope-following responses (EFR) that are thought to originate primarily from cortical and subcortical regions, respectively. Repeating triplets of harmonic complex tones were presented with alternating fundamental frequencies. The tones were filtered to contain either low-numbered spectrally resolved harmonics, or only high-numbered unresolved harmonics. The behavioral results confirmed that segregation can be based on either tonotopic or pitch cues. The EFR results revealed no effects of streaming or attention on subcortical responses. Cortical responses revealed attentional enhancement under conditions of streaming, but only when tonotopic cues were available, not when streaming was based only on pitch cues. The results suggest that the attentional modulation of phase-locked responses is dominated by tonotopically tuned cortical neurons that are insensitive to pitch or periodicity cues.


Subject(s)
Auditory Cortex/physiology , Pitch Perception , Attention , Female , Humans , Loudness Perception , Male , Sound , Young Adult
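Envelope-following responses (EFRs) are commonly quantified as the spectral amplitude of the recording at the stimulus envelope (or fundamental) frequency. A minimal sketch, with the function name and Hann windowing as assumptions:

```python
import numpy as np

def efr_amplitude(response, fs, f0):
    """Envelope-following response strength: windowed spectral amplitude
    of the recorded signal at the target frequency f0 (Hz)."""
    n = len(response)
    spectrum = np.abs(np.fft.rfft(response * np.hanning(n))) / n
    freqs = np.fft.rfftfreq(n, 1.0 / fs)
    return spectrum[np.argmin(np.abs(freqs - f0))]
```

Measuring this amplitude separately in low- and high-frequency response bands is what lets cortical and subcortical contributions be assessed simultaneously.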
14.
J Neurosci ; 38(46): 9955-9966, 2018 11 14.
Article in English | MEDLINE | ID: mdl-30266740

ABSTRACT

Responses of auditory cortical neurons encode sound features of incoming acoustic stimuli and are also shaped by stimulus context and history. Previous studies of mammalian auditory cortex have reported a variable time course for such contextual effects, ranging from milliseconds to minutes. However, in secondary auditory forebrain areas of songbirds, long-term stimulus-specific neuronal habituation to acoustic stimuli can persist for much longer periods of time, ranging from hours to days. Such long-term habituation in the songbird is a form of long-term auditory memory that requires gene expression. Although such long-term habituation has been demonstrated in avian auditory forebrain, this phenomenon has not previously been described in the mammalian auditory system. Utilizing a similar version of the avian habituation paradigm, we explored whether such long-term effects of stimulus history also occur in the auditory cortex of a mammalian auditory generalist, the ferret. Following repetitive presentation of novel complex sounds, we observed significant response habituation in secondary auditory cortex, but not in primary auditory cortex. This long-term habituation appeared to be independent for each novel stimulus and often lasted for at least 20 min. These effects could not be explained by simple neuronal fatigue in the auditory pathway, because time-reversed sounds induced undiminished responses similar to those elicited by completely novel sounds. A parallel set of pupillometric response measurements in the ferret revealed long-term habituation effects similar to the observed long-term neural habituation, supporting the hypothesis that habituation to passively presented stimuli is correlated with implicit learning and long-term recognition of familiar sounds.

SIGNIFICANCE STATEMENT

Long-term habituation in higher areas of songbird auditory forebrain is associated with gene expression and is correlated with recognition memory.
Similar long-term auditory habituation in mammals has not been previously described. We studied such habituation in single neurons in the auditory cortex of awake ferrets that were passively listening to repeated presentations of various complex sounds. Responses exhibited long-lasting habituation (at least 20 min) in the secondary, but not primary auditory cortex. Habituation ceased when stimuli were played backward, despite having identical spectral content to the original sound. This long-term neural habituation correlated with similar habituation of ferret pupillary responses to repeated presentations of the same stimuli, suggesting that stimulus habituation is retained as a long-term behavioral memory.


Subject(s)
Acoustic Stimulation/methods , Auditory Cortex/physiology , Auditory Perception/physiology , Habituation, Psychophysiologic/physiology , Memory/physiology , Animals , Auditory Pathways/physiology , Female , Ferrets
15.
Proc Natl Acad Sci U S A ; 115(17): E3869-E3878, 2018 04 24.
Article in English | MEDLINE | ID: mdl-29632213

ABSTRACT

Quantifying the functional relations between the nodes in a network based on local observations is a key challenge in studying complex systems. Most existing time series analysis techniques for this purpose provide static estimates of the network properties, pertain to stationary Gaussian data, or do not take into account the ubiquitous sparsity in the underlying functional networks. When applied to spike recordings from neuronal ensembles undergoing rapid task-dependent dynamics, they thus hinder a precise statistical characterization of the dynamic neuronal functional networks underlying adaptive behavior. We develop a dynamic estimation and inference paradigm for extracting functional neuronal network dynamics in the sense of Granger, by integrating techniques from adaptive filtering, compressed sensing, point process theory, and high-dimensional statistics. We demonstrate the utility of our proposed paradigm through theoretical analysis, algorithm development, and application to synthetic and real data. Application of our techniques to two-photon Ca2+ imaging experiments from the mouse auditory cortex reveals unique features of the functional neuronal network structures underlying spontaneous activity at unprecedented spatiotemporal resolution. Our analysis of simultaneous recordings from the ferret auditory and prefrontal cortical areas suggests evidence for the role of rapid top-down and bottom-up functional dynamics across these areas involved in robust attentive behavior.


Subject(s)
Auditory Cortex/physiology , Calcium Signaling/physiology , Calcium/metabolism , Models, Neurological , Nerve Net/physiology , Animals , Auditory Cortex/diagnostic imaging , Mice , Nerve Net/diagnostic imaging
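The Granger-sense functional influence at the core of this paradigm can be illustrated with a minimal static linear version. The paper's estimators are dynamic, sparse, and point-process based; this sketch only conveys the underlying model-comparison idea, and the names are my own.

```python
import numpy as np

def granger_improvement(x, y, order=5):
    """Granger-style influence of y on x: log ratio of the residual sum
    of squares of an autoregressive model of x without vs. with the
    past of y. Larger values mean y's past helps predict x."""
    n = len(x)
    T = np.arange(order, n)
    X_past = np.stack([x[t - order:t] for t in T])
    Y_past = np.stack([y[t - order:t] for t in T])

    def rss(design):
        design = np.column_stack([design, np.ones(len(T))])
        coef, *_ = np.linalg.lstsq(design, x[T], rcond=None)
        resid = x[T] - design @ coef
        return float(resid @ resid)

    return np.log(rss(X_past) / rss(np.column_stack([X_past, Y_past])))
```

Running such comparisons in sliding windows, with sparsity constraints and spiking observation models, is what turns this static statistic into the dynamic functional-network estimates the paper develops.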
16.
Cereb Cortex ; 28(3): 868-879, 2018 03 01.
Article in English | MEDLINE | ID: mdl-28069762

ABSTRACT

Sensory environments change over a wide dynamic range and sensory processing can change rapidly to facilitate stable perception. While rapid changes may occur throughout the sensory processing pathway, cortical changes are believed to profoundly influence perception. Prior stimulation studies showed that orbitofrontal cortex (OFC) can modify receptive fields and sensory coding in A1, but the engagement of OFC during listening and the pathways mediating OFC influences on A1 are unknown. We show in mice that OFC neurons respond to sounds consistent with a role of OFC in audition. We then show in vitro that OFC axons are present in A1 and excite pyramidal and GABAergic cells in all layers of A1 via glutamatergic synapses. Optogenetic stimulation of OFC terminals in A1 in vivo evokes short-latency neural activity in A1 and pairing activation of OFC projections in A1 with sounds alters sound-evoked A1 responses. Together, our results identify a direct connection from OFC to A1 that can excite A1 neurons at the earliest stage of cortical processing, and thereby sculpt A1 receptive fields. These results are consistent with a role for OFC in adjusting to changing behavioral relevance of sensory inputs and modulating A1 receptive fields to enhance sound processing.


Subject(s)
Auditory Cortex/cytology , Nerve Net/physiology , Neurons/physiology , Prefrontal Cortex/cytology , Sound , Acoustic Stimulation , Action Potentials/physiology , Animals , Auditory Perception , Axons/physiology , Channelrhodopsins/genetics , Channelrhodopsins/metabolism , Evoked Potentials/physiology , Excitatory Postsynaptic Potentials , Female , Glutamate Decarboxylase/genetics , Glutamate Decarboxylase/metabolism , Luminescent Proteins/genetics , Luminescent Proteins/metabolism , Male , Mice , Mice, Inbred C57BL , Reaction Time/physiology
17.
Article in English | MEDLINE | ID: mdl-28044024

ABSTRACT

This study investigates the neural correlates and processes underlying the ambiguous percept produced by a stimulus similar to Deutsch's 'octave illusion', in which each ear is presented with a sequence of alternating pure tones of low and high frequencies. The same sequence is presented to each ear, but in opposite phase, such that the left and right ears receive a high-low-high … and a low-high-low … pattern, respectively. Listeners generally report hearing the illusion of an alternating pattern of low and high tones, with all the low tones lateralized to one side and all the high tones lateralized to the other side. The current explanation of the illusion is that it reflects an illusory feature conjunction of pitch and perceived location. Using psychophysics and electroencephalogram measures, we test this and an alternative hypothesis involving synchronous and sequential stream segregation, and investigate potential neural correlates of the illusion. We find that the illusion of alternating tones arises from the synchronous tone pairs across ears rather than sequential tones in one ear, suggesting that the illusion involves a misattribution of time across perceptual streams, rather than a misattribution of location within a stream. The results provide new insights into the mechanisms of binaural streaming and synchronous sound segregation.This article is part of the themed issue 'Auditory and visual scene analysis'.
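The dichotic stimulus described above is easy to reproduce. The sketch below generates an octave-illusion-style sequence as a stereo array; the frequencies, tone duration, and function name are illustrative choices, not the study's exact stimulus parameters: each ear alternates between a low and a high pure tone, with the two ears in opposite phase.

```python
import numpy as np

def octave_illusion_sequence(f_low=400.0, f_high=800.0, tone_dur=0.25,
                             n_tones=8, fs=44100):
    """Dichotic alternating-tone sequence: when the left ear receives the
    high tone the right ear receives the low tone, and vice versa.
    Returns an (n_samples, 2) stereo array (column 0 = left ear)."""
    n = int(tone_dur * fs)
    t = np.arange(n) / fs
    low = np.sin(2 * np.pi * f_low * t)
    high = np.sin(2 * np.pi * f_high * t)
    left, right = [], []
    for i in range(n_tones):
        if i % 2 == 0:
            left.append(high)
            right.append(low)
        else:
            left.append(low)
            right.append(high)
    return np.column_stack([np.concatenate(left), np.concatenate(right)])
```

Written to a stereo sound file and played over headphones, such a sequence typically evokes the illusory percept of a single alternating high/low stream with fixed lateralization.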


Subject(s)
Auditory Cortex/physiology , Auditory Perception , Hearing , Illusions , Acoustic Stimulation , Adult , Electroencephalography , Female , Humans , Male , Psychophysics , Young Adult
18.
Nat Commun ; 8: 13900, 2017 01 05.
Article in English | MEDLINE | ID: mdl-28054545

ABSTRACT

Perception of segregated sources is essential in navigating cluttered acoustic environments. A basic mechanism to implement this process is the temporal coherence principle. It postulates that a signal is perceived as emitted from a single source only when all of its features are temporally modulated coherently, causing them to bind perceptually. Here we report on neural correlates of this process as rapidly reshaped interactions in primary auditory cortex, measured in three different ways: as changes in response rates, as adaptations of spectrotemporal receptive fields following stimulation by temporally coherent and incoherent tone sequences, and as changes in spiking correlations during the tone sequences. Responses, sensitivity and presumed connectivity were rapidly enhanced by synchronous stimuli, and suppressed by alternating (asynchronous) sounds, but only when the animals engaged in task performance and were attentive to the stimuli. Temporal coherence and attention are therefore both important factors in auditory scene analysis.
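The temporal coherence principle can be made concrete with a small numerical example. This is a didactic sketch, not the paper's neural analysis: two frequency channels are driven by tone bursts that are either synchronous or alternating (names and burst parameters are invented), and the correlation of their response envelopes serves as a crude coherence index; coherent channels correlate strongly and would be predicted to bind into one stream.

```python
import numpy as np

def channel_envelopes(n_bursts=20, sync=True, samples_per_burst=50):
    """Response envelopes of two channels driven by tone bursts that
    either coincide (sync=True) or interleave (sync=False)."""
    burst = np.hanning(samples_per_burst)
    a = np.zeros(2 * n_bursts * samples_per_burst)
    b = np.zeros_like(a)
    for i in range(n_bursts):
        start_a = 2 * i * samples_per_burst
        start_b = start_a if sync else start_a + samples_per_burst
        a[start_a:start_a + samples_per_burst] += burst
        b[start_b:start_b + samples_per_burst] += burst
    return a, b

def coherence_index(a, b):
    """Pearson correlation between channel envelopes: high for
    synchronous (bindable) channels, negative for alternating ones."""
    return np.corrcoef(a, b)[0, 1]
```

In the paper, the analogous quantities are measured neurally (response rates, receptive-field adaptation, spiking correlations) and the coherence-driven changes appear only under attentive task engagement.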


Subject(s)
Auditory Cortex/cytology , Auditory Cortex/physiology , Auditory Perception/physiology , Neurons/physiology , Acoustic Stimulation , Action Potentials , Adaptation, Physiological , Animals , Attention , Behavior, Animal , Female , Ferrets , Neuronal Plasticity , Task Performance and Analysis
19.
PLoS Comput Biol ; 12(7): e1005019, 2016 07.
Article in English | MEDLINE | ID: mdl-27398600

ABSTRACT

Sound waveforms convey information largely via amplitude modulations (AM). A large body of experimental evidence supports a modulation (bandpass) filterbank model of AM perception. Details of this model have varied over time, partly reflecting different experimental conditions and diverse datasets from distinct task strategies, contributing uncertainty to the bandwidth measurements and leaving important issues unresolved. We adopt here a solely data-driven measurement approach in which we first demonstrate how different models can be subsumed within a common 'cascade' framework, and then proceed to characterize the cascade via system identification analysis using a single stimulus/task specification and hence stable task rules largely unconstrained by any model or parameters. Observers were required to detect a brief change in level superimposed onto random level changes that served as AM noise; the relationship between trial-by-trial noisy fluctuations and the corresponding human responses enables targeted identification of distinct cascade elements. The resulting measurements reveal a complex, dynamic picture in which human perception of auditory modulations appears adaptive in nature, evolving from an initial lowpass mode to a bandpass mode (with broad tuning, Q∼1) following repeated stimulus exposure.
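The trial-by-trial noise analysis described above is a form of psychophysical reverse correlation. The sketch below is a minimal, simulated illustration of that general technique, not the paper's cascade identification: a simulated observer bases yes/no responses on only a few time bins of the level-fluctuation noise (the weighting, trial counts, and function names are all invented), and a classification image recovers which bins drove the decisions.

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_trials(n_trials=5000, n_bins=10):
    """Each trial: random level fluctuations (the 'AM noise'). The
    simulated observer responds 'change heard' when the level in the
    middle bins (4 and 5) exceeds a criterion, plus internal noise."""
    noise = rng.normal(0, 1, (n_trials, n_bins))
    weights = np.zeros(n_bins)
    weights[4:6] = 1.0                       # the hidden perceptual weighting
    decision_var = noise @ weights + rng.normal(0, 0.5, n_trials)
    responses = decision_var > 0
    return noise, responses

def psychophysical_kernel(noise, responses):
    """Classification image: mean noise on 'yes' trials minus mean noise
    on 'no' trials. Peaks at the bins the observer actually used."""
    return noise[responses].mean(0) - noise[~responses].mean(0)
```

The same logic, applied to real responses and pushed through a cascade model, is what lets the relationship between noise fluctuations and responses pin down distinct processing stages.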


Subject(s)
Auditory Pathways/physiology , Auditory Perception/physiology , Task Performance and Analysis , Adult , Computational Biology , Humans , Noise , Young Adult
20.
J Neurophysiol ; 115(5): 2389-98, 2016 06 01.
Article in English | MEDLINE | ID: mdl-26912594

ABSTRACT

Neural encoding of sensory stimuli is typically studied by averaging neural signals across repetitions of the same stimulus. However, recent work has suggested that the variance of neural activity across repeated trials can also depend on sensory inputs. Here we characterize how intertrial variance of the local field potential (LFP) in primary auditory cortex of awake ferrets is affected by continuous natural sound stimuli. We find that natural sounds often suppress the intertrial variance of low-frequency LFP (<16 Hz). However, the amount of the variance reduction is not significantly correlated with the amplitude of the mean response at the same recording site. Moreover, the variance changes occur with longer latency than the mean response. Although the dynamics of the mean response and intertrial variance differ, spectro-temporal receptive field analysis reveals that changes in LFP variance have frequency tuning similar to multiunit activity at the same recording site, suggesting a local origin for changes in LFP variance. In summary, the spectral tuning of LFP intertrial variance and the absence of a correlation with the amplitude of the mean evoked LFP suggest substantial heterogeneity in the interaction between spontaneous and stimulus-driven activity across local neural populations in auditory cortex.
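The distinction between the trial-averaged response and intertrial variance is simple to compute. The sketch below simulates the qualitative effect described above rather than the ferret data: each trial carries an ongoing low-frequency oscillation with random phase (so it averages out but contributes variance), and a "stimulus" damps it while adding a phase-locked evoked deflection; all rates, amplitudes, and names are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

def simulate_lfp(n_trials=200, n_time=400, onset=200, fs=1000.0):
    """Simulated LFP trials: random-phase 8 Hz ongoing activity, damped
    after stimulus onset (variance quenching), plus a deterministic
    evoked bump 50 ms after onset and a little measurement noise."""
    t = np.arange(n_time) / fs
    evoked = np.exp(-((t - t[onset] - 0.05) ** 2) / (2 * 0.02 ** 2))
    trials = np.empty((n_trials, n_time))
    for k in range(n_trials):
        phase = rng.uniform(0, 2 * np.pi)
        ongoing = np.sin(2 * np.pi * 8 * t + phase)
        damp = np.ones(n_time)
        damp[onset:] = 0.3                   # stimulus quenches ongoing activity
        trials[k] = damp * ongoing + 0.5 * evoked + 0.1 * rng.normal(size=n_time)
    return trials

trials = simulate_lfp()
mean_resp = trials.mean(axis=0)       # classic evoked (trial-averaged) response
intertrial_var = trials.var(axis=0)   # the quantity the study characterizes
```

In this toy case the variance drops after onset while the mean shows the evoked bump; the study's point is that in real auditory cortex these two signatures have different dynamics, latencies, and, per site, uncorrelated magnitudes.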


Subject(s)
Auditory Cortex/physiology , Auditory Perception , Evoked Potentials, Auditory , Animals , Auditory Cortex/cytology , Ferrets , Neurons/physiology , Reaction Time , Sound