Results 1 - 20 of 106
1.
bioRxiv ; 2024 May 29.
Article in English | MEDLINE | ID: mdl-38854125

ABSTRACT

Binding the attributes of a sensory source is necessary to perceive it as a unified entity, one that can be attended to and extracted from its surrounding scene. In auditory perception, this is the essence of the cocktail party problem, in which a listener segregates one speaker from a mixture of voices, or one musical stream from simultaneous others. It is postulated that coherence of the temporal modulations of a source's features is necessary to bind them. The focus of this study is the role of temporal coherence in binding and segregation, specifically as evidenced by the neural correlates of rapid plasticity that enhances cortical responses among synchronized neurons while suppressing them among desynchronized ones. In a first experiment, we find that attention to a sound sequence rapidly binds it to other coherent sequences while suppressing nearby incoherent sequences, thus enhancing the contrast between the two groups. In a second experiment, a sequence of synchronized multi-tone complexes, embedded in a background cloud of randomly dispersed desynchronized tones, perceptually and neurally pops out after a fraction of a second, highlighting the binding among its coherent tones against the incoherent background. These findings demonstrate the role of temporal coherence in binding and segregation.
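
The pairwise-correlation computation at the heart of the temporal-coherence hypothesis is easy to make concrete. Below is a minimal numpy sketch, with all signals and parameters invented for illustration rather than taken from the study: two tone-stream envelopes sharing a modulation pattern score near +1 (candidates for binding), while a desynchronized stream scores near -1 (a candidate for segregation and suppression).

```python
import numpy as np

rng = np.random.default_rng(0)
fs, dur = 100, 2.0                     # envelope sampling rate (Hz), duration (s)
n = int(fs * dur)
t = np.arange(n) / fs

# Two "coherent" channels share one 4 Hz on/off modulation; a third is antiphase.
shared = (np.sin(2 * np.pi * 4 * t) > 0).astype(float)
env_a = shared + 0.1 * rng.standard_normal(n)
env_b = shared + 0.1 * rng.standard_normal(n)
env_c = (np.sin(2 * np.pi * 4 * t + np.pi) > 0) + 0.1 * rng.standard_normal(n)

def coherence(x, y):
    """Pearson correlation of two temporal envelopes (a simple coherence index)."""
    x, y = x - x.mean(), y - y.mean()
    return float(x @ y / np.sqrt((x @ x) * (y @ y)))

print(coherence(env_a, env_b))   # near +1: candidates for binding into one stream
print(coherence(env_a, env_c))   # near -1: candidates for segregation/suppression
```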

2.
Elife ; 12, 2023 Nov 16.
Article in English | MEDLINE | ID: mdl-37970945

ABSTRACT

Grouping sets of sounds into relevant categories is an important cognitive ability that enables the association of stimuli with appropriate goal-directed behavioral responses. In perceptual tasks, the primary auditory cortex (A1) assumes a prominent role by concurrently encoding both sound sensory features and task-related variables. Here, we sought to explore the role of A1 in the initiation of sound categorization, shedding light on its involvement in this cognitive process. We trained ferrets to discriminate click trains of different rates in a Go/No-Go delayed categorization task and recorded neural activity during both active behavior and passive exposure to the same sounds. Purely categorical response components were extracted and analyzed separately from sensory responses to reveal their contributions to the overall population response throughout the trials. We found that categorical activity emerged during sound presentation in the population average and was present in both active behavioral and passive states. However, upon task engagement, categorical responses to the No-Go category became suppressed in the population code, leading to an asymmetrical representation of the Go stimuli relative to the No-Go sounds and pre-stimulus baseline. The population code underwent an abrupt change at stimulus offset, with sustained responses after the Go sounds during the delay period. Notably, the categorical responses observed during the stimulus period exhibited a significant correlation with those extracted from the delay epoch, suggesting an early involvement of A1 in stimulus categorization.


Subject(s)
Auditory Cortex , Auditory Perception , Animals , Auditory Perception/physiology , Auditory Cortex/physiology , Ferrets , Sound , Behavior, Animal/physiology , Acoustic Stimulation
3.
J Acoust Soc Am ; 153(1): 286, 2023 Jan.
Article in English | MEDLINE | ID: mdl-36732241

ABSTRACT

Speech recognition in noisy environments can be challenging and requires listeners to accurately segregate a target speaker from irrelevant background noise. Stochastic figure-ground (SFG) tasks, in which temporally coherent inharmonic pure tones must be identified against a background, have been used to probe the non-linguistic auditory stream segregation processes important for speech-in-noise processing. However, little is known about the relationship between performance on SFG tasks and speech-in-noise tasks, or about the individual differences that may modulate such relationships. In this study, 37 younger normal-hearing adults performed an SFG task with target figure chords consisting of four, six, eight, or ten temporally coherent tones amongst a background of randomly varying tones. Stimuli were designed to be spectrally and temporally flat. An increased number of temporally coherent tones resulted in higher accuracy and faster reaction times (RTs). For ten target tones, faster RTs were associated with better scores on the Quick Speech-in-Noise task. Individual differences in working memory capacity and self-reported musicianship further modulated these relationships. Overall, the results demonstrate that the SFG task can serve as an assessment of auditory stream segregation accuracy and RT that is sensitive to individual differences in cognitive and auditory abilities, even among younger normal-hearing adults.
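
A toy version of the SFG stimulus described above can be generated in a few lines. The sketch below is a guess at the simplest form of such a design; the chord count, durations, and frequency pool are illustrative, not the study's actual parameters. A fixed set of "figure" tones repeats across chords (temporal coherence), while the background tones are redrawn on every chord.

```python
import numpy as np

rng = np.random.default_rng(1)
fs = 16000
n_chords, chord_dur = 20, 0.05            # 20 chords of 50 ms (illustrative values)
freq_pool = np.geomspace(200, 8000, 60)   # log-spaced candidate frequencies
n_fig, n_bg = 10, 20                      # figure tones repeat; background is redrawn

fig_freqs = rng.choice(freq_pool, n_fig, replace=False)  # temporally coherent "figure"
t = np.arange(int(fs * chord_dur)) / fs
stim = []
for _ in range(n_chords):
    bg = rng.choice(freq_pool, n_bg, replace=False)      # incoherent background
    chord = sum(np.sin(2 * np.pi * f * t) for f in np.concatenate([fig_freqs, bg]))
    stim.append(chord / (n_fig + n_bg))                  # crude level normalization
stim = np.concatenate(stim)                              # waveform ready for playback
```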


Subject(s)
Memory, Short-Term , Speech Perception , Adult , Humans , Speech , Individuality , Audiometry, Pure-Tone
4.
Cereb Cortex Commun ; 2(4): tgab060, 2021.
Article in English | MEDLINE | ID: mdl-34746791

ABSTRACT

Numerous studies have suggested that the perception of a target sound stream (or source) can only be segregated from a complex acoustic background mixture if the acoustic features underlying its perceptual attributes (e.g., pitch, location, and timbre) induce temporally modulated responses that are mutually correlated (or coherent), and that are uncorrelated (incoherent) with those of other sources in the mixture. This "temporal coherence" hypothesis asserts that attentive listening to one acoustic feature of a target enhances brain responses to that feature but would also concomitantly (1) induce mutually excitatory influences with other coherently responding neurons, thus enhancing (or binding) them all as they respond to the attended source; by contrast, (2) suppressive interactions are hypothesized to build up among neurons driven by temporally incoherent sound features, thus relatively reducing their activity. In this study, we report on EEG measurements in human subjects engaged in various sound segregation tasks that demonstrate rapid binding among the temporally coherent features of the attended source regardless of their identity (pure tone components, tone complexes, or noise), harmonic relationship, or frequency separation, thus confirming the key role temporal coherence plays in the analysis and organization of auditory scenes.

5.
Elife ; 10, 2021 Nov 18.
Article in English | MEDLINE | ID: mdl-34792467

ABSTRACT

Little is known about how neural representations of natural sounds differ across species. For example, speech and music play a unique role in human hearing, yet it is unclear how auditory representations of speech and music differ between humans and other animals. Using functional ultrasound imaging, we measured responses in ferrets to a set of natural and spectrotemporally matched synthetic sounds previously tested in humans. Ferrets showed similar lower-level frequency and modulation tuning to that observed in humans. But while humans showed substantially larger responses to natural vs. synthetic speech and music in non-primary regions, ferret responses to natural and synthetic sounds were closely matched throughout primary and non-primary auditory cortex, even when tested with ferret vocalizations. This finding reveals that auditory representations in humans and ferrets diverge sharply at late stages of cortical processing, potentially driven by higher-order processing demands in speech and music.


Subject(s)
Auditory Cortex/physiology , Auditory Perception/physiology , Ferrets/physiology , Sound , Acoustic Stimulation , Animals , Humans
6.
J Neurosci ; 41(35): 7449-7460, 2021 Sep 1.
Article in English | MEDLINE | ID: mdl-34341154

ABSTRACT

During music listening, humans routinely acquire the regularities of the acoustic sequences and use them to anticipate and interpret the ongoing melody. Specifically, in line with this predictive framework, it is thought that brain responses during such listening reflect a comparison between the bottom-up sensory responses and top-down prediction signals generated by an internal model that embodies the music exposure and expectations of the listener. To attain a clear view of these predictive responses, previous work has eliminated the sensory inputs by inserting artificial silences (or sound omissions) that leave behind only the corresponding predictions of the thwarted expectations. Here, we demonstrate an alternative approach in which we decode the predictive electroencephalography (EEG) responses to the silent intervals that are naturally interspersed within the music. We did this as participants (experiment 1, 20 participants, 10 female; experiment 2, 21 participants, 6 female) listened to or imagined Bach piano melodies. Prediction signals were quantified and assessed via a computational model of the melodic structure of the music and were shown to exhibit the same response characteristics when measured during listening or imagining. These include an inverted polarity for both silence and imagined responses relative to listening, as well as response magnitude modulations that precisely reflect the expectations of notes and silences in both listening and imagery conditions. These findings therefore provide a unifying view that links results from many previous paradigms, including omission reactions and the expectation modulation of sensory responses, all in the context of naturalistic music listening. SIGNIFICANCE STATEMENT: Music perception depends on our ability to learn and detect melodic structures. It has been suggested that our brain does so by actively predicting upcoming music notes, a process inducing instantaneous neural responses as the music confronts these expectations. Here, we studied this prediction process using EEG recorded while participants listened to and imagined Bach melodies. Specifically, we examined neural signals during the ubiquitous musical pauses (or silent intervals) in a music stream and analyzed them in contrast to the imagery responses. We find that imagined predictive responses are routinely co-opted during ongoing music listening. These conclusions are revealed by a new paradigm using listening and imagery of naturalistic melodies.


Subject(s)
Auditory Perception/physiology , Brain Mapping , Cerebral Cortex/physiology , Imagination/physiology , Motivation/physiology , Music/psychology , Acoustic Stimulation , Adult , Electroencephalography , Evoked Potentials/physiology , Evoked Potentials, Auditory/physiology , Female , Humans , Learning/physiology , Male , Markov Chains , Occupations , Young Adult
7.
J Neurosci ; 41(35): 7435-7448, 2021 Sep 1.
Article in English | MEDLINE | ID: mdl-34341155

ABSTRACT

Musical imagery is the voluntary internal hearing of music in the mind without the need for physical action or external stimulation. Numerous studies have already revealed brain areas activated during imagery. However, it remains unclear to what extent imagined music responses preserve the detailed temporal dynamics of the acoustic stimulus envelope and, crucially, whether melodic expectations play any role in modulating responses to imagined music, as they prominently do during listening. These modulations are important as they reflect aspects of the human musical experience, such as its acquisition, engagement, and enjoyment. This study explored the nature of these modulations in imagined music based on EEG recordings from 21 professional musicians (6 female and 15 male). Regression analyses demonstrated that imagined neural signals can be predicted accurately, similarly to the listening task, and were sufficiently robust to allow for accurate identification of the imagined musical piece from the EEG. Our results indicate that imagery and listening tasks elicited an overlapping but distinctive topography of neural responses to sound acoustics, in line with previous fMRI literature. Melodic expectation, however, evoked very similar frontal spatial activation in both conditions, suggesting that listening and imagery are supported by the same underlying expectation mechanisms. Finally, neural responses induced by imagery exhibited a specific transformation from the listening condition, consisting primarily of a relative delay and a polarity inversion of the response. This transformation demonstrates the top-down predictive nature of the expectation mechanisms arising during both listening and imagery. SIGNIFICANCE STATEMENT: It is well known that the human brain is activated during musical imagery: the act of voluntarily hearing music in our mind without external stimulation. It is unclear, however, what the temporal dynamics of this activation are, as well as what musical features are precisely encoded in the neural signals. This study uses an experimental paradigm with high temporal precision to record and analyze the cortical activity during musical imagery. It reveals that neural signals encode music acoustics and melodic expectations during both listening and imagery. Crucially, it also finds that a simple mapping based on a time shift and a polarity inversion robustly describes the relationship between listening and imagery signals.
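
The time-shift-plus-polarity-inversion mapping between listening and imagery responses can be illustrated with a small simulation. In this sketch the signals and the lag are entirely synthetic (not the study's data or actual delay); the transformation is recovered by scanning a cross-correlation over candidate lags and reading off the best-fitting lag and its sign.

```python
import numpy as np

rng = np.random.default_rng(2)
fs = 64
listen = rng.standard_normal(fs * 10)                  # stand-in listening response
true_lag = 8                                           # 125 ms at 64 Hz (invented)
imagine = -np.roll(listen, true_lag) + 0.3 * rng.standard_normal(listen.size)

# Scan candidate lags; the best-fitting (lag, sign) describes the transformation.
lags = np.arange(-20, 21)
corr = [np.corrcoef(np.roll(listen, k), imagine)[0, 1] for k in lags]
k = int(np.argmax(np.abs(corr)))
print(lags[k], np.sign(corr[k]))                       # recovers lag=8, polarity -1
```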


Subject(s)
Auditory Cortex/physiology , Brain Mapping , Frontal Lobe/physiology , Imagination/physiology , Motivation/physiology , Music/psychology , Acoustic Stimulation , Adult , Electroencephalography , Electromyography , Evoked Potentials/physiology , Evoked Potentials, Auditory/physiology , Female , Humans , Male , Markov Chains , Occupations , Symbolism , Young Adult
8.
Front Neurosci ; 15: 673401, 2021.
Article in English | MEDLINE | ID: mdl-34421512

ABSTRACT

Music perception requires the human brain to process a variety of acoustic and music-related properties. Recent research has used encoding models to tease apart and study the various cortical contributors to music perception. To do so, such approaches study temporal response functions that summarise the neural activity over several minutes of data. Here we tested the possibility of assessing the neural processing of individual musical units (bars) with electroencephalography (EEG). We devised a decoding methodology based on a maximum correlation metric across EEG segments (maxCorr) and used it to decode melodies from EEG in an experiment where professional musicians listened to and imagined four Bach melodies multiple times. We demonstrate that accurate decoding of melodies in single subjects and at the level of individual musical units is possible, both from EEG signals recorded during listening and imagination. Furthermore, we find that greater decoding accuracies are measured for the maxCorr method than for an envelope reconstruction approach based on backward temporal response functions (bTRFenv). These results indicate that low-frequency neural signals, especially cortical signals below 1 Hz, encode information beyond note timing, including pitch-related information. Along with the theoretical implications of these results, we discuss the potential applications of this decoding methodology in the context of novel brain-computer interface solutions.
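
The abstract describes maxCorr only at a high level, so the sketch below is a guess at the simplest rule it could reduce to: assign an EEG segment to the melody whose template it correlates with best. All data shapes and names are invented for illustration.

```python
import numpy as np

def maxcorr_decode(segment, templates):
    """Assign an EEG segment (channels x time) to the class whose template
    correlates with it best -- a plain maximum-correlation rule, offered here
    as a hypothetical stand-in for the paper's maxCorr method."""
    def r(a, b):
        a, b = a.ravel() - a.mean(), b.ravel() - b.mean()
        return a @ b / np.sqrt((a @ a) * (b @ b))
    scores = [r(segment, tpl) for tpl in templates]
    return int(np.argmax(scores)), scores

rng = np.random.default_rng(3)
templates = [rng.standard_normal((8, 128)) for _ in range(4)]   # 4 melodies' averages
trial = templates[2] + 0.5 * rng.standard_normal((8, 128))      # noisy repeat of melody 2
print(maxcorr_decode(trial, templates)[0])                      # -> 2
```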

9.
Hear Res ; 404: 108213, 2021 May.
Article in English | MEDLINE | ID: mdl-33662686

ABSTRACT

Musicians say that the pitches of tones with a frequency ratio of 2:1 (one octave) have a distinctive affinity, even if the tones do not have common spectral components. It has been suggested, however, that this affinity judgment has no biological basis and originates instead from an acculturation process: the learning of musical rules unrelated to auditory physiology. We measured, in young amateur musicians, the perceptual detectability of octave mistunings for tones presented alternately (melodic condition) or simultaneously (harmonic condition). In the melodic condition, mistuning was detectable only by means of explicit pitch comparisons. In the harmonic condition, listeners could use a different and more efficient perceptual cue: in the absence of mistuning, the tones fused into a single sound percept; mistunings decreased fusion. Performance was globally better in the harmonic condition, in line with the hypothesis that listeners used a fusion cue in this condition; this hypothesis was also supported by results showing that an illusory simultaneity of the tones was much less advantageous than a real simultaneity. In the two conditions, mistuning detection was generally better for octave compressions than for octave stretchings. This asymmetry varied across listeners, but crucially, the listener-specific asymmetries observed in the two conditions were highly correlated. Thus, the perception of the melodic octave appeared to be closely linked to the phenomenon of harmonic fusion. As harmonic fusion is thought to be determined by biological factors rather than factors related to musical culture or training, we argue that octave pitch affinity also has, at least in part, a biological basis.


Subject(s)
Music , Pitch Perception , Acoustic Stimulation , Judgment , Sound
10.
Trends Hear ; 25: 2331216520978029, 2021.
Article in English | MEDLINE | ID: mdl-33620023

ABSTRACT

Spectrotemporal modulations (STM) are essential features of speech signals that make them intelligible. While their encoding has been widely investigated in neurophysiology, we still lack a full understanding of how STMs are processed at the behavioral level and how cochlear hearing loss impacts this processing. Here, we introduce a novel methodological framework based on psychophysical reverse correlation deployed in the modulation space to characterize the mechanisms underlying STM detection in noise. We derive perceptual filters for young normal-hearing and older hearing-impaired individuals performing a detection task of an elementary target STM (a given product of temporal and spectral modulations) embedded in other masking STMs. Analyzed with computational tools, our data show that both groups rely on a comparable linear (band-pass)-nonlinear processing cascade, which can be well accounted for by a temporal modulation filter bank model combined with cross-correlation against the target representation. Our results also suggest that the modulation mistuning observed for the hearing-impaired group results primarily from broader cochlear filters. Yet, we find idiosyncratic behaviors that cannot be captured by cochlear tuning alone, highlighting the need to consider variability originating from additional mechanisms. Overall, this integrated experimental-computational approach offers a principled way to assess suprathreshold processing distortions in each individual and could thus be used to further investigate interindividual differences in speech intelligibility.
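
Psychophysical reverse correlation of this general kind can be illustrated with a simulated observer. In the numpy sketch below (all parameters invented, and a simple template-matching observer standing in for a real listener), the derived perceptual filter, the mean masker on "yes" trials minus the mean masker on "no" trials, peaks at the target's modulation bin, which is the logic behind the perceptual filters described above.

```python
import numpy as np

rng = np.random.default_rng(4)
n_trials, n_sm, n_tm = 2000, 16, 16        # grid of spectral x temporal modulations
target = np.zeros((n_sm, n_tm)); target[8, 8] = 1.0   # elementary target STM

maskers = rng.standard_normal((n_trials, n_sm, n_tm))
present = rng.random(n_trials) < 0.5
# Simulated observer: template matcher with internal noise.
evidence = (maskers * target).sum(axis=(1, 2)) + 3 * present \
           + rng.standard_normal(n_trials)
resp = evidence > 1.5                      # "yes, target present" decisions

# Perceptual filter: mean masker on "yes" minus mean masker on "no" trials.
pf = maskers[resp].mean(axis=0) - maskers[~resp].mean(axis=0)
print(np.unravel_index(np.argmax(pf), pf.shape))   # peaks near the target bin (8, 8)
```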


Subject(s)
Hearing Loss, Sensorineural , Speech Perception , Auditory Threshold , Hearing , Hearing Loss, Sensorineural/diagnosis , Humans , Noise/adverse effects , Perceptual Masking
11.
Cereb Cortex Commun ; 2(1): tgaa091, 2021.
Article in English | MEDLINE | ID: mdl-33506209

ABSTRACT

Action and perception are closely linked in many behaviors necessitating a close coordination between sensory and motor neural processes so as to achieve a well-integrated smoothly evolving task performance. To investigate the detailed nature of these sensorimotor interactions, and their role in learning and executing the skilled motor task of speaking, we analyzed ECoG recordings of responses in the high-γ band (70-150 Hz) in human subjects while they listened to, spoke, or silently articulated speech. We found elaborate spectrotemporally modulated neural activity projecting in both "forward" (motor-to-sensory) and "inverse" directions between the higher-auditory and motor cortical regions engaged during speaking. Furthermore, mathematical simulations demonstrate a key role for the forward projection in "learning" to control the vocal tract, beyond its commonly postulated predictive role during execution. These results therefore offer a broader view of the functional role of the ubiquitous forward projection as an important ingredient in learning, rather than just control, of skilled sensorimotor tasks.

12.
Neuroimage ; 227: 117586, 2021 Feb 15.
Article in English | MEDLINE | ID: mdl-33346131

ABSTRACT

Acquiring a new language requires individuals to simultaneously and gradually learn linguistic attributes on multiple levels. Here, we investigated how this learning process changes the neural encoding of natural speech by assessing the encoding of the linguistic feature hierarchy in second-language listeners. Electroencephalography (EEG) signals were recorded from native Mandarin speakers with varied English proficiency and from native English speakers while they listened to audio-stories in English. We measured the temporal response functions (TRFs) for acoustic, phonemic, phonotactic, and semantic features in individual participants and found a main effect of proficiency on linguistic encoding. This effect of second-language proficiency was particularly prominent on the neural encoding of phonemes, showing stronger encoding of "new" phonemic contrasts (i.e., English contrasts that do not exist in Mandarin) with increasing proficiency. Overall, we found that the nonnative listeners with higher proficiency levels had a linguistic feature representation more similar to that of native listeners, which enabled the accurate decoding of language proficiency. This result advances our understanding of the cortical processing of linguistic information in second-language learners and provides an objective measure of language proficiency.
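
Temporal response functions of the kind measured here are commonly estimated by ridge regression on time-lagged copies of a stimulus feature. The sketch below shows that standard recipe for one feature and one channel, on synthetic data; the study's multi-feature analysis would stack more feature columns into the same design matrix. Circular lagging via np.roll is a shortcut that ignores edge effects.

```python
import numpy as np

def trf_ridge(stim, eeg, fs, tmin=-0.1, tmax=0.4, lam=1e2):
    """Estimate a temporal response function by ridge regression on a
    time-lagged copy of the stimulus feature (one feature, one channel)."""
    lags = np.arange(int(tmin * fs), int(tmax * fs) + 1)
    X = np.column_stack([np.roll(stim, k) for k in lags])   # lagged design matrix
    w = np.linalg.solve(X.T @ X + lam * np.eye(len(lags)), X.T @ eeg)
    return lags / fs, w

rng = np.random.default_rng(5)
fs, n = 64, 64 * 60
stim = rng.standard_normal(n)                        # e.g. a speech-envelope feature
kernel = np.exp(-np.arange(10) / 3.0)                # toy "brain" impulse response
eeg = np.convolve(stim, kernel)[:n] + rng.standard_normal(n)
times, w = trf_ridge(stim, eeg, fs)                  # w recovers the kernel shape
```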


Subject(s)
Brain/physiology , Comprehension/physiology , Multilingualism , Speech Perception/physiology , Adolescent , Adult , Electroencephalography , Female , Humans , Language , Male , Middle Aged , Phonetics , Young Adult
13.
Brain Struct Funct ; 225(5): 1643-1667, 2020 Jun.
Article in English | MEDLINE | ID: mdl-32458050

ABSTRACT

Recent studies of the neurobiology of the dorsal frontal cortex (FC) of the ferret have illuminated its key role in the attention network, top-down cognitive control of sensory processing, and goal-directed behavior. To elucidate the neuroanatomical regions of the dorsal FC, and delineate the boundary between premotor cortex (PMC) and dorsal prefrontal cortex (dPFC), we placed retrograde tracers in adult ferret dorsal FC anterior to primary motor cortex and analyzed thalamo-cortical connectivity. Cyto- and myeloarchitectural differences across dorsal FC and the distinctive projection patterns from thalamic nuclei, especially from the subnuclei of the medial dorsal (MD) nucleus and the ventral thalamic nuclear group, make it possible to clearly differentiate three separate dorsal FC fields anterior to primary motor cortex: polar dPFC (dPFCpol), dPFC, and PMC. Based on the thalamic connectivity, there is a striking similarity between the ferret's dorsal FC fields and those of other species. This possible homology opens up new questions for future comparative neuroanatomical and functional studies.


Subject(s)
Motor Cortex/cytology , Neurons/cytology , Prefrontal Cortex/cytology , Thalamic Nuclei/cytology , Animals , Female , Ferrets , Male , Neural Pathways/cytology , Neuroanatomical Tract-Tracing Techniques
14.
Curr Biol ; 30(9): 1649-1663.e5, 2020 May 4.
Article in English | MEDLINE | ID: mdl-32220317

ABSTRACT

Categorical perception is a fundamental cognitive function enabling animals to flexibly assign sounds into behaviorally relevant categories. This study investigates the nature of acoustic category representations, their emergence in an ascending series of ferret auditory and frontal cortical fields, and the dynamics of this representation during passive listening to task-relevant stimuli and during active retrieval from memory while engaging in learned categorization tasks. Ferrets were trained on two auditory Go-NoGo categorization tasks to discriminate two non-compact sound categories (composed of tones or amplitude-modulated noise). Neuronal responses became progressively more categorical in higher cortical fields, especially during task performance. The dynamics of the categorical responses exhibited a cascading top-down modulation pattern that began earliest in the frontal cortex and subsequently flowed downstream to the secondary auditory cortex, followed by the primary auditory cortex. In a subpopulation of neurons, categorical responses persisted even during the passive listening condition, demonstrating memory for task categories and their enhanced categorical boundaries.


Subject(s)
Auditory Cortex/physiology , Auditory Perception/physiology , Frontal Lobe/physiology , Sound , Acoustic Stimulation , Animals , Behavior, Animal , Female , Ferrets , Learning , Monitoring, Physiologic
15.
Elife ; 9, 2020 Mar 3.
Article in English | MEDLINE | ID: mdl-32122465

ABSTRACT

Human engagement in music rests on underlying elements such as the listeners' cultural background and interest in music. These factors modulate how listeners anticipate musical events, a process inducing instantaneous neural responses as the music confronts these expectations. Measuring such neural correlates would represent a direct window into high-level brain processing. Here we recorded cortical signals as participants listened to Bach melodies. We assessed the relative contributions of acoustic versus melodic components of the music to the neural signal. Melodic features included information on pitch progressions and their tempo, which were extracted from a predictive model of musical structure based on Markov chains. We related the music to brain activity with temporal response functions, demonstrating, for the first time, distinct cortical encoding of pitch and note-onset expectations during naturalistic music listening. This encoding was most pronounced at response latencies up to 350 ms, and in both planum temporale and Heschl's gyrus.
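
A first-order Markov model over note transitions is enough to illustrate the kind of expectation signal used as a regressor here; the study's actual model of melodic structure is richer, so this is only a toy stand-in. Each note's surprisal, -log2 of its transition probability, is the quantity one would relate to the neural response.

```python
import numpy as np
from collections import Counter, defaultdict

def surprisal(melodies, test):
    """First-order Markov model of note transitions: count training transitions,
    then return -log2 P(note | previous note) for each note of a test melody."""
    counts = defaultdict(Counter)
    for mel in melodies:
        for a, b in zip(mel, mel[1:]):
            counts[a][b] += 1
    out = []
    for a, b in zip(test, test[1:]):
        total = sum(counts[a].values())
        p = (counts[a][b] + 1) / (total + 128)   # add-one smoothing over MIDI range
        out.append(-np.log2(p))
    return out

train = [[60, 62, 64, 65, 67, 65, 64, 62, 60]] * 10   # toy scale-like corpus
print(surprisal(train, [60, 62, 64, 66]))   # unseen final transition -> high surprisal
```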


Subject(s)
Auditory Perception/physiology , Music , Temporal Lobe/physiology , Acoustic Stimulation , Electroencephalography , Evoked Potentials, Auditory/physiology , Humans , Reaction Time
16.
J Neurosci ; 39(44): 8664-8678, 2019 Oct 30.
Article in English | MEDLINE | ID: mdl-31519821

ABSTRACT

Natural sounds such as vocalizations often have covarying acoustic attributes, resulting in redundancy in neural coding. The efficient coding hypothesis proposes that sensory systems are able to detect such covariation and adapt to reduce redundancy, leading to more efficient neural coding. Recent psychoacoustic studies have shown that the auditory system can rapidly adapt to efficiently encode two covarying dimensions as a single dimension, following passive exposure to sounds in which temporal and spectral attributes covaried in a correlated fashion. However, these studies observed a cost to this adaptation: a loss of sensitivity to the orthogonal dimension. Here we explore the neural basis of this psychophysical phenomenon by recording single-unit responses from the primary auditory cortex in awake ferrets exposed passively to stimuli with two correlated attributes, similar in stimulus design to the psychoacoustic experiments in humans. We found: (1) the signal-to-noise ratio of spike-rate coding of cortical responses driven by sounds with correlated attributes remained unchanged along the exposure dimension, but was reduced along the orthogonal dimension; (2) performance of a decoder trained on spike data to discriminate stimuli along the orthogonal dimension was equally reduced; (3) correlations between neurons tuned to the two covarying attributes decreased after exposure; and (4) these exposure effects still occurred if sounds were correlated along two acoustic dimensions but varied randomly along a third dimension. These neurophysiological results are consistent with the efficient coding hypothesis and may help deepen our understanding of how the auditory system encodes and represents acoustic regularities and covariance. SIGNIFICANCE STATEMENT: The efficient coding (EC) hypothesis (Attneave, 1954; Barlow, 1961) proposes that the neural code in sensory systems efficiently encodes natural stimuli by minimizing the number of spikes needed to transmit a sensory signal. Results of recent psychoacoustic studies in humans are consistent with the EC hypothesis in that, following passive exposure to stimuli with correlated attributes, the auditory system rapidly adapts so as to more efficiently encode the two covarying dimensions as a single dimension. In the current neurophysiological experiments, using a stimulus design and experimental paradigm similar to those of the psychoacoustic studies of Stilp et al. (2010) and Stilp and Kluender (2011, 2012, 2016), we recorded responses from single neurons in the auditory cortex of the awake ferret, showing adaptive efficient neural coding of two correlated acoustic attributes.
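
One simple way to operationalize a "signal-to-noise ratio of spike-rate coding" along a stimulus dimension is the variance of the mean rate across stimulus values divided by the average within-stimulus (trial-to-trial) variance. The sketch below applies that definition to simulated Poisson spike counts; it is an assumption for illustration, and the paper's exact metric may differ.

```python
import numpy as np

def rate_snr(rates):
    """SNR of rate coding along one stimulus dimension: variance of the mean
    tuning curve across stimuli over the mean within-stimulus variance
    (one plausible operationalization, not necessarily the paper's)."""
    means = rates.mean(axis=1)                 # (n_stimuli,) tuning curve
    return means.var() / rates.var(axis=1).mean()

rng = np.random.default_rng(6)
stim_values = np.linspace(0, 1, 8)
tuning = np.exp(-((stim_values - 0.5) ** 2) / 0.02) * 20   # Hz, toy tuning curve
rates = rng.poisson(tuning[:, None] + 5, size=(8, 50))     # 50 trials per stimulus
print(rate_snr(rates))                                     # high: informative coding
```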


Subject(s)
Adaptation, Physiological , Auditory Cortex/physiology , Auditory Perception/physiology , Neurons/physiology , Acoustic Stimulation , Action Potentials , Animals , Female , Ferrets , Models, Neurological , Psychoacoustics
17.
Front Comput Neurosci ; 13: 28, 2019.
Article in English | MEDLINE | ID: mdl-31178710

ABSTRACT

Previous studies have shown that the auditory cortex can enhance the perception of behaviorally important sounds in the presence of background noise, but the mechanisms by which it does this are not yet elucidated. Rapid plasticity of spectrotemporal receptive fields (STRFs) in the primary (A1) cortical neurons is observed during behavioral tasks that require discrimination of particular sounds. This rapid task-related change is believed to be one of the processing strategies utilized by the auditory cortex to selectively attend to one stream of sound in the presence of mixed sounds. However, the mechanism by which the brain evokes this rapid plasticity in the auditory cortex remains unclear. This paper uses a neural network model to investigate how synaptic transmission within the cortical neuron network can change the receptive fields of individual neurons. A sound signal was used as input to a model of the cochlea and auditory periphery, which activated or inhibited integrate-and-fire neuron models to represent networks in the primary auditory cortex. Each neuron in the network was tuned to a different frequency. All neurons were interconnected with excitatory or inhibitory synapses of varying strengths. Action potentials in one of the model neurons were used to calculate the receptive field using reverse correlation. The results were directly compared to previously recorded electrophysiological data from ferrets performing behavioral tasks that require discrimination of particular sounds. The neural network model could reproduce complex STRFs observed experimentally through optimizing the synaptic weights in the model. The model predicts that altering synaptic drive between cortical neurons and/or bottom-up synaptic drive from the cochlear model to the cortical neurons can account for rapid task-related changes observed experimentally in A1 neurons. By identifying changes in the synaptic drive during behavioral tasks, the model provides insights into the neural mechanisms utilized by the auditory cortex to enhance the perception of behaviorally salient sounds.
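
The modeling pipeline described, noise-driven integrate-and-fire neurons whose receptive fields are recovered by reverse correlation, can be miniaturized to a single neuron. In the sketch below (all constants invented, and white noise standing in for the cochlear-model output), a leaky integrate-and-fire unit receives frequency-tuned input and its STRF is estimated as the spike-triggered average of the preceding stimulus.

```python
import numpy as np

rng = np.random.default_rng(7)
dt, n = 1e-3, 100_000                     # 1 ms steps, 100 s of simulation
n_freq = 32                               # tonotopic input channels
stim = rng.standard_normal((n, n_freq))   # white-noise drive (cochlear stand-in)
w_in = np.exp(-0.5 * ((np.arange(n_freq) - 16) / 2.0) ** 2)
w_in /= np.linalg.norm(w_in)              # unit-norm tuning around channel 16

v, vth, tau = 0.0, 1.0, 0.02
spikes = np.zeros(n, dtype=bool)
for i in range(n):                        # leaky integrate-and-fire dynamics
    v += (dt / tau) * (-v + 3.0 * stim[i] @ w_in)
    if v >= vth:
        spikes[i], v = True, 0.0          # spike and reset

# Reverse correlation: spike-triggered average over the preceding 50 ms ~ an STRF
lags = 50
idx = [i for i in np.nonzero(spikes)[0] if i >= lags]
sta = np.mean([stim[i - lags:i] for i in idx], axis=0)
print(len(idx), sta.shape)                # STRF estimate of shape (lags, n_freq)
```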

18.
Hear Res ; 377: 109-121, 2019 Jun.
Article in English | MEDLINE | ID: mdl-30927686

ABSTRACT

The relative importance of neural temporal and place coding in auditory perception is still a matter of much debate. The current article is a compilation of viewpoints from leading auditory psychophysicists and physiologists regarding the upper frequency limit for the use of neural phase locking to code temporal fine structure in humans. While phase locking is used for binaural processing up to about 1500 Hz, there is disagreement regarding the use of monaural phase-locking information at higher frequencies. Estimates of the general upper limit proposed by the contributors range from 1500 to 10000 Hz. The arguments depend on whether or not phase locking is needed to explain psychophysical discrimination performance at frequencies above 1500 Hz, and whether or not the phase-locked neural representation is sufficiently robust at these frequencies to provide useable information. The contributors suggest key experiments that may help to resolve this issue, and experimental findings that may cause them to change their minds. This issue is of crucial importance to our understanding of the neural basis of auditory perception in general, and of pitch perception in particular.


Subject(s)
Cochlear Nerve/physiology , Cues , Pitch Perception , Time Perception , Acoustic Stimulation , Humans , Motion , Periodicity , Pressure , Psychoacoustics , Sound
19.
J Acoust Soc Am ; 145(2): 615, 2019 Feb.
Article in English | MEDLINE | ID: mdl-30823787

ABSTRACT

Pitch is a fundamental attribute in auditory perception, involved in source identification and segregation, music, and speech understanding. Pitch percepts are intimately related to the harmonic resolvability of sound. When harmonics are well resolved, the induced pitch is usually salient and precise, and several models relying on autocorrelations or harmonic spectral templates can account for these percepts. However, when harmonics are not completely resolved, the pitch percept becomes less salient and poorly discriminated, with an upper range limited to a few hundred hertz, and spectral templates fail to convey the percept since only temporal cues are available. Here, a biologically motivated model is presented that combines spectral and temporal cues to account for both percepts. The model explains how temporal analysis to estimate the pitch of the unresolved harmonics is performed by bandpass filters implemented by resonances in the dendritic trees of neurons in the early auditory pathway. It is demonstrated that organizing and exploiting such dendritic tuning can occur spontaneously in response to white noise. The paper then shows how temporal cues from unresolved harmonics may be integrated with spectrally resolved harmonics, creating spectro-temporal harmonic templates for all pitch percepts. Finally, the model extends its account of monaural pitch percepts to pitches evoked by dichotic binaural stimuli.
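
The temporal half of such a model, estimating pitch from periodicity cues alone, is classically an autocorrelation peak search. The sketch below applies that textbook computation (not the paper's dendritic implementation) to a missing-fundamental complex built from high harmonics only, recovering the pitch even though no energy exists at the fundamental.

```python
import numpy as np

def autocorr_pitch(x, fs, fmin=50.0, fmax=500.0):
    """Estimate pitch as the autocorrelation peak within a plausible lag range --
    the temporal-cue computation, in its simplest textbook form."""
    ac = np.correlate(x, x, mode="full")[len(x) - 1:]   # non-negative lags
    lo, hi = int(fs / fmax), int(fs / fmin)
    return fs / (lo + np.argmax(ac[lo:hi]))

fs, f0 = 16000, 160.0
t = np.arange(int(fs * 0.1)) / fs
# Unresolved-style stimulus: high harmonics only (10th-15th), no energy at F0
x = sum(np.sin(2 * np.pi * k * f0 * t) for k in range(10, 16))
print(autocorr_pitch(x, fs))   # ~160 Hz despite the missing fundamental
```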


Subject(s)
Acoustic Stimulation , Auditory Perception/physiology , Models, Biological , Pitch Perception/physiology , Algorithms , Auditory Pathways , Cues , Humans , Sound Spectrography
20.
Nat Neurosci ; 22(3): 447-459, 2019 Mar.
Article in English | MEDLINE | ID: mdl-30692690

ABSTRACT

In higher sensory cortices, there is a gradual transformation from sensation to perception and action. In the auditory system, this transformation is revealed by responses in the rostral ventral posterior auditory field (VPr), a tertiary area in the ferret auditory cortex, which shows long-term learning in trained compared to naïve animals, arising from selectively enhanced responses to behaviorally relevant target stimuli. This enhanced representation is further amplified during active performance of spectral or temporal auditory discrimination tasks. VPr also shows sustained short-term memory activity after target stimulus offset, correlated with task response timing and action. These task-related changes in auditory filter properties enable VPr neurons to quickly and nimbly switch between different responses to the same acoustic stimuli, reflecting either spectrotemporal properties, timing, or behavioral meaning of the sound. Furthermore, they demonstrate an interaction between the dynamics of short-term attention and long-term learning, as incoming sound is selectively attended, recognized, and translated into action.


Subject(s)
Auditory Cortex/physiology , Auditory Perception/physiology , Discrimination, Psychological/physiology , Neurons/physiology , Acoustic Stimulation , Adaptation, Physiological , Animals , Behavior, Animal , Choice Behavior , Female , Ferrets