1.
Commun Biol ; 6(1): 456, 2023 05 02.
Article in English | MEDLINE | ID: mdl-37130918

ABSTRACT

For robust vocalization perception, the auditory system must generalize over variability in vocalization production as well as variability arising from the listening environment (e.g., noise and reverberation). We previously demonstrated using guinea pig and marmoset vocalizations that a hierarchical model generalized over production variability by detecting sparse intermediate-complexity features that are maximally informative about vocalization category from a dense spectrotemporal input representation. Here, we explore three biologically feasible model extensions to generalize over environmental variability: (1) training in degraded conditions, (2) adaptation to sound statistics in the spectrotemporal stage and (3) sensitivity adjustment at the feature detection stage. All mechanisms improved vocalization categorization performance, but improvement trends varied across degradation type and vocalization type. One or more adaptive mechanisms were required for model performance to approach the behavioral performance of guinea pigs on a vocalization categorization task. These results highlight the contributions of adaptive mechanisms at multiple auditory processing stages to achieve robust auditory categorization.
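Of the adaptive mechanisms listed above, adaptation to sound statistics at the spectrotemporal stage can be illustrated with a simple mean-and-variance gain control. The sketch below is a generic illustration under assumed parameters (channel count, time constant `tau`), not the model's actual implementation:

```python
import numpy as np

def adaptive_gain(spectrogram, tau=50.0):
    """Mean-variance adaptation: normalize each frequency channel by a
    running estimate of its mean and scale, so that responses adapt to
    the statistics of the current listening environment."""
    out = np.zeros_like(spectrogram, dtype=float)
    mean = np.zeros(spectrogram.shape[0])
    var = np.ones(spectrogram.shape[0])
    alpha = 1.0 / tau
    for t in range(spectrogram.shape[1]):
        x = spectrogram[:, t]
        mean += alpha * (x - mean)               # exponential running mean
        var += alpha * ((x - mean) ** 2 - var)   # running variance estimate
        out[:, t] = (x - mean) / np.sqrt(var + 1e-6)
    return out

# A constant loud background is progressively normalized away
spec = np.ones((4, 500)) * 10.0   # 4 channels, 500 time bins of steady noise
adapted = adaptive_gain(spec)
```

After sustained exposure to a constant background, the normalized output shrinks toward zero, which is the sense in which such a stage could generalize over the listening environment.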


Subject(s)
Auditory Cortex , Vocalization, Animal , Animals , Guinea Pigs , Noise , Sound , Auditory Perception , Callithrix
2.
J Vis Exp ; (191), 2023 01 06.
Article in English | MEDLINE | ID: mdl-36688548

ABSTRACT

Noise exposure is a leading cause of sensorineural hearing loss. Animal models of noise-induced hearing loss have generated mechanistic insight into the underlying anatomical and physiological pathologies of hearing loss. However, relating behavioral deficits observed in humans with hearing loss to behavioral deficits in animal models remains challenging. Here, pupillometry is proposed as a method that will enable the direct comparison of animal and human behavioral data. The method is based on a modified oddball paradigm - habituating the subject to the repeated presentation of a stimulus and intermittently presenting a deviant stimulus that varies in some parametric fashion from the repeated stimulus. The fundamental premise is that if the change between the repeated and deviant stimulus is detected by the subject, it will trigger a pupil dilation response that is larger than that elicited by the repeated stimulus. This approach is demonstrated using a vocalization categorization task in guinea pigs, an animal model widely used in auditory research, including in hearing loss studies. By presenting vocalizations from one vocalization category as standard stimuli and a second category as oddball stimuli embedded in noise at various signal-to-noise ratios, it is demonstrated that the magnitude of pupil dilation in response to the oddball category varies monotonically with the signal-to-noise ratio. Growth curve analyses can then be used to characterize the time course and statistical significance of these pupil dilation responses. In this protocol, detailed procedures for acclimating guinea pigs to the setup, conducting pupillometry, and evaluating/analyzing data are described. Although this technique is demonstrated in normal-hearing guinea pigs in this protocol, the method may be used to assess the sensory effects of various forms of hearing loss within each subject. 
These effects may then be correlated with concurrent electrophysiological measures and post-hoc anatomical observations.
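The oddball comparison at the heart of this protocol can be sketched in a few lines: baseline-correct each pupil trace, then compare dilation between standard and deviant trials. The synthetic traces, array shapes, and peak measure below are illustrative only; the protocol itself evaluates responses with growth curve analysis rather than a single peak statistic:

```python
import numpy as np

def oddball_dilation(standard_trials, deviant_trials, baseline_samples=10):
    """Baseline-correct each trial, then compare peak pupil dilation
    between repeated (standard) and deviant stimulus presentations."""
    def peak_dilation(trials):
        trials = np.asarray(trials, dtype=float)
        baseline = trials[:, :baseline_samples].mean(axis=1, keepdims=True)
        corrected = trials - baseline   # dilation relative to pre-stimulus baseline
        return corrected.max(axis=1)    # peak dilation per trial
    return peak_dilation(standard_trials).mean(), peak_dilation(deviant_trials).mean()

# Synthetic traces: deviant trials dilate twice as much as standards
t = np.linspace(0, 1, 50)
standard = [1.0 + 0.1 * np.exp(-((t - 0.5) ** 2) / 0.02) for _ in range(20)]
deviant = [1.0 + 0.2 * np.exp(-((t - 0.5) ** 2) / 0.02) for _ in range(20)]
std_mean, dev_mean = oddball_dilation(standard, deviant)
```

On these synthetic traces the deviant evokes twice the dilation of the standard, mirroring the larger responses expected when a stimulus change is detected.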


Subject(s)
Hearing Loss, Sensorineural , Hearing Loss , Humans , Guinea Pigs , Animals , Noise , Sensation
3.
Hear Res ; 429: 108697, 2023 03 01.
Article in English | MEDLINE | ID: mdl-36696724

ABSTRACT

To generate insight from experimental data, it is critical to understand the inter-relationships between individual data points and place them in context within a structured framework. Quantitative modeling can provide the scaffolding for such an endeavor. Our main objective in this review is to provide a primer on the range of quantitative tools available to experimental auditory neuroscientists. Quantitative modeling is advantageous because it can provide a compact summary of observed data, make underlying assumptions explicit, and generate predictions for future experiments. Quantitative models may be developed to characterize or fit observed data, to test theories of how a task may be solved by neural circuits, to determine how observed biophysical details might contribute to measured activity patterns, or to predict how an experimental manipulation would affect neural activity. In complexity, quantitative models can range from those that are highly biophysically realistic and that include detailed simulations at the level of individual synapses, to those that use abstract and simplified neuron models to simulate entire networks. Here, we survey the landscape of recently developed models of auditory cortical processing, highlighting a small selection of models to demonstrate how they help generate insight into the mechanisms of auditory processing. We discuss examples ranging from models that use details of synaptic properties to explain the temporal pattern of cortical responses to those that use modern deep neural networks to gain insight into human fMRI data. We conclude by discussing a biologically realistic and interpretable model that our laboratory has developed to explore aspects of vocalization categorization in the auditory pathway.


Subject(s)
Auditory Cortex , Humans , Auditory Cortex/physiology , Acoustic Stimulation , Auditory Perception/physiology , Auditory Pathways/physiology , Neural Networks, Computer , Models, Neurological
4.
Elife ; 11, 2022 10 13.
Article in English | MEDLINE | ID: mdl-36226815

ABSTRACT

Vocal animals produce multiple categories of calls with high between- and within-subject variability, over which listeners must generalize to accomplish call categorization. The behavioral strategies and neural mechanisms that support this ability to generalize are largely unexplored. We previously proposed a theoretical model that accomplished call categorization by detecting features of intermediate complexity that best contrasted each call category from all other categories. We further demonstrated that some neural responses in the primary auditory cortex were consistent with such a model. Here, we asked whether a feature-based model could predict call categorization behavior. We trained both the model and guinea pigs (GPs) on call categorization tasks using natural calls. We then tested categorization by the model and GPs using temporally and spectrally altered calls. Both the model and GPs were surprisingly resilient to temporal manipulations, but sensitive to moderate frequency shifts. Critically, the model predicted about 50% of the variance in GP behavior. By adopting different model training strategies and examining features that contributed to solving specific tasks, we could gain insight into possible strategies used by animals to categorize calls. Our results validate a model that uses the detection of intermediate-complexity contrastive features to accomplish call categorization.


Subject(s)
Auditory Cortex , Guinea Pigs , Animals , Auditory Cortex/physiology , Vocalization, Animal/physiology , Behavior, Animal/physiology , Auditory Perception/physiology , Acoustic Stimulation
5.
Hear Res ; 424: 108603, 2022 10.
Article in English | MEDLINE | ID: mdl-36099806

ABSTRACT

For gaining insight into general principles of auditory processing, it is critical to choose model organisms whose set of natural behaviors encompasses the processes being investigated. This reasoning has led to the development of a variety of animal models for auditory neuroscience research, such as guinea pigs, gerbils, chinchillas, rabbits, and ferrets, but in recent years, the availability of cutting-edge molecular tools and other methodologies in the mouse model has led to waning interest in these unique model species. As laboratories increasingly look to include in-vivo components in their research programs, a comprehensive description of procedures and techniques for applying some of these modern neuroscience tools to a non-mouse small animal model would enable researchers to leverage unique model species that may be best suited for testing their specific hypotheses. In this manuscript, we describe in detail the methods we have developed to apply these tools to the guinea pig animal model to answer questions regarding the neural processing of complex sounds, such as vocalizations. We describe techniques for vocalization acquisition, behavioral testing, recording of auditory brainstem responses and frequency-following responses, intracranial neural signals including local field potential and single unit activity, and the expression of transgenes allowing for optogenetic manipulation of neural activity, all in awake and head-fixed guinea pigs. We demonstrate the rich datasets at the behavioral and electrophysiological levels that can be obtained using these techniques, underscoring the guinea pig as a versatile animal model for studying complex auditory processing. More generally, the methods described here are applicable to a broad range of small mammals, enabling investigators to address specific auditory processing questions in model organisms that are best suited for answering them.


Subject(s)
Auditory Cortex , Acoustic Stimulation , Animals , Auditory Cortex/physiology , Chinchilla , Ferrets , Gerbillinae , Guinea Pigs , Hearing , Models, Animal , Neurons/physiology , Rabbits , Vocalization, Animal/physiology
6.
Hear Res ; 420: 108520, 2022 07.
Article in English | MEDLINE | ID: mdl-35617926

ABSTRACT

Acoustic overexposure can lead to decreased inhibition in auditory centers, including the inferior colliculus (IC), and has been implicated in the development of central auditory pathologies. While systemic drugs that increase GABAergic transmission have been shown to provide symptomatic relief, their side effect profiles impose an upper limit on the dose and duration of use. A treatment that locally increases inhibition in auditory nuclei could mitigate these side effects. One such approach could be transplantation of inhibitory precursor neurons derived from the medial ganglionic eminence (MGE). The present study investigated whether transplanted MGE cells can survive and integrate into the IC of non-noise exposed and noise exposed mice. MGE cells were harvested on embryonic days 12-14 and injected bilaterally into the IC of adult mice, with or without previous noise exposure. At one week post transplantation, MGE cells possessed small, elongated soma and bipolar processes, characteristic of migrating cells. By 5 weeks, MGE cells exhibited a more mature morphology, with multiple branching processes and axons with boutons that stain positive for the vesicular GABA transporter (VGAT). The MGE survival rate at 14 weeks post transplantation was 1.7% in non-noise exposed subjects. MGE survival rate was not significantly affected by noise exposure (1.2%). In both groups the vast majority of transplanted MGE cells (>97%) expressed the vesicular GABA transporter. Furthermore, electron microscopic analysis indicated that transplanted MGE cells formed synapses with and received synaptic endings from host IC neurons. Acoustic stimulation led to a significant increase in the percentage of endogenous inhibitory cells that express c-fos but had no effect on the percentage of c-fos expressing transplanted MGE cells. MGE cells were observed in the IC up to 22 weeks post transplantation, the longest time point investigated, suggesting long-term survival and integration. These data provide the first evidence that transplantation of MGE cells is viable in the IC and provide a new strategy to explore treatment options for central hearing dysfunction following noise exposure.


Subject(s)
Inferior Colliculi , Animals , Humans , Median Eminence , Mice , Neurons/physiology , Synapses/physiology
7.
Neurobiol Lang (Camb) ; 3(3): 441-468, 2022.
Article in English | MEDLINE | ID: mdl-36909931

ABSTRACT

Envelope and frequency-following responses (FFR_ENV and FFR_TFS) are scalp-recorded electrophysiological potentials that closely follow the periodicity of complex sounds such as speech. These signals have been established as important biomarkers in speech and learning disorders. However, despite important advances, it has remained challenging to map altered FFR_ENV and FFR_TFS to altered processing in specific brain regions. Here we explore the utility of a deconvolution approach based on the assumption that FFR_ENV and FFR_TFS reflect the linear superposition of responses that are triggered by the glottal pulse in each cycle of the fundamental frequency (F0 responses). We tested the deconvolution method by applying it to FFR_ENV and FFR_TFS of rhesus monkeys to human speech and click trains with time-varying pitch patterns. Our analyses show that F0_ENV responses could be measured with high signal-to-noise ratio and featured several spectro-temporally and topographically distinct components that likely reflect the activation of brainstem (<5 ms; 200-1000 Hz), midbrain (5-15 ms; 100-250 Hz), and cortex (15-35 ms; ~90 Hz). In contrast, F0_TFS responses contained only one spectro-temporal component that likely reflected activity in the midbrain. In summary, our results support the notion that the latencies of F0 components map meaningfully onto successive processing stages. This opens the possibility that pathologically altered FFR_ENV or FFR_TFS may be linked to altered F0_ENV or F0_TFS and from there to specific processing stages and, ultimately, to spatially targeted interventions.
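Under the stated linear-superposition assumption, the deconvolution reduces to a least-squares problem: the recorded response is modeled as a glottal pulse train convolved with an unknown single-cycle kernel. A minimal sketch (pulse times, kernel length, and the solver are illustrative, not the study's pipeline):

```python
import numpy as np

def deconvolve_f0_response(ffr, pulse_times, kernel_len):
    """Recover the per-cycle F0 response, assuming the FFR is a linear
    superposition of identical responses triggered at each glottal pulse."""
    n = len(ffr)
    pulses = np.zeros(n)
    pulses[pulse_times] = 1.0
    # Design matrix: column k is the pulse train delayed by k samples
    X = np.column_stack([np.roll(pulses, k) for k in range(kernel_len)])
    for k in range(kernel_len):
        X[:k, k] = 0.0          # discard np.roll wrap-around
    kernel, *_ = np.linalg.lstsq(X, ffr, rcond=None)
    return kernel

# Synthetic check: an FFR built from a known kernel is recovered
true_kernel = np.array([0.0, 1.0, 0.5, -0.3, 0.1])
pulse_times = np.arange(0, 200, 20)   # one pulse per (constant) F0 cycle
ffr = np.zeros(200)
for t in pulse_times:
    ffr[t:t + len(true_kernel)] += true_kernel
estimated = deconvolve_f0_response(ffr, pulse_times, len(true_kernel))
```

With a time-varying pitch, the pulse spacing changes across cycles, but the same design-matrix construction applies as long as the pulse times are known.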

8.
eNeuro ; 8(6), 2021.
Article in English | MEDLINE | ID: mdl-34799409

ABSTRACT

Time-varying pitch is a vital cue for human speech perception. Neural processing of time-varying pitch has been extensively assayed using scalp-recorded frequency-following responses (FFRs), an electrophysiological signal thought to reflect integrated phase-locked neural ensemble activity from subcortical auditory areas. Emerging evidence increasingly points to a putative contribution of auditory cortical ensembles to the scalp-recorded FFRs. However, the properties of cortical FFRs and precise characterization of laminar sources are still unclear. Here we used direct human intracortical recordings as well as extracranial and intracranial recordings from macaques and guinea pigs to characterize the properties of cortical sources of FFRs to time-varying pitch patterns. We found robust FFRs in the auditory cortex across all species. We leveraged representational similarity analysis as a translational bridge to characterize similarities between the human and animal models. Laminar recordings in animal models showed FFRs emerging primarily from the thalamorecipient layers of the auditory cortex. FFRs arising from these cortical sources significantly contributed to the scalp-recorded FFRs via volume conduction. Our research paves the way for a wide array of studies to investigate the role of cortical FFRs in auditory perception and plasticity.
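Representational similarity analysis, used here as the cross-species bridge, compares systems by correlating their stimulus-by-stimulus dissimilarity matrices rather than their raw signals. A generic Pearson-based sketch (the study's exact distance and comparison metrics may differ):

```python
import numpy as np

def rdm(responses):
    """Representational dissimilarity matrix: 1 - Pearson correlation
    between the response patterns evoked by each pair of stimuli."""
    return 1.0 - np.corrcoef(responses)

def rsa_similarity(responses_a, responses_b):
    """Compare two systems (e.g. human and animal recordings) by
    correlating the upper triangles of their stimulus-by-stimulus RDMs."""
    rdm_a, rdm_b = rdm(responses_a), rdm(responses_b)
    iu = np.triu_indices_from(rdm_a, k=1)
    return np.corrcoef(rdm_a[iu], rdm_b[iu])[0, 1]

rng = np.random.default_rng(0)
stim_responses = rng.normal(size=(6, 40))   # 6 stimuli x 40 recording channels
same = rsa_similarity(stim_responses, stim_responses)
noisy = rsa_similarity(stim_responses, stim_responses + 0.1 * rng.normal(size=(6, 40)))
```

Correlating RDMs rather than raw responses sidesteps the problem that scalp electrodes, laminar probes, and species have incommensurable measurement channels.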


Subject(s)
Auditory Cortex , Speech Perception , Acoustic Stimulation , Animals , Electroencephalography , Guinea Pigs , Phonetics , Pitch Perception
9.
PLoS Biol ; 19(6): e3001299, 2021 06.
Article in English | MEDLINE | ID: mdl-34133413

ABSTRACT

Early in auditory processing, neural responses faithfully reflect acoustic input. At higher stages of auditory processing, however, neurons become selective for particular call types, eventually leading to specialized regions of cortex that preferentially process calls at the highest auditory processing stages. We previously proposed that an intermediate step in how nonselective responses are transformed into call-selective responses is the detection of informative call features. But how neural selectivity for informative call features emerges from nonselective inputs, whether feature selectivity gradually emerges over the processing hierarchy, and how stimulus information is represented in nonselective and feature-selective populations remain open questions. In this study, using unanesthetized guinea pigs (GPs), a highly vocal and social rodent, as an animal model, we characterized the neural representation of calls in 3 auditory processing stages: the thalamus (ventral medial geniculate body (vMGB)), and the thalamorecipient (L4) and superficial (L2/3) layers of primary auditory cortex (A1). We found that neurons in vMGB and A1 L4 did not exhibit call-selective responses and responded throughout the call durations. However, A1 L2/3 neurons showed high call selectivity, with about a third of neurons responding to only 1 or 2 call types. These A1 L2/3 neurons only responded to restricted portions of calls, suggesting that they were highly selective for call features. Receptive fields of these A1 L2/3 neurons showed complex spectrotemporal structures that could underlie their high call feature selectivity. Information theoretic analysis revealed that in A1 L4, stimulus information was distributed over the population and was spread out over the call durations. In contrast, in A1 L2/3, individual neurons showed brief bursts of high stimulus-specific information and conveyed high levels of information per spike.
These data demonstrate that a transformation in the neural representation of calls occurs between A1 L4 and A1 L2/3, leading to the emergence of a feature-based representation of calls in A1 L2/3. Our data thus suggest that observed cortical specializations for call processing emerge in A1 and set the stage for further mechanistic studies.
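The quantity underlying such information theoretic analyses is the mutual information between call identity and a binned neural response, computed from a joint count table. A minimal sketch (the call counts and binning below are illustrative, not the study's estimator):

```python
import numpy as np

def mutual_information_bits(counts):
    """Mutual information I(stimulus; response) in bits from a joint
    count table (rows: call types, columns: response bins)."""
    joint = counts / counts.sum()
    ps = joint.sum(axis=1, keepdims=True)   # P(stimulus)
    pr = joint.sum(axis=0, keepdims=True)   # P(response)
    nz = joint > 0
    return float((joint[nz] * np.log2(joint[nz] / (ps @ pr)[nz])).sum())

# Perfectly discriminative responses: each call evokes a unique response bin
perfect = np.eye(4) * 25          # 4 call types, 25 trials each
# Uninformative responses: the same distribution for every call
flat = np.full((4, 4), 25.0)
```

Four perfectly discriminated call types yield log2(4) = 2 bits, while identical response distributions across calls yield 0 bits, bounding the values single neurons can convey.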


Subject(s)
Auditory Cortex/physiology , Neurons/physiology , Vocalization, Animal/physiology , Acoustic Stimulation , Anesthesia , Animals , Female , Male , Models, Biological , Time Factors
10.
Sci Rep ; 11(1): 3108, 2021 02 04.
Article in English | MEDLINE | ID: mdl-33542266

ABSTRACT

Estimates of detection and discrimination thresholds are often used to explore broad perceptual similarities between human subjects and animal models. Pupillometry shows great promise as a non-invasive, easily deployable method of comparing human and animal thresholds. Using pupillometry, previous studies in animal models have obtained threshold estimates to simple stimuli such as pure tones, but have not explored whether similar pupil responses can be evoked by complex stimuli, what other stimulus contingencies might affect stimulus-evoked pupil responses, and whether pupil responses can be modulated by experience or short-term training. In this study, we used an auditory oddball paradigm to estimate detection and discrimination thresholds across a wide range of stimuli in guinea pigs. We demonstrate that pupillometry yields reliable detection and discrimination thresholds across a range of simple (tones) and complex (conspecific vocalizations) stimuli; that pupil responses can be robustly evoked using different stimulus contingencies (low-level acoustic changes, or higher level categorical changes); and that pupil responses are modulated by short-term training. These results lay the foundation for using pupillometry as a reliable method of estimating thresholds in large experimental cohorts, and unveil the full potential of using pupillometry to explore broad similarities between humans and animal models.


Subject(s)
Audiometry, Evoked Response/methods , Auditory Threshold/physiology , Pupil/physiology , Vocalization, Animal/physiology , Acoustic Stimulation , Animals , Attention , Female , Guinea Pigs , Humans , Male , Models, Animal , Organ Size
11.
PLoS One ; 15(10): e0240535, 2020.
Article in English | MEDLINE | ID: mdl-33045028

ABSTRACT

Acute otitis media (AOM) is the main indication for pediatric antibiotic prescriptions, accounting for 25% of prescriptions. While the use of topical drops can minimize the administered dose of antibiotic and adverse systemic effects compared to oral antibiotics, their use has limitations, partially due to low patient compliance, high dosing frequency, and difficulty of administration. Lack of proper treatment can lead to development of chronic OM, which may require invasive interventions. Previous studies have shown that gel-based drug delivery to the ear is possible with intratympanic injection or chemical permeation enhancers (CPEs). However, many patients are reluctant to accept invasive treatments, and CPEs have demonstrated toxicity to the tympanic membrane (TM). We developed a novel method of delivering therapeutics to the TM and middle ear using a topical, thermoresponsive gel depot containing antibiotic-loaded poly(lactic-co-glycolic acid) microspheres. Our in vitro and ex vivo results suggest that the sustained presentation can safely allow therapeutically relevant drug concentrations to penetrate the TM to the middle ear for up to 14 days. Animal results indicate that sufficient antibiotic was released from topical administration for treatment 24 h after bacterial inoculation. However, animals treated 72 h after inoculation, a more clinically relevant treatment practice, displayed spontaneous clearance of infection, as is also often observed in the clinic. Despite this variability in the disease model, the data suggest the system can safely treat bacterial infection, with future studies necessary to optimize microsphere formulations for scaled-up dosage of antibiotic as well as further investigation of the influence of spontaneous bacterial clearance and of biofilm formation on effectiveness of treatment. To our knowledge, this study represents the first truly topical drug delivery system to the middle ear without the use of CPEs.


Subject(s)
Administration, Topical , Anti-Bacterial Agents/administration & dosage , Drug Carriers/administration & dosage , Otitis Media/drug therapy , Acute Disease , Animals , Ceftriaxone/administration & dosage , Chinchilla , Ciprofloxacin/administration & dosage , Delayed-Action Preparations/administration & dosage , Drug Compounding , Gels , Guinea Pigs , Microspheres
12.
Nat Commun ; 10(1): 1302, 2019 03 21.
Article in English | MEDLINE | ID: mdl-30899018

ABSTRACT

Humans and vocal animals use vocalizations to communicate with members of their species. A necessary function of auditory perception is to generalize across the high variability inherent in vocalization production and classify them into behaviorally distinct categories ('words' or 'call types'). Here, we demonstrate that detecting mid-level features in calls achieves production-invariant classification. Starting from randomly chosen marmoset call features, we use a greedy search algorithm to determine the most informative and least redundant features necessary for call classification. High classification performance is achieved using only 10-20 features per call type. Predictions of tuning properties of putative feature-selective neurons accurately match some observed auditory cortical responses. This feature-based approach also succeeds for call categorization in other species, and for other complex classification tasks such as caller identification. Our results suggest that high-level neural representations of sounds are based on task-dependent features optimized for specific computational goals.
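The greedy search can be sketched as forward selection with an informativeness-minus-redundancy score. The mRMR-style criterion, mutual-information estimator, and toy data below are assumptions for illustration; the study's objective (most informative, least redundant features for classification) may be implemented differently:

```python
import numpy as np

def greedy_select(features, labels, n_select):
    """Greedy forward search: at each step add the feature whose
    detections are most informative about the call label and least
    redundant with already-chosen features."""
    def mi(x, y):
        # Mutual information between two discrete arrays, in bits
        xv, yv = np.unique(x), np.unique(y)
        joint = np.array([[np.mean((x == a) & (y == b)) for b in yv] for a in xv])
        px, py = joint.sum(1, keepdims=True), joint.sum(0, keepdims=True)
        nz = joint > 0
        return float((joint[nz] * np.log2(joint[nz] / (px @ py)[nz])).sum())

    chosen = []
    for _ in range(n_select):
        best, best_score = None, -np.inf
        for j in range(features.shape[1]):
            if j in chosen:
                continue
            relevance = mi(features[:, j], labels)
            redundancy = (np.mean([mi(features[:, j], features[:, k]) for k in chosen])
                          if chosen else 0.0)
            if relevance - redundancy > best_score:
                best, best_score = j, relevance - redundancy
        chosen.append(best)
    return chosen

# Toy example: feature 0 predicts the label perfectly, feature 1 duplicates it,
# feature 2 is uninformative; the greedy search should pick feature 0 first
labels = np.array([0, 0, 0, 0, 1, 1, 1, 1])
features = np.column_stack([labels, labels, np.array([0, 1, 0, 1, 0, 1, 0, 1])])
order = greedy_select(features, labels, 2)
```

Because the duplicate feature is fully redundant with the first pick, its score drops to zero on the second pass, capturing the "least redundant" half of the criterion.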


Subject(s)
Auditory Cortex/physiology , Auditory Perception/physiology , Callithrix/physiology , Neurons/physiology , Vocalization, Animal/physiology , Acoustic Stimulation , Animals , Auditory Cortex/anatomy & histology , Electrodes, Implanted , Female , Guinea Pigs , Humans , Male , Membrane Potentials/physiology , Neurons/cytology , Sound , Sound Spectrography/methods , Stereotaxic Techniques
13.
Elife ; 6, 2017 04 04.
Article in English | MEDLINE | ID: mdl-28375078

ABSTRACT

The primate brain contains distinct areas densely populated by face-selective neurons. One of these, face-patch ML, contains neurons selective for contrast relationships between face parts. Such contrast-relationships can serve as powerful heuristics for face detection. However, it is unknown whether neurons with such selectivity actually support face-detection behavior. Here, we devised a naturalistic face-detection task and combined it with fMRI-guided pharmacological inactivation of ML to test whether ML is of critical importance for real-world face detection. We found that inactivation of ML impairs face detection. The effect was anatomically specific, as inactivation of areas outside ML did not affect face detection, and it was categorically specific, as inactivation of ML impaired face detection while sparing body and object detection. These results establish that ML function is crucial for detection of faces in natural scenes, performing a critical first step on which other face processing operations can build.


Subject(s)
Brain/physiology , Facial Recognition , Animals , Brain/diagnostic imaging , Macaca fascicularis , Macaca mulatta , Magnetic Resonance Imaging , Male
14.
Sci Rep ; 5: 10950, 2015 Jun 19.
Article in English | MEDLINE | ID: mdl-26091254

ABSTRACT

Vocalizations are behaviorally critical sounds, and this behavioral importance is reflected in the ascending auditory system, where conspecific vocalizations are increasingly over-represented at higher processing stages. Recent evidence suggests that, in macaques, this increasing selectivity for vocalizations might culminate in a cortical region that is densely populated by vocalization-preferring neurons. Such a region might be a critical node in the representation of vocal communication sounds, underlying the recognition of vocalization type, caller and social context. These results raise the questions of whether cortical specializations for vocalization processing exist in other species, their cortical location, and their relationship to the auditory processing hierarchy. To explore cortical specializations for vocalizations in another species, we performed high-field fMRI of the auditory cortex of a vocal New World primate, the common marmoset (Callithrix jacchus). Using a sparse imaging paradigm, we discovered a caudal-rostral gradient for the processing of conspecific vocalizations in marmoset auditory cortex, with regions of the anterior temporal lobe close to the temporal pole exhibiting the highest preference for vocalizations. These results demonstrate similar cortical specializations for vocalization processing in macaques and marmosets, suggesting that cortical specializations for vocal processing might have evolved before the lineages of these species diverged.


Subject(s)
Auditory Cortex/diagnostic imaging , Auditory Cortex/physiology , Auditory Perception/physiology , Magnetic Resonance Imaging , Vocalization, Animal/physiology , Animals , Callithrix , Radiography
15.
Neuron ; 74(5): 911-23, 2012 Jun 07.
Article in English | MEDLINE | ID: mdl-22681694

ABSTRACT

Contrast-invariant orientation tuning in simple cells of the visual cortex depends critically on contrast-dependent trial-to-trial variability in their membrane potential responses. This observation raises the question of whether this variability originates from within the cortical circuit or the feedforward inputs from the lateral geniculate nucleus (LGN). To distinguish between these two sources of variability, we first measured membrane potential responses while inactivating the surrounding cortex, and found that response variability was nearly unaffected. We then studied variability in the LGN, including its contrast dependence and the trial-to-trial correlation in responses between nearby neurons. Variability decreased significantly with contrast, whereas correlation changed little. When these experimentally measured parameters of variability were applied to a feedforward model of simple cells that included realistic mechanisms of synaptic integration, contrast-dependent, orientation-independent variability emerged in the membrane potential responses. Analogous mechanisms might contribute to the stimulus dependence and propagation of variability throughout the neocortex.


Subject(s)
Action Potentials/physiology , Contrast Sensitivity/physiology , Evoked Potentials, Visual/physiology , Neurons/physiology , Orientation/physiology , Visual Cortex/physiology , Animals , Cats , Electric Stimulation/methods , Electroencephalography , Female , Models, Neurological , Patch-Clamp Techniques , Photic Stimulation/methods , Visual Cortex/cytology
16.
J Neurophysiol ; 106(2): 849-59, 2011 Aug.
Article in English | MEDLINE | ID: mdl-21613589

ABSTRACT

The frequency resolution of neurons throughout the ascending auditory pathway is important for understanding how sounds are processed. In many animal studies, the frequency tuning widths are about 1/5th octave wide in auditory nerve fibers and much wider in auditory cortex neurons. Psychophysical studies show that humans are capable of discriminating far finer frequency differences. A recent study suggested that this is perhaps attributable to fine frequency tuning of neurons in human auditory cortex (Bitterman Y, Mukamel R, Malach R, Fried I, Nelken I. Nature 451: 197-201, 2008). We investigated whether such fine frequency tuning was restricted to human auditory cortex by examining the frequency tuning width in the awake common marmoset monkey. We show that 27% of neurons in the primary auditory cortex exhibit frequency tuning that is finer than the typical frequency tuning of the auditory nerve and substantially finer than previously reported cortical data obtained from anesthetized animals. Fine frequency tuning is also present in 76% of neurons of the auditory thalamus in awake marmosets. Frequency tuning was narrower during the sustained response compared to the onset response in auditory cortex neurons but not in thalamic neurons, suggesting that thalamocortical or intracortical dynamics shape time-dependent frequency tuning in cortex. These findings challenge the notion that the fine frequency tuning of auditory cortex is unique to human auditory cortex and that it is a de novo cortical property, suggesting that the broader tuning observed in previous animal studies may arise from the use of anesthesia during physiological recordings or from species differences.


Subject(s)
Acoustic Stimulation/methods , Action Potentials/physiology , Auditory Cortex/physiology , Auditory Pathways/physiology , Auditory Perception/physiology , Thalamus/physiology , Animals , Callithrix , Reaction Time/physiology
17.
J Neurosci ; 30(21): 7314-25, 2010 May 26.
Article in English | MEDLINE | ID: mdl-20505098

ABSTRACT

Recent studies have demonstrated the high selectivity of neurons in primary auditory cortex (A1) and a highly sparse representation of sounds by the population of A1 neurons in awake animals. However, the underlying receptive field structures that confer high selectivity on A1 neurons are poorly understood. The sharp tuning of A1 neurons' excitatory receptive fields (RFs) provides a partial explanation of the above properties. However, it remains unclear how inhibitory components of RFs contribute to the selectivity of A1 neurons observed in awake animals. To examine the role of the inhibition in sharpening stimulus selectivity, we have quantitatively analyzed stimulus-induced suppressive effects over populations of single neurons in frequency, amplitude, and time in A1 of awake marmosets. In addition to the well documented short-latency side-band suppression elicited by masking tones around the best frequency (BF) of a neuron, we uncovered long-latency suppressions caused by single-tone stimulation. Such long-latency suppressions also included monotonically increasing suppression with sound level both on-BF and off-BF, and persistent suppression lasting up to 100 ms after stimulus offset in a substantial proportion of A1 neurons. The extent of the suppression depended on the shape of a neuron's frequency-response area ("O" or "V" shaped). These findings suggest that the excitatory RF of A1 neurons is cocooned by wide-ranging inhibition that contributes to the high selectivity in A1 neurons' responses to complex stimuli. Population sparseness of the tone-responsive A1 neuron population may also be a consequence of this pervasive inhibition.


Subject(s)
Auditory Cortex/cytology , Neural Inhibition/physiology , Neurons/physiology , Pitch Perception/physiology , Wakefulness/physiology , Acoustic Stimulation/methods , Action Potentials/physiology , Animals , Brain Mapping , Callithrix , Echolocation/physiology , Models, Statistical , Psychoacoustics , Reaction Time/physiology
18.
J Neurosci ; 29(36): 11192-202, 2009 Sep 09.
Article in English | MEDLINE | ID: mdl-19741126

ABSTRACT

In the auditory cortex of awake animals, a substantial number of neurons do not respond to pure tones. These neurons have historically been classified as "unresponsive" and even been speculated as being nonauditory. We discovered, however, that many of these neurons in the primary auditory cortex (A1) of awake marmoset monkeys were in fact highly selective for complex sound features. We then investigated how such selectivity might arise from the tone-tuned inputs that these neurons likely receive. We found that these non-tone responsive neurons exhibited nonlinear combination-sensitive responses that require precise spectral and temporal combinations of two tone pips. The nonlinear spectrotemporal maps derived from these neurons were correlated with their selectivity for complex acoustic features. These non-tone responsive and nonlinear neurons were commonly encountered at superficial cortical depths in A1. Our findings demonstrate how temporally and spectrally specific nonlinear integration of putative tone-tuned inputs might underlie a diverse range of high selectivity of A1 neurons in awake animals. We propose that describing A1 neurons with complex response properties in terms of tone-tuned input channels can conceptually unify a wide variety of observed neural selectivity to complex sounds into a lower dimensional description.


Subject(s)
Acoustic Stimulation/methods , Auditory Cortex/physiology , Auditory Perception/physiology , Reaction Time/physiology , Sound , Animals , Brain Mapping/methods , Callithrix , Evoked Potentials, Auditory/physiology
19.
J Neurosci ; 28(13): 3415-26, 2008 Mar 26.
Article in English | MEDLINE | ID: mdl-18367608

ABSTRACT

A fundamental feature of auditory perception is the constancy of sound recognition over a large range of intensities. Although this invariance has been described in behavioral studies, the underlying neural mechanism is essentially unknown. Here we show a putative level-invariant representation of sounds by populations of neurons in primary auditory cortex (A1) that may provide a neural basis for the behavioral observations. Previous studies reported that pure-tone frequency tuning of most A1 neurons widens with increasing sound level. In sharp contrast, we found that a large proportion of neurons in A1 of awake marmosets were narrowly and separably tuned to both frequency and sound level. Tuning characteristics and firing rates of the neural population were preserved across all tested sound levels. These response properties lead to a level-invariant representation of sounds over the population of A1 neurons. Such a representation is an important step for robust feature recognition in natural environments.


Subject(s)
Auditory Cortex/physiology , Auditory Perception/physiology , Neurons/physiology , Recognition, Psychology/physiology , Sound , Acoustic Stimulation/methods , Animals , Auditory Cortex/cytology , Auditory Threshold/physiology , Callithrix , Computer Simulation , Dose-Response Relationship, Radiation , Models, Neurological , Statistics, Nonparametric , Wakefulness