Results 1 - 8 of 8
1.
J Vis Exp ; (191), 2023 01 06.
Article in English | MEDLINE | ID: mdl-36688548

ABSTRACT

Noise exposure is a leading cause of sensorineural hearing loss. Animal models of noise-induced hearing loss have generated mechanistic insight into the underlying anatomical and physiological pathologies of hearing loss. However, relating behavioral deficits observed in humans with hearing loss to behavioral deficits in animal models remains challenging. Here, pupillometry is proposed as a method that will enable the direct comparison of animal and human behavioral data. The method is based on a modified oddball paradigm: habituating the subject to the repeated presentation of a stimulus and intermittently presenting a deviant stimulus that varies in some parametric fashion from the repeated stimulus. The fundamental premise is that if the change between the repeated and deviant stimulus is detected by the subject, it will trigger a pupil dilation response that is larger than that elicited by the repeated stimulus. This approach is demonstrated using a vocalization categorization task in guinea pigs, an animal model widely used in auditory research, including in hearing loss studies. By presenting vocalizations from one vocalization category as standard stimuli and a second category as oddball stimuli embedded in noise at various signal-to-noise ratios, it is demonstrated that the magnitude of pupil dilation in response to the oddball category varies monotonically with the signal-to-noise ratio. Growth curve analyses can then be used to characterize the time course and statistical significance of these pupil dilation responses. In this protocol, detailed procedures for acclimating guinea pigs to the setup, conducting pupillometry, and evaluating/analyzing data are described. Although this technique is demonstrated in normal-hearing guinea pigs in this protocol, the method may be used to assess the sensory effects of various forms of hearing loss within each subject. These effects may then be correlated with concurrent electrophysiological measures and post-hoc anatomical observations.
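The core oddball comparison described above, baseline-correcting pupil traces and comparing the averaged dilation evoked by standard versus deviant stimuli, can be sketched as follows. This is a minimal NumPy illustration with synthetic traces; the function names, sampling rate, and response shape are assumptions for demonstration, not taken from the protocol.

```python
import numpy as np

def evoked_pupil_response(trials: np.ndarray, baseline_samples: int) -> np.ndarray:
    """Baseline-correct each trial, then average across trials.
    trials: (n_trials, n_samples) pupil-diameter traces."""
    baseline = trials[:, :baseline_samples].mean(axis=1, keepdims=True)
    return (trials - baseline).mean(axis=0)

def peak_dilation(trace: np.ndarray) -> float:
    """Peak of an averaged, baseline-corrected pupil trace."""
    return float(trace.max())

# Synthetic traces, 3 s at 100 Hz: a dilation beginning 0.5 s after
# stimulus onset, larger for the deviant than for the standard stimulus.
t = np.linspace(0, 3, 300)
shape = np.where(t > 0.5, (t - 0.5) * np.exp(1 - (t - 0.5)), 0.0)
standard = np.tile(0.1 * shape, (20, 1)) + 2.0   # 20 trials, 2 mm baseline
deviant = np.tile(0.4 * shape, (20, 1)) + 2.0

std_resp = evoked_pupil_response(standard, baseline_samples=50)
dev_resp = evoked_pupil_response(deviant, baseline_samples=50)
```

In a real experiment the peak (or area under) the deviant response, as a function of the parametric change, gives the monotonic growth that growth curve analysis then characterizes statistically.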


Subject(s)
Hearing Loss, Sensorineural; Hearing Loss; Humans; Guinea Pigs; Animals; Noise; Sensation
2.
Hear Res ; 424: 108603, 2022 10.
Article in English | MEDLINE | ID: mdl-36099806

ABSTRACT

For gaining insight into general principles of auditory processing, it is critical to choose model organisms whose set of natural behaviors encompasses the processes being investigated. This reasoning has led to the development of a variety of animal models for auditory neuroscience research, such as guinea pigs, gerbils, chinchillas, rabbits, and ferrets; but in recent years, the availability of cutting-edge molecular tools and other methodologies in the mouse model have led to waning interest in these unique model species. As laboratories increasingly look to include in-vivo components in their research programs, a comprehensive description of procedures and techniques for applying some of these modern neuroscience tools to a non-mouse small animal model would enable researchers to leverage unique model species that may be best suited for testing their specific hypotheses. In this manuscript, we describe in detail the methods we have developed to apply these tools to the guinea pig animal model to answer questions regarding the neural processing of complex sounds, such as vocalizations. We describe techniques for vocalization acquisition, behavioral testing, recording of auditory brainstem responses and frequency-following responses, intracranial neural signals including local field potential and single unit activity, and the expression of transgenes allowing for optogenetic manipulation of neural activity, all in awake and head-fixed guinea pigs. We demonstrate the rich datasets at the behavioral and electrophysiological levels that can be obtained using these techniques, underscoring the guinea pig as a versatile animal model for studying complex auditory processing. More generally, the methods described here are applicable to a broad range of small mammals, enabling investigators to address specific auditory processing questions in model organisms that are best suited for answering them.
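Several of the evoked-potential measures listed above (auditory brainstem responses, frequency-following responses, evoked local field potentials) rest on the same core operation: averaging stimulus-locked epochs cut from a continuous recording. A minimal sketch with synthetic data follows; the function and variable names are illustrative, not from the paper.

```python
import numpy as np

def epoch_average(signal, onsets, pre, post):
    """Average stimulus-locked epochs of a continuous 1-D recording.
    onsets: sample indices of stimulus onsets; pre/post: samples to
    keep before/after each onset. Epochs falling off the ends are skipped."""
    epochs = [signal[o - pre:o + post] for o in onsets
              if o - pre >= 0 and o + post <= len(signal)]
    return np.mean(epochs, axis=0)

# Synthetic check: a small evoked deflection repeated at every stimulus
# onset is buried in noise, but emerges after averaging ~95 epochs.
rng = np.random.default_rng(1)
evoked_shape = np.zeros(30)
evoked_shape[10:15] = 1.0                 # deflection 10-14 samples post-onset
sig = rng.normal(0.0, 1.0, 20000)
onsets = np.arange(100, 19000, 200)
for o in onsets:
    sig[o:o + 30] += evoked_shape
avg = epoch_average(sig, onsets, pre=10, post=40)
```

With single-trial SNR well below 1, the averaged trace recovers the deflection because uncorrelated noise shrinks as one over the square root of the epoch count.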


Subject(s)
Auditory Cortex; Acoustic Stimulation; Animals; Auditory Cortex/physiology; Chinchilla; Ferrets; Gerbillinae; Guinea Pigs; Hearing; Models, Animal; Neurons/physiology; Rabbits; Vocalization, Animal/physiology
3.
PLoS Biol ; 19(6): e3001299, 2021 06.
Article in English | MEDLINE | ID: mdl-34133413

ABSTRACT

Early in auditory processing, neural responses faithfully reflect acoustic input. At higher stages of auditory processing, however, neurons become selective for particular call types, eventually leading to specialized regions of cortex that preferentially process calls at the highest auditory processing stages. We previously proposed that an intermediate step in how nonselective responses are transformed into call-selective responses is the detection of informative call features. But how neural selectivity for informative call features emerges from nonselective inputs, whether feature selectivity gradually emerges over the processing hierarchy, and how stimulus information is represented in nonselective and feature-selective populations remain open questions. In this study, using unanesthetized guinea pigs (GPs), a highly vocal and social rodent, as an animal model, we characterized the neural representation of calls in 3 auditory processing stages: the thalamus (ventral medial geniculate body, vMGB), and the thalamorecipient (L4) and superficial (L2/3) layers of primary auditory cortex (A1). We found that neurons in vMGB and A1 L4 did not exhibit call-selective responses and responded throughout the call durations. However, A1 L2/3 neurons showed high call selectivity, with about a third of neurons responding to only 1 or 2 call types. These A1 L2/3 neurons responded only to restricted portions of calls, suggesting that they were highly selective for call features. Receptive fields of these A1 L2/3 neurons showed complex spectrotemporal structures that could underlie their high call feature selectivity. Information theoretic analysis revealed that in A1 L4, stimulus information was distributed over the population and was spread out over the call durations. In contrast, in A1 L2/3, individual neurons showed brief bursts of high stimulus-specific information and conveyed high levels of information per spike. These data demonstrate that a transformation in the neural representation of calls occurs between A1 L4 and A1 L2/3, leading to the emergence of a feature-based representation of calls in A1 L2/3. Our data thus suggest that the observed cortical specializations for call processing emerge in A1 and set the stage for further mechanistic studies.
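The "information per spike" contrast above, sparse bursters versus sustained responders, can be illustrated with the standard rate-based estimator (the Brenner-style measure commonly used for such comparisons; whether this study used exactly this estimator is an assumption):

```python
import numpy as np

def info_per_spike(rate: np.ndarray) -> float:
    """Information per spike (bits) carried by a time-varying firing rate,
    using the common rate-based estimator
    I = mean((r / rbar) * log2(r / rbar)), with 0 * log(0) taken as 0."""
    p = rate / rate.mean()
    with np.errstate(divide="ignore", invalid="ignore"):
        terms = np.where(p > 0, p * np.log2(p), 0.0)
    return float(terms.mean())

# Two synthetic neurons with the same mean rate: one fires in a brief
# burst, the other uniformly. The burster carries more bits per spike;
# the uniform responder carries none.
burst = np.zeros(100)
burst[40:50] = 10.0          # brief, high-rate burst; mean rate = 1.0
uniform = np.ones(100)       # constant firing; mean rate = 1.0
```

This captures why the superficial-layer neurons, with brief high-rate bursts, convey more information per spike than neurons that respond throughout the call.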


Subject(s)
Auditory Cortex/physiology; Neurons/physiology; Vocalization, Animal/physiology; Acoustic Stimulation; Anesthesia; Animals; Female; Male; Models, Biological; Time Factors
4.
Sci Rep ; 11(1): 3108, 2021 02 04.
Article in English | MEDLINE | ID: mdl-33542266

ABSTRACT

Estimates of detection and discrimination thresholds are often used to explore broad perceptual similarities between human subjects and animal models. Pupillometry shows great promise as a non-invasive, easily deployable method of comparing human and animal thresholds. Using pupillometry, previous studies in animal models have obtained threshold estimates for simple stimuli such as pure tones, but have not explored whether similar pupil responses can be evoked by complex stimuli, what other stimulus contingencies might affect stimulus-evoked pupil responses, and whether pupil responses can be modulated by experience or short-term training. In this study, we used an auditory oddball paradigm to estimate detection and discrimination thresholds across a wide range of stimuli in guinea pigs. We demonstrate that pupillometry yields reliable detection and discrimination thresholds across a range of simple (tones) and complex (conspecific vocalizations) stimuli; that pupil responses can be robustly evoked using different stimulus contingencies (low-level acoustic changes, or higher-level categorical changes); and that pupil responses are modulated by short-term training. These results lay the foundation for using pupillometry as a reliable method of estimating thresholds in large experimental cohorts, and unveil the full potential of using pupillometry to explore broad similarities between humans and animal models.
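One simple way to turn stimulus-evoked pupil responses into a threshold estimate, reading off the stimulus level at which the normalized response crosses a criterion, can be sketched as below. The interpolation scheme and the criterion value are illustrative assumptions; the study itself may have used a different psychometric fit.

```python
import numpy as np

def threshold_from_responses(levels, responses, criterion=0.5):
    """Estimate a detection threshold as the stimulus level at which the
    min-max normalized response crosses `criterion`, by linear
    interpolation. levels must be ascending; responses roughly monotonic."""
    r = np.asarray(responses, dtype=float)
    norm = (r - r.min()) / (r.max() - r.min())
    return float(np.interp(criterion, norm, levels))

# Illustrative data: peak pupil dilation (arbitrary units) grows with the
# signal-to-noise ratio of the oddball stimulus.
snr_db = [-10, -5, 0, 5, 10, 15]
dilation = [0.02, 0.03, 0.10, 0.30, 0.45, 0.50]

thr = threshold_from_responses(snr_db, dilation)
```

Because the normalized response is monotonic here, the half-maximum crossing gives a single, well-defined threshold in dB SNR.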


Subject(s)
Audiometry, Evoked Response/methods; Auditory Threshold/physiology; Pupil/physiology; Vocalization, Animal/physiology; Acoustic Stimulation; Animals; Attention; Female; Guinea Pigs; Humans; Male; Models, Animal; Organ Size
5.
Nat Commun ; 10(1): 1302, 2019 03 21.
Article in English | MEDLINE | ID: mdl-30899018

ABSTRACT

Humans and vocal animals use vocalizations to communicate with members of their species. A necessary function of auditory perception is to generalize across the high variability inherent in vocalization production and classify them into behaviorally distinct categories ('words' or 'call types'). Here, we demonstrate that detecting mid-level features in calls achieves production-invariant classification. Starting from randomly chosen marmoset call features, we use a greedy search algorithm to determine the most informative and least redundant features necessary for call classification. High classification performance is achieved using only 10-20 features per call type. Predictions of tuning properties of putative feature-selective neurons accurately match some observed auditory cortical responses. This feature-based approach also succeeds for call categorization in other species, and for other complex classification tasks such as caller identification. Our results suggest that high-level neural representations of sounds are based on task-dependent features optimized for specific computational goals.
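The greedy search for informative, non-redundant features described above can be sketched with a forward-selection loop. Here an mRMR-style score (relevance minus mean redundancy with already-chosen features) stands in for the paper's classification-based objective; the score, names, and toy data are assumptions for illustration.

```python
import numpy as np

def greedy_select(relevance, redundancy, k):
    """Greedy forward selection: repeatedly add the feature with the
    highest relevance minus its mean redundancy with features already
    chosen (an mRMR-style criterion).

    relevance:  (n_features,) informativeness of each feature alone.
    redundancy: (n_features, n_features) pairwise redundancy matrix."""
    chosen = []
    remaining = set(range(len(relevance)))
    while len(chosen) < k and remaining:
        def score(j):
            if not chosen:
                return relevance[j]
            return relevance[j] - np.mean([redundancy[j][c] for c in chosen])
        best = max(remaining, key=score)
        chosen.append(best)
        remaining.remove(best)
    return chosen

# Toy example: feature 1 is informative but redundant with feature 0, so
# the less informative but independent feature 2 is picked second.
rel = np.array([1.0, 0.9, 0.5])
red = np.array([[0.0, 0.8, 0.0],
                [0.8, 0.0, 0.0],
                [0.0, 0.0, 0.0]])
picked = greedy_select(rel, red, k=2)
```

The same loop, run with a classification-accuracy gain in place of the score above, yields the "most informative and least redundant" feature sets the abstract describes.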


Subject(s)
Auditory Cortex/physiology; Auditory Perception/physiology; Callithrix/physiology; Neurons/physiology; Vocalization, Animal/physiology; Acoustic Stimulation; Animals; Auditory Cortex/anatomy & histology; Electrodes, Implanted; Female; Guinea Pigs; Humans; Male; Membrane Potentials/physiology; Neurons/cytology; Sound; Sound Spectrography/methods; Stereotaxic Techniques
6.
Perception ; 45(4): 375-85, 2016 Apr.
Article in English | MEDLINE | ID: mdl-26562878

ABSTRACT

Processing the spatial frequency components of an image is a crucial feature of visual perception, especially in the recognition of faces. Here, we study the correlation between the spatial frequency components of face images and neuronal activity in the monkey amygdala while the animals performed a visual recognition task. The frequency components of the images were analyzed using a fast Fourier transform over 40 spatial frequency ranges. We recorded 65 neurons showing statistically significant responses to at least one of the images used as a stimulus. Thirty-seven of these neurons showed significant responses to at least three images, and in eight of them (8/37, 22%) we found a statistically significant correlation between the neuron's response and the modulus amplitude of at least one frequency range present in the images. Our results indicate that both high and low spatial frequency components of images influence the activity of amygdala neurons.
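The analysis above, banding the FFT modulus into spatial-frequency ranges and correlating band amplitude with neural responses, can be sketched as follows. The equal-width radial banding and the synthetic firing rates are assumptions; the paper's exact banding and its significance testing are not reproduced here.

```python
import numpy as np

def band_amplitudes(image: np.ndarray, n_bands: int) -> np.ndarray:
    """Mean 2-D FFT modulus in concentric spatial-frequency bands.
    Splits the radial frequency axis into n_bands equal-width rings;
    rings containing no pixels yield 0."""
    mag = np.abs(np.fft.fftshift(np.fft.fft2(image)))
    h, w = image.shape
    yy, xx = np.mgrid[:h, :w]
    r = np.hypot(yy - h / 2, xx - w / 2)
    edges = np.linspace(0, r.max() + 1e-9, n_bands + 1)
    out = np.zeros(n_bands)
    for i, (lo, hi) in enumerate(zip(edges[:-1], edges[1:])):
        sel = (r >= lo) & (r < hi)
        if sel.any():
            out[i] = mag[sel].mean()
    return out

# Correlate one band's amplitude with a (synthetic) firing rate across
# stimuli; by construction the correlation here is perfect.
rng = np.random.default_rng(0)
amps = np.array([band_amplitudes(rng.normal(size=(32, 32)), 40)
                 for _ in range(10)])             # (stimuli, bands)
rates = 2.0 * amps[:, 5] + 1.0                    # rate tied to band 5
r_band5 = np.corrcoef(amps[:, 5], rates)[0, 1]
```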


Subject(s)
Amygdala/physiology; Facial Recognition/physiology; Neurons/physiology; Space Perception/physiology; Animals; Fourier Analysis; Macaca mulatta; Male; Pattern Recognition, Visual/physiology
7.
J Neurol Surg A Cent Eur Neurosurg ; 77(2): 118-29, 2016 Mar.
Article in English | MEDLINE | ID: mdl-26444961

ABSTRACT

PURPOSE: To study visual perception in patients with anterior temporal lobectomy. MATERIALS AND METHODS: We explored some aspects of visual perception and compared the results obtained from 14 control subjects and 14 patients with unilateral anterior temporal lobectomy. Each group included 7 men and 7 women with the same age distribution (patients and controls: age range 27-48 years; mean 37 years). All subjects underwent a conventional ophthalmic examination and were tested for color perception, stereopsis, texture perception, face recognition, and visual illusions. To quantify color, stereoscopic, and texture perception, subjects performed a visuomotor task that required a rapid response to a visual stimulus. Reaction times were measured under several conditions. RESULTS: Mild visual field defects involving the superior quadrant contralateral to the lobectomy were found in five patients; two other patients showed more severe defects. Lobectomized patients completed fewer correct trials than control subjects when performing tasks involving color and texture perception. These patients also had longer reaction times for the detection of color, stereoscopic, and texture stimuli. Face recognition and the perception of illusory images were preserved after unilateral anterior temporal lobectomy. CONCLUSION: Our data indicate that patients with anterior temporal lobectomy show moderate deficits in color, stereo, and texture perception, with no impairment in the perception of complex visual stimuli.


Subject(s)
Depth Perception/physiology; Illusions/physiology; Temporal Lobe/physiopathology; Visual Perception/physiology; Adult; Anterior Temporal Lobectomy; Color Perception/physiology; Female; Functional Laterality/physiology; Humans; Male; Middle Aged
8.
J Integr Neurosci ; 14(3): 309-23, 2015 Sep.
Article in English | MEDLINE | ID: mdl-26246438

ABSTRACT

In this paper, we study the potential involvement of the monkey amygdala in encoding the value of visual and auditory stimuli associated with reward or no reward. We recorded the extracellular activity of 93 neurons in the monkey's right amygdala during performance of a multisensory operant task. The activity of 78 task-related neurons was studied. Of these, 13 neurons (16%) responded to the value of visual stimuli, 22 neurons (28%) responded after the presentation of visual stimuli, 22 neurons (28%) showed an inhibition around the lever press and were classified as action-related neurons, and 22 neurons (28%) responded after reward delivery. These findings suggest that neurons in the amygdala play a role in encoding value and processing visual information, participate in motor regulation, and are sensitive to reward. The activity of these neurons did not change during the evaluation of auditory stimuli. These data support the hypothesis that amygdala neurons are specific to each sensory modality and that different groups of amygdala neurons process visual and auditory information.


Subject(s)
Amygdala/physiology; Auditory Perception/physiology; Conditioning, Operant/physiology; Neurons/physiology; Psychomotor Performance/physiology; Visual Perception/physiology; Acoustic Stimulation; Action Potentials; Animals; Macaca mulatta; Male; Microelectrodes; Neural Inhibition/physiology; Neuropsychological Tests; Photic Stimulation; Reward; Video Recording