1.
Nature ; 626(7999): 593-602, 2024 Feb.
Article in English | MEDLINE | ID: mdl-38093008

ABSTRACT

Understanding the neural basis of speech perception requires that we study the human brain both at the scale of the fundamental computational unit of neurons and in their organization across the depth of cortex. Here we used high-density Neuropixels arrays [1-3] to record from 685 neurons across cortical layers at nine sites in a high-level auditory region that is critical for speech, the superior temporal gyrus [4,5], while participants listened to spoken sentences. Single neurons encoded a wide range of speech sound cues, including features of consonants and vowels, relative vocal pitch, onsets, amplitude envelope and sequence statistics. Each cross-laminar recording exhibited dominant tuning to a primary speech feature while also containing a substantial proportion of neurons that encoded other features, contributing to heterogeneous selectivity. Spatially, neurons at similar cortical depths tended to encode similar speech features. Activity across all cortical layers was predictive of high-frequency field potentials (electrocorticography), providing a neuronal origin for macroelectrode recordings from the cortical surface. Together, these results establish single-neuron tuning across the cortical laminae as an important dimension of speech encoding in human superior temporal gyrus.
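To make the penultimate claim concrete (layer-resolved spiking predicting a surface high-gamma signal), here is a minimal sketch of that kind of analysis: ridge-regressing a high-gamma trace onto binned laminar firing rates. The data, dimensions, and penalty below are synthetic assumptions, not the paper's actual pipeline.

```python
# Minimal sketch: relate laminar single-unit firing rates to a surface
# high-gamma signal with ridge regression. All data here are synthetic;
# the paper's real features, preprocessing, and validation are not shown.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

n_time, n_neurons = 5000, 80               # time bins x units on one probe
rates = rng.poisson(3.0, size=(n_time, n_neurons)).astype(float)

# Synthetic "high-gamma" target: a weighted sum of unit rates plus noise,
# standing in for an electrocorticography signal recorded at the surface.
true_w = rng.normal(size=n_neurons)
high_gamma = rates @ true_w + rng.normal(scale=5.0, size=n_time)

X_tr, X_te, y_tr, y_te = train_test_split(
    rates, high_gamma, test_size=0.2, random_state=0
)

model = Ridge(alpha=1.0).fit(X_tr, y_tr)
print(f"held-out R^2: {model.score(X_te, y_te):.2f}")
```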


Subject(s)
Auditory Cortex, Neurons, Speech Perception, Temporal Lobe, Humans, Acoustic Stimulation, Auditory Cortex/cytology, Auditory Cortex/physiology, Neurons/physiology, Phonetics, Speech, Speech Perception/physiology, Temporal Lobe/cytology, Temporal Lobe/physiology, Cues, Electrodes
2.
Softw Impacts ; 17, 2023 Sep.
Article in English | MEDLINE | ID: mdl-37771949

ABSTRACT

Recently, the computational neuroscience community has pushed for more transparent and reproducible methods across the field. In the interest of unifying the domain of auditory neuroscience, naplib-python provides an intuitive and general data structure for handling all neural recordings and stimuli, as well as extensive preprocessing, feature extraction, and analysis tools that operate on that data structure. The package removes many of the complications associated with this domain, such as varying trial durations and multi-modal stimuli, and provides a general-purpose analysis framework that interfaces easily with existing toolboxes used in the field.
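The trial-indexed data model this abstract describes can be sketched in a few lines. The container and field names below ("resp", "stim") are illustrative stand-ins modeled on the description, not the verified naplib-python API; consult the package documentation for its real interface.

```python
# Sketch of the described data model: one record per trial, holding a
# neural response and its paired stimulus, with durations free to vary
# across trials. This mimics the design described in the abstract; it is
# NOT the naplib-python API itself.
import numpy as np

rng = np.random.default_rng(1)

# Trials of different lengths: (time x channels) responses plus stimuli.
trials = [
    {"name": f"trial{i}",
     "resp": rng.normal(size=(dur, 64)),   # neural recording, 64 channels
     "stim": rng.normal(size=dur)}         # paired audio stimulus
    for i, dur in enumerate([1200, 950, 1400])
]

# A feature-extraction pass that tolerates varying trial durations,
# the kind of complication such a framework abstracts away.
def mean_response(trial):
    return trial["resp"].mean(axis=0)      # one value per channel

features = np.stack([mean_response(t) for t in trials])
print(features.shape)                      # (3 trials, 64 channels)
```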

3.
Neuroimage ; 266: 119819, 2023 02 01.
Article in English | MEDLINE | ID: mdl-36529203

ABSTRACT

The human auditory system displays a robust capacity to adapt to sudden changes in background noise, allowing for continuous speech comprehension despite changes in background environments. However, despite comprehensive studies characterizing this ability, the computations that underlie this process are not well understood. The first step towards understanding a complex system is to propose a suitable model, but the classical and easily interpreted model for the auditory system, the spectro-temporal receptive field (STRF), cannot match the nonlinear neural dynamics involved in noise adaptation. Here, we utilize a deep neural network (DNN) to model neural adaptation to noise, illustrating its effectiveness at reproducing the complex dynamics at the levels of both individual electrodes and the cortical population. By closely inspecting the model's STRF-like computations over time, we find that the model alters both the gain and shape of its receptive field when adapting to a sudden noise change. We show that the DNN model's gain changes allow it to perform adaptive gain control, while the spectro-temporal change creates noise filtering by altering the inhibitory region of the model's receptive field. Further, we find that models of electrodes in nonprimary auditory cortex also exhibit noise filtering changes in their excitatory regions, suggesting differences in noise filtering mechanisms along the cortical hierarchy. These findings demonstrate the capability of deep neural networks to model complex neural adaptation and offer new hypotheses about the computations the auditory cortex performs to enable noise-robust speech perception in real-world, dynamic environments.
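One standard way to expose a DNN's "STRF-like computations" is to linearize the network around each input: the gradient of the predicted response with respect to the input spectrogram acts as an instantaneous, input-dependent receptive field whose gain and shape can then be tracked across a noise transition. The sketch below shows only that linearization step on a toy model; treating it as the paper's exact procedure would be an assumption.

```python
# Sketch: extract an STRF-like, locally linear filter from a DNN by
# differentiating its predicted response with respect to the input
# spectrogram. Toy model and data; not the paper's architecture.
import torch
import torch.nn as nn

torch.manual_seed(0)
n_freq, n_time = 32, 100

# Stand-in for a fitted encoding model: spectrogram -> neural response.
model = nn.Sequential(
    nn.Conv1d(n_freq, 16, kernel_size=9, padding=4),
    nn.ReLU(),
    nn.Conv1d(16, 1, kernel_size=9, padding=4),
)

spec = torch.randn(1, n_freq, n_time, requires_grad=True)  # (batch, freq, time)
resp = model(spec)                                         # (1, 1, n_time)

# Gradient of the response at the final time step w.r.t. the whole input:
# a (freq x time) map acting as an instantaneous, input-dependent STRF.
resp[0, 0, -1].backward()
dynamic_strf = spec.grad[0].detach()
print(dynamic_strf.shape)  # torch.Size([32, 100])
```

Repeating this at each time step, before and after a background-noise change, is one way to quantify the gain and shape changes the abstract reports.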


Subject(s)
Auditory Cortex, Humans, Acoustic Stimulation/methods, Auditory Perception, Neurons, Neural Networks, Computer
4.
Sci Rep ; 11(1): 517, 2021 01 12.
Article in English | MEDLINE | ID: mdl-33436776

ABSTRACT

The vestibular system is vital for maintaining balance and stabilizing gaze, and vestibular damage causes impaired postural and gaze control. Here we examined the effects of vestibular loss and subsequent compensation on head motion kinematics during voluntary behavior. Head movements were measured in vestibular schwannoma patients before, and then 6 weeks and 6 months after, surgical tumor removal, which required sectioning of the involved vestibular nerve (vestibular neurectomy). Head movements were recorded in six dimensions using a small head-mounted sensor while patients performed the Functional Gait Assessment (FGA). Kinematic measures differed between patients (at all three time points) and normal subjects on several challenging FGA tasks, indicating that vestibular damage (caused by the tumor or neurectomy) alters head movements in a manner that is not normalized by central compensation. Kinematics measured at different time points relative to vestibular neurectomy differed substantially between pre-operative and 6-week post-operative states but changed little between 6-week and > 6-month post-operative states, demonstrating that compensation affecting head kinematics is relatively rapid. Our results indicate that quantifying head kinematics during self-generated gait tasks provides valuable information about vestibular damage and compensation, suggesting that early changes in patient head motion strategy may be maladaptive for long-term vestibular compensation.
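As an illustration of the measurement this abstract describes, the sketch below computes two simple summary kinematics from a six-channel head-mounted sensor trace for a single task. The channel layout, sampling rate, and metrics are assumptions for illustration, not the study's reported measures.

```python
# Sketch: summary kinematics from a 6-axis head-mounted sensor
# (3 angular-velocity + 3 linear-acceleration channels) during one gait
# task. Layout, units, and sampling rate are assumed, not the study's.
import numpy as np

rng = np.random.default_rng(2)
fs = 100.0                                  # assumed sampling rate, Hz
imu = rng.normal(size=(int(20 * fs), 6))    # 20 s: [wx, wy, wz, ax, ay, az]

gyro, accel = imu[:, :3], imu[:, 3:]

# Root-mean-square angular speed and peak linear-acceleration magnitude.
rms_angular_speed = np.sqrt((np.linalg.norm(gyro, axis=1) ** 2).mean())
peak_linear_accel = np.linalg.norm(accel, axis=1).max()

print(f"RMS angular speed: {rms_angular_speed:.2f} (sensor units)")
print(f"peak linear acceleration: {peak_linear_accel:.2f} (sensor units)")
```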


Subject(s)
Denervation/adverse effects , Head/physiology , Movement , Neuroma, Acoustic/physiopathology , Neuroma, Acoustic/surgery , Otologic Surgical Procedures/methods , Peripheral Nervous System Neoplasms/physiopathology , Peripheral Nervous System Neoplasms/surgery , Vestibular Nerve/physiopathology , Vestibular Nerve/surgery , Vestibule, Labyrinth/innervation , Acute Disease , Chronic Disease , Denervation/methods , Gait/physiology , Humans , Otologic Surgical Procedures/adverse effects , Postural Balance/physiology