Results 1 - 5 of 5
1.
Neuroimage ; 271: 119987, 2023 05 01.
Article in English | MEDLINE | ID: mdl-36940510

ABSTRACT

Tinnitus is a clinical condition where a sound is perceived without an external sound source. Homeostatic plasticity (HSP), serving to increase neural activity as compensation for the reduced input to the auditory pathway after hearing loss, has been proposed as a mechanism underlying tinnitus. In support, animal models of tinnitus show evidence of increased neural activity after hearing loss, including increased spontaneous and sound-driven firing rate, as well as increased neural noise throughout the auditory processing pathway. Bridging these findings to human tinnitus, however, has proven to be challenging. Here we implement hearing loss-induced HSP in a Wilson-Cowan Cortical Model of the auditory cortex to predict how homeostatic principles operating at the microscale translate to the meso- to macroscale accessible through human neuroimaging. We observed HSP-induced response changes in the model that were previously proposed as neural signatures of tinnitus, but that have also been reported as correlates of hearing loss and hyperacusis. As expected, HSP increased spontaneous and sound-driven responsiveness in hearing-loss affected frequency channels of the model. We furthermore observed evidence of increased neural noise and the appearance of spatiotemporal modulations in neural activity, which we discuss in light of recent human neuroimaging findings. Our computational model makes quantitative predictions that require experimental validation, and may thereby serve as the basis of future human studies of hearing loss, tinnitus, and hyperacusis.
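The homeostatic principle described above can be sketched with a single Wilson-Cowan-style population: a minimal illustration, not the paper's full cortical model, and every parameter value (threshold, drive levels, learning rate) is an assumption for demonstration only.

```python
import numpy as np

def rate(drive, gain, theta=1.0):
    """Steady-state rate of one Wilson-Cowan-style population: a sigmoid of
    the input drive, multiplicatively scaled by a homeostatic gain."""
    return gain / (1.0 + np.exp(-(drive - theta)))

# Pre-hearing-loss set point: the mean activity level HSP tries to preserve.
target = rate(drive=1.0, gain=1.0)

# Hearing loss reduces the peripheral drive (1.0 -> 0.4, an assumed value);
# a slow homeostatic loop scales the gain until activity is restored.
gain = 1.0
for _ in range(500):
    gain += 0.1 * (target - rate(drive=0.4, gain=gain))

# Side effect of the compensatory gain: spontaneous activity (zero drive)
# is now elevated relative to the pre-loss baseline, one of the proposed
# neural correlates of tinnitus.
spont_before = rate(drive=0.0, gain=1.0)
spont_after = rate(drive=0.0, gain=gain)
```

The sketch shows the core trade-off: restoring mean sound-driven activity after reduced input necessarily raises spontaneous activity in the affected channel.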


Subject(s)
Auditory Cortex , Deafness , Hearing Loss , Tinnitus , Animals , Humans , Hyperacusis , Auditory Pathways , Acoustic Stimulation/methods
2.
Front Hum Neurosci ; 15: 642341, 2021.
Article in English | MEDLINE | ID: mdl-34526884

ABSTRACT

Recent studies have highlighted the possible contributions of direct connectivity between early sensory cortices to audiovisual integration. Anatomical connections between the early auditory and visual cortices are concentrated in visual sites representing the peripheral field of view. Here, we aimed to engage early sensory interactive pathways with simple, far-peripheral audiovisual stimuli (auditory noise and visual gratings). Using a modulation detection task in one modality performed at an 84% correct threshold level, we investigated multisensory interactions by simultaneously presenting weak stimuli from the other modality in which the temporal modulation was barely detectable (at 55 and 65% correct detection performance). Furthermore, we manipulated the temporal congruence between the cross-sensory streams. We found evidence for an influence of barely detectable visual stimuli on the response times for auditory stimuli, but not for the reverse effect. These visual-to-auditory influences occurred only for specific phase differences (at onset) between the modulated audiovisual stimuli. We discuss our findings in light of a possible role of direct interactions between early visual and auditory areas, along with contributions from higher-order association cortex. In sum, our results extend the behavioral evidence of audiovisual processing to the far periphery and suggest, within this specific experimental setting, an asymmetry between the auditory influence on visual processing and the visual influence on auditory processing.

3.
Neuroimage ; 244: 118575, 2021 12 01.
Article in English | MEDLINE | ID: mdl-34517127

ABSTRACT

Recent functional MRI (fMRI) studies have highlighted differences in responses to natural sounds along the rostral-caudal axis of the human superior temporal gyrus. However, due to the indirect nature of the fMRI signal, it has been challenging to relate these fMRI observations to actual neuronal response properties. To bridge this gap, we present a forward model of the fMRI responses to natural sounds combining a neuronal model of the auditory cortex with physiological modeling of the hemodynamic BOLD response. Neuronal responses are modeled with a dynamic recurrent firing rate model, reflecting the tonotopic, hierarchical processing in the auditory cortex along with the spectro-temporal tradeoff in the rostral-caudal axis of its belt areas. To link modeled neuronal response properties with human fMRI data in the auditory belt regions, we generated a space of neuronal models, which differed parametrically in spectral and temporal specificity of neuronal responses. Then, we obtained predictions of fMRI responses through a biophysical model of the hemodynamic BOLD response (P-DCM). Using Bayesian model comparison, our results showed that the hemodynamic BOLD responses of the caudal belt regions in the human auditory cortex were best explained by modeling faster temporal dynamics and broader spectral tuning of neuronal populations, while rostral belt regions were best explained through fine spectral tuning combined with slower temporal dynamics. These results support the hypotheses of complementary neural information processing along the rostral-caudal axis of the human superior temporal gyrus.
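The two-stage forward-model logic can be illustrated by convolving hypothetical neuronal time courses with a canonical double-gamma hemodynamic response function. This linear convolution is a simplified stand-in for the biophysical P-DCM model used in the paper, and both neural time courses are assumptions chosen only to contrast a fast/transient (caudal-like) and a slow/sustained (rostral-like) population.

```python
import numpy as np
from math import gamma

def double_gamma_hrf(t):
    """Canonical double-gamma HRF (peak near 5 s, undershoot near 15 s);
    a simplified stand-in for a biophysical hemodynamic model."""
    pos = t ** 5 * np.exp(-t) / gamma(6.0)
    neg = t ** 15 * np.exp(-t) / gamma(16.0)
    return pos - neg / 6.0

dt = 0.1
t = np.arange(0, 30, dt)

# Hypothetical neuronal responses to a 5 s sound:
fast = np.exp(-t / 0.5) * (t < 5)          # caudal-like: transient
slow = (1 - np.exp(-t / 2.0)) * (t < 5)    # rostral-like: sustained

# Hemodynamic stage: the slow HRF blurs the neuronal dynamics, which is
# why a forward model is needed to relate fMRI data to neuronal properties.
kernel = double_gamma_hrf(t)
bold_fast = np.convolve(fast, kernel)[: len(t)] * dt
bold_slow = np.convolve(slow, kernel)[: len(t)] * dt
```

Even though the two neuronal responses differ sharply in their dynamics, the predicted BOLD responses differ far less, illustrating why model comparison (rather than direct inspection) is used to infer neuronal tuning from fMRI.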


Subject(s)
Auditory Cortex/physiology , Hemodynamics/physiology , Neurons/physiology , Bayes Theorem , Feedback, Physiological , Feedback, Psychological , Humans , Magnetic Resonance Imaging , Models, Neurological , Sensation , Sound , Temporal Lobe/physiology
4.
Front Comput Neurosci ; 13: 95, 2019.
Article in English | MEDLINE | ID: mdl-32038212

ABSTRACT

Neural processing of sounds in the dorsal and ventral streams of the (human) auditory cortex is optimized for analyzing fine-grained temporal and spectral information, respectively. Here we use a Wilson and Cowan firing-rate modeling framework to simulate spectro-temporal processing of sounds in these auditory streams and to investigate the link between neural population activity and behavioral results of psychoacoustic experiments. The proposed model consisted of two core (A1 and R, representing primary areas) and two belt (Slow and Fast, representing rostral and caudal processing, respectively) areas, differing in terms of their spectral and temporal response properties. First, we simulated the responses to amplitude-modulated (AM) noise and tones. In agreement with electrophysiological results, we observed an area-dependent transition from a temporal (synchronization) code to a rate code when moving from low to high modulation rates. Simulated neural responses in an amplitude modulation detection task suggested that thresholds derived from population responses in core areas closely resembled those of psychoacoustic experiments in human listeners. For tones, simulated modulation threshold functions were found to depend on the carrier frequency. Second, we simulated the responses to complex tones with a missing fundamental and found that synchronization of responses in the Fast area accurately encoded pitch, with the strength of synchronization depending on the number and order of harmonic components. Finally, using speech stimuli, we showed that the spectral and temporal structure of the speech was reflected in parallel by the modeled areas. The analyses highlighted that the Slow stream encoded, with high spectral precision, the aspects of the speech signal characterized by slow temporal changes (e.g., prosody), while the Fast stream encoded primarily the faster changes (e.g., phonemes, consonants, temporal pitch). Interestingly, the pitch of a speaker was encoded both spatially (i.e., tonotopically) in the Slow area and temporally in the Fast area. Overall, the simulations showed that the model is valuable for generating hypotheses on how the different cortical areas/streams may contribute to behaviorally relevant aspects of auditory processing. The model can be used in combination with physiological models of neurovascular coupling to generate predictions for human functional MRI experiments.
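The loss of synchronization at high modulation rates can be sketched by treating an area's temporal integration as a first-order low-pass filter; the time constants, the 32 Hz test rate, and the synchronization index below are illustrative assumptions, not the model's actual parameters.

```python
import numpy as np

def envelope_response(mod_rate, tau, dur=1.0, dt=1e-4):
    """Firing rate of a population driven by an AM envelope, modeled as a
    first-order low-pass with area-specific time constant tau (seconds)."""
    t = np.arange(0, dur, dt)
    drive = 0.5 * (1 + np.sin(2 * np.pi * mod_rate * t))  # AM envelope
    r = np.zeros_like(t)
    for i in range(1, len(t)):
        r[i] = r[i - 1] + dt / tau * (drive[i] - r[i - 1])
    return t, r

def sync(mod_rate, tau):
    """Synchronization index: response modulation at the AM rate relative
    to the mean rate (analogous to vector strength; 1 = fully locked)."""
    t, r = envelope_response(mod_rate, tau)
    c = np.abs(np.mean(r * np.exp(-2j * np.pi * mod_rate * t)))
    return 2 * c / np.mean(r)

# At a 32 Hz modulation rate, a 'Slow' (rostral-like) area with a long time
# constant loses synchronization, while a 'Fast' (caudal-like) area with a
# short time constant still follows the envelope.
slow_sync = sync(mod_rate=32.0, tau=0.050)
fast_sync = sync(mod_rate=32.0, tau=0.005)
```

Since both filters pass the envelope's mean, the slow area still signals the stimulus through its average rate after synchronization is lost, which is the temporal-to-rate code transition described above.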

5.
Front Neurosci ; 8: 368, 2014.
Article in English | MEDLINE | ID: mdl-25429257

ABSTRACT

A wide variety of evidence, from neurophysiology, neuroanatomy, and imaging studies in humans and animals, suggests that human auditory cortex is in part tonotopically organized. Here we present a new means of resolving this spatial organization using a combination of non-invasive observables (EEG, MEG, and MRI), model-based estimates of spectrotemporal patterns of neural activation, and multivariate pattern analysis. The method exploits both the fine-grained temporal patterning of auditory cortical responses and the millisecond-scale temporal resolution of EEG and MEG. Participants listened to 400 English words while MEG and scalp EEG were measured simultaneously. We estimated the location of cortical sources using the MRI anatomically constrained minimum norm estimate (MNE) procedure. We then combined a form of multivariate pattern analysis (representational similarity analysis) with a spatiotemporal searchlight approach to successfully decode information about patterns of neuronal frequency preference and selectivity in bilateral superior temporal cortex. Observed frequency preferences in and around Heschl's gyrus matched current proposals for the organization of tonotopic gradients in primary acoustic cortex, while the distribution of narrow frequency selectivity similarly matched results from the fMRI literature. The spatial maps generated by this novel combination of techniques seem comparable to those that have emerged from fMRI or ECoG studies, and a considerable advance over earlier MEG results.
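The core representational similarity analysis step can be sketched as follows. This is a toy version of the comparison performed inside a single searchlight; the data sizes, the Gaussian tuning curves, and the frequency-distance model RDM are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical searchlight data: n_voxels sources respond to stimuli drawn
# from n_freq frequency bins (n_per_freq stimuli per bin).
n_freq, n_per_freq, n_voxels = 4, 10, 30
stim_freq = np.repeat(np.arange(n_freq), n_per_freq).astype(float)

# Each source gets an assumed Gaussian tuning curve around a preferred
# frequency, plus measurement noise.
pref = rng.uniform(0, n_freq - 1, size=n_voxels)
patterns = np.array([np.exp(-0.5 * (f - pref) ** 2) for f in stim_freq])
patterns += 0.3 * rng.normal(size=patterns.shape)

def rdm(X):
    """Representational dissimilarity matrix: 1 - Pearson correlation
    between every pair of response patterns (rows of X)."""
    return 1.0 - np.corrcoef(X)

# Model RDM for the tonotopy hypothesis: dissimilarity grows with the
# distance between stimulus frequencies.
model = np.abs(stim_freq[:, None] - stim_freq[None, :])

# Compare data and model RDMs over the off-diagonal pairs; the searchlight
# procedure repeats this comparison at every cortical location and time bin.
iu = np.triu_indices(len(stim_freq), k=1)
r = np.corrcoef(rdm(patterns)[iu], model[iu])[0, 1]
```

A location where `r` is reliably positive is one whose response geometry matches the frequency-organization hypothesis, which is how maps of frequency preference are recovered without decoding individual stimuli.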
