Results 1 - 20 of 141
1.
Front Nutr ; 10: 1173316, 2023.
Article in English | MEDLINE | ID: mdl-37955018

ABSTRACT

Using ultra-high field (7 Tesla) functional MRI (fMRI), we conducted the first in vivo functional neuroimaging study of the normal human brainstem specifically designed to examine neural signals in the Nucleus Tractus Solitarius (NTS) in response to all basic taste stimuli. The NTS is the first relay station along the mammalian taste-processing pathway, which originates at the taste buds in the oral cavity and passes through the thalamus before reaching the primary taste cortex. In our proof-of-concept study, we acquired data from one adult volunteer using fMRI at 1.2 mm isotropic resolution and performed a univariate general linear model analysis. During fMRI acquisition, three shuffled injections of sweet, bitter, salty, sour, and umami solutions were administered following an event-related design. We observed a statistically significant blood oxygen level-dependent (BOLD) response in the anatomically predicted location of the NTS for all five basic tastes. The results appear statistically robust, even though they were obtained from a single volunteer. A similar experimental strategy may inspire novel research aimed at clarifying central nervous system involvement in eating disorders, and at designing and monitoring tailored therapeutic strategies.
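
For readers unfamiliar with the univariate general linear model (GLM) analysis mentioned above, the following minimal sketch builds an event-related design matrix by convolving stimulus onsets with a canonical double-gamma HRF and fits it by least squares. All parameters, onsets, and data are hypothetical stand-ins, not the study's actual pipeline.

```python
import numpy as np
from scipy.stats import gamma

# Illustrative event-related GLM; acquisition parameters are hypothetical.
TR, n_vols = 1.5, 400
t = np.arange(n_vols) * TR

def hrf(t):
    """Canonical double-gamma hemodynamic response function."""
    peak = gamma.pdf(t, 6)            # positive peak around 5-6 s
    undershoot = gamma.pdf(t, 16) / 6.0
    return peak - undershoot

# One regressor per taste; onsets would come from the event-related design.
onsets = {"sweet": [10, 90, 250], "bitter": [30, 130, 300]}  # s, hypothetical
X = np.ones((n_vols, 1))              # intercept column
for taste, times in onsets.items():
    stick = np.zeros(n_vols)
    stick[(np.array(times) / TR).astype(int)] = 1.0
    X = np.column_stack([X, np.convolve(stick, hrf(t))[:n_vols]])

y = np.random.randn(n_vols)           # stand-in voxel time course
beta, *_ = np.linalg.lstsq(X, y, rcond=None)  # GLM fit: y = X @ beta + noise
```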

2.
PLoS One ; 18(10): e0284755, 2023.
Article in English | MEDLINE | ID: mdl-37889894

ABSTRACT

Sounds following a cue or embedded in a periodic rhythm are processed more effectively than sounds that are part of an aperiodic rhythm. One might also expect that a sound embedded in a periodic rhythm is processed more effectively than a sound following a single temporal cue. Such a finding would follow from the theory that the entrainment of neural rhythmic activity by periodic stimuli renders the prediction of upcoming stimuli more efficient. We conducted a psychophysical experiment to test the behavioral elements of this idea. Targets in periodic and aperiodic rhythms, if they occurred, always appeared at the same moment in time, and thus were fully predictable. In a first condition, participants remained unaware of this. In a second condition, an explicit instruction on the temporal location of the targets embedded in rhythms was provided. We assessed sensitivity and reaction times to the target stimuli in a difficult temporal detection task, and contrasted performance in this task with that obtained for targets temporally cued by a single preceding cue. Irrespective of explicit information about target predictability, target detection performance was always better in the periodic and temporal-cue conditions than in the aperiodic condition. However, the mere predictability of an acoustic target within a periodic rhythm did not allow participants to detect the target any better than in a condition where the target's timing was predicted by a single temporal cue. Only when participants were made aware of the specific moment in the periodic rhythm where the target could occur did sensitivity increase. This finding suggests that a periodic rhythm is not automatically sufficient to provide perceptual benefits over a predictable yet non-rhythmic condition (a cue). In some conditions, as shown here, these benefits may only emerge in interaction with other factors such as explicit instruction and directed attention.
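
Perceptual sensitivity in detection tasks like this one is conventionally quantified as d' from signal detection theory; the study's exact analysis is not given here, so the sketch below is a generic computation with a standard log-linear correction and hypothetical trial counts.

```python
from scipy.stats import norm

def d_prime(hits, misses, false_alarms, correct_rejections):
    """Signal-detection sensitivity with a log-linear correction so that
    perfect hit or false-alarm rates do not yield infinite z-scores."""
    hit_rate = (hits + 0.5) / (hits + misses + 1.0)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
    return norm.ppf(hit_rate) - norm.ppf(fa_rate)

# E.g., one listener in the periodic condition (counts are hypothetical):
print(d_prime(hits=42, misses=8, false_alarms=5, correct_rejections=45))
```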


Subjects
Cues , Sound , Humans , Acoustic Stimulation , Reaction Time , Attention , Auditory Perception
3.
MAGMA ; 36(2): 211-225, 2023 Apr.
Article in English | MEDLINE | ID: mdl-37036574

ABSTRACT

OBJECTIVE: We outline our vision for a 14 Tesla MR system. This comprises a novel whole-body magnet design utilizing high-temperature superconductor; a console and associated electronic equipment; an optimized radiofrequency coil setup for proton measurement in the brain, which also has a local shim capability; and a high-performance gradient set. RESEARCH FIELDS: The 14 Tesla system can be considered a 'mesoscope': a device capable of measuring on biologically relevant scales. In neuroscience, the increased spatial resolution will anatomically resolve all layers of the cortex, cerebellum, subcortical structures, and inner nuclei. Spectroscopic imaging will simultaneously measure excitatory and inhibitory activity, characterizing the excitation/inhibition balance of neural circuits. In medical research (including brain disorders), we will visualize fine-grained patterns of structural abnormalities and relate these changes to functional and molecular changes. The significantly increased spectral resolution will make it possible to detect (dynamic changes in) individual metabolites associated with pathological pathways, including molecular interactions and dynamic disease processes. CONCLUSIONS: The 14 Tesla system will offer new perspectives in neuroscience and fundamental research. We anticipate that this initiative will usher in a new era of ultra-high-field MR.


Subjects
Brain , Magnetic Resonance Imaging , Magnetic Resonance Imaging/methods , Brain/diagnostic imaging , Head , Diffusion Magnetic Resonance Imaging , Radio Waves
4.
MAGMA ; 36(2): 159-173, 2023 Apr.
Article in English | MEDLINE | ID: mdl-37081247

ABSTRACT

The 9.4 T scanner in Maastricht is a whole-body magnet with head gradients and parallel RF transmit capability. At the time of its design, it was conceptualized to be one of the best fMRI scanners in the world, but it has also been used for anatomical and diffusion imaging. 9.4 T offers increases in sensitivity and contrast, but the technical ultra-high field (UHF) challenges, such as field inhomogeneities and constraints set by RF power deposition, are exacerbated compared to 7 T. This article reviews some of the 9.4 T work done in Maastricht. Functional imaging experiments included blood oxygenation level-dependent (BOLD) and blood-volume-weighted (VASO) fMRI using different readouts. BOLD benefits from the shorter T2* at 9.4 T, while VASO benefits from the longer T1. We show examples of both ex vivo and in vivo anatomical imaging. For many applications, parallel transmission (pTx) and optimized coils are essential to harness the full potential of 9.4 T. Our experience shows that, while considerable effort was required compared to our 7 T scanner, we could obtain high-quality anatomical and functional data, which illustrates the potential of MR acquisitions at even higher field strengths. The practical challenges of working with a relatively unique system are also discussed.


Subjects
Magnetic Resonance Imaging , Magnetic Resonance Imaging/methods
5.
Nat Neurosci ; 26(4): 664-672, 2023 04.
Article in English | MEDLINE | ID: mdl-36928634

ABSTRACT

Recognizing sounds involves the cerebral transformation of input waveforms into semantic representations. Although past research identified the superior temporal gyrus (STG) as a crucial cortical region, the computational fingerprint of these cerebral transformations remains poorly characterized. Here, we exploit a model-comparison framework and contrast the ability of acoustic, semantic (continuous and categorical) and sound-to-event deep neural network representation models to predict perceived sound dissimilarity and 7 T human auditory cortex functional magnetic resonance imaging responses. We confirm that spectrotemporal modulations predict early auditory cortex (Heschl's gyrus) responses, and that auditory dimensions (for example, loudness, periodicity) predict STG responses and perceived dissimilarity. Sound-to-event deep neural networks predict Heschl's gyrus responses similarly to acoustic models but, notably, they outperform all competing models at predicting both STG responses and perceived dissimilarity. Our findings indicate that the STG entails intermediate acoustic-to-semantic sound representations that neither acoustic nor semantic models can account for. These representations are compositional in nature and relevant to behavior.
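
One generic way to run the kind of model comparison described above is representational similarity analysis: compute each model's pairwise sound dissimilarities and correlate them with the perceived ones. The sketch below is only a schematic illustration with random stand-in data; the paper's actual framework (its cross-validated fitting of fMRI responses) is not reproduced.

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

# Hypothetical feature matrices: one row per sound, columns are model features.
n_sounds = 60
acoustic = np.random.randn(n_sounds, 128)   # e.g., spectrotemporal modulations
dnn = np.random.randn(n_sounds, 512)        # e.g., a sound-to-event DNN layer
perceived = pdist(np.random.randn(n_sounds, 5))  # stand-in dissimilarity ratings

for name, feats in [("acoustic", acoustic), ("DNN", dnn)]:
    model_rdm = pdist(feats, metric="correlation")  # pairwise model dissimilarity
    rho, _ = spearmanr(model_rdm, perceived)
    print(f"{name}: Spearman rho = {rho:.3f}")
```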


Subjects
Auditory Cortex , Semantics , Humans , Acoustic Stimulation/methods , Auditory Cortex/physiology , Acoustics , Magnetic Resonance Imaging , Auditory Perception/physiology , Brain Mapping/methods
6.
Curr Res Neurobiol ; 4: 100075, 2023.
Article in English | MEDLINE | ID: mdl-36755988

ABSTRACT

In everyday life, the processing of acoustic information allows us to react to subtle changes in the auditory scene. Yet even when closely attending to sounds in the context of a task, we occasionally miss task-relevant features. The neural computations that underlie our ability to detect behaviorally relevant sound changes are thought to be grounded in both feedforward and feedback processes within the auditory hierarchy. Here, we assessed the role of feedforward and feedback contributions in primary and non-primary auditory areas during behavioral detection of target sounds using submillimeter spatial resolution functional magnetic resonance imaging (fMRI) at high field (7 T) in humans. We demonstrate that the successful detection of subtle temporal shifts in target sounds leads to a selective increase in activation in superficial layers of the primary auditory cortex (PAC). These results indicate that feedback signals reaching as far back as PAC may be relevant to the detection of targets in the auditory scene.

7.
Front Psychol ; 13: 964209, 2022.
Article in English | MEDLINE | ID: mdl-36312201

ABSTRACT

Taxonomies and ontologies for the characterization of everyday sounds have been developed in several research fields, including auditory cognition, soundscape research, artificial hearing, sound design, and medicine. Here, we surveyed 36 such knowledge organization systems, which we identified through a systematic literature search. To evaluate the semantic domains covered by these systems within a homogeneous framework, we introduced a comprehensive set of verbal sound descriptors (sound source properties; attributes of sensation; sound signal descriptors; onomatopoeias; music genres), which we used to manually label the surveyed descriptor classes. We reveal that most taxonomies and ontologies were developed to characterize higher-level semantic relations between sound sources in terms of the sound-generating objects and actions involved (what/how), or in terms of the environmental context (where). This indicates the current lack of a comprehensive ontology of everyday sounds that covers all of these semantic aspects simultaneously. Such an ontology would have a wide range of applications, from extending our scientific knowledge of auditory processes in the real world to developing artificial hearing systems.

8.
Front Neurosci ; 15: 635937, 2021.
Article in English | MEDLINE | ID: mdl-34630007

ABSTRACT

Numerous neuroimaging studies have demonstrated that the auditory cortex tracks ongoing speech and that, in multi-speaker environments, tracking of the attended speaker is enhanced compared to the other, irrelevant speakers. In contrast to speech, multi-instrument music can be appreciated by attending not only to its individual entities (i.e., segregation) but also to multiple instruments simultaneously (i.e., integration). We investigated the neural correlates of these two modes of music listening using electroencephalography (EEG) and sound envelope tracking. To this end, we presented uniquely composed music pieces played by two instruments, a bassoon and a cello, in combination with a previously validated music auditory scene analysis behavioral paradigm (Disbergen et al., 2018). As in results obtained with selective listening tasks for speech, relevant instruments could be reconstructed better than irrelevant ones during the segregation task. A delay-specific analysis showed higher reconstruction for the relevant instrument during a middle-latency window for both the bassoon and cello, and during a late window for the bassoon. During the integration task, we did not observe significant attentional modulation when reconstructing the overall music envelope. Subsequent analyses indicated that this null result might be due to the heterogeneous strategies listeners employ during the integration task. Overall, our results suggest that, subsequent to a common processing stage, top-down modulations consistently enhance the relevant instrument's representation during an instrument segregation task, whereas such an enhancement is not observed during an instrument integration task. These findings extend previous results from speech tracking to the tracking of multi-instrument music and, furthermore, inform current theories on polyphonic music perception.
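
Envelope-tracking analyses like the one above start from the sound's amplitude envelope; a common, generic way to extract it is the magnitude of the analytic (Hilbert) signal, low-pass filtered to the slow modulation range. A minimal sketch with stand-in audio (the study's exact preprocessing is not specified here):

```python
import numpy as np
from scipy.signal import hilbert, butter, filtfilt

def sound_envelope(audio, fs, cutoff_hz=8.0):
    """Broadband amplitude envelope: magnitude of the analytic signal,
    low-pass filtered to the slow modulations EEG can track."""
    env = np.abs(hilbert(audio))
    b, a = butter(4, cutoff_hz / (fs / 2), btype="low")
    return filtfilt(b, a, env)

fs = 44100
audio = np.random.randn(fs * 5)   # stand-in for a 5 s instrument recording
env = sound_envelope(audio, fs)
```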

9.
Front Hum Neurosci ; 15: 642341, 2021.
Article in English | MEDLINE | ID: mdl-34526884

ABSTRACT

Recent studies have highlighted the possible contributions of direct connectivity between early sensory cortices to audiovisual integration. Anatomical connections between the early auditory and visual cortices are concentrated in visual sites representing the peripheral field of view. Here, we aimed to engage early sensory interactive pathways with simple, far-peripheral audiovisual stimuli (auditory noise and visual gratings). Using a modulation detection task in one modality performed at an 84%-correct threshold level, we investigated multisensory interactions by simultaneously presenting weak stimuli from the other modality in which the temporal modulation was barely detectable (at 55 and 65% correct detection performance). Furthermore, we manipulated the temporal congruence between the cross-sensory streams. We found evidence for an influence of barely detectable visual stimuli on the response times for auditory stimuli, but not for the reverse effect. These visual-to-auditory influences only occurred for specific phase differences (at onset) between the modulated audiovisual stimuli. We discuss our findings in the light of a possible role of direct interactions between early visual and auditory areas, along with contributions from the higher-order association cortex. In sum, our results extend the behavioral evidence of audiovisual processing to the far periphery and suggest, within this specific experimental setting, an asymmetry between the auditory influence on visual processing and the visual influence on auditory processing.

10.
Neuroimage ; 244: 118575, 2021 12 01.
Article in English | MEDLINE | ID: mdl-34517127

ABSTRACT

Recent functional MRI (fMRI) studies have highlighted differences in responses to natural sounds along the rostral-caudal axis of the human superior temporal gyrus. However, due to the indirect nature of the fMRI signal, it has been challenging to relate these fMRI observations to actual neuronal response properties. To bridge this gap, we present a forward model of the fMRI responses to natural sounds, combining a neuronal model of the auditory cortex with physiological modeling of the hemodynamic BOLD response. Neuronal responses are modeled with a dynamic recurrent firing rate model, reflecting the tonotopic, hierarchical processing in the auditory cortex along with the spectro-temporal tradeoff in the rostral-caudal axis of its belt areas. To link modeled neuronal response properties with human fMRI data in the auditory belt regions, we generated a space of neuronal models that differed parametrically in the spectral and temporal specificity of neuronal responses. We then obtained predictions of fMRI responses through a biophysical model of the hemodynamic BOLD response (P-DCM). Using Bayesian model comparison, our results showed that the hemodynamic BOLD responses of the caudal belt regions in the human auditory cortex were best explained by modeling faster temporal dynamics and broader spectral tuning of neuronal populations, while rostral belt regions were best explained through fine spectral tuning combined with slower temporal dynamics. These results support the hypothesis of complementary neural information processing along the rostral-caudal axis of the human superior temporal gyrus.
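
The forward-modeling idea (neuronal dynamics in, predicted BOLD out) can be illustrated with a drastically simplified stand-in: one recurrent firing-rate unit integrated with Euler steps, followed by a linear convolution with a gamma-shaped HRF. The actual study used a multi-population model and the nonlinear P-DCM hemodynamics; everything below is illustrative.

```python
import numpy as np

# Simplified stand-in for the paper's pipeline, not the P-DCM itself.
dt, T = 0.01, 30.0
t = np.arange(0, T, dt)
stim = ((t > 5) & (t < 10)).astype(float)   # hypothetical 5 s sound block

tau, w_rec = 0.1, 0.5                       # time constant, recurrent weight
r = np.zeros_like(t)
for i in range(1, len(t)):                  # Euler integration of dr/dt
    drdt = (-r[i - 1] + np.maximum(0.0, w_rec * r[i - 1] + stim[i - 1])) / tau
    r[i] = r[i - 1] + dt * drdt

hrf = (t ** 8.6) * np.exp(-t / 0.547)       # simple gamma-shaped HRF
hrf /= hrf.sum()
bold = np.convolve(r, hrf)[:len(t)]         # predicted BOLD time course
```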


Subjects
Auditory Cortex/physiology , Hemodynamics/physiology , Neurons/physiology , Bayes Theorem , Feedback, Physiological , Feedback, Psychological , Humans , Magnetic Resonance Imaging , Models, Neurological , Sensation , Sound , Temporal Lobe/physiology
11.
Neuroimage ; 238: 118145, 2021 09.
Article in English | MEDLINE | ID: mdl-33961999

ABSTRACT

Multi-Voxel Pattern Analysis (MVPA) is a well-established tool for disclosing weak, distributed effects in brain activity patterns. Generalization ability is assessed by testing the learning model on new, unseen data. However, when only limited data are available, decoding success is estimated using cross-validation. There is general consensus on assessing the statistical significance of cross-validated accuracy with non-parametric permutation tests. In this work, we focus on the false-positive control of different permutation strategies and on the statistical power of different cross-validation schemes. With simulations, we show that estimating the entire cross-validation error on each permuted dataset is the only statistically valid permutation strategy. Furthermore, using both simulations and real data from the HCP WU-Minn 3T fMRI dataset, we show that, among the different cross-validation schemes, a repeated split-half cross-validation is the most powerful, despite achieving slightly lower classification accuracy than other schemes. Our findings provide additional insights into the optimization of the experimental design for MVPA, highlighting the benefits of having many short runs.
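
The two recommendations above (re-run the whole cross-validation for every permutation; use repeated split-half folds) translate directly into code. A minimal sketch with scikit-learn on random stand-in data; the classifier choice, split counts, and permutation count are illustrative, not the paper's settings.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedShuffleSplit, cross_val_score

rng = np.random.default_rng(0)
X = rng.standard_normal((40, 100))          # stand-in: 40 trials x 100 voxels
y = np.repeat([0, 1], 20)                   # two balanced conditions

# Repeated split-half cross-validation (the scheme found most powerful).
cv = StratifiedShuffleSplit(n_splits=20, test_size=0.5, random_state=0)
clf = LogisticRegression(max_iter=1000)
observed = cross_val_score(clf, X, y, cv=cv).mean()

# Valid permutation strategy: recompute the *entire* cross-validated
# accuracy on each label-permuted dataset, then compare to the observed one.
n_perm = 100
null = np.array([cross_val_score(clf, X, rng.permutation(y), cv=cv).mean()
                 for _ in range(n_perm)])
p_value = (np.sum(null >= observed) + 1) / (n_perm + 1)
print(f"accuracy = {observed:.2f}, p = {p_value:.3f}")
```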


Subjects
Brain/diagnostic imaging , Functional Neuroimaging/methods , Image Processing, Computer-Assisted/methods , Computer Simulation , Humans , Magnetic Resonance Imaging , Research Design
12.
Neuroimage ; 228: 117670, 2021 03.
Article in English | MEDLINE | ID: mdl-33359352

ABSTRACT

Selective attention is essential for the processing of multi-speaker auditory scenes because they require the perceptual segregation of the relevant speech ("target") from irrelevant speech ("distractors"). For simple sounds, it has been suggested that the processing of multiple distractor sounds depends on bottom-up factors affecting task performance. However, it remains unclear whether such a dependency applies to naturalistic multi-speaker auditory scenes. In this study, we tested the hypothesis that increased perceptual demand (the processing requirement posed by the scene to separate the target speech) reduces the cortical processing of distractor speech, thus decreasing its perceptual segregation. Human participants were presented with auditory scenes including three speakers and asked to selectively attend to one speaker while their EEG was acquired. The perceptual demand of this selective listening task was varied by introducing an auditory cue (interaural time differences, ITDs) for segregating the target from the distractor speakers, while acoustic differences between the distractors were matched in ITD and loudness. We obtained a quantitative measure of the cortical segregation of distractor speakers by assessing the difference in how accurately speech-envelope-following EEG responses could be predicted by models of averaged distractor speech versus models of individual distractor speech. In agreement with our hypothesis, results show that interaural segregation cues led to improved behavioral word-recognition performance and stronger cortical segregation of the distractor speakers. The neural effect was strongest in the δ-band and at early delays (0-200 ms). Our results indicate that during low perceptual demand, the human cortex represents individual distractor speech signals as more segregated. This suggests that, in addition to purely acoustical properties, the cortical processing of distractor speakers depends on factors such as perceptual demand.
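
Predicting envelope-following EEG from a speech envelope is typically done with a temporal response function (TRF): ridge-regularized regression on time-lagged copies of the envelope. The sketch below is a single-channel, generic version with synthetic data; the lags, regularization weight, and sampling rate are illustrative, and the paper's averaged-versus-individual distractor comparison is not reproduced.

```python
import numpy as np

def lagged(env, max_lag):
    """Design matrix of time-lagged copies of the envelope (0..max_lag samples)."""
    X = np.zeros((len(env), max_lag + 1))
    for k in range(max_lag + 1):
        X[k:, k] = env[:len(env) - k]
    return X

def trf_predict(env, eeg, max_lag=40, lam=1e2):
    """Fit a ridge-regularized temporal response function and return the
    correlation between predicted and measured EEG (one channel)."""
    X = lagged(env, max_lag)
    w = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ eeg)
    return np.corrcoef(X @ w, eeg)[0, 1]

fs = 64                                   # hypothetical downsampled rate, Hz
env = np.abs(np.random.randn(fs * 60))    # stand-in 60 s speech envelope
eeg = np.convolve(env, np.hanning(20), "same") + np.random.randn(len(env))
print(trf_predict(env, eeg))              # higher = stronger cortical tracking
```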


Subjects
Attention/physiology , Cerebral Cortex/physiology , Speech Perception/physiology , Acoustic Stimulation/methods , Adult , Electroencephalography/methods , Female , Humans , Male , Noise , Signal Processing, Computer-Assisted , Young Adult
13.
J Cogn Neurosci ; 32(11): 2145-2158, 2020 11 01.
Article in English | MEDLINE | ID: mdl-32662723

ABSTRACT

When speech perception is difficult, one way listeners adjust is by reconfiguring phoneme category boundaries, drawing on contextual information. Both lexical knowledge and lipreading cues are used in this way, but it remains unknown whether these two differing forms of perceptual learning are similar at a neural level. This study compared phoneme boundary adjustments driven by lexical or audiovisual cues, using ultra-high-field 7-T fMRI. During imaging, participants heard exposure stimuli and test stimuli. Exposure stimuli for lexical retuning were audio recordings of words, and those for audiovisual recalibration were audio-video recordings of lip movements during utterances of pseudowords. Test stimuli were ambiguous phonetic strings presented without context, and listeners reported what phoneme they heard. Reports reflected phoneme biases in preceding exposure blocks (e.g., more reported /p/ after /p/-biased exposure). Analysis of corresponding brain responses indicated that both forms of cue use were associated with a network of activity across the temporal cortex, plus parietal, insular, and motor areas. Audiovisual recalibration also elicited significant occipital cortex activity despite the lack of visual stimuli. Activity levels in several ROIs also covaried with the strength of audiovisual recalibration, with greater activity accompanying larger recalibration shifts. Similar activation patterns appeared for lexical retuning, but here no significant ROIs were identified. Audiovisual and lexical forms of perceptual learning thus induce largely similar brain response patterns. However, audiovisual recalibration involves additional visual cortex contributions, suggesting that previously acquired visual information (on lip movements) is retrieved and deployed to disambiguate auditory perception.


Subjects
Phonetics , Speech Perception , Auditory Perception/physiology , Humans , Learning , Lipreading , Speech Perception/physiology
14.
PLoS One ; 15(6): e0234251, 2020.
Article in English | MEDLINE | ID: mdl-32502187

ABSTRACT

The regularity of acoustic rhythms allows prediction of a target embedded within a stream, thereby improving detection performance and reaction times in spectral detection tasks. In two experiments, we examined whether temporal regularity enhances perceptual sensitivity and reduces reaction times in a temporal shift detection task. Participants detected temporal shifts embedded at different positions within a sequence of quintet sounds. Narrowband quintets were centered around carrier frequencies of 200 Hz, 1100 Hz, or 3100 Hz and presented at rates between 1 and 8 Hz. We compared rhythmic sequences to control conditions in which periodicity was reduced or absent, and tested whether perceptual benefits depend on the presentation rate, the spectral content of the sounds, and task difficulty. We found that (1) the slowest rate (1 Hz) led to the largest behavioral effect on sensitivity. (2) This sensitivity improvement is carrier-dependent, such that the largest improvement is observed for low-frequency (200 Hz) carriers compared to 1100 Hz and 3100 Hz carriers. (3) Moreover, the predictive value of a temporal cue and that of a temporal rhythm similarly affect perceptual sensitivity. That is, both the cue and the rhythm induce confident temporal expectancies, in contrast to an aperiodic rhythm, and thereby allow listeners to effectively prepare and allocate attentional resources in time. (4) Lastly, periodic stimulation reduces reaction times compared to aperiodic stimulation, both at perceptual threshold and above threshold. Similarly, a temporal cue allowed participants to optimally prepare and thereby respond fastest. Overall, our results are consistent with the hypothesis that periodicity leads to optimized predictions and processing of forthcoming input, and thus to behavioral benefits. Predictable, temporally cued sounds provide a perceptual benefit similar to that of periodic rhythms, despite the additional uncertainty of target position within periodic sequences. Several neural mechanisms may underlie our findings, including the entrainment of oscillatory activity of neural populations.


Subjects
Cues , Sound , Acoustic Stimulation , Adult , Female , Humans , Male , Time Factors
15.
Psychon Bull Rev ; 27(4): 707-715, 2020 Aug.
Article in English | MEDLINE | ID: mdl-32319002

ABSTRACT

When listeners experience difficulty in understanding a speaker, lexical and audiovisual (or lipreading) information can be a helpful source of guidance. These two types of information embedded in speech can also guide perceptual adjustment, a process known as recalibration or perceptual retuning. With retuning or recalibration, listeners can use these contextual cues to temporarily or permanently reconfigure internal representations of phoneme categories and so adjust to and understand novel interlocutors more easily. These two types of perceptual learning, previously investigated largely separately, are highly similar in allowing listeners to use speech-external information to make phoneme boundary adjustments. This study explored whether the two sources may work in conjunction to induce adaptation, thus emulating real life, in which listeners are indeed likely to encounter both types of cue together. Listeners who received combined audiovisual and lexical cues showed perceptual learning effects similar to listeners who only received audiovisual cues, while listeners who received only lexical cues showed weaker effects compared with the two other groups. The combination of cues did not lead to additive retuning or recalibration effects, suggesting that lexical and audiovisual cues operate differently with regard to how listeners use them to reshape perceptual categories. Reaction times did not differ significantly across the three conditions, so none of the forms of adjustment were either aided or hindered by processing-time differences. Mechanisms underlying these forms of perceptual learning may diverge in numerous ways despite similarities in experimental applications.


Subjects
Adaptation, Psychological , Cues , Lipreading , Phonetics , Speech Perception , Visual Perception , Vocabulary , Adult , Comprehension , Female , Humans , Learning , Male , Reaction Time , Young Adult
16.
Atten Percept Psychophys ; 82(4): 2018-2026, 2020 May.
Article in English | MEDLINE | ID: mdl-31970708

ABSTRACT

To adapt to situations in which speech perception is difficult, listeners can adjust boundaries between phoneme categories through perceptual learning. Such adjustments can draw on lexical information in surrounding speech or on visual cues via speech-reading. In the present study, listeners proved able to flexibly adjust the boundary between two plosive (stop) consonants, /p/ and /t/, using both lexical and speech-reading information, within the same experimental design for both cue types. Videos of a speaker pronouncing pseudo-words and audio recordings of Dutch words were presented in alternating blocks of either stimulus type. Listeners were able to switch between cues to adjust phoneme boundaries, and the resulting effects were comparable to results from listeners receiving only a single source of information. Overall, audiovisual cues (i.e., the videos) produced the stronger effects, commensurate with their applicability for adapting to noisy environments. Lexical cues were able to induce effects with fewer exposure stimuli and a changing phoneme bias, in a design unlike most prior studies of lexical retuning. While lexical retuning effects were relatively weak compared to audiovisual recalibration, this discrepancy could reflect how lexical retuning may be more suitable for adapting to speakers than to environments. Nonetheless, the presence of the lexical retuning effects suggests that retuning may be invoked at a faster rate than previously seen. In general, this technique has further illuminated the robustness of adaptability in speech perception and offers the potential for further comparisons across differing forms of perceptual learning.


Subjects
Auditory Perception , Phonetics , Speech Perception , Humans , Language , Lipreading , Speech
17.
Neuroimage Clin ; 25: 102166, 2020.
Article in English | MEDLINE | ID: mdl-31958686

ABSTRACT

Tinnitus is a clinical condition defined by hearing a sound in the absence of an objective source. Early experiments in animal models have suggested that tinnitus stems from an alteration of processing in the auditory system. However, translating these results to humans has proven challenging. One limiting factor has been the insufficient spatial resolution of non-invasive measurement techniques for investigating responses in subcortical auditory nuclei, such as the inferior colliculus and the medial geniculate body (MGB). Here we employed ultra-high field functional magnetic resonance imaging (UHF-fMRI) at 7 Tesla to investigate frequency-specific processing in subcortical and cortical regions in a cohort of six tinnitus patients and six hearing-loss-matched controls. We used task-based fMRI to perform tonotopic mapping and compared the magnitude and tuning of frequency-specific responses between the two groups. Additionally, we used resting-state fMRI to investigate functional connectivity. Our results indicate frequency-unspecific reductions in the selectivity of frequency tuning that start at the level of the MGB and continue in the auditory cortex, as well as reduced thalamocortical and cortico-cortical connectivity in tinnitus. These findings suggest that tinnitus may be associated with reduced inhibition in the auditory pathway, potentially leading to increased neural noise and reduced functional connectivity. Moreover, these results indicate the relevance of high-spatial-resolution UHF-fMRI for investigating the role of subcortical auditory regions in tinnitus.
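
Tonotopic mapping as described above boils down to estimating, per voxel, which tone frequency drives the largest response, plus some index of how selective the tuning is. A schematic sketch on random stand-in data; the frequencies, response matrix, and selectivity index are all illustrative choices, not the study's.

```python
import numpy as np

# Toy best-frequency (tonotopic) map: for each voxel, take the tone
# frequency that evoked the largest response.
freqs_hz = np.array([250, 500, 1000, 2000, 4000, 8000])
responses = np.random.rand(5000, len(freqs_hz))      # voxels x frequencies

best_freq = freqs_hz[np.argmax(responses, axis=1)]   # best frequency per voxel

# A simple tuning-selectivity proxy: response range relative to the mean,
# one coarse way to compare frequency selectivity between groups.
tuning = (responses.max(1) - responses.min(1)) / responses.mean(1)
```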


Subjects
Auditory Cortex/physiopathology , Auditory Pathways/physiopathology , Cerebral Cortex/physiopathology , Connectome/methods , Nerve Net/physiopathology , Thalamus/physiopathology , Tinnitus/physiopathology , Adult , Auditory Cortex/diagnostic imaging , Auditory Pathways/diagnostic imaging , Cerebral Cortex/diagnostic imaging , Female , Humans , Magnetic Resonance Imaging , Male , Middle Aged , Nerve Net/diagnostic imaging , Thalamus/diagnostic imaging , Tinnitus/diagnostic imaging
18.
Cereb Cortex ; 30(3): 1103-1116, 2020 03 14.
Article in English | MEDLINE | ID: mdl-31504283

ABSTRACT

Auditory spatial tasks induce functional activation in the occipital (visual) cortex of early blind humans. Less is known about the effects of blindness on auditory spatial processing in the temporal (auditory) cortex. Here, we investigated spatial (azimuth) processing in congenitally and early blind humans with a phase-encoding functional magnetic resonance imaging (fMRI) paradigm. Our results show that functional activation in response to sounds in general, independent of sound location, was stronger in the occipital cortex but reduced in the medial temporal cortex of blind participants in comparison with sighted participants. Additionally, activation patterns for binaural spatial processing differed between sighted and blind participants in the planum temporale. Finally, fMRI responses in the auditory cortex of blind individuals carried less information on sound azimuth position than those in sighted individuals, as assessed with a two-channel opponent-coding model of the cortical representation of sound azimuth. These results indicate that early visual deprivation results in a reorganization of binaural spatial processing in the auditory cortex and that blind individuals may rely on alternative mechanisms for processing azimuth position.
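
The two-channel opponent-coding model mentioned above represents azimuth with two broadly tuned hemifield channels; position is carried by the difference between them. A generic sketch (the tuning shape and slope are illustrative assumptions, not the fitted model from the paper):

```python
import numpy as np

def channel_response(azimuth_deg, preferred_side, slope=0.05):
    """Broad sigmoidal hemifield tuning; preferred_side is +1 for a
    right-preferring channel and -1 for a left-preferring channel."""
    return 1.0 / (1.0 + np.exp(-slope * preferred_side * azimuth_deg))

azimuths = np.linspace(-90, 90, 181)       # degrees; negative = left hemifield
right_ch = channel_response(azimuths, +1)
left_ch = channel_response(azimuths, -1)

# Opponent code: the channel difference is monotonic in azimuth, so a
# measured difference can be inverted to read out the encoded position.
opponent = right_ch - left_ch
measured = 0.3                             # hypothetical observed difference
decoded_azimuth = np.interp(measured, opponent, azimuths)
print(f"decoded azimuth: {decoded_azimuth:.1f} deg")
```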


Subjects
Auditory Cortex/physiopathology , Blindness/physiopathology , Neuronal Plasticity , Sound Localization/physiology , Acoustic Stimulation , Adult , Blindness/congenital , Brain Mapping , Female , Humans , Magnetic Resonance Imaging , Male , Middle Aged , Occipital Lobe/physiology , Visually Impaired Persons
19.
Nat Hum Behav ; 3(10): 1125, 2019 Oct.
Article in English | MEDLINE | ID: mdl-31462763

ABSTRACT

An amendment to this paper has been published and can be accessed via a link at the top of the paper. The original and corrected figures are shown in the accompanying Publisher Correction.

20.
Nat Rev Neurosci ; 20(10): 609-623, 2019 10.
Article in English | MEDLINE | ID: mdl-31467450

ABSTRACT

Humans and other animals use spatial hearing to rapidly localize events in the environment. However, neural encoding of sound location is a complex process involving the computation and integration of multiple spatial cues that are not represented directly in the sensory organ (the cochlea). Our understanding of these mechanisms has increased enormously in the past few years. Current research is focused on the contribution of animal models to understanding human spatial audition, the effects of behavioural demands on neural sound location encoding, the emergence of a cue-independent location representation in the auditory cortex, and the relationship between single-source and concurrent location encoding in complex auditory scenes. Furthermore, computational modelling seeks to unravel how neural representations of sound source locations are derived from the complex binaural waveforms of real-life sounds. In this article, we review and integrate the latest insights from neurophysiological, neuroimaging and computational modelling studies of mammalian spatial hearing. We propose that the cortical representation of sound location emerges from recurrent processing taking place in a dynamic, adaptive network of early (primary) and higher-order (posterior-dorsal and dorsolateral prefrontal) auditory regions. This cortical network accommodates changing behavioural requirements and is especially relevant for processing the location of real-life, complex sounds and complex auditory scenes.


Subjects
Acoustic Stimulation/methods , Auditory Cortex/physiology , Auditory Pathways/physiology , Sound Localization/physiology , Animals , Auditory Cortex/diagnostic imaging , Auditory Pathways/diagnostic imaging , Hearing/physiology , Humans