Results 1 - 20 of 44
1.
Brain ; 138(Pt 9): 2750-65, 2015 Sep.
Article in English | MEDLINE | ID: mdl-26070981

ABSTRACT

Early deafness can reshape deprived auditory regions to enable the processing of signals from the remaining intact sensory modalities. Cross-modal activation has been observed in auditory regions during non-auditory tasks in early deaf subjects. In hearing subjects, visual working memory can evoke activation of the visual cortex, which further contributes to behavioural performance. In early deaf subjects, however, whether and how auditory regions participate in visual working memory remains unclear. We hypothesized that auditory regions may be involved in visual working memory processing and that activation of auditory regions may contribute to the superior behavioural performance of early deaf subjects. In this study, 41 early deaf subjects (22 females and 19 males, age range: 20-26 years, age of onset of deafness < 2 years) and 40 age- and gender-matched hearing controls underwent functional magnetic resonance imaging during a visuo-spatial delayed recognition task that consisted of encoding, maintenance and recognition stages. The early deaf subjects exhibited faster reaction times on the spatial working memory task than did the hearing controls. Compared with hearing controls, deaf subjects exhibited increased activation in the superior temporal gyrus bilaterally during the recognition stage. This increased activation amplitude predicted faster and more accurate working memory performance in deaf subjects. Deaf subjects also had increased activation in the superior temporal gyrus bilaterally during the maintenance stage and in the right superior temporal gyrus during the encoding stage. These increased activation amplitudes also predicted faster reaction times on the spatial working memory task in deaf subjects. These findings suggest that cross-modal plasticity occurs in auditory association areas in early deaf subjects and that these areas are involved in visuo-spatial working memory. Furthermore, amplitudes of cross-modal activation during the maintenance stage were positively correlated with the age of onset of hearing aid use and were negatively correlated with the percentage of lifetime hearing aid use in deaf subjects. These findings suggest that earlier and longer hearing aid use may inhibit cross-modal reorganization in early deaf subjects. Granger causality analysis revealed that, compared to the hearing controls, the deaf subjects had an enhanced net causal flow from the frontal eye field to the superior temporal gyrus. These findings indicate that a top-down mechanism may better account for the cross-modal activation of auditory regions in early deaf subjects. See MacSweeney and Cardin (doi:10.1093/brain/awv197) for a scientific commentary on this article.
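
The "net causal flow" mentioned above can be illustrated with a generic pairwise Granger causality test between two ROI time series. The following is a minimal sketch using statsmodels on synthetic data; the ROI names, lag order, and signal model are illustrative assumptions, not the study's actual pipeline.

```python
# Illustrative sketch (not the authors' pipeline): pairwise Granger causality
# between two ROI time series, with "net causal flow" taken here as the
# difference of the two directional F statistics.
import numpy as np
from statsmodels.tsa.stattools import grangercausalitytests

rng = np.random.default_rng(0)
n = 200                                   # number of fMRI volumes (assumed)
fef = rng.standard_normal(n)              # frontal eye field ROI signal (synthetic)
stg = np.zeros(n)                         # superior temporal gyrus ROI signal
for t in range(1, n):                     # build STG with a lagged FEF influence
    stg[t] = 0.5 * stg[t - 1] + 0.4 * fef[t - 1] + 0.3 * rng.standard_normal()

def gc_fstat(source, target, lag=1):
    """F statistic for 'source Granger-causes target' at the given lag."""
    data = np.column_stack([target, source])   # column 0: predicted, column 1: predictor
    res = grangercausalitytests(data, maxlag=lag)
    return res[lag][0]["ssr_ftest"][0]

f_fef_to_stg = gc_fstat(fef, stg)
f_stg_to_fef = gc_fstat(stg, fef)
print("net causal flow (FEF -> STG):", round(f_fef_to_stg - f_stg_to_fef, 2))
```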


Subjects
Auditory Pathways/pathology; Brain/pathology; Deafness/pathology; Deafness/physiopathology; Memory, Short-Term/physiology; Spatial Learning/physiology; Acoustic Stimulation; Adult; Auditory Pathways/blood supply; Auditory Perception/physiology; Brain/blood supply; Female; Humans; Image Processing, Computer-Assisted; Magnetic Resonance Imaging; Male; Oxygen/blood; Photic Stimulation; Statistics, Nonparametric; Visual Perception/physiology; Young Adult
2.
Neuropsychologia ; 70: 58-63, 2015 Apr.
Article in English | MEDLINE | ID: mdl-25676678

ABSTRACT

Several studies on action observation have shown that the biological dimension of movement modulates sensorimotor interactions in perception. In the present fMRI study, we tested the hypothesis that the biological dimension of sound modulates the involvement of the motor system in human auditory perception, using musical tasks. We first localized the vocal motor cortex in each participant. Then we compared the BOLD response to vocal, semi-vocal and non-vocal melody perception, and found greater activity for voice perception in the right sensorimotor cortex. We additionally ran a psychophysiological interaction analysis with the right sensorimotor cortex as a seed, showing that the vocal dimension of the stimuli enhanced the connectivity between the seed region and other important nodes of the auditory dorsal stream. Finally, the participants' vocal ability was negatively correlated with the voice effect in the inferior parietal lobule. These results suggest that the biological dimension of the singing voice impacts the activity within the auditory dorsal stream, probably via a facilitated matching between the perceived sound and the participants' motor representations.
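
The psychophysiological interaction (PPI) analysis mentioned above can be sketched as a regression with an interaction regressor built from the seed time series and the task regressor. This is a simplified, BOLD-level sketch on synthetic data (a full PPI typically deconvolves the seed to the neuronal level first); all signals, block lengths, and names are assumptions for illustration.

```python
# Simplified PPI sketch: does seed-target coupling change with condition?
import numpy as np

rng = np.random.default_rng(1)
n_vols = 180
seed = rng.standard_normal(n_vols)                   # right sensorimotor seed time series (synthetic)
task = np.tile(np.r_[np.ones(10), np.zeros(10)], 9)  # 1 = vocal blocks, 0 = non-vocal (assumed design)
task_c = task - task.mean()
seed_c = seed - seed.mean()
ppi = seed_c * task_c                                # interaction (PPI) regressor

# Synthetic target region whose coupling with the seed is stronger during vocal blocks
target = 0.2 * seed + 0.5 * ppi + rng.standard_normal(n_vols)

X = np.column_stack([np.ones(n_vols), seed_c, task_c, ppi])   # design matrix
beta, *_ = np.linalg.lstsq(X, target, rcond=None)
print("PPI (interaction) beta:", round(beta[3], 2))  # > 0 suggests condition-dependent coupling
```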


Subjects
Motor Cortex/physiology; Pitch Perception/physiology; Singing; Sound Localization/physiology; Voice; Acoustic Stimulation; Adolescent; Adult; Auditory Pathways/blood supply; Auditory Pathways/physiology; Female; Functional Laterality; Humans; Image Processing, Computer-Assisted; Magnetic Resonance Imaging; Male; Motor Cortex/blood supply; Oxygen; Regression Analysis; Young Adult
3.
J Neurosci ; 35(4): 1411-22, 2015 Jan 28.
Article in English | MEDLINE | ID: mdl-25632119

ABSTRACT

Models propose an auditory-motor mapping via a left-hemispheric dorsal speech-processing stream, yet its detailed contributions to speech perception and production are unclear. Using fMRI-navigated repetitive transcranial magnetic stimulation (rTMS), we virtually lesioned left dorsal stream components in healthy human subjects and probed the consequences on speech-related facilitation of articulatory motor cortex (M1) excitability, as indexed by increases in motor-evoked potential (MEP) amplitude of a lip muscle, and on speech processing performance in phonological tests. Speech-related MEP facilitation was disrupted by rTMS of the posterior superior temporal sulcus (pSTS), the sylvian parieto-temporal region (SPT), and by double-knock-out but not individual lesioning of the pars opercularis of the inferior frontal gyrus (pIFG) and the dorsal premotor cortex (dPMC), and not by rTMS of the ventral speech-processing stream or an occipital control site. rTMS of the dorsal stream, but not of the ventral stream or the occipital control site, caused deficits specifically in the processing of fast transients of the acoustic speech signal. Performance of syllable and pseudoword repetition correlated with speech-related MEP facilitation, and this relation was abolished with rTMS of pSTS, SPT, and pIFG. These findings provide direct evidence that auditory-motor mapping in the left dorsal stream causes reliable and specific speech-related MEP facilitation in left articulatory M1. The left dorsal stream targets the articulatory M1 through pSTS and SPT, which constitute essential posterior input regions, and in parallel via frontal pathways through pIFG and dPMC. Finally, engagement of the left dorsal stream is necessary for the processing of fast transients in the auditory signal.


Subjects
Auditory Pathways/physiology; Cerebral Cortex/physiology; Functional Laterality; Phonetics; Speech/physiology; Adult; Auditory Pathways/blood supply; Brain Mapping; Cerebral Cortex/blood supply; Evoked Potentials, Motor/physiology; Female; Humans; Image Processing, Computer-Assisted; Lip/innervation; Male; Models, Neurological; Muscle, Skeletal/physiology; Oxygen/blood; Photic Stimulation; Reaction Time; Speech Perception; Young Adult
4.
Neurosci Biobehav Rev ; 37(10 Pt 2): 2847-55, 2013 Dec.
Article in English | MEDLINE | ID: mdl-24161466

ABSTRACT

A current view proposes that the right inferior frontal cortex (IFC) is particularly responsible for attentive decoding and cognitive evaluation of emotional cues in human vocalizations. Although some studies seem to support this view, an exhaustive review of all recent imaging studies points to an important functional role of both the right and the left IFC in processing vocal emotions. Second, besides the supposed predominant role of the IFC in attentive processing and evaluation of emotional voices, these recent studies also point to a possible role of the IFC in preattentive and implicit processing of vocal emotions. The studies specifically provide evidence that both the right and the left IFC show a similar anterior-to-posterior gradient of functional activity in response to emotional vocalizations. This bilateral IFC gradient depends both on the nature or medium of emotional vocalizations (emotional prosody versus nonverbal expressions) and on the level of attentive processing (explicit versus implicit processing), closely resembling the distribution of terminal regions of distinct auditory pathways, which provide either global or dynamic acoustic information. Here we suggest a functional distribution in which several IFC subregions process different acoustic information conveyed by emotional vocalizations. Whereas the rostro-ventral IFC might categorize emotional vocalizations, the caudo-dorsal IFC might be specifically sensitive to their temporal features.


Subjects
Auditory Perception/physiology; Emotions/physiology; Frontal Lobe/physiology; Voice/physiology; Acoustic Stimulation; Animals; Auditory Pathways/blood supply; Auditory Pathways/physiology; Frontal Lobe/blood supply; Humans; Image Processing, Computer-Assisted; Magnetic Resonance Imaging; Oxygen/blood
5.
Schizophr Res ; 146(1-3): 314-9, 2013 May.
Article in English | MEDLINE | ID: mdl-23453584

ABSTRACT

INTRODUCTION: Verbal auditory hallucinations (VAHs) are experienced as spoken voices which seem to originate in the extracorporeal environment or inside the head. Animal and human research has identified a 'where' pathway for sound processing comprising the planum temporale, the middle frontal gyrus and the inferior parietal lobule. We hypothesize that increased activity of that 'where' pathway mediates the exteriorization of VAHs. METHODS: The fMRI scans of 52 right-handed psychotic patients experiencing frequent VAHs were compared with the reported location of hallucinations, as rated with the aid of the PSYRATS-AHRS. For each subject, a unique VAH activation model was created based on the VAH timings, and subsequently convolved with a gamma function to model the hemodynamic response. In order to examine the neurofunctional equivalents of perceived VAH location, second-level group effects of subjects experiencing either internal (n = 24) or external (n = 28) VAHs were contrasted within planum temporale, middle frontal gyrus, and inferior parietal lobule regions of interest (ROIs). RESULTS: Three ROIs were tested for increased activity in relation with the exteriorization of VAHs. The analysis revealed a left-sided medial planum temporale and a right-sided middle frontal gyrus cluster of increased activity. No significant activity was found in the inferior parietal lobule. CONCLUSIONS: Our study indicates that internal and external VAHs are mediated by a fronto-temporal pattern of neuronal activity while the exteriorization of VAHs stems from additional brain activity in the auditory 'where' pathway, comprising the planum temporale and prefrontal regions.


Subjects
Auditory Pathways/physiopathology; Auditory Perception/physiology; Frontal Lobe/physiopathology; Hallucinations/pathology; Acoustic Stimulation; Adult; Auditory Pathways/blood supply; Brain Mapping; Female; Frontal Lobe/blood supply; Functional Laterality/physiology; Hallucinations/etiology; Hallucinations/psychology; Humans; Image Processing, Computer-Assisted; Magnetic Resonance Imaging; Male; Middle Aged; Oxygen/blood; Psychotic Disorders/complications; Voice
6.
J Neurosci ; 33(10): 4339-48, 2013 Mar 06.
Article in English | MEDLINE | ID: mdl-23467350

ABSTRACT

The everyday act of speaking involves the complex processes of speech motor control. An important component of control is monitoring, detection, and processing of errors when auditory feedback does not correspond to the intended motor gesture. Here we show, using fMRI and converging operations within a multivoxel pattern analysis framework, that this sensorimotor process is supported by functionally differentiated brain networks. During scanning, a real-time speech-tracking system was used to deliver two acoustically different types of distorted auditory feedback or unaltered feedback while human participants were vocalizing monosyllabic words, and to present the same auditory stimuli while participants were passively listening. Whole-brain analysis of neural-pattern similarity revealed three functional networks that were differentially sensitive to distorted auditory feedback during vocalization, compared with during passive listening. One network of regions appears to encode an "error signal" regardless of acoustic features of the error: this network, including right angular gyrus, right supplementary motor area, and bilateral cerebellum, yielded consistent neural patterns across acoustically different, distorted feedback types, only during articulation (not during passive listening). In contrast, a frontotemporal network appears sensitive to the speech features of auditory stimuli during passive listening; this preference for speech features was diminished when the same stimuli were presented as auditory concomitants of vocalization. A third network, showing a distinct functional pattern from the other two, appears to capture aspects of both neural response profiles. Together, our findings suggest that auditory feedback processing during speech motor control may rely on multiple, interactive, functionally differentiated neural systems.
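
The pattern-similarity logic described above can be conveyed with a toy example: correlate the voxel-wise response patterns evoked by two acoustically different feedback distortions, separately for speaking and passive listening. Region size, the shared "error" pattern, and noise levels below are fabricated for illustration only.

```python
# Toy sketch of neural pattern similarity across feedback-distortion types.
import numpy as np

rng = np.random.default_rng(2)
n_voxels = 300
error_signal = rng.standard_normal(n_voxels)     # hypothetical acoustic-independent error pattern

# During speaking, both distortion types evoke the shared error pattern plus noise;
# during passive listening, responses are unrelated noise.
speak_distA = error_signal + 0.8 * rng.standard_normal(n_voxels)
speak_distB = error_signal + 0.8 * rng.standard_normal(n_voxels)
listen_distA = rng.standard_normal(n_voxels)
listen_distB = rng.standard_normal(n_voxels)

def pattern_similarity(p, q):
    """Pearson correlation between two voxel patterns."""
    return np.corrcoef(p, q)[0, 1]

print("speaking, A vs B :", round(pattern_similarity(speak_distA, speak_distB), 2))
print("listening, A vs B:", round(pattern_similarity(listen_distA, listen_distB), 2))
```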


Subjects
Auditory Pathways/physiology; Auditory Perception/physiology; Brain Mapping; Brain/physiology; Feedback, Sensory/physiology; Speech/physiology; Acoustic Stimulation; Adult; Auditory Pathways/blood supply; Brain/blood supply; Female; Humans; Image Processing, Computer-Assisted; Magnetic Resonance Imaging; Male; Oxygen/blood; Reaction Time/physiology; Young Adult
7.
J Physiol Paris ; 107(3): 156-69, 2013 Jun.
Article in English | MEDLINE | ID: mdl-22960664

ABSTRACT

Songbirds provide an excellent model system exhibiting vocal learning associated with an extreme brain plasticity linked to quantifiable behavioral changes. This animal model has thus far been intensively studied using electrophysiological, histological and molecular mapping techniques. However, these approaches do not provide a global view of the brain and/or do not allow repeated measures, which are necessary to establish correlations between alterations in neural substrate and behavior. In contrast, functional Magnetic Resonance Imaging (fMRI) is a non-invasive in vivo technique which allows one (i) to study brain function in the same subject over time, and (ii) to address the entire brain at once. During the last decades, fMRI has become one of the most popular neuroimaging techniques in cognitive neuroscience for the study of brain activity during various tasks ranging from simple sensory-motor to highly cognitive tasks. By alternating various stimulation periods with resting periods during scanning, resting and task-specific regional brain activity can be determined with this technique. Despite its obvious benefits, fMRI has, until now, only been sparsely used to study cognition in non-human species such as songbirds. The Bio-Imaging Lab (University of Antwerp, Belgium) was the first to implement Blood Oxygen Level Dependent (BOLD) fMRI in songbirds - and in particular zebra finches - for the visualization of sound perception and processing in auditory and song control brain regions. The present article provides an overview of the establishment and optimization of this technique in our laboratory and of the resulting scientific findings. The introduction of fMRI in songbirds has opened new research avenues that permit experimental analysis of complex sensorimotor and cognitive processes underlying vocal communication in this animal model.


Subjects
Auditory Pathways/blood supply; Brain/blood supply; Brain/physiology; Magnetic Resonance Imaging; Songbirds/physiology; Acoustic Stimulation; Animals; Brain Mapping; Humans; Image Processing, Computer-Assisted; Magnetic Resonance Imaging/instrumentation; Magnetic Resonance Imaging/methods; Oxygen/blood
8.
J Cogn Neurosci ; 25(5): 730-42, 2013 May.
Article in English | MEDLINE | ID: mdl-23249352

ABSTRACT

Psychophysical experiments show that auditory change detection can be disturbed in situations in which listeners have to monitor complex auditory input. We made use of this change deafness effect to segregate the neural correlates of physical change in auditory input from brain responses related to conscious change perception in an fMRI experiment. Participants listened to two successively presented complex auditory scenes, which consisted of six auditory streams, and had to decide whether scenes were identical or whether the frequency of one stream was changed between presentations. Our results show that physical changes in auditory input, independent of successful change detection, are represented at the level of auditory cortex. Activations related to conscious change perception, independent of physical change, were found in the insula and the ACC. Moreover, our data provide evidence for significant effective connectivity between auditory cortex and the insula in the case of correctly detected auditory changes, but not for missed changes. This underlines the importance of the insula/anterior cingulate network for conscious change detection.


Subjects
Auditory Cortex/physiology; Brain Mapping; Pitch Perception/physiology; Sound Localization/physiology; Acoustic Stimulation; Adult; Analysis of Variance; Auditory Cortex/blood supply; Auditory Pathways/blood supply; Auditory Pathways/physiology; Female; Functional Laterality; Humans; Image Processing, Computer-Assisted; Magnetic Resonance Imaging; Male; Oxygen/blood; Psychophysics; Reaction Time/physiology; Time Factors; Young Adult
9.
J Neurosci ; 32(12): 4260-70, 2012 Mar 21.
Article in English | MEDLINE | ID: mdl-22442088

ABSTRACT

We compared brain structure and function in two subgroups of 21 stroke patients with either moderate or severe chronic speech comprehension impairment. Both groups had damage to the supratemporal plane; however, the severe group suffered greater damage to two unimodal auditory areas: primary auditory cortex and the planum temporale. The effects of this damage were investigated using fMRI while patients listened to speech and speech-like sounds. Pronounced changes in connectivity were found in both groups in undamaged parts of the auditory hierarchy. Compared to controls, moderate patients had significantly stronger feedback connections from planum temporale to primary auditory cortex bilaterally, while in severe patients this connection was significantly weaker in the undamaged right hemisphere. This suggests that predictive feedback mechanisms compensate in moderately affected patients but not in severely affected patients. The key pathomechanism in humans with persistent speech comprehension impairments may be impaired feedback connectivity to unimodal auditory areas.


Subjects
Auditory Cortex; Brain Mapping; Speech Disorders/etiology; Speech Disorders/pathology; Speech Perception/physiology; Stroke/complications; Acoustic Stimulation/methods; Adult; Aged; Aged, 80 and over; Auditory Cortex/blood supply; Auditory Cortex/pathology; Auditory Cortex/physiopathology; Auditory Pathways/blood supply; Auditory Pathways/pathology; Auditory Pathways/physiopathology; Comprehension; Female; Humans; Image Processing, Computer-Assisted; Magnetic Resonance Imaging; Male; Middle Aged; Models, Statistical; Nonlinear Dynamics; Oxygen/blood
10.
Cereb Cortex ; 22(1): 191-200, 2012 Jan.
Article in English | MEDLINE | ID: mdl-21625012

ABSTRACT

We determined the location, functional response profile, and structural fiber connections of auditory areas with voice- and emotion-sensitive activity using functional magnetic resonance imaging (fMRI) and diffusion tensor imaging. Bilateral regions responding to emotional voices were consistently found in the superior temporal gyrus, posterolateral to the primary auditory cortex. Event-related fMRI showed stronger responses in these areas to voices expressing anger, sadness, joy, and relief, relative to voices with neutral prosody. Their neural responses were primarily driven by prosodic arousal, irrespective of valence. Probabilistic fiber tracking revealed direct structural connections of these "emotional voice areas" (EVA) with the ipsilateral medial geniculate body, which is the major input source of early auditory cortex, as well as with the ipsilateral inferior frontal gyrus (IFG) and inferior parietal lobe (IPL). In addition, vocal emotions (compared with neutral prosody) increased the functional coupling of EVA with the ipsilateral IFG but not the IPL. These results provide new insights into the neural architecture of the human voice processing system and support a crucial involvement of the IFG in the recognition of vocal emotions, whereas the IPL may subserve distinct auditory spatial functions, consistent with distinct anatomical substrates for the processing of "how" and "where" information within the auditory pathways.


Subjects
Auditory Pathways/blood supply; Brain Mapping; Brain/blood supply; Brain/physiology; Emotions/physiology; Voice/physiology; Acoustic Stimulation; Adult; Analysis of Variance; Arousal; Auditory Pathways/physiology; Auditory Perception/physiology; Diffusion Tensor Imaging; Female; Humans; Image Processing, Computer-Assisted; Magnetic Resonance Imaging; Male; Nerve Fibers/physiology; Oxygen/blood; Young Adult
11.
Cereb Cortex ; 22(4): 745-53, 2012 Apr.
Article in English | MEDLINE | ID: mdl-21709174

ABSTRACT

Human neuroimaging studies have identified a region of auditory cortex, lateral Heschl's gyrus (HG), that shows a greater response to iterated ripple noise (IRN) than to a Gaussian noise control. Based in part on results using IRN as a pitch-evoking stimulus, it has been argued that lateral HG is a general "pitch center." However, IRN contains slowly varying spectrotemporal modulations, unrelated to pitch, that are not found in the control stimulus. Hence, it is possible that the cortical response to IRN is driven in part by these modulations. The current study reports the first attempt to control for these modulations. This was achieved using a novel type of stimulus that was generated by processing IRN to remove the fine temporal structure (and thus the pitch) but leave the slowly varying modulations. This "no-pitch IRN" stimulus is referred to as IRNo. Results showed a widespread response to the spectrotemporal modulations across auditory cortex. When IRN was contrasted with IRNo rather than with Gaussian noise, the apparent effect of pitch was no longer statistically significant. Our findings raise the possibility that a cortical response unrelated to pitch could previously have been erroneously attributed to pitch coding.
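
For readers unfamiliar with the stimulus, standard IRN can be generated by a simple delay-and-add cascade applied to Gaussian noise, which produces a pitch near the reciprocal of the delay. The sketch below uses illustrative parameter values; it does not reproduce the paper's "IRNo" control (fine structure removed, slow spectrotemporal modulations preserved).

```python
# Sketch of iterated ripple noise (IRN) generation via delay-and-add iterations.
import numpy as np

fs = 44100                     # sample rate in Hz (assumed)
dur = 1.0                      # stimulus duration in seconds
delay_s = 1.0 / 125.0          # 8 ms delay -> pitch near 125 Hz
gain = 1.0                     # gain applied to the delayed copy
n_iter = 16                    # number of delay-and-add iterations

rng = np.random.default_rng(3)
x = rng.standard_normal(int(fs * dur))      # Gaussian noise carrier
d = int(round(delay_s * fs))                # delay in samples

irn = x.copy()
for _ in range(n_iter):                     # each pass adds a delayed copy of the signal
    delayed = np.zeros_like(irn)
    delayed[d:] = irn[:-d]
    irn = irn + gain * delayed

irn /= np.max(np.abs(irn))                  # normalise to avoid clipping
print("samples:", irn.size, "expected pitch ~", round(1.0 / delay_s), "Hz")
```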


Subjects
Auditory Cortex/blood supply; Auditory Cortex/physiology; Brain Mapping; Discrimination, Psychological; Noise; Pitch Perception/physiology; Acoustic Stimulation; Adult; Analysis of Variance; Auditory Pathways/blood supply; Auditory Pathways/physiology; Female; Functional Laterality; Humans; Image Processing, Computer-Assisted; Magnetic Resonance Imaging; Male; Middle Aged; Normal Distribution; Oxygen; Psychoacoustics; Young Adult
12.
Cereb Cortex ; 22(4): 838-53, 2012 Apr.
Article in English | MEDLINE | ID: mdl-21709178

ABSTRACT

Lesion studies in monkeys have suggested a modest left hemisphere dominance for processing species-specific vocalizations, the neural basis of which has thus far remained unclear. We used contrast agent-enhanced functional magnetic resonance imaging to map the regions of the rhesus monkey brain involved in processing conspecific vocalizations as well as human speech and emotional sounds. Control conditions included scrambled versions of all 3 stimuli and silence. Compared with silence, all stimuli activated widespread parts of the auditory cortex and subcortical auditory structures with a right hemispheric bias at the level of the auditory core. However, comparing intact with scrambled sounds revealed a leftward bias in the auditory belt and the parabelt. The left-sided dominance was stronger and more robust for human speech than for rhesus vocalizations and hence does not reflect conspecific call selectivity but rather the processing of complex spectrotemporal patterns, such as those present in human speech and in some of the rhesus monkey vocalizations. This was confirmed by regressing brain activity with a model-derived parameter indexing the prevalence of such patterns. Our results indicate that processing of vocal sounds in the lateral belt and parabelt is asymmetric in monkeys, as predicted from lesion studies.


Subjects
Brain Mapping; Brain/blood supply; Brain/physiology; Functional Laterality/physiology; Vocalization, Animal/physiology; Wakefulness; Acoustic Stimulation/methods; Analysis of Variance; Animals; Auditory Pathways/blood supply; Auditory Pathways/physiology; Auditory Perception; Eye Movements; Factor Analysis, Statistical; Female; Humans; Image Processing, Computer-Assisted; Macaca mulatta; Magnetic Resonance Imaging; Male; Oxygen/blood; Psychoacoustics; Sound; Sound Spectrography; Suicide, Attempted
13.
J Neurosci ; 31(1): 164-71, 2011 Jan 05.
Article in English | MEDLINE | ID: mdl-21209201

ABSTRACT

Auditory figure-ground segregation, listeners' ability to selectively hear out a sound of interest from a background of competing sounds, is a fundamental aspect of scene analysis. In contrast to the disordered acoustic environment we experience during everyday listening, most studies of auditory segregation have used relatively simple, temporally regular signals. We developed a new figure-ground stimulus that incorporates stochastic variation of the figure and background that captures the rich spectrotemporal complexity of natural acoustic scenes. Figure and background signals overlap in spectrotemporal space, but vary in the statistics of fluctuation, such that the only way to extract the figure is by integrating the patterns over time and frequency. Our behavioral results demonstrate that human listeners are remarkably sensitive to the appearance of such figures. In a functional magnetic resonance imaging experiment, aimed at investigating preattentive, stimulus-driven, auditory segregation mechanisms, naive subjects listened to these stimuli while performing an irrelevant task. Results demonstrate significant activations in the intraparietal sulcus (IPS) and the superior temporal sulcus related to bottom-up, stimulus-driven figure-ground decomposition. We did not observe any significant activation in the primary auditory cortex. Our results support a role for automatic, bottom-up mechanisms in the IPS in mediating stimulus-driven, auditory figure-ground segregation, which is consistent with accumulating evidence implicating the IPS in structuring sensory input and perceptual organization.
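
A stochastic figure-ground stimulus of this general type can be sketched as a sequence of random multi-tone chords (background) in which a fixed set of frequency components repeats across consecutive chords (figure), so the figure only emerges by integrating over time and frequency. All parameter values below are illustrative assumptions, not those of the study.

```python
# Rough sketch of a stochastic figure-ground chord sequence.
import numpy as np

fs = 22050                      # sample rate in Hz (assumed)
chord_dur = 0.05                # 50 ms chords
n_chords = 40
n_bg_tones = 10                 # random background tones per chord
n_fig_tones = 4                 # coherent figure components
fig_span = (15, 30)             # chords during which the figure is present

rng = np.random.default_rng(4)
freq_pool = np.logspace(np.log10(200), np.log10(7000), 120)   # candidate frequencies
fig_freqs = rng.choice(freq_pool, n_fig_tones, replace=False) # fixed figure frequencies

t = np.arange(int(fs * chord_dur)) / fs
stimulus = []
for c in range(n_chords):
    freqs = list(rng.choice(freq_pool, n_bg_tones, replace=False))  # fresh random background
    if fig_span[0] <= c < fig_span[1]:
        freqs += list(fig_freqs)                                     # add the repeating figure
    chord = sum(np.sin(2 * np.pi * f * t) for f in freqs)
    stimulus.append(chord / len(freqs))

stimulus = np.concatenate(stimulus)
print("duration (s):", round(stimulus.size / fs, 2))
```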


Subjects
Auditory Perception/physiology; Brain Mapping; Brain/physiology; Signal Detection, Psychological/physiology; Acoustic Stimulation/methods; Adult; Auditory Pathways/blood supply; Brain/blood supply; Female; Humans; Image Processing, Computer-Assisted; Magnetic Resonance Imaging/methods; Male; Oxygen/blood; Psychoacoustics
14.
J Neurosci ; 30(22): 7604-12, 2010 Jun 02.
Article in English | MEDLINE | ID: mdl-20519535

ABSTRACT

How the brain processes complex sounds, like voices or musical instrument sounds, is currently not well understood. The features comprising the acoustic profiles of such sounds are thought to be represented by neurons responding to increasing degrees of complexity throughout auditory cortex, with complete auditory "objects" encoded by neurons (or small networks of neurons) in anterior superior temporal regions. Although specialized voice and speech-sound regions have been proposed, it is unclear how other types of complex natural sounds are processed within this object-processing pathway. Using functional magnetic resonance imaging, we sought to demonstrate spatially distinct patterns of category-selective activity in human auditory cortex, independent of semantic content and low-level acoustic features. Category-selective responses were identified in anterior superior temporal regions, consisting of clusters selective for musical instrument sounds and for human speech. An additional subregion was identified that was particularly selective for the acoustic-phonetic content of speech. In contrast, regions along the superior temporal plane closer to primary auditory cortex were not selective for stimulus category, responding instead to specific acoustic features embedded in natural sounds, such as spectral structure and temporal modulation. Our results support a hierarchical organization of the anteroventral auditory-processing stream, with the most anterior regions representing the complete acoustic signature of auditory objects.


Subjects
Auditory Cortex/physiology; Auditory Perception/physiology; Brain Mapping; Psychoacoustics; Sound; Acoustic Stimulation/methods; Adult; Auditory Cortex/blood supply; Auditory Pathways/blood supply; Auditory Pathways/physiology; Female; Humans; Image Processing, Computer-Assisted/methods; Magnetic Resonance Imaging/methods; Male; Oxygen/blood; Spectrum Analysis/methods; Young Adult
15.
Neuroimage ; 50(3): 1099-108, 2010 Apr 15.
Article in English | MEDLINE | ID: mdl-20053384

ABSTRACT

Non-human-primate fMRI is becoming increasingly recognised as the missing link between the widely applied methods of human imaging and intracortical animal electrophysiology. A crucial requirement for the optimal application of this method is precise knowledge of the time course of the Blood Oxygenation Level Dependent (BOLD) signal. We mapped the BOLD signal time course in the inferior colliculus (IC), medial geniculate body (MGB) and in tonotopically defined fields in the auditory cortex of two macaques. The results show little difference in the BOLD-signal time courses within the auditory pathway. However, we observed systematic differences in the magnitude of the change in the BOLD signal, with significantly stronger signal changes in field A1 of the auditory cortex compared to field R. The measured time course of the signal was in good agreement with similar studies in human auditory cortex but showed considerable differences from data reported from macaque visual cortex. Consistent with the studies in humans, we measured a peak in the BOLD response around 4 s after the onset of 2-s broadband noise stimuli, while previous studies recording from the primary visual cortex of the same species reported the earliest peaks to short visual stimuli several seconds later. The comparison of our results with previous studies does not support differences in haemodynamic responses within the auditory system between human and non-human primates. Furthermore, the data will aid the optimal design of future auditory fMRI studies in non-human primates.
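
The peak-latency measure discussed above can be illustrated by convolving a 2-s stimulus boxcar with a hemodynamic response function and locating the peak of the predicted response. The sketch below uses an SPM-style double-gamma HRF purely for illustration, so the resulting latency need not match the macaque value reported in the abstract.

```python
# Worked example: predicted BOLD peak latency for a 2-s stimulus.
import numpy as np
from scipy.stats import gamma

dt = 0.1                                     # time resolution in seconds
t = np.arange(0, 30, dt)

# Canonical double-gamma HRF (SPM-like parameters, assumed)
hrf = gamma.pdf(t, 6) - gamma.pdf(t, 16) / 6.0
hrf /= hrf.sum()

stim = np.zeros_like(t)
stim[t < 2.0] = 1.0                          # 2-s broadband noise burst starting at t = 0

bold = np.convolve(stim, hrf)[:t.size]       # predicted BOLD time course
peak_latency = t[np.argmax(bold)]
print("peak latency after stimulus onset: %.1f s" % peak_latency)
```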


Subjects
Auditory Cortex/physiology; Auditory Perception/physiology; Geniculate Bodies/physiology; Inferior Colliculi/physiology; Magnetic Resonance Imaging/methods; Oxygen/blood; Acoustic Stimulation; Animals; Auditory Cortex/blood supply; Auditory Pathways/blood supply; Auditory Pathways/physiology; Geniculate Bodies/blood supply; Inferior Colliculi/blood supply; Macaca; Male; Noise; Time Factors
16.
Neuropsychologia ; 48(2): 601-6, 2010 Jan.
Article in English | MEDLINE | ID: mdl-19883670

ABSTRACT

Neuroimaging studies show that permanent peripheral lesions such as unilateral deafness cause functional reorganization in the auditory pathways. However, functional reorganization of the auditory pathways as a result of higher-level damage or abnormalities remains poorly investigated. A relatively recent behavioural study points to functional changes in the auditory pathways in some, but interestingly not all, of the acallosal individuals that were tested. The present study uses fMRI to examine auditory activity in both cerebral hemispheres in those same acallosal subjects, in order to directly investigate the contributions of ipsilateral and contralateral functional pathway reorganization. We predicted that the presence of functional reorganization could be inferred from behavioural performance. As reported previously in a number of neuroimaging studies, results showed that in neurologically intact subjects, binaural stimulation induced balanced activity across the two hemispheres, while monaural stimulation induced strong contralateral and weak ipsilateral activity. In accordance with the behavioural predictions, some acallosal subjects showed patterns of auditory cortical activity that were similar to those observed in neurologically intact subjects, while others showed functional reorganization of the auditory pathways. Essentially, the latter showed a significant increase in neural activity in the contralateral pathways and/or a significant decrease in the ipsilateral pathways. These findings indicate that, at least in some acallosal subjects, functional reorganization within the auditory pathways does help to compensate for the absence of the corpus callosum.


Subjects
Agenesis of Corpus Callosum; Corpus Callosum/physiopathology; Functional Laterality/physiology; Sound Localization/physiology; Temporal Lobe/pathology; Acoustic Stimulation/methods; Adult; Auditory Pathways/abnormalities; Auditory Pathways/blood supply; Auditory Pathways/pathology; Brain Mapping; Case-Control Studies; Corpus Callosum/blood supply; Female; Humans; Image Processing, Computer-Assisted/methods; Magnetic Resonance Imaging/methods; Male; Middle Aged; Oxygen/blood; Temporal Lobe/physiopathology
17.
Cereb Cortex ; 20(3): 583-90, 2010 Mar.
Article in English | MEDLINE | ID: mdl-19574393

ABSTRACT

In realistic auditory environments, people rely on both attentional control and attentional selection to extract intelligible signals from a cluttered background. We used functional magnetic resonance imaging to examine auditory attention to natural speech under such high processing-load conditions. Participants attended to a single talker in a group of 3, identified by the target talker's pitch or spatial location. A catch-trial design allowed us to distinguish activity due to top-down control of attention versus attentional selection of bottom-up information in both the spatial and spectral (pitch) feature domains. For attentional control, we found a left-dominant fronto-parietal network with a bias toward spatial processing in dorsal precentral sulcus and superior parietal lobule, and a bias toward pitch in inferior frontal gyrus. During selection of the talker, attention modulated activity in left intraparietal sulcus when using talker location and in bilateral but right-dominant superior temporal sulcus when using talker pitch. We argue that these networks represent the sources and targets of selective attention in rich auditory environments.


Subjects
Attention/physiology; Auditory Cortex/physiology; Auditory Perception/physiology; Perceptual Masking/physiology; Acoustic Stimulation/methods; Auditory Cortex/blood supply; Auditory Pathways/blood supply; Auditory Pathways/physiology; Cues; Functional Laterality/physiology; Humans; Image Processing, Computer-Assisted; Magnetic Resonance Imaging/methods; Oxygen/blood; Space Perception/physiology; Time Factors; Young Adult
18.
J Neurosci ; 29(42): 13410-7, 2009 Oct 21.
Article in English | MEDLINE | ID: mdl-19846728

ABSTRACT

Recent studies have shown that ongoing activity fluctuations influence trial-by-trial perception of identical stimuli. Some brain systems seem to bias toward better perceptual performance and others toward worse. We tested whether these observations generalize to another, as yet unassessed, sensory modality (audition) and to a nonspatial but memory-dependent paradigm. In a sparse event-related functional magnetic resonance imaging design, we investigated detection of auditory near-threshold stimuli as a function of prestimulus baseline activity in early auditory cortex as well as in several distributed networks that were defined on the basis of resting state functional connectivity. In accord with previous studies, hits were associated with higher prestimulus activity in the related early sensory cortex as well as in a system comprising anterior insula, anterior cingulate, and thalamus, which other studies have related to processing salience and maintaining task set. In contrast to previous studies, however, higher prestimulus activity in the so-called dorsal attention system of frontal and parietal cortex biased toward misses, whereas higher activity in the so-called default mode network that includes posterior cingulate and precuneus biased toward hits. These results contradict a simple dichotomous view of the function of these two latter brain systems, in which higher ongoing activity in the dorsal attention network would facilitate perceptual performance and higher activity in the default mode network would impair it. Instead, we show that the way in which ongoing activity fluctuations impact perception depends on the specific sensory (i.e., nonspatial) and cognitive (i.e., mnemonic) context that is relevant.


Subjects
Auditory Cortex/physiology; Auditory Perception/physiology; Auditory Threshold/physiology; Brain Mapping; Signal Detection, Psychological/physiology; Acoustic Stimulation; Adult; Auditory Cortex/blood supply; Auditory Pathways/blood supply; Auditory Pathways/physiology; Female; Humans; Image Processing, Computer-Assisted; Magnetic Resonance Imaging; Male; Oxygen/blood; Psychoacoustics; Reaction Time/physiology; Time Factors; Young Adult
19.
J Neurosci ; 29(8): 2477-85, 2009 Feb 25.
Article in English | MEDLINE | ID: mdl-19244522

ABSTRACT

Music consists of sound sequences that require integration over time. As we become familiar with music, associations between notes, melodies, and entire symphonic movements become stronger and more complex. These associations can become so tight that, for example, hearing the end of one album track can elicit a robust image of the upcoming track while anticipating it in total silence. Here, we study this predictive "anticipatory imagery" at various stages throughout learning and investigate activity changes in corresponding neural structures using functional magnetic resonance imaging. Anticipatory imagery (in silence) for highly familiar naturalistic music was accompanied by pronounced activity in rostral prefrontal cortex (PFC) and premotor areas. Examining changes in the neural bases of anticipatory imagery during two stages of learning conditional associations between simple melodies, however, demonstrates the importance of fronto-striatal connections, consistent with a role of the basal ganglia in "training" frontal cortex (Pasupathy and Miller, 2005). Another striking change in neural resources during learning was a shift between caudal PFC earlier to rostral PFC later in learning. Our findings regarding musical anticipation and sound sequence learning are highly compatible with studies of motor sequence learning, suggesting common predictive mechanisms in both domains.


Subjects
Auditory Perception/physiology; Brain Mapping; Brain/physiology; Learning/physiology; Mental Recall/physiology; Sound; Acoustic Stimulation; Adult; Auditory Pathways/blood supply; Auditory Pathways/physiology; Brain/blood supply; Female; Humans; Image Processing, Computer-Assisted/methods; Magnetic Resonance Imaging; Male; Music; Oxygen/blood; Young Adult
20.
J Cogn Neurosci ; 21(7): 1255-68, 2009 Jul.
Article in English | MEDLINE | ID: mdl-18752404

ABSTRACT

We investigated the functional characteristics of brain regions implicated in the processing of speech melody by presenting words spoken in either neutral or angry prosody during a functional magnetic resonance imaging experiment using a factorial habituation design. Subjects judged either affective prosody or word class for these vocal stimuli, which could be heard for either the first, second, or third time. Voice-sensitive temporal cortices, as well as the amygdala, insula, and mediodorsal thalami, reacted more strongly to angry than to neutral prosody. These stimulus-driven effects were not influenced by the task, suggesting that these brain structures are automatically engaged during the processing of emotional information in the voice and operate relatively independently of cognitive demands. By contrast, the right middle temporal gyrus and the bilateral orbito-frontal cortices (OFC) responded more strongly during emotion classification than during word classification, but were also sensitive to anger expressed by the voices, suggesting that some perceptual aspects of prosody are also encoded within these regions subserving explicit processing of vocal emotion. The bilateral OFC showed a selective modulation by emotion and repetition, with particularly pronounced responses to angry prosody during the first presentation only, indicating a critical role of the OFC in the detection of vocal information that is both novel and behaviorally relevant. These results converge with previous findings obtained for angry faces and suggest a general involvement of the OFC in the recognition of anger irrespective of the sensory modality. Taken together, our study reveals that different aspects of voice stimuli and perceptual demands modulate distinct areas involved in the processing of emotional prosody.


Subjects
Brain Mapping; Brain/physiology; Emotions/physiology; Linguistics; Speech Perception/physiology; Acoustic Stimulation/methods; Adult; Analysis of Variance; Arousal/physiology; Auditory Pathways/blood supply; Auditory Pathways/physiology; Brain/blood supply; Female; Functional Laterality/physiology; Habituation, Psychophysiologic; Humans; Image Processing, Computer-Assisted/methods; Judgment/physiology; Magnetic Resonance Imaging/methods; Male; Oxygen/blood; Reaction Time/physiology; Voice; Young Adult