Results 1 - 6 of 6
1.
J Am Acad Audiol; 32(8): 521-527, 2021 Sep.
Article in English | MEDLINE | ID: mdl-34965598

ABSTRACT

BACKGROUND: Cochlear implant technology allows acoustic and electric stimulation to be combined across ears (bimodal) and within the same ear (electric acoustic stimulation [EAS]). The mechanisms used to integrate speech acoustics may differ between bimodal and EAS hearing, and the configuration of hearing loss might be an important factor in that integration. It is therefore important to differentiate the effects of different configurations of hearing loss on bimodal or EAS benefit in speech perception (the difference in performance between combined acoustic and electric stimulation and the better stimulation alone). PURPOSE: Using acoustic simulation, we determined how consonant recognition was affected by different configurations of hearing loss in bimodal and EAS hearing. RESEARCH DESIGN: A mixed design was used, with one between-subjects variable (simulated bimodal group vs. simulated EAS group) and one within-subjects variable (acoustic stimulation alone, electric stimulation alone, and combined acoustic and electric stimulation). STUDY SAMPLE: Twenty adults with normal hearing (10 per group) were recruited. DATA COLLECTION AND ANALYSIS: Consonant perception was measured unilaterally or bilaterally in quiet. For the acoustic stimulation, four simulations of hearing loss were created by band-pass filtering consonants with a fixed lower cutoff frequency of 100 Hz and one of four upper cutoff frequencies: 250, 500, 750, or 1,000 Hz. For the electric stimulation, an eight-channel noise vocoder was used to generate a typical spectral mismatch, with fixed input (200-7,000 Hz) and output (1,000-7,000 Hz) frequency ranges. The effects of simulated hearing loss on consonant recognition were compared between the two groups. RESULTS: Significant bimodal and EAS benefits occurred regardless of the configuration of hearing loss and the hearing technology (bimodal vs. EAS). Place information was transmitted better in EAS hearing than in bimodal hearing. CONCLUSION: These results suggest that the configuration of hearing loss is not a significant factor in integrating consonant information between acoustic and electric stimulation. They also suggest that the mechanisms used to integrate consonant information may be similar in bimodal and EAS hearing.
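For readers who want to reproduce the stimulus processing, here is a minimal Python sketch of the two simulations, assuming scipy/numpy, a mono float signal x sampled at fs Hz, and illustrative choices (Butterworth filters, log-spaced band edges, a 160 Hz envelope smoother) that the abstract does not specify. The key manipulation is the mismatch between the analysis and output frequency ranges of the vocoder.

```python
# Sketch of the hearing-loss and noise-vocoder simulations; filter orders and
# envelope parameters are assumptions, not the study's exact settings.
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def simulate_hearing_loss(x, fs, upper_cutoff):
    """Band-pass filter: fixed 100 Hz lower edge, variable upper edge
    (250, 500, 750, or 1,000 Hz in the study)."""
    sos = butter(4, [100, upper_cutoff], btype="bandpass", fs=fs, output="sos")
    return sosfiltfilt(sos, x)

def noise_vocode(x, fs, n_channels=8, in_range=(200, 7000), out_range=(1000, 7000)):
    """Eight-channel noise vocoder with a deliberate spectral mismatch:
    envelopes are extracted from `in_range` analysis bands but modulate
    noise carriers placed in the shifted `out_range` output bands."""
    def log_edges(lo, hi, n):
        return np.geomspace(lo, hi, n + 1)  # log-spaced band edges (assumed)
    in_edges = log_edges(*in_range, n_channels)
    out_edges = log_edges(*out_range, n_channels)
    y = np.zeros_like(x)
    for i in range(n_channels):
        # Analysis band: extract the temporal envelope.
        sos_in = butter(4, in_edges[i:i + 2], btype="bandpass", fs=fs, output="sos")
        env = np.abs(hilbert(sosfiltfilt(sos_in, x)))
        # Smooth the envelope (assumed 160 Hz low-pass cutoff).
        sos_env = butter(2, 160, btype="lowpass", fs=fs, output="sos")
        env = sosfiltfilt(sos_env, env)
        # Carrier band: band-limited noise in the output band, envelope-modulated.
        sos_out = butter(4, out_edges[i:i + 2], btype="bandpass", fs=fs, output="sos")
        y += env * sosfiltfilt(sos_out, np.random.randn(len(x)))
    return y
```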


Subjects
Cochlear Implantation, Cochlear Implants, Hearing Loss, Speech Perception, Acoustic Stimulation, Acoustics, Adult, Electric Stimulation, Hearing, Humans
2.
J Neurosci Methods; 358: 109198, 2021 Jul 1.
Article in English | MEDLINE | ID: mdl-33901568

ABSTRACT

BACKGROUND: Auditory fMRI faces two challenges: loud scanner noise during sound presentation and slow data acquisition. Here, we introduce a new auditory imaging protocol, termed "hybrid", that alleviates both obstacles. NEW METHOD: We designed a within-subject experiment (N = 14) in which language-driven activity was measured with hybrid, interleaved silent steady state (ISSS), and continuous multiband acquisition. To determine the advantage of noise attenuation during sound presentation, hybrid was compared with multiband; to identify the benefit of increased temporal resolution, hybrid was compared with ISSS. Data were evaluated by whole-brain univariate general linear modeling (GLM) and multivariate pattern analysis (MVPA). RESULTS: COMPARISON WITH EXISTING METHODS: CONCLUSIONS: Our data revealed that hybrid imaging restored neural activity in the canonical language network that was absent due to the loud noise or slow sampling of the conventional imaging protocols. With its noise-attenuated sound-presentation windows and increased acquisition speed, the hybrid protocol is well suited to auditory fMRI research tracking neural activity pertaining to fast, time-varying acoustic events.
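To make the three acquisition schemes concrete, here is a schematic Python sketch of per-trial volume timing; all timing values (TRs, silent-gap length, volumes per trial) are invented for illustration, as the abstract gives no parameters.

```python
# Schematic volume-onset times for the three protocols (illustrative values).
def volume_times(scheme, trial_dur=8.0, n_trials=4):
    """Return volume onset times (s) across trials for one acquisition scheme."""
    times = []
    for t in range(n_trials):
        t0 = t * trial_dur
        if scheme == "continuous":   # multiband: scans overlap the stimulus
            times += [t0 + k * 1.0 for k in range(8)]        # TR = 1 s, no gap
        elif scheme == "ISSS":       # silent stimulus window, then slow volumes
            times += [t0 + 4.0 + k * 2.0 for k in range(2)]  # TR = 2 s after 4 s silence
        elif scheme == "hybrid":     # silent stimulus window, then fast multiband volumes
            times += [t0 + 4.0 + k * 0.5 for k in range(8)]  # TR = 0.5 s after 4 s silence
    return times

print(volume_times("hybrid", n_trials=1))  # dense sampling after a quiet window
```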


Subjects
Auditory Cortex, Magnetic Resonance Imaging, Acoustic Stimulation, Auditory Cortex/diagnostic imaging, Brain/diagnostic imaging, Brain Mapping, Noise
3.
Dev Psychol; 56(9): 1632-1641, 2020 Sep.
Article in English | MEDLINE | ID: mdl-32700950

ABSTRACT

Scholars debate whether musical and linguistic abilities are associated or independent. In the present study, we examined whether musical rhythm skills predict receptive grammar proficiency in childhood. In Experiment 1, 7- to 17-year-old children (N = 68) were tested on their grammar and rhythm abilities. In the grammar-comprehension task, children heard short sentences with subject-relative (e.g., "Boys that help girls are nice") or object-relative (e.g., "Boys that girls help are nice") clauses and determined the gender of the individual performing the action. In the rhythm-discrimination test, children heard two short rhythmic sequences on each trial and decided whether they were the same or different. Children with better performance on the rhythm task exhibited higher scores on the grammar test, even after holding constant age, gender, music training, and maternal education. In Experiment 2, we replicated this finding in another group of children in the same age range (N = 96) while further controlling for working memory. Our data reveal, for the first time, an association between receptive grammar and rhythm perception in typically developing children. This finding is consistent with the view that music and language share neural resources for rule-based temporal processing.
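The covariate-adjusted analysis can be sketched as an ordinary least-squares regression. The snippet below assumes a pandas DataFrame df with one row per child; all column names are hypothetical.

```python
# Minimal sketch of rhythm-grammar regression with covariates held constant.
import statsmodels.formula.api as smf

model = smf.ols(
    "grammar_score ~ rhythm_score + age + C(gender) + music_training"
    " + maternal_education + working_memory",  # working_memory: Experiment 2 only
    data=df,
).fit()
# A positive, significant rhythm_score coefficient would mirror the reported
# link between rhythm discrimination and receptive grammar.
print(model.summary())
```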


Subjects
Music, Time Perception, Adolescent, Child, Female, Humans, Language, Linguistics, Male, Schools
4.
Neuropsychologia; 137: 107284, 2020 Feb 3.
Article in English | MEDLINE | ID: mdl-31783081

ABSTRACT

A growing body of evidence has highlighted behavioral connections between musical rhythm and linguistic syntax, suggesting that these abilities may be mediated by common neural resources. Here, we performed a quantitative meta-analysis of neuroimaging studies using activation likelihood estimation (ALE) to localize the shared neural structures engaged by a representative set of musical rhythm (rhythm, beat, and meter) and linguistic syntax (merge, movement, and reanalysis) operations. Rhythm engaged a bilateral, brain-wide sensorimotor network consisting of the inferior frontal gyri, supplementary motor area, superior temporal gyri/temporoparietal junction, insula, intraparietal lobule, and putamen. By contrast, syntax mostly recruited a left sensorimotor network including the inferior frontal gyrus, posterior superior temporal gyrus, premotor cortex, and supplementary motor area. Intersecting the rhythm and syntax maps yielded overlapping regions in the left inferior frontal gyrus, left supplementary motor area, and bilateral insula, neural substrates involved in temporal hierarchy processing and predictive coding. Together, these results constitute the first neuroimaging meta-analysis providing a detailed anatomical account of the sensorimotor regions recruited for both musical rhythm and linguistic syntax.
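The intersection step amounts to a conjunction of thresholded, binarized ALE maps. Here is a minimal sketch with nibabel/numpy; the file names are hypothetical.

```python
# Conjunction of thresholded ALE maps: voxels significant for both domains.
import nibabel as nib
import numpy as np

rhythm = nib.load("ale_rhythm_thresh.nii.gz")   # hypothetical path
syntax = nib.load("ale_syntax_thresh.nii.gz")   # hypothetical path
overlap = np.logical_and(rhythm.get_fdata() > 0, syntax.get_fdata() > 0)
nib.save(nib.Nifti1Image(overlap.astype(np.uint8), rhythm.affine),
         "rhythm_syntax_overlap.nii.gz")  # e.g., left IFG, left SMA, bilateral insula
```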


Subjects
Brain Mapping, Cerebral Cortex/physiology, Music, Nerve Net/physiology, Psycholinguistics, Time Perception/physiology, Cerebral Cortex/diagnostic imaging, Humans, Nerve Net/diagnostic imaging
5.
R Soc Open Sci; 5(8): 172208, 2018 Aug.
Article in English | MEDLINE | ID: mdl-30224990

ABSTRACT

Vocal pitch is used as an important communicative device by humans, as found in the melodic dimension of both speech and song. Vocal pitch is determined by the degree of tension in the vocal folds of the larynx, which itself is influenced by complex and nonlinear interactions among the laryngeal muscles. The relationship between these muscles and vocal pitch has been described by a mathematical model in the form of a set of 'control rules'. We searched for the biological implementation of these control rules in the larynx motor cortex of the human brain. We scanned choral singers with functional magnetic resonance imaging as they produced discrete pitches at four different levels across their vocal range. While the locations of the larynx motor activations varied across singers, the activation peaks for the four pitch levels were highly consistent within each individual singer. This result was corroborated using multi-voxel pattern analysis, which demonstrated an absence of patterned activations differentiating any pairing of pitch levels. The complex and nonlinear relationships between the multiple laryngeal muscles that control vocal pitch may obscure the neural encoding of vocal pitch in the brain.
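The pairwise MVPA can be sketched as cross-validated classification of every pitch pairing. The snippet assumes NumPy arrays patterns (trials x voxels, larynx-motor-cortex activity) and pitch_labels (integers 0-3); the linear-SVM choice is an assumption, not necessarily the study's classifier.

```python
# Pairwise decoding of pitch levels from voxel patterns (illustrative).
from itertools import combinations
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import LinearSVC

for a, b in combinations(range(4), 2):          # all pairings of the four pitch levels
    mask = np.isin(pitch_labels, [a, b])
    acc = cross_val_score(LinearSVC(), patterns[mask], pitch_labels[mask],
                          cv=5).mean()
    # Chance-level accuracy (~0.5) for every pair would mirror the reported
    # absence of patterned activations differentiating pitch levels.
    print(f"pitch {a} vs {b}: accuracy = {acc:.2f}")
```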

6.
eNeuro; 5(3), 2018.
Article in English | MEDLINE | ID: mdl-29911176

ABSTRACT

In this paper, we investigate how subtle differences in hearing acuity affect the neural systems supporting speech processing in young adults. Auditory sentence comprehension requires perceiving a complex acoustic signal and performing linguistic operations to extract the correct meaning. We used functional MRI to monitor human brain activity while adults aged 18-41 years listened to spoken sentences. The sentences varied in their level of syntactic processing demands, containing either a subject-relative or an object-relative center-embedded clause. All participants self-reported normal hearing, confirmed by audiometric testing, with some variation within the clinically normal range. Participants showed activity related to sentence processing in a left-lateralized frontotemporal network. Although accuracy was generally high, participants still made some errors, which were associated with increased activity in bilateral cingulo-opercular and frontoparietal attention networks. A whole-brain regression analysis revealed that activity in a right anterior middle frontal gyrus (aMFG) component of the frontoparietal attention network was related to individual differences in hearing acuity, such that listeners with poorer hearing showed greater recruitment of this region when successfully understanding a sentence. Activity in the right aMFG for listeners with poorer hearing did not differ as a function of sentence type, suggesting a general mechanism that is independent of linguistic processing demands. Our results suggest that even modest variations in hearing ability affect the systems supporting auditory speech comprehension, and that auditory sentence comprehension entails the coordination of a left perisylvian network sensitive to linguistic variation with an executive attention network that responds to acoustic challenge.
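A whole-brain individual-differences regression of this kind can be sketched voxelwise. The snippet assumes a subjects x voxels array betas of sentence-related activity estimates and a per-subject hearing-acuity score pta (e.g., a pure-tone average); both names are hypothetical.

```python
# Voxelwise correlation of activity with hearing acuity across subjects.
import numpy as np
from scipy import stats

n = len(pta)
z_betas = stats.zscore(betas, axis=0, ddof=1)        # standardize each voxel
r = z_betas.T @ stats.zscore(pta, ddof=1) / (n - 1)  # Pearson r per voxel
t = r * np.sqrt((n - 2) / (1 - r ** 2))              # t map for thresholding
# Voxels where poorer hearing predicts greater activity (e.g., right aMFG in
# the reported result) would show up after multiple-comparisons correction.
```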


Subjects
Brain/physiology, Comprehension, Hearing, Speech Perception/physiology, Speech, Acoustic Stimulation, Adolescent, Adult, Attention/physiology, Brain Mapping, Female, Functional Laterality, Humans, Magnetic Resonance Imaging, Male, Neural Pathways/physiology, Young Adult