Results 1 - 20 of 30
1.
J Cogn Neurosci ; 32(11): 2145-2158, 2020 11 01.
Article in English | MEDLINE | ID: mdl-32662723

ABSTRACT

When speech perception is difficult, one way listeners adjust is by reconfiguring phoneme category boundaries, drawing on contextual information. Both lexical knowledge and lipreading cues are used in this way, but it remains unknown whether these two differing forms of perceptual learning are similar at a neural level. This study compared phoneme boundary adjustments driven by lexical or audiovisual cues, using ultra-high-field 7-T fMRI. During imaging, participants heard exposure stimuli and test stimuli. Exposure stimuli for lexical retuning were audio recordings of words, and those for audiovisual recalibration were audio-video recordings of lip movements during utterances of pseudowords. Test stimuli were ambiguous phonetic strings presented without context, and listeners reported what phoneme they heard. Reports reflected phoneme biases in preceding exposure blocks (e.g., more reported /p/ after /p/-biased exposure). Analysis of corresponding brain responses indicated that both forms of cue use were associated with a network of activity across the temporal cortex, plus parietal, insula, and motor areas. Audiovisual recalibration also elicited significant occipital cortex activity despite the lack of visual stimuli. Activity levels in several ROIs also covaried with strength of audiovisual recalibration, with greater activity accompanying larger recalibration shifts. Similar activation patterns appeared for lexical retuning, but here, no significant ROIs were identified. Audiovisual and lexical forms of perceptual learning thus induce largely similar brain response patterns. However, audiovisual recalibration involves additional visual cortex contributions, suggesting that previously acquired visual information (on lip movements) is retrieved and deployed to disambiguate auditory perception.


Subjects
Phonetics, Speech Perception, Auditory Perception/physiology, Humans, Learning, Lipreading, Speech Perception/physiology
2.
Psychon Bull Rev ; 27(4): 707-715, 2020 Aug.
Article in English | MEDLINE | ID: mdl-32319002

ABSTRACT

When listeners experience difficulty in understanding a speaker, lexical and audiovisual (or lipreading) information can be a helpful source of guidance. These two types of information embedded in speech can also guide perceptual adjustment, also known as recalibration or perceptual retuning. With retuning or recalibration, listeners can use these contextual cues to temporarily or permanently reconfigure internal representations of phoneme categories to adjust to and understand novel interlocutors more easily. These two types of perceptual learning, previously investigated in large part separately, are highly similar in allowing listeners to use speech-external information to make phoneme boundary adjustments. This study explored whether the two sources may work in conjunction to induce adaptation, thus emulating real life, in which listeners are indeed likely to encounter both types of cue together. Listeners who received combined audiovisual and lexical cues showed perceptual learning effects similar to listeners who only received audiovisual cues, while listeners who received only lexical cues showed weaker effects compared with the two other groups. The combination of cues did not lead to additive retuning or recalibration effects, suggesting that lexical and audiovisual cues operate differently with regard to how listeners use them for reshaping perceptual categories. Reaction times did not significantly differ across the three conditions, so none of the forms of adjustment were either aided or hindered by processing time differences. Mechanisms underlying these forms of perceptual learning may diverge in numerous ways despite similarities in experimental applications.


Subjects
Psychological Adaptation, Cues (Psychology), Lipreading, Phonetics, Speech Perception, Visual Perception, Vocabulary, Adult, Comprehension, Female, Humans, Learning, Male, Reaction Time, Young Adult
3.
Atten Percept Psychophys ; 82(4): 2018-2026, 2020 May.
Article in English | MEDLINE | ID: mdl-31970708

ABSTRACT

To adapt to situations in which speech perception is difficult, listeners can adjust boundaries between phoneme categories using perceptual learning. Such adjustments can draw on lexical information in surrounding speech, or on visual cues via speech-reading. In the present study, listeners proved able to flexibly adjust the boundary between two plosive/stop consonants, /p/-/t/, using both lexical and speech-reading information and given the same experimental design for both cue types. Videos of a speaker pronouncing pseudo-words and audio recordings of Dutch words were presented in alternating blocks of either stimulus type. Listeners were able to switch between cues to adjust phoneme boundaries, and resulting effects were comparable to results from listeners receiving only a single source of information. Overall, audiovisual cues (i.e., the videos) produced the stronger effects, commensurate with their applicability for adapting to noisy environments. Lexical cues were able to induce effects with fewer exposure stimuli and a changing phoneme bias, in a design unlike most prior studies of lexical retuning. While lexical retuning effects were weaker than audiovisual recalibration effects, this discrepancy could reflect how lexical retuning may be more suitable for adapting to speakers than to environments. Nonetheless, the presence of lexical retuning effects suggests that this form of learning can be invoked more rapidly than previously seen. In general, this technique has further illuminated the robustness of adaptability in speech perception, and offers the potential to enable further comparisons across differing forms of perceptual learning.


Subjects
Auditory Perception, Phonetics, Speech Perception, Humans, Language, Lipreading, Speech
4.
J Exp Psychol Learn Mem Cogn ; 46(1): 189-199, 2020 Jan.
Article in English | MEDLINE | ID: mdl-30883166

ABSTRACT

Learning new words entails, inter alia, encoding of novel sound patterns and transferring those patterns from short-term to long-term memory. We report a series of 5 experiments that investigated whether the memory systems engaged in word learning are specialized for speech and whether utilization of these systems results in a benefit for word learning. Sine-wave synthesis (SWS) was applied to spoken nonwords, and listeners were or were not informed (through instruction and familiarization) that the SWS stimuli were derived from actual utterances. This allowed us to manipulate whether listeners would process sound sequences as speech or as nonspeech. In a sound-picture association learning task, listeners who processed the SWS stimuli as speech consistently learned faster and remembered more associations than listeners who processed the same stimuli as nonspeech. The advantage of listening in "speech mode" was stable over the course of 7 days. These results provide causal evidence that access to a specialized, phonological short-term memory system is important for word learning. More generally, this study supports the notion that subsystems of auditory short-term memory are specialized for processing different types of acoustic information. (PsycINFO Database Record (c) 2019 APA, all rights reserved).
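Sine-wave synthesis of the kind applied in this study replaces an utterance with a handful of time-varying sinusoids that track its formant frequencies, stripping away most speech-specific cues while preserving the formant trajectories. A minimal sketch with invented formant tracks follows (real SWS derives the tracks from formant analysis of a recording; the function name and glide values here are illustrative):

```python
import numpy as np

def sws(formant_tracks, amp_tracks, fs):
    """Sum of sinusoids whose frequencies follow the given formant
    tracks (Hz, one row per formant); the phase of each sinusoid is
    the running integral of its instantaneous frequency."""
    out = np.zeros(formant_tracks.shape[1])
    for f, a in zip(formant_tracks, amp_tracks):
        phase = 2 * np.pi * np.cumsum(f) / fs
        out += a * np.sin(phase)
    return out

# Illustrative tracks: three "formants" gliding over 0.4 s at 16 kHz.
fs = 16000
n = int(0.4 * fs)
f1 = np.linspace(300, 800, n)    # F1 rising
f2 = np.linspace(2300, 1200, n)  # F2 falling
f3 = np.full(n, 2800.0)          # F3 roughly constant
tracks = np.stack([f1, f2, f3])
amps = np.stack([np.full(n, 1.0), np.full(n, 0.6), np.full(n, 0.3)])
y = sws(tracks, amps, fs)
```

Listeners typically hear such a signal as whistles unless told it derives from speech, which is what lets the "speech mode" manipulation work.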


Subjects
Auditory Perception/physiology, Short-Term Memory/physiology, Psycholinguistics, Speech/physiology, Verbal Learning/physiology, Adolescent, Adult, Female, Humans, Male, Visual Pattern Recognition/physiology, Speech Perception/physiology, Young Adult
5.
Emotion ; 20(8): 1435-1445, 2020 Dec.
Article in English | MEDLINE | ID: mdl-31478724

ABSTRACT

Are emotional expressions shaped by specialized innate mechanisms that guide learning, or do they develop exclusively from learning without innate preparedness? Here we test whether nonverbal affective vocalizations produced by bilaterally congenitally deaf adults contain emotional information that is recognizable to naive listeners. Because these deaf individuals have had no opportunity for auditory learning, the presence of such an association would imply that mappings between emotions and vocalizations are buffered against the absence of input that is typically important for their development and thus at least partly innate. We recorded nonverbal vocalizations expressing 9 emotions from 8 deaf individuals (435 tokens) and 8 matched hearing individuals (536 tokens). These vocalizations were submitted to an acoustic analysis and used in a recognition study in which naive listeners (n = 812) made forced-choice judgments. Our results show that naive listeners can reliably infer many emotional states from nonverbal vocalizations produced by deaf individuals. In particular, deaf vocalizations of fear, disgust, sadness, amusement, sensual pleasure, surprise, and relief were recognized at better-than-chance levels, whereas anger and achievement/triumph vocalizations were not. Differences were found on most acoustic features of the vocalizations produced by deaf as compared with hearing individuals. Our results suggest that there is an innate component to the associations between human emotions and vocalizations. (PsycInfo Database Record (c) 2020 APA, all rights reserved).


Subjects
Auditory Perception/physiology, Emotions/physiology, Adult, Aged, Female, Humans, Male, Middle Aged
6.
Sci Adv ; 5(9): eaax0262, 2019 09.
Article in English | MEDLINE | ID: mdl-31555732

ABSTRACT

Learning to read is associated with the appearance of an orthographically sensitive brain region known as the visual word form area. It has been claimed that development of this area proceeds by impinging upon territory otherwise available for the processing of culturally relevant stimuli such as faces and houses. In a large-scale functional magnetic resonance imaging study of a group of individuals of varying degrees of literacy (from completely illiterate to highly literate), we examined cortical responses to orthographic and nonorthographic visual stimuli. We found that literacy enhances responses to other visual input in early visual areas and enhances representational similarity between text and faces, without reducing the extent of response to nonorthographic input. Thus, acquisition of literacy in childhood recycles existing object representation mechanisms but without destructive competition.


Subjects
Learning/physiology, Magnetic Resonance Imaging, Photic Stimulation, Visual Cortex, Adult, Female, Humans, Male, Visual Cortex/diagnostic imaging, Visual Cortex/physiology
7.
Q J Exp Psychol (Hove) ; 72(10): 2371-2379, 2019 Oct.
Article in English | MEDLINE | ID: mdl-30836818

ABSTRACT

Previous research on the effect of perturbed auditory feedback in speech production has focused on two types of responses. In the short term, speakers generate compensatory motor commands in response to unexpected perturbations. In the longer term, speakers adapt feedforward motor programmes in response to feedback perturbations, to avoid future errors. The current study investigated the relation between these two types of responses to altered auditory feedback. Specifically, it was hypothesised that consistency in previous feedback perturbations would influence whether speakers adapt their feedforward motor programmes. In an altered auditory feedback paradigm, formant perturbations were applied either across all trials (the consistent condition) or only to some trials, whereas the others remained unperturbed (the inconsistent condition). The results showed that speakers' responses were affected by feedback consistency, with stronger speech changes in the consistent condition compared with the inconsistent condition. Current models of speech-motor control can explain this consistency effect. However, the data also suggest that compensation and adaptation are distinct processes, a finding that is not in line with all current models.


Subjects
Sensory Feedback/physiology, Motor Activity/physiology, Speech Perception/physiology, Speech/physiology, Adult, Female, Humans, Male, Young Adult
8.
Psychon Bull Rev ; 25(4): 1458-1467, 2018 08.
Article in English | MEDLINE | ID: mdl-29869027

ABSTRACT

When talking, speakers continuously monitor and use the auditory feedback of their own voice to control and inform speech production processes. When speakers are provided with auditory feedback that is perturbed in real time, most of them compensate for this by opposing the feedback perturbation. But some responses follow the perturbation. In the present study, we investigated whether the state of the speech production system at perturbation onset may determine what type of response (opposing or following) is made. The results suggest that whether a perturbation-related response is opposing or following depends on ongoing fluctuations of the production system: The system initially responds by doing the opposite of what it was doing. This effect and the nontrivial proportion of following responses suggest that current production models are inadequate: They need to account for why responses to unexpected sensory feedback depend on the production system's state at the time of perturbation.


Subjects
Sensory Feedback/physiology, Pitch Perception/physiology, Speech Perception/physiology, Speech/physiology, Adult, Female, Humans, Male, Young Adult
9.
Neuroimage ; 179: 326-336, 2018 10 01.
Article in English | MEDLINE | ID: mdl-29936308

ABSTRACT

Speaking is a complex motor skill which requires near instantaneous integration of sensory and motor-related information. Current theory hypothesizes a complex interplay between motor and auditory processes during speech production, involving the online comparison of the speech output with an internally generated forward model. To examine the neural correlates of this intricate interplay between sensory and motor processes, the current study uses altered auditory feedback (AAF) in combination with magnetoencephalography (MEG). Participants vocalized the vowel /e/ and heard auditory feedback that was temporarily pitch-shifted by only 25 cents, while neural activity was recorded with MEG. As a control condition, participants also heard the recordings of the same auditory feedback that they heard in the first half of the experiment, now without vocalizing. The participants were not aware of any perturbation of the auditory feedback. We found that auditory cortical areas responded more strongly to the pitch shifts during vocalization. In addition, auditory feedback perturbation resulted in spectral power increases in the θ and lower β bands, predominantly in sensorimotor areas. These results are in line with current models of speech production, suggesting auditory cortical areas are involved in an active comparison between a forward model's prediction and the actual sensory input. Subsequently, these areas interact with motor areas to generate a motor response. Furthermore, the results suggest that θ and β power increases support auditory-motor interaction, motor error detection and/or sensory prediction processing.
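A 25-cent perturbation is a quarter of a semitone, which translates to a frequency change of under 1.5%. The conversion follows directly from the equal-tempered definition of the cent; the function name and example pitch below are our own:

```python
def cents_to_ratio(cents: float) -> float:
    """Convert a pitch shift in cents to a frequency ratio.

    100 cents = 1 equal-tempered semitone and 1200 cents = 1 octave,
    so the ratio is 2 ** (cents / 1200).
    """
    return 2.0 ** (cents / 1200.0)

# A 25-cent upward shift changes frequency by only about 1.45 %.
ratio = cents_to_ratio(25.0)     # ~1.0145
shifted_f0 = 220.0 * ratio       # a 220 Hz voice pitch becomes ~223.2 Hz
```

The small size of the shift is what keeps participants unaware of the perturbation while still driving a measurable cortical response.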


Subjects
Cerebral Cortex/physiology, Sensory Feedback/physiology, Speech/physiology, Acoustic Stimulation, Adult, Female, Humans, Magnetoencephalography, Male, Pitch Perception/physiology, Young Adult
10.
J Acoust Soc Am ; 142(4): 2007, 2017 10.
Article in English | MEDLINE | ID: mdl-29092613

ABSTRACT

An important part of understanding speech motor control consists of capturing the interaction between speech production and speech perception. This study tests a prediction of theoretical frameworks that have tried to account for these interactions: If speech production targets are specified in auditory terms, individuals with better auditory acuity should have more precise speech targets, evidenced by decreased within-phoneme variability and increased between-phoneme distance. A study was carried out consisting of perception and production tasks in counterbalanced order. Auditory acuity was assessed using an adaptive speech discrimination task, while production variability was determined using a pseudo-word reading task. Analyses of the production data were carried out to quantify average within-phoneme variability, as well as average between-phoneme contrasts. Results show that individuals not only vary in their production and perceptual abilities, but that better discriminators have more distinctive vowel production targets-that is, targets with less within-phoneme variability and greater between-phoneme distances-confirming the initial hypothesis. This association between speech production and perception did not depend on local phoneme density in vowel space. This study suggests that better auditory acuity leads to more precise speech production targets, which may be a consequence of auditory feedback affecting speech production over time.
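The two production measures described here (average within-phoneme variability and average between-phoneme distance) can be computed from repeated formant measurements per vowel. A sketch under the assumption that each token is an [F1, F2] point in hertz; the data and function name are invented for illustration:

```python
import numpy as np

def vowel_space_metrics(tokens_by_vowel):
    """tokens_by_vowel maps a vowel label to an (n_tokens, 2) array of
    [F1, F2] values in Hz.  Returns (mean within-phoneme variability,
    mean pairwise between-phoneme distance)."""
    centroids = []
    within = []
    for toks in tokens_by_vowel.values():
        toks = np.asarray(toks, dtype=float)
        mu = toks.mean(axis=0)
        centroids.append(mu)
        # Within-phoneme variability: mean distance of tokens to centroid.
        within.append(np.linalg.norm(toks - mu, axis=1).mean())
    between = [np.linalg.norm(a - b)
               for i, a in enumerate(centroids)
               for b in centroids[i + 1:]]
    return float(np.mean(within)), float(np.mean(between))

# Toy data: a "precise" speaker with tight clusters and far-apart vowels.
data = {
    "i": [[300, 2300], [310, 2280], [290, 2320]],
    "a": [[750, 1200], [760, 1180], [740, 1220]],
}
w, b = vowel_space_metrics(data)  # small w, large b: distinctive targets
```

On the study's hypothesis, better discriminators should show smaller `w` and larger `b` than poorer discriminators.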


Subjects
Phonetics, Speech Perception, Speech/physiology, Female, Humans, Male, Motor Activity, Regression Analysis, Young Adult
11.
Sci Adv ; 3(5): e1602612, 2017 May.
Article in English | MEDLINE | ID: mdl-28560333

ABSTRACT

Learning to read is known to result in a reorganization of the developing cerebral cortex. In this longitudinal resting-state functional magnetic resonance imaging study in illiterate adults, we show that only 6 months of literacy training can lead to neuroplastic changes in the mature brain. We observed that literacy-induced neuroplasticity is not confined to the cortex but increases the functional connectivity between the occipital lobe and subcortical areas in the midbrain and the thalamus. Individual rates of connectivity increase were significantly related to the individual decoding skill gains. These findings crucially complement current neurobiological concepts of normal and impaired literacy acquisition.


Subjects
Visual Cortex/physiology, Adult, Brain Mapping/methods, Female, Humans, Magnetic Resonance Imaging/methods, Male, Neuronal Plasticity/physiology, Reading
12.
Neuropsychologia ; 100: 51-63, 2017 06.
Article in English | MEDLINE | ID: mdl-28400328

ABSTRACT

Neuroimaging studies of speech perception have consistently indicated a left-hemisphere dominance in the temporal lobes' responses to intelligible auditory speech signals (McGettigan and Scott, 2012). However, there are important communicative cues that cannot be extracted from auditory signals alone, including the direction of the talker's gaze. Previous work has implicated the superior temporal cortices in processing gaze direction, with evidence for predominantly right-lateralized responses (Carlin & Calder, 2013). The aim of the current study was to investigate whether the lateralization of responses to talker gaze differs in an auditory communicative context. Participants in a functional MRI experiment watched and listened to videos of spoken sentences in which the auditory intelligibility and talker gaze direction were manipulated factorially. We observed a left-dominant temporal lobe sensitivity to the talker's gaze direction, in which the left anterior superior temporal sulcus/gyrus and temporal pole showed an enhanced response to direct gaze - further investigation revealed that this pattern of lateralization was modulated by auditory intelligibility. Our results suggest flexibility in the distribution of neural responses to social cues in the face within the context of a challenging speech perception task.


Subjects
Attention/physiology, Communication, Functional Laterality/physiology, Speech Perception/physiology, Speech/physiology, Temporal Lobe/physiology, Adolescent, Adult, Brain Mapping, Female, Ocular Fixation/physiology, Humans, Computer-Assisted Image Processing, Magnetic Resonance Imaging, Male, Oxygen/blood, Temporal Lobe/diagnostic imaging, Young Adult
14.
J Neurosci ; 33(26): 10688-97, 2013 Jun 26.
Article in English | MEDLINE | ID: mdl-23804092

ABSTRACT

Listeners show a remarkable ability to quickly adjust to degraded speech input. Here, we aimed to identify the neural mechanisms of such short-term perceptual adaptation. In a sparse-sampling, cardiac-gated functional magnetic resonance imaging (fMRI) acquisition, human listeners heard and repeated back 4-band-vocoded sentences (in which the temporal envelope of the acoustic signal is preserved, while spectral information is highly degraded). Clear-speech trials were included as baseline. An additional fMRI experiment on amplitude modulation rate discrimination quantified the convergence of neural mechanisms that subserve coping with challenging listening conditions for speech and non-speech. First, the degraded speech task revealed an "executive" network (comprising the anterior insula and anterior cingulate cortex), parts of which were also activated in the non-speech discrimination task. Second, trial-by-trial fluctuations in successful comprehension of degraded speech drove hemodynamic signal change in classic "language" areas (bilateral temporal cortices). Third, as listeners perceptually adapted to degraded speech, downregulation in a cortico-striato-thalamo-cortical circuit was observable. The present data highlight differential upregulation and downregulation in auditory-language and executive networks, respectively, with important subcortical contributions when successfully adapting to a challenging listening situation.
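Noise vocoding of the sort used for these stimuli keeps each frequency band's temporal envelope and discards its spectral fine structure by re-imposing the envelope on band-limited noise. A minimal 4-band sketch; the band edges, filter orders, and 30 Hz envelope cutoff are illustrative assumptions, not the study's parameters:

```python
import numpy as np
from scipy.signal import butter, sosfilt, sosfiltfilt

def noise_vocode(x, fs, n_bands=4, f_lo=100.0, f_hi=7000.0, env_cutoff=30.0):
    """Minimal noise vocoder: split the signal into n_bands band-pass
    channels, extract each channel's temporal envelope, and use it to
    modulate band-limited noise in the same channel."""
    edges = np.geomspace(f_lo, f_hi, n_bands + 1)  # log-spaced band edges
    noise = np.random.default_rng(0).standard_normal(len(x))
    env_sos = butter(2, env_cutoff, btype="low", fs=fs, output="sos")
    out = np.zeros(len(x))
    for lo, hi in zip(edges[:-1], edges[1:]):
        band_sos = butter(4, [lo, hi], btype="band", fs=fs, output="sos")
        band = sosfilt(band_sos, x)
        # Envelope: rectify, then low-pass filter.
        env = sosfiltfilt(env_sos, np.abs(band))
        out += np.clip(env, 0.0, None) * sosfilt(band_sos, noise)
    return out

# Toy usage: vocode half a second of a 4 Hz amplitude-modulated tone.
fs = 16000
t = np.arange(int(0.5 * fs)) / fs
x = np.sin(2 * np.pi * 500 * t) * (0.5 + 0.5 * np.sin(2 * np.pi * 4 * t))
y = noise_vocode(x, fs)
```

With only 4 bands the output is intelligible but heavily degraded, which is what makes it a useful probe of short-term perceptual adaptation.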


Subjects
Psychological Adaptation/physiology, Auditory Perception/physiology, Brain/physiology, Acoustic Stimulation, Adult, Comprehension/physiology, Psychological Discrimination/physiology, Female, Functional Laterality/physiology, Humans, Computer-Assisted Image Processing, Magnetic Resonance Imaging, Male, Neostriatum/physiology, Nerve Net/physiology, Noise, Oxygen/blood, Psychomotor Performance/physiology, Speech Perception/physiology, Speech Production Measurement, Thalamus/physiology, Young Adult
15.
J Cogn Neurosci ; 25(11): 1875-86, 2013 Nov.
Article in English | MEDLINE | ID: mdl-23691984

ABSTRACT

Historically, the study of human identity perception has focused on faces, but the voice is also central to our expressions and experiences of identity [Belin, P., Fecteau, S., & Bedard, C. Thinking the voice: Neural correlates of voice perception. Trends in Cognitive Sciences, 8, 129-135, 2004]. Our voices are highly flexible and dynamic; talkers speak differently, depending on their health, emotional state, and the social setting, as well as extrinsic factors such as background noise. However, to date, there have been no studies of the neural correlates of identity modulation in speech production. In the current fMRI experiment, we measured the neural activity supporting controlled voice change in adult participants performing spoken impressions. We reveal that deliberate modulation of vocal identity recruits the left anterior insula and inferior frontal gyrus, supporting the planning of novel articulations. Bilateral sites in posterior superior temporal/inferior parietal cortex and a region in right middle/anterior STS showed greater responses during the emulation of specific vocal identities than for impressions of generic accents. Using functional connectivity analyses, we describe roles for these three sites in their interactions with the brain regions supporting speech planning and production. Our findings mark a significant step toward understanding the neural control of vocal identity, with wider implications for the cognitive control of voluntary motor acts.


Subjects
Imitative Behavior/physiology, Prefrontal Cortex/physiology, Speech/physiology, Temporal Lobe/physiology, Voice/physiology, Acoustics, Adult, Analysis of Variance, Brain Mapping, Statistical Data Interpretation, Female, Functional Laterality/physiology, Humans, Computer-Assisted Image Processing, Magnetic Resonance Imaging, Male, Nerve Net/physiology, Psychophysiology
16.
Front Psychol ; 4: 148, 2013.
Article in English | MEDLINE | ID: mdl-23554598

ABSTRACT

The perception of speech sounds can be re-tuned through a mechanism of lexically driven perceptual learning after exposure to instances of atypical speech production. This study asked whether this re-tuning is sensitive to the position of the atypical sound within the word. We investigated perceptual learning using English voiced stop consonants, which are commonly devoiced in word-final position by Dutch learners of English. After exposure to a Dutch learner's productions of devoiced stops in word-final position (but not in any other positions), British English (BE) listeners showed evidence of perceptual learning in a subsequent cross-modal priming task, where auditory primes with devoiced final stops (e.g., "seed", pronounced [si:t(h)]), facilitated recognition of visual targets with voiced final stops (e.g., SEED). In Experiment 1, this learning effect generalized to test pairs where the critical contrast was in word-initial position, e.g., auditory primes such as "town" facilitated recognition of visual targets like DOWN. Control listeners, who had not heard any stops by the speaker during exposure, showed no learning effects. The generalization to word-initial position did not occur when participants had also heard correctly voiced, word-initial stops during exposure (Experiment 2), and when the speaker was a native BE speaker who mimicked the word-final devoicing (Experiment 3). The readiness of the perceptual system to generalize a previously learned adjustment to other positions within the word thus appears to be modulated by distributional properties of the speech input, as well as by the perceived sociophonetic characteristics of the speaker. The results suggest that the transfer of pre-lexical perceptual adjustments that occur through lexically driven learning can be affected by a combination of acoustic, phonological, and sociophonetic factors.

18.
Neuropsychologia ; 50(9): 2154-64, 2012 Jul.
Article in English | MEDLINE | ID: mdl-22609577

ABSTRACT

Noise-vocoded speech is a spectrally highly degraded signal, but it preserves the temporal envelope of speech. Listeners vary considerably in their ability to adapt to this degraded speech signal. Here, we hypothesised that individual differences in adaptation to vocoded speech should be predictable by non-speech auditory, cognitive, and neuroanatomical factors. We tested 18 normal-hearing participants in a short-term vocoded speech-learning paradigm (listening to 100 4-band-vocoded sentences). Non-speech auditory skills were assessed using amplitude modulation (AM) rate discrimination, where modulation rates were centred on the speech-relevant rate of 4 Hz. Working memory capacities were evaluated (digit span and nonword repetition), and structural MRI scans were examined for anatomical predictors of vocoded speech learning using voxel-based morphometry. Listeners who learned faster to understand degraded speech also showed smaller thresholds in the AM discrimination task. This ability to adjust to degraded speech is furthermore reflected anatomically in increased grey matter volume in an area of the left thalamus (pulvinar) that is strongly connected to the auditory and prefrontal cortices. Thus, individual non-speech auditory skills and left thalamus grey matter volume can predict how quickly a listener adapts to degraded speech.


Subjects
Psychological Adaptation/physiology, Auditory Perception/physiology, Brain/anatomy & histology, Brain/physiology, Speech Intelligibility, Speech Perception/physiology, Adult, Female, Humans, Computer-Assisted Image Processing, Individuality, Magnetic Resonance Imaging, Male, Short-Term Memory/physiology, Neuropsychological Tests, Psychomotor Performance/physiology, Young Adult
19.
J Cogn Neurosci ; 23(4): 961-77, 2011 Apr.
Article in English | MEDLINE | ID: mdl-20350182

ABSTRACT

This study investigated links between working memory and speech processing systems. We used delayed pseudoword repetition in fMRI to investigate the neural correlates of sublexical structure in phonological working memory (pWM). We orthogonally varied the number of syllables and consonant clusters in auditory pseudowords and measured the neural responses to these manipulations under conditions of covert rehearsal (Experiment 1). A left-dominant network of temporal and motor cortex showed increased activity for longer items, with motor cortex only showing greater activity concomitant with adding consonant clusters. An individual-differences analysis revealed a significant positive relationship between activity in the angular gyrus and the hippocampus, and accuracy on pseudoword repetition. As models of pWM stipulate that its neural correlates should be activated during both perception and production/rehearsal [Buchsbaum, B. R., & D'Esposito, M. The search for the phonological store: From loop to convolution. Journal of Cognitive Neuroscience, 20, 762-778, 2008; Jacquemot, C., & Scott, S. K. What is the relationship between phonological short-term memory and speech processing? Trends in Cognitive Sciences, 10, 480-486, 2006; Baddeley, A. D., & Hitch, G. Working memory. In G. H. Bower (Ed.), The psychology of learning and motivation: Advances in research and theory (Vol. 8, pp. 47-89). New York: Academic Press, 1974], we further assessed the effects of the two factors in a separate passive listening experiment (Experiment 2). In this experiment, the effect of the number of syllables was concentrated in posterior-medial regions of the supratemporal plane bilaterally, although there was no evidence of a significant response to added clusters. Taken together, the results identify the planum temporale as a key region in pWM; within this region, representations are likely to take the form of auditory or audiomotor "templates" or "chunks" at the level of the syllable [Papoutsi, M., de Zwart, J. A., Jansma, J. M., Pickering, M. J., Bednar, J. A., & Horwitz, B. From phonemes to articulatory codes: an fMRI study of the role of Broca's area in speech production. Cerebral Cortex, 19, 2156-2165, 2009; Warren, J. E., Wise, R. J. S., & Warren, J. D. Sounds do-able: auditory-motor transformations and the posterior temporal plane. Trends in Neurosciences, 28, 636-643, 2005; Griffiths, T. D., & Warren, J. D. The planum temporale as a computational hub. Trends in Neurosciences, 25, 348-353, 2002], whereas more lateral structures on the STG may deal with phonetic analysis of the auditory input [Hickok, G. The functional neuroanatomy of language. Physics of Life Reviews, 6, 121-143, 2009].


Subjects
Brain Mapping, Brain/physiology, Short-Term Memory/physiology, Phonetics, Acoustic Stimulation/methods, Adult, Analysis of Variance, Brain/anatomy & histology, Brain/blood supply, Female, Functional Laterality, Humans, Computer-Assisted Image Processing/methods, Linguistics, Magnetic Resonance Imaging/methods, Male, Oxygen/blood, Reaction Time/physiology, Young Adult
20.
J Neurosci ; 30(21): 7179-86, 2010 May 26.
Article in English | MEDLINE | ID: mdl-20505085

ABSTRACT

This study investigated the neural plasticity associated with perceptual learning of a cochlear implant (CI) simulation. Normal-hearing listeners were trained with vocoded and spectrally shifted speech simulating a CI while cortical responses were measured with functional magnetic resonance imaging (fMRI). A condition in which the vocoded speech was spectrally inverted provided a control for learnability and adaptation. Behavioral measures showed considerable individual variability both in the ability to learn to understand the degraded speech, and in phonological working memory capacity. Neurally, left-lateralized regions in superior temporal sulcus and inferior frontal gyrus (IFG) were sensitive to the learnability of the simulations, but only the activity in prefrontal cortex correlated with interindividual variation in intelligibility scores and phonological working memory. A region in left angular gyrus (AG) showed an activation pattern that reflected learning over the course of the experiment, and covariation of activity in AG and IFG was modulated by the learnability of the stimuli. These results suggest that variation in listeners' ability to adjust to vocoded and spectrally shifted speech is partly reflected in differences in the recruitment of higher-level language processes in prefrontal cortex, and that this variability may further depend on functional links between the left inferior frontal gyrus and angular gyrus. Differences in the engagement of left inferior prefrontal cortex, and its covariation with posterior parietal areas, may thus underlie some of the variation in speech perception skills that have been observed in clinical populations of CI users.


Subjects
Cochlear Implants, Frontal Lobe/physiology, Individuality, Learning/physiology, Speech Perception/physiology, Acoustic Stimulation/methods, Physiological Adaptation/physiology, Adult, Female, Frontal Lobe/blood supply, Functional Laterality, Humans, Computer-Assisted Image Processing/methods, Magnetic Resonance Imaging/methods, Male, Noise, Oxygen/blood, Predictive Value of Tests, Semantics, Sound Spectrography, Spectrum Analysis, Young Adult