1.
Psychon Bull Rev ; 2023 Oct 17.
Article in English | MEDLINE | ID: mdl-37848661

ABSTRACT

Simulation accounts of speech perception posit that speech is covertly imitated to support perception in a top-down manner. Behaviourally, covert imitation is measured through the stimulus-response compatibility (SRC) task. In each trial of a speech SRC task, participants produce a target speech sound whilst perceiving a speech distractor that either matches the target (compatible condition) or does not (incompatible condition). The degree to which the distractor is covertly imitated is captured by the automatic imitation effect, computed as the difference in response times (RTs) between compatible and incompatible trials. Simulation accounts disagree on whether covert imitation is enhanced when speech perception is challenging or instead when the speech signal is most familiar to the speaker. To test these accounts, we conducted three experiments in which participants completed SRC tasks with native and non-native sounds. Experiment 1 uncovered larger automatic imitation effects in an SRC task with non-native sounds than with native sounds. Experiment 2 replicated the finding online, demonstrating its robustness and the applicability of speech SRC tasks online. Experiment 3 intermixed native and non-native sounds within a single SRC task to disentangle effects of perceiving non-native sounds from confounding effects of producing non-native speech actions. This last experiment confirmed that automatic imitation is enhanced for non-native speech distractors, supporting a compensatory function of covert imitation in speech perception. The experiment also uncovered a separate effect of producing non-native speech actions on enhancing automatic imitation effects.
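As an illustration of the dependent measure described above (a sketch, not code from the paper), the automatic imitation effect can be computed from trial-level response times; the DataFrame layout and column names are assumptions:

```python
# Minimal sketch: automatic imitation effect as the mean RT difference
# between incompatible and compatible SRC trials. The column names
# ("condition", "rt_ms") are hypothetical, not the authors' data format.
import pandas as pd

def automatic_imitation_effect(trials: pd.DataFrame) -> float:
    """Return mean RT(incompatible) - mean RT(compatible), in ms."""
    means = trials.groupby("condition")["rt_ms"].mean()
    return means["incompatible"] - means["compatible"]

trials = pd.DataFrame({
    "condition": ["compatible", "incompatible", "compatible", "incompatible"],
    "rt_ms": [412.0, 455.0, 398.0, 471.0],
})
print(automatic_imitation_effect(trials))  # a positive value indexes covert imitation
```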

2.
Trends Hear ; 27: 23312165231192297, 2023.
Article in English | MEDLINE | ID: mdl-37547940

ABSTRACT

Speech perception performance for degraded speech can improve with practice or exposure. Such perceptual learning is thought to be reliant on attention, and theoretical accounts such as the predictive coding framework suggest a key role for attention in supporting learning. However, it is unclear whether speech perceptual learning requires undivided attention. We evaluated the role of divided attention in speech perceptual learning in two online experiments (N = 336). Experiment 1 tested the reliance of perceptual learning on undivided attention. Participants completed a speech recognition task in which they repeated forty noise-vocoded sentences in a between-group design. Participants performed the speech task alone or concurrently with a domain-general visual task (dual task) at one of three difficulty levels. We observed perceptual learning under divided attention for all four groups, moderated by dual-task difficulty. Listeners in easy and intermediate visual conditions improved as much as the single-task group. Those who completed the most challenging visual task showed faster learning and achieved similar ending performance compared to the single-task group. Experiment 2 tested whether learning relies on domain-specific or domain-general processes. Participants completed a single speech task or performed this task together with a dual task aiming to recruit domain-specific (lexical or phonological) or domain-general (visual) processes. All secondary task conditions produced patterns and amounts of learning comparable to the single speech task. Our results demonstrate that the impact of divided attention on perceptual learning is not strictly dependent on domain-general or domain-specific processes, and that speech perceptual learning persists under divided attention.
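The degradation used in this study, noise-vocoding, divides the signal into frequency bands, extracts each band's amplitude envelope, and uses the envelope to modulate band-limited noise. A minimal sketch under assumed parameters (band edges and filter order are illustrative, and the input is assumed to be sampled at 16 kHz or higher):

```python
# Minimal four-band noise-vocoder sketch (illustrative, not the authors' code).
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def noise_vocode(signal, fs, band_edges=(100, 500, 1500, 3500, 7000)):
    """Band-pass filter, extract envelopes, and modulate band-limited noise."""
    rng = np.random.default_rng(0)
    vocoded = np.zeros(len(signal))
    for lo, hi in zip(band_edges[:-1], band_edges[1:]):
        sos = butter(4, [lo, hi], btype="band", fs=fs, output="sos")
        band = sosfiltfilt(sos, signal)
        envelope = np.abs(hilbert(band))                         # amplitude envelope
        noise = sosfiltfilt(sos, rng.standard_normal(len(signal)))
        vocoded += envelope * noise                              # envelope-modulated noise
    return vocoded / np.max(np.abs(vocoded))                     # peak-normalize
```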


Subjects
Speech Perception, Speech, Humans, Learning, Noise/adverse effects, Language
3.
Psychon Bull Rev ; 30(3): 1093-1102, 2023 Jun.
Article in English | MEDLINE | ID: mdl-36443535

ABSTRACT

Observing someone perform an action automatically activates neural substrates associated with executing that action. This covert response, or automatic imitation, is measured behaviourally using the stimulus-response compatibility (SRC) task. In an SRC task, participants are presented with compatible and incompatible response-distractor pairings (e.g., an instruction to say "ba" paired with an audio recording of "da" as an example of an incompatible trial). Automatic imitation is measured as the difference in response times (RT) or accuracy between incompatible and compatible trials. Larger automatic imitation effects have been interpreted as a larger covert imitation response. Past results suggest that an action's biological status affects automatic imitation: human-produced manual actions show enhanced automatic imitation effects compared with computer-generated actions. Per the integrated theory for language comprehension and production, action observation triggers a simulation process, involving covert imitation, to recognize and interpret observed speech actions. Human-generated actions are predicted to result in increased automatic imitation because the simulation process is thought to engage more strongly for actions produced by a speaker who is more similar to the listener. To test this prediction, we conducted an online SRC task that presented participants with human- and computer-generated speech stimuli. Participants responded faster to compatible than incompatible trials, showing an overall automatic imitation effect. Yet the human-generated and computer-generated vocal stimuli evoked similar automatic imitation effects. These results suggest that computer-generated speech stimuli evoke the same covert imitative response as human stimuli, thus rejecting predictions from the integrated theory of language comprehension and production.


Subjects
Imitative Behavior, Speech, Humans, Imitative Behavior/physiology, Reaction Time, Speech/physiology, Computers
4.
Neuropsychologia ; 166: 108135, 2022 02 10.
Article in English | MEDLINE | ID: mdl-34958833

ABSTRACT

Motor areas for speech production activate during speech perception. Such activation may assist speech perception in challenging listening conditions. However, it is not known how ageing affects the recruitment of articulatory motor cortex during active speech perception; this study therefore aimed to determine the effect of ageing on the recruitment of speech motor cortex during speech perception. Single-pulse Transcranial Magnetic Stimulation (TMS) was applied to the lip area of left primary motor cortex (M1) to elicit lip Motor Evoked Potentials (MEPs). The M1 hand area was tested as a control site. TMS was applied whilst participants perceived syllables presented with noise (-10, 0, +10 dB SNRs) and without noise (clear). Participants detected and counted syllables throughout MEP recording. Twenty younger adults (aged 18-25) and twenty older adults (aged 65-78) participated in this study. Results indicated a significant interaction between age and noise condition in the syllable task. Specifically, older adults misidentified significantly more syllables in the 0 dB SNR condition, and missed significantly more syllables in the -10 dB SNR condition, relative to the clear condition. There were no differences between conditions for younger adults. There was a significant main effect of noise level on lip MEPs: lip MEPs were unexpectedly inhibited in the 0 dB SNR condition relative to the clear condition. There was no interaction between age group and noise condition, and no main effect of noise or age group on control hand MEPs. These data suggest that speech-induced facilitation in articulatory motor cortex is abolished when performing a challenging secondary task, irrespective of age.
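Presenting syllables at fixed SNRs (-10, 0, +10 dB) amounts to scaling the noise relative to the speech before mixing. A sketch, assuming RMS-based level setting (a common convention; the study may have calibrated levels differently):

```python
# Sketch: mix speech and noise at a target signal-to-noise ratio (dB),
# assuming levels are defined by RMS power.
import numpy as np

def mix_at_snr(speech, noise, snr_db):
    """Scale noise so that 20*log10(rms_speech/rms_noise) == snr_db, then mix."""
    noise = noise[: len(speech)]
    rms_speech = np.sqrt(np.mean(speech ** 2))
    rms_noise = np.sqrt(np.mean(noise ** 2))
    target_rms_noise = rms_speech / (10 ** (snr_db / 20))
    return speech + noise * (target_rms_noise / rms_noise)

# e.g., the three noise conditions: mix_at_snr(syllable, noise, -10 / 0 / +10)
```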


Subjects
Motor Cortex, Speech Perception, Adolescent, Adult, Aged, Aging, Motor Evoked Potentials/physiology, Humans, Motor Cortex/physiology, Speech/physiology, Speech Perception/physiology, Transcranial Magnetic Stimulation, Young Adult
5.
J Speech Lang Hear Res ; 64(9): 3432-3445, 2021 09 14.
Article in English | MEDLINE | ID: mdl-34463528

ABSTRACT

Purpose: Visual cues from a speaker's face may benefit perceptual adaptation to degraded speech, but current evidence is limited. We aimed to replicate results from previous studies to establish the extent to which visual speech cues can lead to greater adaptation over time, extending existing results to a real-time adaptation paradigm (i.e., without a separate training period). A second aim was to investigate whether eye gaze patterns toward the speaker's mouth were related to better perception, hypothesizing that listeners who looked more at the speaker's mouth would show greater adaptation.
Method: A group of listeners (n = 30) was presented with 90 noise-vocoded sentences in audiovisual format, whereas a control group (n = 29) was presented with the audio signal only. Recognition accuracy was measured throughout and eye tracking was used to measure fixations toward the speaker's eyes and mouth in the audiovisual group.
Results: Previous studies were partially replicated: The audiovisual group had better recognition throughout and adapted slightly more rapidly, but both groups showed an equal amount of improvement overall. Longer fixations on the speaker's mouth in the audiovisual group were related to better overall accuracy. An exploratory analysis further demonstrated that the duration of fixations to the speaker's mouth decreased over time.
Conclusions: The results suggest that visual cues may not benefit adaptation to degraded speech as much as previously thought. Longer fixations on a speaker's mouth may play a role in successfully decoding visual speech cues; however, this will need to be confirmed in future research to fully understand how patterns of eye gaze are related to audiovisual speech recognition. All materials, data, and code are available at https://osf.io/2wqkf/.
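The gaze measure of interest here is the share of fixation time falling on the speaker's mouth. A minimal sketch (the fixation record format and rectangular ROI are illustrative assumptions):

```python
# Sketch: proportion of fixation time inside a rectangular mouth ROI.
# Fixations are hypothetical (x, y, duration_ms) tuples from an eye tracker.
def prop_mouth_fixation(fixations, mouth_roi):
    """fixations: iterable of (x, y, duration_ms); mouth_roi: (x0, y0, x1, y1)."""
    x0, y0, x1, y1 = mouth_roi
    total = sum(dur for _, _, dur in fixations)
    in_roi = sum(dur for x, y, dur in fixations
                 if x0 <= x <= x1 and y0 <= y <= y1)
    return in_roi / total if total else 0.0

print(prop_mouth_fixation([(310, 420, 180), (500, 100, 90)], (250, 380, 400, 480)))
```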


Subjects
Speech Perception, Speech, Ocular Fixation, Humans, Mouth, Visual Perception
6.
J Speech Lang Hear Res ; 64(7): 2513-2528, 2021 07 16.
Article in English | MEDLINE | ID: mdl-34161748

ABSTRACT

Purpose: This study first aimed to establish whether viewing specific parts of the speaker's face (eyes or mouth), compared with viewing the whole face, affected adaptation to distorted noise-vocoded sentences. Second, this study aimed to replicate results on processing of distorted speech from lab-based experiments in an online setup.
Method: We monitored recognition accuracy online while participants listened to noise-vocoded sentences. We first established whether participants were able to perceive and adapt to audiovisual four-band noise-vocoded sentences when the entire moving face was visible (AV Full). Four further groups were then tested: a group in which participants viewed the moving lower part of the speaker's face (AV Mouth), a group in which participants saw only the moving upper part of the face (AV Eyes), a group in which participants could see neither the moving lower nor the moving upper face (AV Blocked), and a group in which participants saw an image of a still face (AV Still).
Results: Participants repeated around 40% of the key words correctly and adapted during the experiment, but only when the moving mouth was visible. In contrast, performance was at floor level, and no adaptation took place, in conditions in which the moving mouth was occluded.
Conclusions: The results show the importance of being able to observe relevant visual speech information from the speaker's mouth region, but not the eyes/upper face region, when listening and adapting to distorted sentences online. The results also demonstrate that it is feasible to run speech perception and adaptation studies online, but that not all findings reported for lab studies replicate. Supplemental Material: https://doi.org/10.23641/asha.14810523.


Subjects
Speech Perception, Speech, Auditory Perception, Cues, Humans, Noise, Visual Perception
7.
J Acoust Soc Am ; 147(5): 3348, 2020 05.
Article in English | MEDLINE | ID: mdl-32486777

ABSTRACT

Listening to degraded speech is associated with decreased intelligibility and increased effort. However, listeners are generally able to adapt to certain types of degradations. While intelligibility of degraded speech is modulated by talker acoustics, it is unclear whether talker acoustics also affect effort and adaptation. Moreover, it has been demonstrated that talker differences are preserved across spectral degradations, but it is not known whether this effect extends to temporal degradations and which acoustic-phonetic characteristics are responsible. In a listening experiment combined with pupillometry, participants were presented with speech in quiet as well as in masking noise, time-compressed, and noise-vocoded speech by 16 Southern British English speakers. Results showed that intelligibility, but not adaptation, was modulated by talker acoustics. Talkers who were more intelligible under noise-vocoding were also more intelligible under masking and time-compression. This effect was linked to acoustic-phonetic profiles with greater vowel space dispersion (VSD) and energy in mid-range frequencies, as well as slower speaking rate. While pupil dilation indicated increasing effort with decreasing intelligibility, this study also linked reduced effort in quiet to talkers with greater VSD. The results emphasize the relevance of talker acoustics for intelligibility and effort in degraded listening conditions.
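Vowel space dispersion (VSD), one of the acoustic-phonetic measures linked to intelligibility above, is commonly operationalized as the mean Euclidean distance of a talker's vowel tokens from the centroid of their F1-F2 space. A sketch under that assumption (the paper's exact formula may differ):

```python
# Sketch: vowel space dispersion as mean distance from the F1/F2 centroid.
import numpy as np

def vowel_space_dispersion(formants):
    """formants: array-like of shape (n_tokens, 2) with (F1, F2) in Hz."""
    formants = np.asarray(formants, dtype=float)
    centroid = formants.mean(axis=0)                        # talker's vowel-space centre
    return np.linalg.norm(formants - centroid, axis=1).mean()

print(vowel_space_dispersion([(300, 2300), (700, 1200), (500, 1700)]))
```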


Subjects
Speech Intelligibility, Speech Perception, Acoustics, Humans, Noise, Perceptual Masking, Speech Acoustics
8.
J Acoust Soc Am ; 147(4): 2728, 2020 04.
Article in English | MEDLINE | ID: mdl-32359293

ABSTRACT

Few studies thus far have investigated whether perception of distorted speech is consistent across different types of distortion. This study investigated whether participants show a consistent perceptual profile across three speech distortions: time-compressed speech, noise-vocoded speech, and speech in noise. Additionally, this study investigated whether and how individual differences in performance on a battery of audiological and cognitive tasks link to perception. Eighty-eight participants completed a speeded sentence-verification task, with increases in accuracy and reductions in response times used to indicate performance. Audiological and cognitive measures included pure-tone audiometry, speech recognition threshold, working memory, vocabulary knowledge, attention switching, and pattern analysis. Despite previous studies suggesting that temporal and spectral/environmental perception require different lexical or phonological mechanisms, this study shows significant positive correlations in accuracy and response-time performance across all distortions. Results of a principal component analysis and multiple linear regressions suggest that a component based on vocabulary knowledge and working memory predicted performance in the speech-in-quiet, time-compressed, and speech-in-noise conditions. These results suggest that listeners employ a similar cognitive strategy to perceive different temporal and spectral/environmental speech distortions, and that this mechanism is supported by vocabulary knowledge and working memory.
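The analysis pipeline named above, a principal component analysis over the task battery followed by multiple linear regression, can be sketched as follows (array shapes and variable names are illustrative, with random data standing in for the real measures):

```python
# Sketch of the PCA + regression pipeline (illustrative data, not the study's).
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
battery = rng.standard_normal((88, 6))   # 88 listeners x 6 audiological/cognitive measures
accuracy = rng.random(88)                # sentence-verification accuracy, one condition

components = PCA(n_components=2).fit_transform(battery)   # component scores per listener
model = LinearRegression().fit(components, accuracy)      # predict accuracy from components
print(model.coef_, model.score(components, accuracy))     # betas and R^2
```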


Subjects
Speech Perception, Speech, Cognition, Humans, Noise/adverse effects, Speech Discrimination Tests
9.
J Cogn Neurosci ; 32(6): 1092-1103, 2020 06.
Article in English | MEDLINE | ID: mdl-31933438

ABSTRACT

Successful perception of speech in everyday listening conditions requires effective listening strategies to overcome common acoustic distortions, such as background noise. Convergent evidence from neuroimaging and clinical studies identifies activation within the temporal lobes as key to successful speech perception. However, current neurobiological models disagree on whether the left temporal lobe is sufficient for successful speech perception or whether bilateral processing is required. We addressed this issue by using TMS to selectively disrupt processing in either the left or the right superior temporal gyrus (STG) of healthy participants, testing whether the left temporal lobe alone is sufficient or whether both left and right STG are essential. Participants repeated keywords from sentences presented in background noise in a speech reception threshold task while receiving online repetitive TMS separately to the left STG, right STG, or vertex, or while receiving no TMS. Results show an equal drop in performance following application of TMS to either the left or the right STG during the task. A separate group of participants performed a visual discrimination threshold task to control for the confounding side effects of TMS. Results show no effect of TMS on the control task, supporting the notion that the results of Experiment 1 can be attributed to modulation of cortical functioning in STG rather than to side effects associated with online TMS. These results indicate that successful speech perception in everyday listening conditions requires both left and right STG, and thus have ramifications for our understanding of the neural organization of spoken language processing.


Subjects
Functional Laterality/physiology, Speech Perception/physiology, Temporal Lobe/physiology, Transcranial Magnetic Stimulation, Adolescent, Adult, Female, Humans, Male, Noise, Sensory Thresholds/physiology, Speech/physiology, Visual Perception/physiology, Young Adult
10.
Q J Exp Psychol (Hove) ; 72(12): 2833-2847, 2019 Dec.
Article in English | MEDLINE | ID: mdl-31331238

ABSTRACT

Observing someone speak automatically triggers the cognitive and neural mechanisms required to produce speech, a phenomenon known as automatic imitation. Automatic imitation of speech can be measured using the Stimulus-Response Compatibility (SRC) paradigm, which shows facilitated response times (RTs) when responding to a prompt (e.g., say aa) in the presence of a congruent distracter (a video of someone saying aa), compared with responding in the presence of an incongruent distracter (a video of someone saying oo). Current models of the relation between emotion and cognitive control suggest that automatic imitation can be modulated by varying stimulus-driven task aspects, that is, the distracter's emotional valence. It is unclear, however, how the emotional state of the observer affects automatic imitation. The current study explored independent effects of the emotional valence of the distracter (Stimulus-driven Dependence) and the observer's emotional state (State Dependence) on automatic imitation of speech. Participants completed an SRC paradigm for visual speech stimuli. They produced a prompt superimposed over a neutral or emotional (happy or angry) distracter video. State Dependence was manipulated by asking participants to speak the prompt in a neutral or emotional (happy or angry) voice. Automatic imitation was facilitated for emotional prompts, but not for emotional distracters, thus implying a facilitating effect of State Dependence. The results are interpreted in the context of theories of automatic imitation and cognitive control, and we suggest that models of automatic imitation should be modified to accommodate state-dependent and stimulus-driven effects.


Subjects
Attention/physiology, Emotions/physiology, Executive Function/physiology, Imitative Behavior/physiology, Speech/physiology, Visual Perception/physiology, Adult, Humans, Young Adult
11.
Front Hum Neurosci ; 13: 195, 2019.
Article in English | MEDLINE | ID: mdl-31244631

ABSTRACT

Motor imagery refers to the phenomenon of imagining performing an action without action execution. Motor imagery and motor execution are assumed to share a similar underlying neural system that involves primary motor cortex (M1). Previous studies have focused on motor imagery of manual actions, but articulatory motor imagery has not been investigated. In this study, transcranial magnetic stimulation (TMS) was used to elicit motor-evoked potentials (MEPs) from the articulatory muscles [orbicularis oris (OO)] as well as from hand muscles [first dorsal interosseous (FDI)]. Twenty participants were asked to execute or imagine performing a simple squeezing task involving a pair of tweezers, which was comparable across both effectors. MEPs were elicited at six time points (50, 150, 250, 350, 450, 550 ms post-stimulus) to track the time course of M1 involvement in both lip and hand tasks. The results showed increased MEP amplitudes for action execution compared to rest for both effectors at time points 350, 450 and 550 ms, but we found no evidence of increased cortical activation for motor imagery. The results indicate that motor imagery does not involve M1 for simple tasks for manual or articulatory muscles. The results have implications for models of mental imagery of simple articulatory gestures, in that no evidence is found for somatotopic activation of lip muscles in sub-phonemic contexts during motor imagery of such tasks, suggesting that motor simulation of relatively simple actions does not involve M1.

12.
Psychon Bull Rev ; 26(5): 1711-1718, 2019 Oct.
Article in English | MEDLINE | ID: mdl-31197755

ABSTRACT

The observation-execution links underlying automatic-imitation processes are suggested to result from associative sensorimotor experience of performing and watching the same actions. Past research supporting the associative sequence learning (ASL) model has demonstrated that sensorimotor training modulates automatic imitation of perceptually transparent manual actions, but ASL has been criticized for not being able to account for opaque actions, such as orofacial movements that include visual speech. To investigate whether the observation-execution links underlying opaque actions are as flexible as has been demonstrated for transparent actions, we tested whether sensorimotor training modulated the automatic imitation of visual speech. Automatic imitation was defined as a facilitation in response times for syllable articulation (ba or da) when in the presence of a compatible visual speech distractor, relative to when in the presence of an incompatible distractor. Participants received either mirror (say /ba/ when the speaker silently says /ba/, and likewise for /da/) or countermirror (say /da/ when the speaker silently says /ba/, and vice versa) training, and automatic imitation was measured before and after training. The automatic-imitation effect was enhanced following mirror training and reduced following countermirror training, suggesting that sensorimotor learning plays a critical role in linking speech perception and production, and that the links between these two systems remain flexible in adulthood. Additionally, as compared to manual movements, automatic imitation of speech was susceptible to mirror training but was relatively resilient to countermirror training. We propose that social factors and the multimodal nature of speech might account for the resilience to countermirror training of sensorimotor associations of speech actions.


Subjects
Attention/physiology, Imitative Behavior/physiology, Learning/physiology, Speech Perception/physiology, Speech/physiology, Visual Perception/physiology, Adult, Female, Humans, Male
14.
Front Neurosci ; 12: 683, 2018.
Article in English | MEDLINE | ID: mdl-30483044

ABSTRACT

This study aimed to characterize the effects of coil orientation on the size of Motor Evoked Potentials (MEPs) from both sides of Orbicularis Oris (OO) and both First Dorsal Interosseous (FDI) muscles, following stimulation of the left lip and left hand areas of Primary Motor Cortex. Using a 70 mm figure-of-eight coil, we collected MEPs from eight different orientations while recording from contralateral and ipsilateral OO and FDI, using a monophasic pulse delivered at 120% of active motor threshold. MEPs from OO were evoked consistently for six orientations for contralateral and ipsilateral sites. Contralateral orientations 0°, 45°, 90°, and 315° were found to best elicit OO MEPs with a likely cortical origin. The largest FDI MEPs were recorded for contralateral 45°, inducing a posterior-anterior (PA) current flow. Orientations traditionally used for FDI were also found to be suitable for eliciting OO MEPs. Individuals vary more in their optimal orientation for OO than for FDI. It is therefore recommended that researchers iteratively probe several orientations when eliciting MEPs from OO. Several orientations likely induced direct activation of facial muscles.

15.
Atten Percept Psychophys ; 80(5): 1290-1299, 2018 Jul.
Article in English | MEDLINE | ID: mdl-29536418

ABSTRACT

When we observe someone else speaking, we tend to automatically activate the corresponding speech motor patterns; when listening, we therefore covertly imitate the observed speech. Simulation theories of speech perception propose that covert imitation of speech motor patterns supports speech perception. Covert imitation of speech has been studied with interference paradigms, including the stimulus-response compatibility (SRC) paradigm. The SRC paradigm measures covert imitation by comparing the speed of articulating a prompt following exposure to congruent versus incongruent distracters. Responses tend to be faster for congruent than for incongruent distracters, thus showing evidence of covert imitation. However, covert imitation has thus far only been demonstrated for a select class of speech sounds, namely consonants, and it is unclear whether covert imitation extends to vowels. In two experiments, we aimed to demonstrate that covert imitation effects as measured with the SRC paradigm extend to vowels. We examined whether covert imitation occurs for vowels in a consonant-vowel-consonant context in visual, audio, and audiovisual modalities. We presented the prompt at four time points to examine how covert imitation varied over the distracter's duration. The results of both experiments clearly demonstrated covert imitation effects for vowels, thus supporting simulation theories of speech perception. Covert imitation was not affected by stimulus modality and was maximal for later time points.


Subjects
Imitative Behavior/physiology, Phonetics, Reaction Time/physiology, Speech Perception/physiology, Adolescent, Adult, Auditory Perception/physiology, Female, Humans, Male, Speech/physiology, Video Recording/methods, Young Adult
16.
Brain Lang ; 187: 74-82, 2018 12.
Article in English | MEDLINE | ID: mdl-29397191

ABSTRACT

Primary motor (M1) areas for speech production activate during speech perception. It has been suggested that such activation may be dependent upon modulatory inputs from ventral premotor cortex (PMv). However, whether and how PMv differentially modulates M1 activity during perception of speech that is easy or challenging to understand remains unclear. This study aimed to test the link between PMv and M1 during challenging speech perception in two experiments. The first experiment investigated intra-hemispheric connectivity between left hemisphere PMv and the left M1 lip area during comprehension of speech under clear and distorted listening conditions. Continuous theta burst stimulation (cTBS) was applied to left PMv in eighteen participants (aged 18-35). Post-cTBS, participants performed a sentence verification task on distorted (imprecisely articulated) and clear speech, whilst also undergoing stimulation of the lip representation in left M1 to elicit motor evoked potentials (MEPs). In a second, separate experiment, we investigated the role of inter-hemispheric connectivity between right hemisphere PMv and the left hemisphere M1 lip area. Dual-coil transcranial magnetic stimulation was applied to right PMv and left M1 lip in fifteen participants (aged 18-35). Results indicated that disruption of PMv during speech perception affects comprehension of distorted speech specifically. Furthermore, our data suggest that listening to distorted speech modulates the balance of intra- and inter-hemispheric interactions, with a larger sensorimotor network implicated during comprehension of distorted speech than when speech perception is optimal. The present results further our understanding of PMv-M1 interactions during auditory-motor integration.


Subjects
Motor Cortex/physiology, Speech Perception, Adolescent, Adult, Motor Evoked Potentials, Female, Functional Laterality, Humans, Male
17.
Neuroimage ; 159: 18-31, 2017 10 01.
Article in English | MEDLINE | ID: mdl-28669904

ABSTRACT

Sensorimotor transformation (ST) may be a critical process in mapping perceived speech input onto non-native (L2) phonemes, in support of subsequent speech production. Yet, little is known concerning the role of ST with respect to L2 speech, particularly where learned L2 phones (e.g., vowels) must be produced in more complex lexical contexts (e.g., multi-syllabic words). Here, we charted the behavioral and neural outcomes of producing trained L2 vowels at word level, using a speech imitation paradigm and functional MRI. We asked whether participants would be able to faithfully imitate trained L2 vowels when they occurred in non-words of varying complexity (one or three syllables). Moreover, we related individual differences in imitation success during training to BOLD activation during ST (i.e., pre-imitation listening), and during later imitation. We predicted that superior temporal and peri-Sylvian speech regions would show increased activation as a function of item complexity and non-nativeness of vowels, during ST. We further anticipated that pre-scan acoustic learning performance would predict BOLD activation for non-native (vs. native) speech during ST and imitation. We found individual differences in imitation success for training on the non-native vowel tokens in isolation; these were preserved in a subsequent task, during imitation of mono- and trisyllabic words containing those vowels. fMRI data revealed a widespread network involved in ST, modulated by both vowel nativeness and utterance complexity: superior temporal activation increased monotonically with complexity, showing greater activation for non-native than native vowels when presented in isolation and in trisyllables, but not in monosyllables. Individual differences analyses showed that learning versus lack of improvement on the non-native vowel during pre-scan training predicted increased ST activation for non-native compared with native items, at insular cortex, pre-SMA/SMA, and cerebellum. Our results hold implications for the importance of ST as a process underlying successful imitation of non-native speech.


Subjects
Brain/physiology, Learning/physiology, Multilingualism, Speech/physiology, Adolescent, Adult, Brain Mapping, Female, Humans, Magnetic Resonance Imaging, Young Adult
18.
Cereb Cortex ; 27(5): 3064-3079, 2017 05 01.
Article in English | MEDLINE | ID: mdl-28334401

ABSTRACT

Imitating speech necessitates the transformation from sensory targets to vocal tract motor output, yet little is known about the representational basis of this process in the human brain. Here, we address this question by using real-time MR imaging (rtMRI) of the vocal tract and functional MRI (fMRI) of the brain in a speech imitation paradigm. Participants trained on imitating a native vowel and a similar nonnative vowel that required lip rounding. Later, participants imitated these vowels and an untrained vowel pair during separate fMRI and rtMRI runs. Univariate fMRI analyses revealed that regions including left inferior frontal gyrus were more active during sensorimotor transformation (ST) and production of nonnative vowels, compared with native vowels; further, ST for nonnative vowels activated somatomotor cortex bilaterally, compared with ST of native vowels. Using representational similarity analysis (RSA) models constructed from participants' vocal tract images and from stimulus formant distances, searchlight analyses of the fMRI data showed that either type of model could be represented in somatomotor, temporal, cerebellar, and hippocampal neural activation patterns during ST. We thus provide the first evidence of widespread and robust cortical and subcortical neural representation of vocal tract and/or formant parameters during prearticulatory ST.
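At the core of an RSA searchlight is the representational dissimilarity matrix (RDM): pairwise distances between condition-wise activation patterns, compared against a model RDM (here, formant distances). A compact sketch with assumed shapes (four vowel conditions, random patterns standing in for voxel data):

```python
# Sketch: neural RDM from activation patterns (correlation distance) compared
# with a model RDM built from hypothetical F1/F2 formant values.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

patterns = np.random.default_rng(2).standard_normal((4, 50))  # 4 vowels x 50 voxels
neural_rdm = pdist(patterns, metric="correlation")            # condensed 1 - r distances
formants = np.array([[300, 2300], [350, 800], [700, 1200], [500, 1700]])
model_rdm = pdist(formants)                                   # Euclidean formant distances
rho, p = spearmanr(neural_rdm, model_rdm)                     # model-neural correspondence
print(rho, p)
```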


Subjects
Brain Mapping, Larynx/diagnostic imaging, Lip/diagnostic imaging, Sensorimotor Cortex/physiology, Speech/physiology, Tongue/diagnostic imaging, Adult, Female, Humans, Computer-Assisted Image Processing, Magnetic Resonance Imaging, Male, Oxygen/blood, Soft Palate/diagnostic imaging, Sensorimotor Cortex/diagnostic imaging, Speech Acoustics, Young Adult
19.
Neuropsychologia ; 94: 13-22, 2017 Jan 08.
Article in English | MEDLINE | ID: mdl-27884757

ABSTRACT

Excitability of articulatory motor cortex is facilitated when listening to speech in challenging conditions. Beyond this, however, we have little knowledge of what listener-specific and speech-specific factors engage articulatory facilitation during speech perception. For example, it is unknown whether speech motor activity is independent of, or dependent on, the form of distortion in the speech signal. It is also unknown whether speech motor facilitation is moderated by hearing ability. We investigated these questions in two experiments. We applied transcranial magnetic stimulation (TMS) to the lip area of primary motor cortex (M1) in young, normally hearing participants to test whether lip M1 is sensitive to the quality (Experiment 1) or quantity (Experiment 2) of distortion in the speech signal, and whether lip M1 facilitation relates to the hearing ability of the listener. Experiment 1 found that lip motor evoked potentials (MEPs) were larger during perception of motor-distorted speech that had been produced using a tongue depressor, and during perception of speech presented in background noise, relative to natural speech in quiet. Experiment 2 did not find evidence of motor system facilitation when speech was presented in noise at signal-to-noise ratios where speech intelligibility was at 50% or 75%, noise levels significantly less severe than those used in Experiment 1. However, there was a significant interaction between noise condition and hearing ability, which indicated that when speech stimuli were correctly classified at 50%, speech motor facilitation was observed in individuals with better hearing, whereas individuals with relatively worse but still normal hearing showed more activation during perception of clear speech. These findings indicate that the motor system may be sensitive to the quantity, but not the quality, of degradation in the speech signal. The data support the notion that motor cortex complements auditory cortex during speech perception, and point to a role for motor cortex in compensating for differences in hearing ability.


Subjects
Lip/physiology, Motor Cortex/physiology, Speech Perception/physiology, Speech/physiology, Acoustic Stimulation/methods, Adolescent, Adult, Electromyography, Motor Evoked Potentials, Facial Muscles/physiology, Female, Hearing Tests, Humans, Linear Models, Male, Transcranial Magnetic Stimulation, Young Adult
20.
Neuroimage ; 128: 218-226, 2016 Mar.
Article in English | MEDLINE | ID: mdl-26732405

ABSTRACT

It has become increasingly evident that human motor circuits are active during speech perception. However, the conditions under which the motor system modulates speech perception are not clear. Two prominent accounts make distinct predictions for how listening to speech engages speech motor representations. The first account suggests that the motor system is most strongly activated when observing familiar actions (Pickering and Garrod, 2013). Conversely, Wilson and Knoblich's account asserts that motor excitability is greatest when observing less familiar, ambiguous actions (Wilson and Knoblich, 2005). We investigated these predictions using transcranial magnetic stimulation (TMS). Stimulation of the lip and hand representations in the left primary motor cortex elicited motor evoked potentials (MEPs) indexing the excitability of the underlying motor representation. MEPs for lip, but not for hand, were larger during perception of distorted speech produced using a tongue depressor, relative to naturally produced speech. Additional somatotopic facilitation yielded significantly larger MEPs during perception of lip-articulated distorted speech sounds relative to distorted tongue-articulated sounds. Critically, there was a positive correlation between MEP size and the perception of distorted speech sounds. These findings are consistent with the predictions made by Wilson and Knoblich (2005), and provide direct evidence of increased motor excitability when speech perception is difficult.
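The brain-behaviour link reported above is a bivariate correlation between MEP size and perception of distorted speech. A sketch with made-up numbers:

```python
# Sketch: correlate lip MEP amplitude with distorted-speech perception
# accuracy across participants (values are illustrative, not the study's).
from scipy.stats import pearsonr

mep_size = [1.2, 0.8, 1.5, 1.1, 0.9]        # normalized lip MEP amplitude
accuracy = [0.71, 0.55, 0.83, 0.66, 0.58]   # proportion of distorted items perceived
r, p = pearsonr(mep_size, accuracy)          # positive r = larger MEPs, better perception
print(r, p)
```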


Subjects
Motor Evoked Potentials/physiology, Motor Cortex/physiology, Speech Perception/physiology, Acoustic Stimulation, Adult, Electromyography, Female, Humans, Lip/innervation, Male, Transcranial Magnetic Stimulation, Young Adult