Results 1 - 20 of 55
1.
Brain Lang ; 253: 105415, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38692095

ABSTRACT

With age, the speech system undergoes important changes that render speech production more laborious, slower and often less intelligible. Yet the neural mechanisms that underlie these age-related changes remain unclear. In this EEG study, we examined two important mechanisms of speech motor control in 20 healthy young and 20 healthy older adults: the pre-speech movement-related cortical potential (MRCP), which reflects speech motor planning, and speaking-induced suppression (SIS), which indexes auditory predictions derived from speech motor commands. Participants undertook a vowel production task followed by passive listening to their own recorded vowels. Our results revealed extensive differences in MRCP in older compared to younger adults. Further, while longer N1 and P2 latencies were observed in older adults, SIS was preserved. The reduced MRCP appears to be a potential explanatory mechanism for the known age-related slowing of speech production, while the preserved SIS suggests intact motor-to-auditory integration.
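As a rough illustration of how such a suppression index can be quantified, the following Python sketch computes SIS from baseline-corrected, single-electrode EEG epochs; the array layout, N1 search window and sign convention are assumptions made for illustration, not details reported in the article.

```python
import numpy as np

def n1_amplitude(epochs, times, window=(0.08, 0.15)):
    """Most negative value of the trial-averaged ERP within an assumed 80-150 ms N1 window."""
    erp = epochs.mean(axis=0)                      # epochs: (n_trials, n_samples), one electrode
    mask = (times >= window[0]) & (times <= window[1])
    return erp[mask].min()

def speaking_induced_suppression(speak_epochs, listen_epochs, times):
    """SIS index: reduction of N1 magnitude while speaking relative to passive listening.

    Positive values indicate a suppressed (less negative) N1 during speaking,
    i.e., the classical speaking-induced suppression effect.
    """
    return abs(n1_amplitude(listen_epochs, times)) - abs(n1_amplitude(speak_epochs, times))
```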


Subject(s)
Aging , Electroencephalography , Speech , Humans , Speech/physiology , Aged , Male , Female , Adult , Aging/physiology , Young Adult , Middle Aged , Cerebral Cortex/physiology , Movement/physiology , Speech Perception/physiology , Evoked Potentials/physiology
2.
Neuropsychologia ; 198: 108866, 2024 Jun 06.
Article in English | MEDLINE | ID: mdl-38518889

ABSTRACT

Previous psychophysical and neurophysiological studies in young healthy adults have provided evidence that audiovisual speech integration occurs with a large degree of temporal tolerance around true simultaneity. To further determine whether audiovisual speech asynchrony modulates auditory cortical processing and neural binding in young healthy adults, N1/P2 auditory evoked responses were compared using an additive model during a syllable categorization task, with or without an audiovisual asynchrony ranging from 240 ms of visual lead to 240 ms of auditory lead. Consistent with previous psychophysical findings, the results converge in favor of an asymmetric temporal integration window. Three main findings emerged: (1) predictive temporal and phonetic cues from pre-phonatory visual movements preceding the acoustic onset appeared essential for neural binding to occur; (2) audiovisual synchrony, with visual pre-phonatory movements predictive of the onset of the acoustic signal, was a prerequisite for N1 latency facilitation; and (3) P2 amplitude suppression and latency facilitation occurred even when visual pre-phonatory movements were predictive not of the acoustic onset but of the syllable to come. Taken together, these findings help clarify how audiovisual speech integration partly operates through two stages of visually based temporal and phonetic predictions.
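To make the additive-model logic concrete, here is a minimal Python sketch comparing the bimodal response with the sum of the unimodal responses within a component window; the array shapes, electrode selection and time windows are illustrative assumptions, not the study's actual pipeline.

```python
import numpy as np

def additive_model_difference(av_epochs, a_epochs, v_epochs, times, window):
    """Mean AV - (A + V) difference within a component time window.

    Each *_epochs array has shape (n_trials, n_samples) for a single electrode,
    baseline-corrected. A non-zero difference within the N1 or P2 window is
    taken as evidence of audiovisual neural binding under the additive model.
    """
    erp_av = av_epochs.mean(axis=0)
    erp_sum = a_epochs.mean(axis=0) + v_epochs.mean(axis=0)
    mask = (times >= window[0]) & (times <= window[1])
    return float((erp_av[mask] - erp_sum[mask]).mean())

# Example use (windows illustrative): repeat for each asynchrony condition and test
# whether the difference vanishes as the visual or auditory lead increases.
# n1_effect = additive_model_difference(av, a, v, times, window=(0.08, 0.15))
# p2_effect = additive_model_difference(av, a, v, times, window=(0.15, 0.25))
```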


Subject(s)
Acoustic Stimulation , Electroencephalography , Evoked Potentials, Auditory , Speech Perception , Visual Perception , Humans , Male , Female , Young Adult , Adult , Speech Perception/physiology , Visual Perception/physiology , Evoked Potentials, Auditory/physiology , Photic Stimulation , Reaction Time/physiology , Speech/physiology , Auditory Perception/physiology
3.
Brain Lang ; 247: 105359, 2023 12.
Article in English | MEDLINE | ID: mdl-37951157

ABSTRACT

Visual information from a speaker's face enhances auditory neural processing and speech recognition. To determine whether auditory memory can be influenced by visual speech, we used EEG to examine the degree of auditory neural adaptation to an auditory syllable preceded by an auditory, visual, or audiovisual syllable. Consistent with previous findings and with additional adaptation of auditory neurons tuned to acoustic features, stronger adaptation of the N1, P2 and N2 auditory evoked responses was observed when the auditory syllable was preceded by an auditory rather than a visual syllable. However, adaptation was weaker when the auditory syllable was preceded by an audiovisual rather than an auditory syllable, although still stronger than when it was preceded by a visual syllable, and longer N1 and P2 latencies were also observed in that condition. These results further demonstrate that visual speech acts on auditory memory but suggest competing visual influences in the case of audiovisual stimulation.
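A minimal sketch of how such neural adaptation could be quantified, assuming baseline-corrected single-electrode epochs for the second (auditory) syllable in each preceding-context condition; the component windows, labels and function names are hypothetical.

```python
import numpy as np

def component_amplitude(epochs, times, window):
    """Mean trial-averaged ERP amplitude within a component time window."""
    mask = (times >= window[0]) & (times <= window[1])
    return float(epochs.mean(axis=0)[mask].mean())

def adaptation_by_context(context_epochs, times, window):
    """Absolute component amplitude of the target auditory syllable per preceding context.

    context_epochs: dict mapping context labels ('auditory', 'visual', 'audiovisual')
    to epoch arrays of shape (n_trials, n_samples). Smaller absolute amplitudes
    indicate stronger adaptation of the response to the repeated syllable.
    """
    return {name: abs(component_amplitude(ep, times, window))
            for name, ep in context_epochs.items()}
```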


Subject(s)
Speech Perception , Humans , Speech Perception/physiology , Speech , Electroencephalography , Visual Perception/physiology , Auditory Perception/physiology , Evoked Potentials, Auditory/physiology , Acoustic Stimulation , Photic Stimulation
4.
Brain Lang ; 235: 105196, 2022 12.
Article in English | MEDLINE | ID: mdl-36343508

ABSTRACT

In face-to-face communication, visual information from a speaker's face and the time-varying kinematics of articulatory movements have been shown to fine-tune auditory neural processing and improve speech recognition. To further determine whether the timing of visual gestures modulates auditory cortical processing, three sets of syllables differing only in the onset and duration of silent prephonatory movements preceding the acoustic speech signal were contrasted using EEG. Despite similar visual recognition rates, the amplitude of P2 auditory evoked responses increased from the longest to the shortest movements. Taken together, these results clarify how audiovisual speech perception partly operates through visually based predictions and the related processing time, with acoustic-phonetic neural processing paralleling the timing of visual prephonatory gestures.


Subject(s)
Speech Perception , Speech , Humans , Speech/physiology , Visual Perception/physiology , Auditory Perception/physiology , Speech Perception/physiology , Evoked Potentials, Auditory/physiology , Acoustic Stimulation
5.
Cortex ; 152: 21-35, 2022 07.
Article in English | MEDLINE | ID: mdl-35490663

ABSTRACT

During speaking or listening, endogenous motor or exogenous visual processes have been shown to fine-tune the auditory neural processing of the incoming acoustic speech signal. To compare the impact of these cross-modal effects on auditory evoked responses, two sets of speech production and perception tasks were contrasted using EEG. In the first set, participants produced vowels in a self-paced manner while listening to their auditory feedback. Following the production task, they passively listened to the entire recorded speech sequence. In the second set, the procedure was identical except that participants also watched their own articulatory movements online. While both endogenous motor and exogenous visual processes fine-tuned auditory neural processing, these cross-modal effects acted differentially on the amplitude and latency of auditory evoked responses. A reduced amplitude of auditory evoked responses was observed during speaking compared to listening, irrespective of auditory or audiovisual feedback. Adding orofacial visual movements to the acoustic speech signal also shortened the latency of auditory evoked responses, irrespective of the perception or production task. Taken together, these results suggest distinct motor and visual influences on auditory neural processing, possibly through different neural gating and predictive mechanisms.
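As a complement to the amplitude measure sketched earlier, the latency effect can be operationalized as a peak-latency shift; the sketch below extracts the N1 peak latency from trial-averaged data, with the search window and variable names as illustrative assumptions.

```python
import numpy as np

def n1_peak_latency(epochs, times, window=(0.08, 0.15)):
    """Latency (in seconds) of the N1 negative peak within an assumed 80-150 ms window.

    epochs: (n_trials, n_samples) array for one electrode, baseline-corrected.
    """
    erp = epochs.mean(axis=0)
    mask = (times >= window[0]) & (times <= window[1])
    return float(times[mask][np.argmin(erp[mask])])

# A visually driven latency facilitation would appear as
# n1_peak_latency(audiovisual_epochs, times) < n1_peak_latency(auditory_epochs, times).
```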


Subject(s)
Speech Perception , Acoustic Stimulation , Auditory Perception/physiology , Evoked Potentials, Auditory/physiology , Feedback, Sensory/physiology , Humans , Speech/physiology , Speech Perception/physiology
6.
Brain Lang ; 225: 105058, 2022 02.
Article in English | MEDLINE | ID: mdl-34929531

ABSTRACT

Both visual articulatory gestures and orthography provide information about the phonological content of speech. This EEG study investigated the integration of speech with each of these two visual inputs. Skilled readers' brain responses elicited by a spoken word presented alone were compared with responses to the same word presented synchronously with a static image of a viseme or a grapheme corresponding to the spoken word's onset. While neither visual input induced audiovisual integration on the N1 acoustic component, both led to supra-additive integration on P2, with stronger integration between speech and graphemes at left-anterior electrodes. This pattern persisted in the P350 time window and generalized to all electrodes. The findings suggest a strong impact of spelling knowledge on phonetic processing and lexical access. They also indirectly indicate that the dynamic and predictive information present in natural lip movements, but not in static visemes, is particularly critical to the contribution of visual articulatory gestures to speech processing.


Subject(s)
Phonetics , Speech Perception , Acoustic Stimulation , Electroencephalography/methods , Humans , Speech/physiology , Speech Perception/physiology , Visual Perception/physiology
7.
Neuropsychologia ; 159: 107949, 2021 08 20.
Article in English | MEDLINE | ID: mdl-34228997

ABSTRACT

The ability to process speech evolves over the course of the lifespan. Understanding speech at low acoustic intensity and in the presence of background noise becomes harder, and the ability of older adults to benefit from audiovisual speech also appears to decline. These difficulties can have important consequences for quality of life, yet a consensus on their cause is still lacking. The objective of this study was to examine the processing of speech in young and older adults under different modalities (auditory [A], visual [V], audiovisual [AV]) and in the presence of different visual prediction cues (no predictive cue (control), temporal predictive cue, phonetic predictive cue, and combined temporal and phonetic predictive cues). We focused on recognition accuracy and four auditory evoked potential (AEP) components: P1-N1-P2 and N2. Thirty-four right-handed French-speaking adults were recruited, including 17 younger adults (28 ± 2 years; 20-42 years) and 17 older adults (67 ± 3.77 years; 60-73 years). Participants completed a forced-choice speech identification task. The main findings are: (1) the facilitatory effect of visual information was reduced, but still present, in older compared to younger adults; (2) visual predictive cues facilitated speech recognition in younger and older adults alike; (3) age differences in AEPs were localized to later components (P2 and N2), suggesting that aging predominantly affects higher-order cortical processes related to speech processing rather than lower-level auditory processes; (4) specifically, AV facilitation of P2 amplitude was lower in older adults, the effect of the temporal predictive cue on N2 amplitude was reduced in older compared to younger adults, and P2 and N2 latencies were longer in older adults; and finally (5) behavioural performance was associated with P2 amplitude in older adults. Our results indicate that aging affects speech processing at multiple levels, including audiovisual integration (P2) and auditory attentional processes (N2). These findings have important implications for understanding barriers to communication at older ages, as well as for the development of compensation strategies for those with speech processing difficulties.


Subject(s)
Cues , Speech Perception , Acoustic Stimulation , Aged , Auditory Perception , Humans , Middle Aged , Quality of Life , Speech , Visual Perception
8.
Neuropsychologia ; 140: 107404, 2020 03 16.
Article in English | MEDLINE | ID: mdl-32087207

ABSTRACT

The neurobiology of sex differences during language processing has been widely investigated over the past three decades. While substantial sex differences have been reported, the empirical findings appear largely equivocal. The present systematic review of the literature and meta-analysis aimed to determine the degree of agreement among studies reporting sex differences in cortical activity during language processing. Irrespective of the modality and specificity of the language task, sex differences in the BOLD signal or in cerebral blood flow were highly inconsistent across fMRI and PET studies. On the temporal side, earlier latencies of auditory evoked responses for female compared to male participants were consistently observed in EEG studies during both listening and speaking. Overall, the present review and meta-analysis support the theoretical assumption that there are far more similarities than differences between men and women in the human brain during language processing. Subtle but consistent temporal differences are nevertheless observed in the auditory processing of phonetic cues during speech perception and production.


Subject(s)
Language , Speech Perception , Adult , Auditory Perception , Brain Mapping , Female , Humans , Magnetic Resonance Imaging , Male , Sex Characteristics
9.
Neuropsychologia ; 136: 107267, 2020 01.
Article in English | MEDLINE | ID: mdl-31770550

ABSTRACT

In order to determine the neural substrates of phonemic coding during both listening and speaking, we used a repetition suppression (RS) paradigm in which vowels were repeatedly perceived or produced while BOLD activity was measured with sparse-sampling functional magnetic resonance imaging (fMRI). RS refers to the phenomenon whereby repeated stimuli or actions lead to decreased activity in specific neural populations, associated with enhanced neural selectivity and information coding efficiency. Common suppressed BOLD responses during repeated vowel perception and production were observed in the inferior frontal gyri, the posterior part of the left middle temporal gyrus and superior temporal sulcus, the left intraparietal sulcus, as well as in the cingulate gyrus and pre-supplementary motor area. By providing evidence for common adaptive neural changes in premotor and associative auditory and somatosensory brain areas, the observed RS effects suggest that phonemic coding is partly driven by shared sensorimotor regions in the listening and speaking brain.


Subject(s)
Brain Mapping , Cerebral Cortex/physiology , Psycholinguistics , Speech Perception/physiology , Speech/physiology , Adult , Cerebral Cortex/diagnostic imaging , Female , Humans , Magnetic Resonance Imaging , Male , Phonetics , Sensorimotor Cortex/diagnostic imaging , Sensorimotor Cortex/physiology , Young Adult
10.
Brain Lang ; 199: 104694, 2019 12.
Article in English | MEDLINE | ID: mdl-31586790

ABSTRACT

The aim of the present study was to uncover a possible common neural organizing principle in spoken and written communication, through the coupling of perceptual and motor representations. In order to identify possible shared neural substrates for processing the basic units of spoken and written language, a sparse-sampling fMRI acquisition protocol was run on the same subjects in two experimental sessions, with similar sets of letters being read and written and of phonemes being heard and orally produced. We found evidence of common premotor regions activated in spoken and written language, both in perception and in production. These brain regions were confined to the left lateral and medial frontal cortices, at locations corresponding to the premotor cortex, inferior frontal cortex and supplementary motor area. Interestingly, the speaking and writing tasks also appeared to be controlled by largely overlapping networks, possibly indicating some domain-general cognitive processing. Finally, the spatial distribution of individual activation peaks showed more dorsal and more left-lateralized premotor activations in written than in spoken language.


Subject(s)
Motor Cortex/physiology , Reading , Speech Perception , Speech , Writing , Adult , Brain Mapping , Female , Humans , Magnetic Resonance Imaging , Male
11.
Exp Brain Res ; 237(12): 3143-3153, 2019 Dec.
Article in English | MEDLINE | ID: mdl-31576421

ABSTRACT

An impressive number of theoretical proposals and neurobiological studies argue that perceptual processing is not strictly feedforward but rather operates through an interplay between bottom-up sensory and top-down predictive mechanisms. The present EEG study aimed to further determine how prior knowledge about auditory syllables may impact speech perception. Prior knowledge was manipulated by presenting participants with visual information indicative of the syllable onset (when), its phonetic content (what) and/or its articulatory features (how). While the when and what predictions consisted of unnatural visual cues (a visual timeline and a visuo-orthographic cue), the how prediction consisted of the visual movements of a speaker. During auditory speech perception, when and what predictions both attenuated the amplitude of N1/P2 auditory evoked potentials. Regarding the how prediction, not only an amplitude decrease but also a latency facilitation of N1/P2 auditory evoked potentials was observed during audiovisual compared to unimodal speech perception. In the audiovisual condition, however, the when and what predictability effects were reduced or abolished, with only the what prediction reducing P2 amplitude but increasing its latency. Altogether, these results demonstrate the early influence of visually induced when, what and how predictions on cortical auditory speech processing. Crucially, they indicate a preponderant predictive role of the speaker's articulatory gestures during audiovisual speech perception, likely driven by attentional load and focus.


Subject(s)
Anticipation, Psychological/physiology , Cerebral Cortex/physiology , Evoked Potentials, Auditory/physiology , Gestures , Speech Perception/physiology , Visual Perception/physiology , Adult , Electroencephalography , Female , Humans , Male , Psycholinguistics , Reading , Young Adult
12.
Brain Lang ; 187: 92-103, 2018 12.
Article in English | MEDLINE | ID: mdl-29402437

ABSTRACT

In the present EEG study, the role of auditory prediction in speech was explored by comparing auditory cortical responses during active speaking and passive listening to the same acoustic speech signals. Two manipulations of sensory prediction accuracy were used during the speaking task: (1) a real-time change in vowel F1 feedback (reducing prediction accuracy relative to unaltered feedback) and (2) presenting a stable auditory target rather than a visual cue to speak (enhancing auditory prediction accuracy during baseline productions, and potentially enhancing the perturbing effect of altered feedback). While subjects compensated for the F1 manipulation, no difference between the auditory-cue and visual-cue conditions was found. Under visually cued conditions, reduced N1/P2 amplitude was observed during speaking versus listening, reflecting a motor-to-sensory prediction. In addition, a significant correlation was observed between the magnitude of the behavioral compensatory F1 response and the magnitude of this speaking-induced suppression (SIS) for P2 during the altered auditory feedback phase, where a stronger compensatory decrease in F1 was associated with a stronger SIS effect. Finally, under the auditory-cued condition, an auditory repetition-suppression effect was observed in N1/P2 amplitude during the listening task but not during active speaking, suggesting that auditory predictive processes during speaking and passive listening are functionally distinct.
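The reported brain-behavior link is a simple across-participant correlation; a hedged sketch of that computation is shown below, with entirely hypothetical numbers standing in for the per-participant F1 compensation and P2 suppression values (they are not data from the study).

```python
import numpy as np
from scipy.stats import pearsonr

# Hypothetical per-participant values (illustration only):
# f1_compensation: change in produced F1 under altered feedback (Hz); more negative = stronger compensation.
# sis_p2: speaking-induced suppression of P2 (listening minus speaking amplitude, µV); larger = stronger SIS.
f1_compensation = np.array([-32.0, -15.0, -41.0, -8.0, -27.0, -19.0])
sis_p2 = np.array([2.1, 0.9, 2.8, 0.4, 1.7, 1.2])

r, p = pearsonr(f1_compensation, sis_p2)
print(f"r = {r:.2f}, p = {p:.3f}")  # a negative r here: stronger compensation goes with stronger SIS
```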


Subject(s)
Auditory Cortex/physiology , Speech Perception , Speech , Adult , Evoked Potentials, Auditory , Feedback, Sensory , Female , Humans , Male
13.
Ear Hear ; 39(1): 139-149, 2018.
Article in English | MEDLINE | ID: mdl-28753162

ABSTRACT

OBJECTIVES: The goal of this study was to determine the effect of auditory deprivation and age-related speech decline on perceptuo-motor abilities during speech processing in post-lingually deaf cochlear-implanted participants and in normal-hearing elderly (NHE) participants. DESIGN: A close-shadowing experiment was carried out with 10 cochlear-implanted patients and 10 NHE participants, with two groups of normal-hearing young participants as controls. Participants had to categorize auditory and audiovisual syllables as quickly as possible, either manually or orally. Reaction times and percentages of correct responses were compared across response modes, stimulus modalities, and syllables. RESULTS: Responses of cochlear-implanted subjects were globally slower and less accurate than those of both young and elderly normal-hearing participants. Adding the visual modality enhanced performance for cochlear-implanted patients, whereas no significant effect was obtained for the NHE group. Critically, oral responses were faster than manual ones for all groups. In addition, for NHE participants, manual responses were more accurate than oral responses, as was the case for normal-hearing young participants when presented with noisy speech stimuli. CONCLUSIONS: Faster reaction times for oral than for manual responses in all groups suggest that perceptuo-motor relationships remained largely functional after cochlear implantation and remain efficient in the NHE group. These results are in agreement with recent perceptuo-motor theories of speech perception. They are also supported by the theoretical assumption that implicit motor knowledge and motor representations partly constrain auditory speech processing. In this framework, oral responses would be generated at an earlier stage of a sensorimotor loop, whereas manual responses would appear later, leading to slower but more accurate responses. The difference between oral and manual responses suggests that the perceptuo-motor loop is still effective for NHE subjects and also for cochlear-implanted participants, despite degraded global performance.


Subject(s)
Aging/physiology , Auditory Perception/physiology , Cochlear Implants , Deafness/physiopathology , Hearing/physiology , Adult , Aged , Deafness/psychology , Female , Humans , Male , Middle Aged , Sensory Deprivation/physiology
14.
Neuropsychologia ; 109: 126-133, 2018 01 31.
Article in English | MEDLINE | ID: mdl-29248497

ABSTRACT

Recent neurophysiological studies demonstrate that audio-visual speech integration partly operates through temporal expectations and speech-specific predictions. From these results, one common view is that the binding of auditory and visual (lipread) speech cues relies on their joint probability and prior associative audio-visual experience. The present EEG study examined whether visual tongue movements integrate with relevant speech sounds, despite little associative audio-visual experience between the two modalities. A second objective was to determine possible similarities and differences in audio-visual speech integration between the unusual audio-visuo-lingual modality and the classical audio-visuo-labial modality. To this aim, participants were presented with auditory, visual, and audio-visual isolated syllables, with the visual presentation showing either a sagittal view of the speaker's tongue movements or a facial view of their lip movements, previously recorded with an ultrasound imaging system and a video camera, respectively. In line with previous EEG studies, our results revealed an amplitude decrease and a latency facilitation of P2 auditory evoked potentials in both the audio-visuo-lingual and audio-visuo-labial conditions compared to the sum of the unimodal conditions. These results argue against the view that auditory and visual speech cues integrate solely on the basis of prior associative audio-visual perceptual experience. Rather, they suggest that dynamic and phonetic informational cues are sharable across sensory modalities, possibly through a cross-modal transfer of implicit articulatory motor knowledge.


Subject(s)
Brain/physiology , Motion Perception/physiology , Speech Perception/physiology , Tongue , Adult , Electroencephalography , Evoked Potentials , Feedback , Female , Humans , Lip , Lipreading , Male , Middle Aged , Pattern Recognition, Visual/physiology , Social Perception , Young Adult
15.
Exp Brain Res ; 235(9): 2867-2876, 2017 09.
Article in English | MEDLINE | ID: mdl-28676921

ABSTRACT

Previous electrophysiological studies have provided strong evidence for early multisensory integrative mechanisms during audiovisual speech perception. One unanswered issue from these studies is whether hearing our own voice and seeing our own articulatory gestures facilitate speech perception, possibly through better processing and integration of sensory inputs with our own sensory-motor knowledge. The present EEG study examined the impact of self-knowledge during the perception of auditory (A), visual (V) and audiovisual (AV) speech stimuli that had previously been recorded from the participant or from a speaker he/she had never met. Audiovisual interactions were estimated by comparing N1 and P2 auditory evoked potentials during the bimodal condition (AV) with the sum of those observed in the unimodal conditions (A + V). In line with previous EEG studies, our results revealed an amplitude decrease of P2 auditory evoked potentials in the AV compared to the A + V condition. Crucially, a temporal facilitation of N1 responses was observed during the visual perception of one's own speech movements compared to those of another speaker. This facilitation was negatively correlated with the saliency of the visual stimuli. These results provide evidence for a temporal facilitation of the integration of auditory and visual speech signals when the visual situation involves our own speech gestures.


Subject(s)
Evoked Potentials, Auditory/physiology , Gestures , Psychomotor Performance/physiology , Speech Perception/physiology , Speech/physiology , Visual Perception/physiology , Adult , Ego , Electroencephalography , Female , Humans , Lip/physiology , Male , Young Adult
16.
Neuropsychologia ; 101: 39-46, 2017 Jul 01.
Article in English | MEDLINE | ID: mdl-28483485

ABSTRACT

Speech communication can be viewed as an interactive process involving a functional coupling between sensory and motor systems. One striking example is phonetic convergence, whereby speakers automatically tend to mimic their interlocutor's speech during communicative interaction. The goal of this study was to investigate sensory-motor linkage in speech production in post-lingually deaf cochlear-implanted participants and normal-hearing elderly adults through phonetic convergence and imitation. To this aim, two vowel production tasks, with or without the instruction to imitate an acoustic vowel, were given to three groups: young adults with normal hearing, elderly adults with normal hearing, and post-lingually deaf cochlear-implanted patients. The deviation of each participant's f0 from their own mean f0 was measured to evaluate the ability to converge toward each acoustic target. RESULTS: Cochlear-implanted participants were able to converge toward an acoustic target, both intentionally and unintentionally, albeit to a lesser degree than young and elderly participants with normal hearing. By providing evidence for phonetic convergence and speech imitation, these results suggest that, as in young adults, perceptuo-motor relationships are efficient in elderly adults with normal hearing and that cochlear-implanted adults recovered significant perceptuo-motor abilities following cochlear implantation.
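A minimal sketch of the convergence measure described above, assuming arrays of produced f0 values and a baseline mean f0 per participant; the names and the sign convention are illustrative assumptions rather than the study's exact procedure.

```python
import numpy as np

def convergence_toward_target(produced_f0, baseline_mean_f0, target_f0):
    """Signed convergence of produced f0 toward a heard acoustic target.

    produced_f0: f0 values (Hz) produced while exposed to the target vowel.
    baseline_mean_f0: the participant's own mean f0 from a baseline block.
    target_f0: f0 (Hz) of the acoustic target.
    Returns the mean deviation from baseline, signed so that positive values
    indicate a shift in the direction of the target (i.e., convergence).
    """
    deviation = np.asarray(produced_f0, dtype=float) - baseline_mean_f0
    direction = np.sign(target_f0 - baseline_mean_f0)
    return float(np.mean(deviation * direction))
```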


Subject(s)
Cochlear Implantation , Deafness/rehabilitation , Imitative Behavior , Motor Skills , Phonetics , Speech , Acoustic Stimulation , Adult , Aged , Deafness/physiopathology , Female , Humans , Male , Middle Aged , Speech Perception , Young Adult
17.
Hum Brain Mapp ; 38(5): 2751-2771, 2017 05.
Article in English | MEDLINE | ID: mdl-28263012

ABSTRACT

Healthy aging is associated with a decline in cognitive, executive, and motor processes that is concomitant with changes in brain activation patterns, particularly at high complexity levels. While speech production relies on all these processes and is known to decline with age, the mechanisms that underlie these changes remain poorly understood, despite the importance of communication in everyday life. In this cross-sectional group study, we investigated age differences in the neuromotor control of speech production by combining behavioral and functional magnetic resonance imaging (fMRI) data. Twenty-seven healthy adults underwent fMRI while performing a speech production task consisting of the articulation of nonwords of different sequential and motor complexity. Results demonstrate strong age differences in movement time (MT), with longer and more variable MT in older adults. The fMRI results revealed extensive age differences in the relationship between BOLD signal and MT, within and outside the sensorimotor system. Moreover, age differences were also found in relation to sequential complexity within the motor and attentional systems, reflecting both compensatory and de-differentiation mechanisms. At the highest complexity level (high motor complexity and high sequence complexity), age differences were found in both the MT data and the BOLD response, which increased in several sensorimotor and executive control areas. Together, these results suggest that aging of motor and executive control mechanisms may contribute to age differences in speech production. These findings highlight the importance of studying functionally relevant behavior such as speech to understand the mechanisms of human brain aging.


Subject(s)
Aging , Attention/physiology , Brain Mapping , Brain/physiology , Movement/physiology , Speech/physiology , Acoustic Stimulation , Acoustics , Adult , Aged , Brain/diagnostic imaging , Cross-Sectional Studies , Female , Head Movements , Humans , Image Processing, Computer-Assisted , Male , Middle Aged , Neuropsychological Tests , Oxygen/blood , Young Adult
18.
J Cogn Neurosci ; 29(3): 448-466, 2017 Mar.
Article in English | MEDLINE | ID: mdl-28139959

ABSTRACT

Action recognition has been found to rely not only on sensory brain areas but also partly on the observer's motor system. However, whether distinct auditory and visual experiences of an action modulate sensorimotor activity remains largely unknown. In the present sparse-sampling fMRI study, we determined to what extent sensory and motor representations interact during the perception of tongue and lip speech actions. Tongue and lip actions were selected because an interlocutor's tongue movements are accessible through their impact on speech acoustics but are not visible, the tongue being hidden inside the vocal tract, whereas lip movements are both "audible" and visible. Participants were presented with auditory, visual, and audiovisual speech actions, with the visual inputs showing either a sagittal view of the tongue movements or a facial view of the lip movements of a speaker, previously recorded with an ultrasound imaging system and a video camera. Although the neural networks involved in visuolingual and visuofacial perception largely overlapped, stronger motor and somatosensory activations were observed during visuolingual perception. In contrast, stronger activity was found in auditory and visual cortices during visuofacial perception. Complementing these findings, activity in the left premotor cortex and in visual brain areas correlated with the visual recognition scores observed for visuolingual and visuofacial speech stimuli, respectively, whereas visual activity correlated with reaction times for both types of stimuli. These results suggest that unimodal and multimodal processing of lip and tongue speech actions rely on common sensorimotor brain areas. They also suggest that visual processing of audible but not visible movements induces motor and visual mental simulation of the perceived actions to facilitate recognition and/or to learn the association between auditory and visual signals.


Subject(s)
Brain/physiology , Facial Recognition/physiology , Motion Perception/physiology , Speech Perception/physiology , Acoustic Stimulation/methods , Adolescent , Adult , Brain/diagnostic imaging , Brain Mapping , Female , Humans , Magnetic Resonance Imaging , Male , Neuropsychological Tests , Photic Stimulation/methods , Reaction Time , Social Perception , Young Adult
19.
J Cogn Neurosci ; 27(2): 334-51, 2015 Feb.
Article in English | MEDLINE | ID: mdl-25203272

ABSTRACT

Studies of speech motor control suggest that articulatory and phonemic goals are defined in multidimensional motor, somatosensory, and auditory spaces. To test whether motor simulation might rely on sensory-motor coding common with that used for motor execution, we used a repetition suppression (RS) paradigm while measuring neural activity with sparse-sampling fMRI during repeated overt and covert orofacial and speech actions. RS refers to the phenomenon whereby repeated stimuli or motor acts lead to decreased activity in specific neural populations and is associated with enhanced adaptive learning related to the repeated stimulus attributes. Common suppressed neural responses were observed in motor and posterior parietal regions during both repeated overt and covert orofacial and speech actions, including the left premotor cortex and inferior frontal gyrus, the superior parietal cortex and adjacent intraparietal sulcus, and the left IC and the SMA. Interestingly, reduced activity of the auditory cortex was observed during overt but not covert speech production, a finding likely reflecting a motor rather than an auditory imagery strategy by the participants. By providing evidence for adaptive changes in premotor and associative somatosensory brain areas, the observed RS suggests online state coding of both orofacial and speech actions in somatosensory and motor spaces, with and without motor behavior and sensory feedback.


Subject(s)
Adaptation, Physiological/physiology , Face/physiology , Learning/physiology , Motor Activity/physiology , Speech/physiology , Adaptation, Psychological/physiology , Brain Mapping , Humans , Inhibition, Psychological , Magnetic Resonance Imaging , Neuropsychological Tests
20.
Brain Struct Funct ; 220(2): 979-97, 2015 Mar.
Article in English | MEDLINE | ID: mdl-24402675

ABSTRACT

Speech perception difficulties are common among older adults, yet the underlying neural mechanisms are still poorly understood. New empirical evidence suggesting that brain senescence may be an important contributor has challenged the traditional view that peripheral hearing loss is the main factor in the etiology of these difficulties. Here, we investigated the relationship between structural and functional brain senescence and speech perception skills in aging. Following audiometric evaluations, participants underwent MRI while performing a speech perception task at different intelligibility levels. As expected, speech perception declined with age, even after controlling for hearing sensitivity using an audiological measure (pure-tone averages) and a bioacoustical measure (DPOAE recordings). Our results reveal that the core speech network, centered on the supratemporal cortex and ventral motor areas bilaterally, decreased in spatial extent in older adults. Importantly, our results also show that speech skills in aging are affected by changes in cortical thickness and in brain functioning. Age-independent intelligibility effects were found in several motor and premotor areas, including the left ventral premotor cortex and the right supplementary motor area (SMA). Age-dependent intelligibility effects were also found, mainly in sensorimotor cortical areas and in the left dorsal anterior insula. In this region, changes in BOLD signal modulated the relationship between age and speech perception skills, suggesting a role for this region in maintaining speech perception at older ages. These results provide important new insights into the neurobiology of speech perception in aging.
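The modulation result corresponds to a moderation-style analysis; a hedged sketch of such an age x BOLD interaction model is given below, using ordinary least squares with mean-centered predictors. Variable names are hypothetical, and this is only one plausible way to formalize the reported effect, not the article's actual pipeline.

```python
import numpy as np

def age_by_bold_moderation(age, bold, perception):
    """Fit perception ~ b0 + b1*age + b2*bold + b3*(age*bold) by least squares.

    age, bold, perception: 1-D arrays with one value per participant (e.g., BOLD
    signal extracted from the left dorsal anterior insula). A reliable b3 is the
    signature of BOLD activity modulating the age-perception relationship.
    Predictors are mean-centered so the main effects remain interpretable.
    """
    age_c = age - age.mean()
    bold_c = bold - bold.mean()
    X = np.column_stack([np.ones_like(age_c), age_c, bold_c, age_c * bold_c])
    betas, *_ = np.linalg.lstsq(X, perception, rcond=None)
    return betas  # [intercept, b_age, b_bold, b_interaction]
```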


Subject(s)
Aging/psychology , Auditory Cortex/physiopathology , Motor Cortex/physiopathology , Presbycusis/etiology , Speech Perception , Acoustic Stimulation , Adult , Age Factors , Aged , Aging/pathology , Audiometry, Pure-Tone , Audiometry, Speech , Auditory Cortex/pathology , Auditory Threshold , Brain Mapping/methods , Cellular Senescence , Female , Humans , Magnetic Resonance Imaging , Male , Middle Aged , Motor Cortex/pathology , Otoacoustic Emissions, Spontaneous , Presbycusis/pathology , Presbycusis/physiopathology , Presbycusis/psychology , Psychoacoustics , Speech Intelligibility , Young Adult