Results 1 - 20 of 22
1.
J Speech Lang Hear Res: 1-27, 2024 Mar 08.
Article in English | MEDLINE | ID: mdl-38457261

ABSTRACT

PURPOSE: One of the strategies that can be used to support speech communication in deaf children is cued speech, a visual code in which manual gestures are used as additional phonological information to supplement the acoustic and labial speech information. Cued speech has been shown to improve speech perception and phonological skills. This exploratory study aims to assess whether and how cued speech reading proficiency may also have a beneficial effect on the acoustic and articulatory correlates of consonant production in children. METHOD: Eight children with cochlear implants (from 5 to 11 years of age) and with different receptive proficiency in Canadian French Cued Speech (three children with low receptive proficiency vs. five children with high receptive proficiency) are compared to 10 children with typical hearing (from 4 to 11 years of age) on their production of stop and fricative consonants. Articulation was assessed with ultrasound measurements. RESULTS: The preliminary results reveal that cued speech proficiency seems to sustain the development of speech production in children with cochlear implants and to improve their articulatory gestures, particularly for the place contrast in stops as well as fricatives. CONCLUSION: This work highlights the importance of studying objective data and comparing acoustic and articulatory measurements to better characterize speech production in children.

2.
Front Hum Neurosci ; 17: 1152516, 2023.
Article in English | MEDLINE | ID: mdl-37250702

ABSTRACT

Introduction: Early exposure to a rich linguistic environment is essential as soon as the diagnosis of deafness is made. Cochlear implantation (CI) allows children to have access to speech perception in their early years. However, it provides only partial acoustic information, which can lead to difficulties in perceiving some phonetic contrasts. This study investigates the contribution of two speech and language rehabilitation approaches to speech perception in children with CI, using a lexicality judgment task from the EULALIES battery. Auditory Verbal Therapy (AVT) is an early intervention program that relies on auditory learning to enhance hearing skills in deaf children with CI. French Cued Speech, also called Cued French (CF), is a multisensory communication tool that disambiguates lip reading by adding a manual gesture. Methods: In this study, 124 children aged from 60 to 140 months were included: 90 children with typical hearing skills (TH), 9 deaf children with CI who had participated in an AVT program (AVT), 6 deaf children with CI with high Cued French reading skills (CF+), and 19 deaf children with CI with low Cued French reading skills (CF-). Speech perception was assessed with the sensitivity index (d'), computed from hit and false alarm rates as defined in signal-detection theory. Results: The results show that children with cochlear implants from the CF- and CF+ groups have significantly lower performance than children with typical hearing (TH) (p < 0.001 and p = 0.033, respectively). Additionally, children in the AVT group also tended to have lower scores than TH children (p = 0.07). However, exposure to AVT and CF seems to improve speech perception. The scores of the children in the AVT and CF+ groups are closer to typical scores than those of children in the CF- group, as evidenced by a distance measure. Discussion: Overall, the findings of this study provide evidence for the effectiveness of these two speech and language rehabilitation approaches, and highlight the importance of using a specific rehabilitation approach in addition to a cochlear implant to improve speech perception in children with cochlear implants.
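
A note on the sensitivity measure mentioned above: in signal-detection theory, d' is the difference between the z-transformed hit rate and false alarm rate. The sketch below only illustrates that computation in Python with made-up counts; the function name, correction rule, and numbers are assumptions, not the study's analysis code.

    from scipy.stats import norm

    def d_prime(hits, misses, false_alarms, correct_rejections):
        # d' = z(hit rate) - z(false-alarm rate); a log-linear correction
        # (add 0.5 to counts, 1 to totals) keeps rates away from 0 and 1.
        hit_rate = (hits + 0.5) / (hits + misses + 1)
        fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
        return norm.ppf(hit_rate) - norm.ppf(fa_rate)

    # Hypothetical counts from a lexicality judgment task (40 words, 40 pseudowords)
    print(d_prime(hits=34, misses=6, false_alarms=10, correct_rejections=30))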

3.
Neuropsychologia ; 176: 108392, 2022 11 05.
Article in English | MEDLINE | ID: mdl-36216084

ABSTRACT

A computational model of speech perception, COSMO (Laurent et al., 2017), predicts that speech sounds should evoke both auditory representations in temporal areas and motor representations mainly in inferior frontal areas. Importantly, the model also predicts that auditory representations should be narrower, i.e. more focused on typical stimuli, than motor representations which would be more tolerant of atypical stimuli. Based on these assumptions, in a repetition-suppression study with functional magnetic resonance imaging data, we show that a sequence of 4 identical vowel sounds produces lower cortical activity (i.e. larger suppression effects) than if the last sound in the sequence is slightly varied. Crucially, temporal regions display an increase in cortical activity even for small acoustic variations, indicating a release of the suppression effect even for stimuli acoustically close to the first stimulus. In contrast, inferior frontal, premotor, insular and cerebellar regions show a release of suppression for larger acoustic variations. This "auditory-narrow motor-wide" pattern for vowel stimuli adds to a number of similar findings on consonant stimuli, confirming that the selectivity of speech sound representations in temporal auditory areas is narrower than in frontal motor areas in the human cortex.


Subject(s)
Auditory Cortex , Motor Cortex , Speech Perception , Humans , Motor Cortex/physiology , Acoustic Stimulation/methods , Brain Mapping/methods , Auditory Cortex/physiology , Speech Perception/physiology , Magnetic Resonance Imaging , Auditory Perception/physiology
4.
Dev Sci ; 25(1): e13154, 2022 01.
Article in English | MEDLINE | ID: mdl-34251076

ABSTRACT

Previous evidence suggests that children's mastery of prosodic modulations to signal the informational status of discourse referents emerges quite late in development. In the present study, we investigate children's use of head gestures, compared with prosodic cues, to signal a referent as contrastive relative to a set of possible alternatives. A group of French-speaking pre-schoolers were audio-visually recorded while playing in a semi-spontaneous but controlled production task, to elicit target words in the context of broad focus, contrastive focus, or corrective focus utterances. We analysed the acoustic features of the target words (syllable duration and word-level pitch range), as well as the head gesture features accompanying these target words (head gesture type, alignment patterns with speech). We found that children's production of head gestures, but not their use of either syllable duration or word-level pitch range, was affected by focus condition. Children mostly aligned head gestures with relevant speech units, especially when the target word was in phrase-final position. Moreover, the presence of a head gesture was linked to longer syllable durations in all focus conditions. Our results show that (a) 4- and 5-year-old French-speaking children use head gestures rather than prosodic cues to mark the informational status of discourse referents, (b) the use of head gestures may gradually entrain the production of adult-like prosodic features, and (c) head gestures with no referential relation to speech may serve a linguistic structuring function in communication, at least during language development.


Subject(s)
Gestures , Speech Perception , Adult , Child , Child, Preschool , Cues , Humans , Language , Language Development , Speech
5.
Geroscience ; 43(4): 1725-1765, 2021 08.
Article in English | MEDLINE | ID: mdl-33970414

ABSTRACT

In the absence of any neuropsychiatric condition, older adults may show declining performance in several cognitive processes, among them the retrieval and production of words, reflected in slower responses and even reduced accuracy compared to younger adults. To overcome this difficulty, healthy older adults implement compensatory strategies, which are the focus of this paper. We provide a review of mainstream findings on deficient mechanisms and possible neurocognitive strategies used by older adults to overcome the deleterious effects of age on lexical production. Moreover, we present findings on genetic and lifestyle factors that might be either protective or risk factors for cognitive impairment in advanced age. We propose that "aging-modulating factors" (AMF) can be modified, offering opportunities for prevention of aging effects. Based on our review and this proposition, we introduce an integrative neurocognitive model of mechanisms and compensatory strategies for lexical production in older adults (entitled Lexical Access and Retrieval in Aging, LARA). The main hypothesis defended in LARA is that cognitive aging evolves heterogeneously and involves complementary domain-general and domain-specific mechanisms, with substantial inter-individual variability, reflected at behavioral, cognitive, and brain levels. Furthermore, we argue that the ability to compensate for the effect of cognitive aging depends on the amount of reserve specific to each individual, which is, in turn, modulated by the AMF. Our conclusion is that a variety of mechanisms and compensatory strategies coexist in the same individual to oppose the effect of age. The role of reserve is pivotal for successful coping with age-related changes, and future research should continue to explore the modulating role of AMF.


Subject(s)
Cognitive Reserve , Age Factors , Brain
6.
J Acoust Soc Am ; 149(1): 191, 2021 01.
Article in English | MEDLINE | ID: mdl-33514144

ABSTRACT

Acoustic characteristics, lingual and labial articulatory dynamics, and ventilatory behaviors were studied in a beatboxer producing twelve drum sounds belonging to five main categories of his repertoire (kick, snare, hi-hat, rimshot, cymbal). Various types of experimental data were collected synchronously (respiratory inductance plethysmography, electroglottography, electromagnetic articulography, and acoustic recording). Automatic unsupervised classification was successfully applied to the acoustic data using a t-SNE spectral clustering technique. A cluster purity value of 94% was achieved, showing that each sound has a specific acoustic signature. The acoustic intensity of sounds produced with the humming technique was found to be significantly lower than that of their non-humming counterparts. For these sounds, a dissociation between articulation and breathing was observed. Overall, a wide range of articulatory gestures was observed, some of which were non-linguistic. The tongue was systematically involved in the articulation of the explored beatboxing sounds, either as the main articulator or as accompanying the lip dynamics. Two pulmonic and three non-pulmonic airstream mechanisms were identified. Ejectives were found in the production of all the sounds with bilabial occlusion or alveolar occlusion with egressive airstream. A phonetic annotation using the International Phonetic Alphabet (IPA) was performed, highlighting the complexity of such sound production and the limits of speech-based annotation.
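
The clustering and purity figures referred to here follow a standard pipeline: embed the acoustic features with t-SNE, cluster the embedding, and score each cluster by its majority label. The following Python sketch illustrates that general technique on placeholder data; the feature dimensions, labels, and parameters are assumptions, not the study's actual pipeline.

    import numpy as np
    from sklearn.manifold import TSNE
    from sklearn.cluster import SpectralClustering

    rng = np.random.default_rng(0)
    X = rng.normal(size=(120, 13))      # placeholder acoustic features (e.g. one MFCC vector per token)
    y = rng.integers(0, 12, size=120)   # placeholder labels for the twelve drum sounds

    # Embed with t-SNE, then apply spectral clustering to the 2-D embedding.
    embedding = TSNE(n_components=2, perplexity=30, random_state=0).fit_transform(X)
    clusters = SpectralClustering(n_clusters=12, random_state=0).fit_predict(embedding)

    def purity(true_labels, cluster_labels):
        # Each cluster counts toward purity via its most frequent true label.
        total = 0
        for c in np.unique(cluster_labels):
            members = true_labels[cluster_labels == c]
            total += np.bincount(members).max()
        return total / len(true_labels)

    print(purity(y, clusters))  # the paper reports 94% on its real acoustic data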


Subject(s)
Phonetics , Speech , Acoustics , Electromagnetic Phenomena , Humans , Music , Tongue/diagnostic imaging
7.
Int J Psychophysiol ; 159: 23-36, 2021 01.
Article in English | MEDLINE | ID: mdl-33159987

ABSTRACT

Previous research has shown that mental rumination, considered a form of repetitive and negative inner speech, is associated with increased facial muscular activity. However, the relation between these muscular activations and the underlying mental processes is still unclear. In this study, we tried to separate the facial electromyographic correlates of induced rumination related to either i) mechanisms of (inner) speech production or ii) rumination as a state of pondering on negative affects. To this end, we compared two groups of participants subjected to two types of rumination induction (for a total of 85 female undergraduate students without excessive depressive symptoms). The first type of induction was designed to specifically induce rumination in a verbal modality, whereas the second was designed to induce rumination in a visual modality. Following the motor simulation view of inner speech production, we hypothesised that the verbal rumination induction should result in a greater increase in activity of the speech-related muscles than the non-verbal rumination induction. We also hypothesised that relaxation focused on the orofacial area should be more efficient in reducing rumination (when experienced in a verbal modality) than relaxation focused on a non-orofacial area. Our results do not corroborate these hypotheses, as both rumination inductions resulted in a similar increase in peripheral muscular activity compared to baseline levels. Moreover, the two relaxation types were similarly efficient in reducing rumination, whatever the rumination induction. We discuss these results in relation to the inner speech literature and suggest that, because rumination is a habitual and automatic form of emotion regulation, it might be a particularly (strongly) internalised and condensed form of inner speech. The pre-registered protocol, preprint, data, and reproducible code and figures are available at: https://osf.io/c9pag/.


Subject(s)
Cognition , Speech , Face , Female , Humans , Students
8.
PLoS One ; 15(5): e0233282, 2020.
Article in English | MEDLINE | ID: mdl-32459800

ABSTRACT

Despite a long history of scrutiny in experimental psychology, it is still controversial whether wilful inner speech (covert speech) production is accompanied by specific activity in speech muscles. We present the results of a preregistered experiment looking at the electromyographic correlates of both overt speech and inner speech production of two phonetic classes of nonwords. An automatic classification approach was undertaken to discriminate between two articulatory features contained in nonwords uttered in both overt and covert speech. Although this approach led to reasonable accuracy rates during overt speech production, it failed to discriminate inner speech phonetic content based on surface electromyography signals. However, exploratory analyses conducted at the individual level revealed that, in two participants, it seemed possible to distinguish between covertly produced rounded and spread nonwords. We discuss these results in relation to the existing literature and suggest alternative ways of testing the engagement of the speech motor system during wilful inner speech production.
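
The abstract does not name the classifier or features used, so the following is only a generic sketch of how such an automatic classification of surface EMG signals is typically cross-validated; the features, model, and data shape here are assumptions for illustration, not the paper's pipeline.

    import numpy as np
    from sklearn.svm import SVC
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.model_selection import cross_val_score

    # Placeholder data: one row of EMG features per trial (e.g. RMS amplitude per
    # muscle and time window), with labels 0 = rounded nonword, 1 = spread nonword.
    rng = np.random.default_rng(42)
    X = rng.normal(size=(200, 24))
    y = rng.integers(0, 2, size=200)

    clf = make_pipeline(StandardScaler(), SVC(kernel="linear"))
    scores = cross_val_score(clf, X, y, cv=5)  # chance level is ~0.5 for two classes
    print(scores.mean())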


Subject(s)
Electromyography , Muscle, Skeletal/physiology , Phonetics , Thinking/physiology , Brain/physiology , Female , Humans , Pattern Recognition, Automated , Speech/physiology , Young Adult
9.
Front Psychol ; 10: 2019, 2019.
Article in English | MEDLINE | ID: mdl-31620039

ABSTRACT

Inner speech has been shown to vary in form along several dimensions. Along the condensation dimension, condensed forms of inner speech have been described, which are thought to be deprived of acoustic, phonological and even syntactic qualities. Expanded forms, at the other extreme, display articulatory and auditory properties. Along dialogality, inner speech can be monologal, when we engage in internal soliloquy, or dialogal, when we recall past conversations or imagine future dialogs involving our own voice as well as that of others addressing us. Along intentionality, it can be intentional (when we deliberately rehearse material in short-term memory) or it can arise unintentionally (during mind wandering). We introduce the ConDialInt model, a neurocognitive predictive control model of inner speech that accounts for its varieties along these three dimensions. ConDialInt spells out the condensation dimension by including inhibitory control at the conceptualization, formulation or articulatory planning stage. It accounts for dialogality by assuming internal model adaptations and by speculating on neural processes underlying perspective switching. It explains the differences between intentional and spontaneous varieties in terms of monitoring. We present an fMRI study in which we probed varieties of inner speech along dialogality and intentionality, to examine the validity of the neuroanatomical correlates posited in ConDialInt. Condensation was also addressed informally. Our data support the hypothesis that expanded inner speech recruits speech production processes down to articulatory planning, resulting in a predicted signal, the inner voice, with auditory qualities. Along dialogality, covertly using an avatar's voice resulted in the activation of right-hemisphere homologs of the regions involved in internal own-voice soliloquy and in reduced cerebellar activation, consistent with internal model adaptation. Switching from first-person to third-person perspective resulted in activations in the precuneus and parietal lobules. Along intentionality, compared with intentional inner speech, mind wandering with inner speech episodes was associated with greater bilateral inferior frontal activation and decreased activation in left temporal regions. This is consistent with the reported subjective evanescence and presumably reflects condensation processes. Our results provide neuroanatomical evidence compatible with predictive control and in favor of the assumptions made in the ConDialInt model.

10.
J Speech Lang Hear Res ; 62(5): 1225-1242, 2019 05 21.
Article in English | MEDLINE | ID: mdl-31082309

ABSTRACT

Purpose: Bayesian multilevel models are increasingly used to overcome the limitations of frequentist approaches in the analysis of complex structured data. This tutorial introduces Bayesian multilevel modeling for the specific analysis of speech data, using the brms package developed in R. Method: In this tutorial, we provide a practical introduction to Bayesian multilevel modeling by reanalyzing a phonetic data set containing formant (F1 and F2) values for 5 vowels of standard Indonesian (ISO 639-3:ind), as spoken by 8 speakers (4 females and 4 males), with several repetitions of each vowel. Results: We first give an introductory overview of the Bayesian framework and multilevel modeling. We then show how Bayesian multilevel models can be fitted using the probabilistic programming language Stan and the R package brms, which provides an intuitive formula syntax. Conclusions: Through this tutorial, we demonstrate some of the advantages of the Bayesian framework for statistical modeling and provide a detailed case study, with complete source code for full reproducibility of the analyses (https://osf.io/dpzcb/). Supplemental Material: https://doi.org/10.23641/asha.7973822.
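
The tutorial itself fits its models with brms in R; as a rough illustration of the same multilevel structure (a population-level vowel effect with by-speaker varying intercepts and slopes), here is a minimal Python sketch using the bambi library. The data file, column names, and sampler settings are assumptions, not the tutorial's code.

    import arviz as az
    import bambi as bmb
    import pandas as pd

    # Hypothetical data frame with one row per vowel token:
    # columns f1 (first formant, Hz), vowel (5 levels), speaker (8 levels).
    df = pd.read_csv("indonesian_vowels.csv")  # file name is an assumption

    # Population-level vowel effect plus by-speaker varying intercepts and vowel
    # slopes, mirroring an lme4/brms-style formula: f1 ~ vowel + (vowel | speaker).
    model = bmb.Model("f1 ~ vowel + (vowel | speaker)", df)
    idata = model.fit(draws=2000, chains=4)
    print(az.summary(idata))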


Subject(s)
Language , Phonation , Speech , Bayes Theorem , Female , Humans , Indonesia , Male , Multilevel Analysis , Phonetics , Sex Characteristics
11.
Dev Sci ; 22(6): e12830, 2019 11.
Article in English | MEDLINE | ID: mdl-30908771

ABSTRACT

The influence of motor knowledge on speech perception is well established, but the functional role of the motor system is still poorly understood. The present study explores the hypothesis that speech production abilities may help infants discover phonetic categories in the speech stream, in spite of coarticulation effects. To this aim, we examined the influence of babbling abilities on consonant categorization in 6- and 9-month-old infants. Using an intersensory matching procedure, we investigated the infants' capacity to associate auditory information about a consonant in various vowel contexts with visual information about the same consonant, and to map auditory and visual information onto a common phoneme representation. Moreover, a parental questionnaire evaluated the infants' consonantal repertoire. In a first experiment using /b/-/d/ consonants, we found that infants who displayed babbling abilities and produced the /b/ and/or the /d/ consonants in repetitive sequences were able to correctly perform intersensory matching, while non-babblers were not. In a second experiment using the /v/-/z/ pair, which is as visually contrasted as the /b/-/d/ pair but which is usually not produced at the tested ages, no significant matching was observed, for any group of infants, babbling or not. These results demonstrate, for the first time, that the emergence of babbling could play a role in the extraction of vowel-independent representations for consonant place of articulation. They have important implications for speech perception theories, as they highlight the role of sensorimotor interactions in the development of phoneme representations during the first year of life.


Subject(s)
Language Development , Phonetics , Speech Perception/physiology , Child Language , Feedback, Sensory , Female , Humans , Infant , Language , Male
12.
Clin Linguist Phon ; 32(7): 595-621, 2018.
Article in English | MEDLINE | ID: mdl-29148845

ABSTRACT

The rehabilitation of speech disorders benefits from providing visual information, which may improve speech motor plans in patients. We tested the proof of concept of a rehabilitation method (Sensori-Motor Fusion, SMF; Ultraspeech player) in one post-stroke patient presenting with chronic non-fluent aphasia. SMF allows the patient to visualise target tongue and lip movements using high-speed ultrasound and video imaging. This can improve the patient's awareness of his/her own lingual and labial movements, which can, in turn, improve the representation of articulatory movements and increase the ability to coordinate and combine articulatory gestures. The auditory and oro-sensory feedback received by the patient as a result of his/her own pronunciation can be integrated with the target articulatory movements the patient watches. Thus, this method is founded on sensorimotor integration during speech. The SMF effect on this patient was assessed through qualitative comparison of language scores and quantitative analysis of acoustic parameters measured in a speech production task, before and after rehabilitation. We also investigated cerebral patterns of language reorganisation for rhyme detection and syllable repetition, to evaluate the influence of SMF on phonological-phonetic processes. Our results showed that SMF had a beneficial effect on this patient, who qualitatively improved in naming, reading, word repetition and rhyme judgment tasks. Quantitative measurements of acoustic parameters indicate that the patient's production of vowels and syllables also improved. Compared with pre-SMF, the fMRI data in the post-SMF session revealed the activation of cerebral regions related to articulatory, auditory and somatosensory processes, which were expected to be recruited by SMF. We discuss neurocognitive and linguistic mechanisms which may explain speech improvement after SMF, as well as the advantages of using this speech rehabilitation method.


Subject(s)
Aphasia, Broca/therapy , Language , Neuronal Plasticity , Speech Therapy/methods , Speech/physiology , Feedback, Sensory/physiology , Female , Humans , Lip , Magnetic Resonance Imaging , Tongue
13.
Biol Psychol ; 127: 53-63, 2017 07.
Article in English | MEDLINE | ID: mdl-28465047

ABSTRACT

Rumination is predominantly experienced in the form of repetitive verbal thoughts. Verbal rumination is a particular case of inner speech. According to the Motor Simulation view, inner speech is a kind of motor action, recruiting the speech motor system. In this framework, we predicted an increase in speech muscle activity during rumination as compared to rest. We also predicted increased forehead activity, associated with anxiety during rumination. We measured electromyographic activity over the orbicularis oris superior and inferior, frontalis and flexor carpi radialis muscles. Results showed increased lip and forehead activity after rumination induction compared to an initial relaxed state, together with increased self-reported levels of rumination. Moreover, our data suggest that orofacial relaxation is more effective in reducing rumination than non-orofacial relaxation. Altogether, these results support the hypothesis that verbal rumination involves the speech motor system, and provide a promising psychophysiological index to assess the presence of verbal rumination.


Subject(s)
Electromyography , Facial Muscles/physiology , Rumination, Cognitive/physiology , Speech/physiology , Anxiety/physiopathology , Female , Forehead/physiology , Humans , Lip/physiology , Young Adult
14.
Clin Linguist Phon ; 31(7-9): 598-611, 2017.
Article in English | MEDLINE | ID: mdl-28362227

ABSTRACT

Studies of speech production in French-speaking cochlear-implanted (CI) children are very scarce. Yet, difficulties in speech production have been shown to impact the intelligibility of these children. The goal of this study is to understand the effect of long-term cochlear implant use on speech production, and more precisely on the coordination of laryngeal-oral gestures in stop production. The participants were all monolingual French children: 13 CI children aged 6;6 to 10;7 and 20 age-matched normally hearing (NH) children. We compared /p/, /t/, /k/, /b/, /d/ and /g/ in word-initial consonant-vowel sequences, produced in isolation in two different tasks, and we studied the effects of CI use, vowel context, task and age factors (i.e. chronological age, age at implantation and duration of implant use). Statistical analyses show a difference in voicing production between groups for voiceless consonants (shorter Voice Onset Times for CI children), with significance reached only for /k/, but no difference for voiced consonants. Our study indicates that in the long run, use of a CI seems to have limited effects on the acquisition of the oro-laryngeal coordination needed to produce voicing, except for specific difficulties with velars. In a follow-up study, further acoustic analyses of vowel and fricative production by the same children reveal more difficulties, which suggest that cochlear implantation impacts frequency-based features (second formant of vowels and spectral moments of fricatives) more than durational cues (voicing).


Subject(s)
Acoustic Stimulation , Cochlear Implants , Speech Discrimination Tests , Voice , Child , Cochlear Implantation , Female , France , Humans , Language , Male , Phonetics
15.
Br J Psychol ; 108(1): 31-33, 2017 Feb.
Article in English | MEDLINE | ID: mdl-28059459

ABSTRACT

This review of the literature on the emergence of language describes two opposing views of phonological development, the sound-based versus the whole-word-based accounts. An integrative model is proposed which claims that learning sublexical speech sounds and producing wordlike vocalizations are in fact parallel processes that feed each other during language development. We argue that this model might find unexpected support from the face processing literature.


Subject(s)
Language Development , Learning , Phonetics , Speech Perception , Humans
16.
Infancy ; 20(6): 661-674, 2015 Dec 01.
Article in English | MEDLINE | ID: mdl-26561475

ABSTRACT

One of the most salient social categories conveyed by human faces and voices is gender. We investigated the developmental emergence of the ability to perceive the coherence of auditory and visual attributes of gender in 6- and 9-month-old infants. Infants viewed two side-by-side video clips of a man and a woman singing a nursery rhyme and heard a synchronous male or female soundtrack. Results showed that 6-month-old infants did not match the audible and visible attributes of gender, and 9-month-old infants matched only female faces and voices. These findings indicate that the ability to perceive the multisensory coherence of gender emerges relatively late in infancy and that it reflects the greater experience that most infants have with female faces and voices.

17.
Schizophr Bull ; 41(1): 259-67, 2015 Jan.
Article in English | MEDLINE | ID: mdl-24553150

ABSTRACT

BACKGROUND: Task-based functional neuroimaging studies of schizophrenia have not yet replicated the increased coordinated hyperactivity in speech-related brain regions that is reported with symptom-capture and resting-state studies of hallucinations. This may be due to suboptimal selection of cognitive tasks. METHODS: In the current study, we used a task that allowed experimental manipulation of control over verbal material and compared brain activity between 23 schizophrenia patients (10 hallucinators, 13 nonhallucinators), 22 psychiatric (bipolar) controls, and 27 healthy controls. Two conditions were presented, one involving inner verbal thought (in which control over verbal material was required) and another involving speech perception (SP; in which control over verbal material was not required). RESULTS: A functional connectivity analysis resulted in a left-dominant temporal-frontal network that included speech-related auditory and motor regions and showed hypercoupling in past-week hallucinating schizophrenia patients (relative to nonhallucinating patients) during SP only. CONCLUSIONS: These findings replicate our previous work showing generalized speech-related functional network hypercoupling in schizophrenia during inner verbal thought and SP, but extend it by suggesting that hypercoupling is related to past-week hallucination severity scores during SP only, when control over verbal material is not required. This result opens the possibility that practicing control over inner verbal thought processes may decrease the likelihood or severity of hallucinations.


Subject(s)
Frontal Lobe/physiopathology , Functional Laterality/physiology , Hallucinations/physiopathology , Neural Pathways/physiopathology , Schizophrenia/physiopathology , Schizophrenic Psychology , Speech Perception/physiology , Temporal Lobe/physiopathology , Adult , Bipolar Disorder/physiopathology , Brain/physiopathology , Brain Mapping , Case-Control Studies , Female , Functional Neuroimaging , Hallucinations/etiology , Hallucinations/psychology , Humans , Magnetic Resonance Imaging , Male , Middle Aged , Schizophrenia/complications , Young Adult
18.
Infant Behav Dev ; 37(4): 644-51, 2014 Nov.
Article in English | MEDLINE | ID: mdl-25238663

ABSTRACT

The present study examined whether infant-directed (ID) speech facilitates intersensory matching of audio-visual fluent speech in 12-month-old infants. German-learning infants' ability to match audio-visual German and French fluent speech was assessed using a variant of the intermodal matching procedure, with auditory and visual speech information presented sequentially. In Experiment 1, the sentences were spoken in an adult-directed (AD) manner. Results showed that 12-month-old infants did not exhibit matching performance for either the native or the non-native language. However, Experiment 2 revealed that when ID speech stimuli were used, infants did perceive the relation between auditory and visual speech attributes, but only in response to their native language. Thus, the findings suggest that ID speech might have an influence on the intersensory perception of fluent speech and shed further light on multisensory perceptual narrowing.


Subject(s)
Speech Perception/physiology , Acoustic Stimulation , Adult , Auditory Perception/physiology , Female , Humans , Infant , Language , Male , Photic Stimulation , Visual Perception/physiology
19.
PLoS One ; 9(2): e89275, 2014.
Article in English | MEDLINE | ID: mdl-24586651

ABSTRACT

The present study examined when and how the ability to cross-modally match audio-visual fluent speech develops in 4.5-, 6- and 12-month-old German-learning infants. In Experiment 1, 4.5- and 6-month-old infants' ability to match native (German) and non-native (French) audio-visual fluent speech was assessed by presenting auditory and visual speech information sequentially, that is, in the absence of temporal synchrony cues. The results showed that 4.5-month-old infants were capable of matching native as well as non-native audio and visual speech stimuli, whereas 6-month-olds perceived the audio-visual correspondence of native-language stimuli only. This suggests that intersensory matching narrows for fluent speech between 4.5 and 6 months of age. In Experiment 2, auditory and visual speech information was presented simultaneously, therefore providing temporal synchrony cues. Here, 6-month-olds were found to match native as well as non-native speech, indicating that temporal synchrony cues facilitate the intersensory perception of non-native fluent speech. Intriguingly, despite the fact that audio and visual stimuli cohered temporally, 12-month-olds matched the non-native language only. Results are discussed with regard to multisensory perceptual narrowing during the first year of life.


Subject(s)
Association Learning , Auditory Perception/physiology , Discrimination Learning , Language , Speech/physiology , Visual Perception/physiology , Acoustic Stimulation , Child Development , Female , France , Germany , Humans , Infant , Language Development , Male
20.
Hum Brain Mapp ; 34(10): 2574-91, 2013 Oct.
Article in English | MEDLINE | ID: mdl-22488985

ABSTRACT

This functional magnetic resonance imaging (fMRI) study aimed at examining the cerebral regions involved in the auditory perception of prosodic focus using a natural focus detection task. Two conditions testing the processing of simple utterances in French were explored, narrow-focused versus broad-focused. Participants performed a correction detection task. The utterances in both conditions had exactly the same segmental, lexical, and syntactic contents, and only differed in their prosodic realization. The comparison between the two conditions therefore allowed us to examine processes strictly associated with prosodic focus processing. To assess the specific effect of pitch on hemispheric specialization, a parametric analysis was conducted using a parameter reflecting pitch variations specifically related to focus. The comparison between the two conditions reveals that brain regions recruited during the detection of contrastive prosodic focus can be described as a right-hemisphere dominant dual network consisting of (a) ventral regions which include the right posterosuperior temporal and bilateral middle temporal gyri and (b) dorsal regions including the bilateral inferior frontal, inferior parietal and left superior parietal gyri. Our results argue for a dual stream model of focus perception compatible with the asymmetric sampling in time hypothesis. They suggest that the detection of prosodic focus involves an interplay between the right and left hemispheres, in which the computation of slowly changing prosodic cues in the right hemisphere dynamically feeds an internal model concurrently used by the left hemisphere, which carries out computations over shorter temporal windows.


Subject(s)
Brain Mapping/methods , Cerebral Cortex/physiology , Language , Magnetic Resonance Imaging , Speech Perception/physiology , Adult , Cues , Dominance, Cerebral/physiology , Female , Humans , Male , Models, Neurological , Models, Psychological , Nerve Net/physiology , Phonation , Pitch Discrimination/physiology , Pitch Perception/physiology , Young Adult