Results 1 - 8 of 8
1.
Infancy ; 26(4): 647-659, 2021 Jul.
Article in English | MEDLINE | ID: mdl-33988894

ABSTRACT

During their first year, infants attune to the faces and language(s) that are frequent in their environment. The present study investigates the impact of language familiarity on how French-learning 9- and 12-month-olds recognize own-race faces. In Experiment 1, infants were familiarized with the talking face of a Caucasian bilingual German-French speaker reciting a nursery rhyme in French (native condition) or in German (non-native condition). In the test phase, face recognition was assessed by presenting a picture of the familiarized speaker's face side by side with a novel face. At 9 and 12 months, neither infants in the native condition nor those in the non-native condition clearly recognized the speaker's face. In Experiment 2, we familiarized infants with a still picture of the speaker's face along with the auditory speech stream. This time, both 9- and 12-month-olds recognized the face of the speaker they had been familiarized with, but only if she spoke in their native language. This study shows that from at least 9 months of age, language modulates the way faces are recognized.


Subject(s)
Child Development , Facial Recognition , Language , Recognition, Psychology , Female , Humans , Infant , Male
2.
Clin Linguist Phon ; 35(3): 253-276, 2021 Mar 4.
Article in English | MEDLINE | ID: mdl-32567986

ABSTRACT

Recent studies on the remediation of speech disorders suggest that providing visual information about the speech articulators may help improve speech production. In this study, we evaluated the effectiveness of an illustration-based rehabilitation method on the speech recovery of a patient with chronic non-fluent aphasia. The Ultraspeech-player software allowed the patient to visualize reference tongue and lip movements recorded using ultrasound and video imaging. This method can improve patients' awareness of their own lingual and labial movements, which in turn can increase their ability to coordinate and combine articulatory gestures. The effects of the method were assessed by analyzing performance during speech tasks, the phonological processes identified in the errors made during a phoneme repetition task, and acoustic parameters derived from the speech signal. We also evaluated cognitive performance before and after rehabilitation. The integrity of visuospatial ability, short-term and working memory, and some executive functions supports the effectiveness of the rehabilitation method. Our results showed that the illustration-based rehabilitation technique had a beneficial effect on the patient's speech production, especially for the stop and fricative consonants targeted by the software (whose articulator configurations are highly visible), but also on reading abilities. Acoustic parameters indicated an improvement in the distinction between consonant categories: voiced and voiceless stops, and alveolar, post-alveolar, and labiodental fricatives. However, the patient showed little improvement for vowels. These results confirm the advantage of using an illustration-based rehabilitation technique and the need for detailed subjective and objective intra-speaker evaluation of speech production to fully assess speech abilities.


Subject(s)
Aphasia , Dental Articulators , Humans , Phonetics , Speech , Speech Production Measurement , Speech Therapy
3.
J Exp Child Psychol ; 196: 104859, 2020 Aug.
Article in English | MEDLINE | ID: mdl-32408989

ABSTRACT

In the context of word learning, it is commonly assumed that repetition is required for young children to form and maintain in memory an association between a novel word and its corresponding object. For instance, at 2 years of age, children are able to disambiguate word-related situations in one shot but are not able to retain this newly acquired knowledge any further. It has been proposed that multiple fast-mapping experiences are required to promote word retention, or that the inferential reasoning needs to be accompanied by explicit labeling of the target. We hypothesized that when 2-year-olds simply encounter an unambiguous learning context, word learning may be fast and the new knowledge maintained over time. We also assumed that, under this condition, even a single exposure to an object would be sufficient to form a memory trace of its name that would survive a delay. To test these hypotheses, 2- and 4-year-olds were ostensively taught three arbitrary word-object pairs using a 15-s video sequence during which each object was manually displayed and labeled three times in a row. Retention was measured after a 30-min distractive period using a forced-choice procedure. Our results provide evidence that declarative memory does not need repetition to be formed and maintained, for at least a 30-min period, by children as young as 2 years. This finding suggests that the mechanisms required for extremely rapid and robust word acquisition are not only present in preschoolers with developed language and cognitive skills but are already operative at a younger age.


Subject(s)
Language Development , Mental Recall/physiology , Verbal Learning/physiology , Child, Preschool , Female , Humans , Male , Neuropsychological Tests
4.
PLoS One ; 12(1): e0169325, 2017.
Article in English | MEDLINE | ID: mdl-28060872

ABSTRACT

Early multisensory perceptual experiences shape the abilities of infants to perform socially relevant visual categorization, such as the extraction of gender, age, and emotion from faces. Here, we investigated whether multisensory perception of gender is influenced by infant-directed (IDS) or adult-directed (ADS) speech. Six-, 9-, and 12-month-old infants saw side-by-side silent video clips of talking faces (a male and a female) and heard a soundtrack of either a female or a male voice telling a story in IDS or ADS. Infants participated in only one condition, either IDS or ADS. Consistent with earlier work, infants displayed advantages in matching female relative to male faces and voices. Moreover, the new finding that emerged in the current study was that extraction of gender from face and voice was stronger at 6 months with ADS than with IDS, whereas at 9 and 12 months, matching did not differ for IDS versus ADS. The results indicate that the ability to perceive gender in audiovisual speech is influenced by speech manner. Our data suggest that infants may extract multisensory gender information developmentally earlier when looking at adults engaged in conversation with other adults (i.e., ADS) than when adults are directly talking to them (i.e., IDS). Overall, our findings imply that the circumstances of social interaction may shape early multisensory abilities to perceive gender.


Subject(s)
Auditory Perception , Speech , Visual Perception , Voice , Acoustic Stimulation , Adult , Child Development , Female , Hearing , Humans , Infant , Male , Photic Stimulation , Speech Perception
5.
Child Dev Perspect ; 8(2): 65-70, 2014 Jun.
Article in English | MEDLINE | ID: mdl-25254069

ABSTRACT

From the beginning of life, face and language processing are crucial for establishing social communication. Studies on the development of systems for processing faces and language have yielded such similarities as perceptual narrowing across both domains. In this article, we review several functions of human communication, and then describe how the tools used to accomplish those functions are modified by perceptual narrowing. We conclude that narrowing is common to all forms of social communication. We argue that during evolution, social communication engaged different perceptual and cognitive systems (face, facial expression, gesture, vocalization, sound, and oral language) that emerged at different times. These systems are interactive and linked to some extent. In this framework, narrowing can be viewed as a way infants adapt to their native social group.

6.
Lang Speech ; 52(Pt 2-3): 177-206, 2009.
Article in English | MEDLINE | ID: mdl-19624029

ABSTRACT

Prosodic contrastive focus is used to attract the listener's attention to a specific part of the utterance. Although mostly conceived of as auditory/acoustic, it also has visible correlates which have been shown to be perceived. This study aimed at analyzing the auditory-visual perception of prosodic focus by elaborating a paradigm enabling measurement of an auditory-visual advantage (avoiding the ceiling effect) and by examining the interaction between audition and vision. A first experiment established the effectiveness of a whispered-speech paradigm for measuring an auditory-visual advantage in the perception of prosodic features. A second experiment used this paradigm to examine and characterize the auditory-visual perceptual processes, combining performance assessment (focus detection scores) with reaction time measurements; it confirmed and extended the results of the first experiment. This study showed that adding vision to audition for the perception of prosodic focus can not only improve focus detection but also reduce reaction times. A further analysis suggested that audition and vision are actually integrated for the perception of prosodic focus. Visual-only perception appeared to be facilitated for whispered speech, suggesting an enhancement of visual cues in whispering. Moreover, the potential influence of the presence of facial markers on perception is discussed.


Subject(s)
Auditory Perception , Face , Speech Perception , Visual Perception , Adult , Analysis of Variance , Cues , Female , Humans , Male , Middle Aged , Psycholinguistics , Reaction Time , Speech , Speech Acoustics , Surveys and Questionnaires , Young Adult
7.
Percept Psychophys ; 68(3): 458-74, 2006 Apr.
Article in English | MEDLINE | ID: mdl-16900837

ABSTRACT

Perceptual changes are experienced during rapid and continuous repetition of a speech form, leading to an auditory illusion known as the verbal transformation effect. Although verbal transformations are considered to reflect mainly the perceptual organization and interpretation of speech, the present study was designed to test whether speech production constraints may participate in the emergence of verbal representations. With this goal in mind, we examined whether variations in the articulatory cohesion of repeated nonsense words (specifically, the temporal relationships between articulatory events) could lead to perceptual asymmetries in verbal transformations. The first experiment revealed variations in the timing relations between two consonantal gestures embedded in various nonsense syllables during a repetitive speech production task. In the second experiment, French participants repeatedly uttered these syllables while searching for verbal transformations. Syllable transformation frequencies followed the temporal clustering between consonantal gestures: the more synchronized the gestures, the more stable and attractive the syllable. In the third experiment, which involved a covert repetition mode, the pattern was maintained without external speech movements. However, when a purely perceptual condition was used in a fourth experiment, the previously observed perceptual asymmetries of verbal transformations disappeared. These experiments demonstrate the existence of an asymmetric bias in the verbal transformation effect linked to articulatory control constraints. The persistence of this effect from an overt to a covert repetition procedure provides evidence that articulatory stability constraints originating from the action system may be involved in auditory imagery. The absence of the asymmetric bias during a purely auditory procedure rules out perceptual mechanisms as a possible explanation of the observed asymmetries.


Subject(s)
Phonetics , Speech Perception , Verbal Behavior , Humans , Illusions , Sound Spectrography , Speech
8.
Neuroimage ; 23(3): 1143-51, 2004 Nov.
Article in English | MEDLINE | ID: mdl-15528113

ABSTRACT

We used functional magnetic resonance imaging (fMRI) to localize the brain areas involved in the imagery analogue of the verbal transformation effect, that is, the perceptual changes that occur when a speech form is cycled in rapid and continuous mental repetition. Two conditions were contrasted: a baseline condition involving the simple mental repetition of speech sequences, and a verbal transformation condition involving the mental repetition of the same items with an active search for verbal transformation. Our results reveal a predominantly left-lateralized network of cerebral regions activated by the verbal transformation task, similar to the neural network involved in verbal working memory: the left inferior frontal gyrus, the left supramarginal gyrus, the left superior temporal gyrus, the anterior part of the right cingulate cortex, and the cerebellar cortex, bilaterally. Our results strongly suggest that the imagery analogue of the verbal transformation effect, which requires percept analysis, form interpretation, and attentional maintenance of verbal material, relies on a working memory module sharing common components of speech perception and speech production systems.


Subject(s)
Brain/physiology , Speech/physiology , Verbal Behavior/physiology , Adult , Echo-Planar Imaging , Female , Functional Laterality/physiology , Humans , Magnetic Resonance Imaging , Male , Memory, Short-Term/physiology , Models, Statistical , Psychomotor Performance/physiology