Results 1 - 20 of 37
1.
Front Psychol ; 15: 1393836, 2024.
Article in English | MEDLINE | ID: mdl-38813567

ABSTRACT

Introduction: In bilingual communities, knowing the language each speaker uses may support language separation and, later, guide language use in a context-appropriate manner. Previous research has shown that infants begin to form primary associations between the face and the language used by a speaker around the age of 3 months. However, there is still a limited understanding of how robust these associations are and whether they are influenced by the linguistic background of the infant. To answer these questions, this study explores monolingual and bilingual infants' ability to form face-language associations throughout the first year of life. Methods: A group of 4-, 6-, and 10-month-old Spanish and/or Catalan monolingual and bilingual infants were tested in an eye-tracking preferential-looking paradigm (N = 156). After the infants were familiarized with videos of a Catalan and a Spanish speaker, they were tested in two types of test trials with different task demands. First, a Silent test trial assessed primary face-language associations by measuring infants' visual preference for the speakers based on the language they had previously used. Then, two Language test trials assessed more robust face-language associations by measuring infants' ability to match the face of each speaker with their corresponding language. Results: When measuring primary face-language associations, both monolingual and bilingual infants exhibited language-based preferences according to their specific exposure to the languages. Interestingly, this preference varied with age, with a transition from an initial familiarity preference to a novelty preference in older infants. Four-month-old infants showed a preference for the speaker who used their native/dominant language, while 10-month-old infants preferred the speaker who used their non-native/non-dominant language. 
When measuring more robust face-language associations, infants did not consistently match the faces of the speakers with the language they had previously used, regardless of age or linguistic background. Discussion: Overall, the results indicate that while both monolingual and bilingual infants within the first year of life can form primary face-language associations, these associations remain fragile, as infants seemed unable to maintain them when tested in a more demanding task.

2.
Dev Psychol ; 60(1): 135-143, 2024 Jan.
Article in English | MEDLINE | ID: mdl-37917490

ABSTRACT

We presented 28 Spanish monolingual and 28 Catalan-Spanish close-language bilingual 5-year-old children with a video of a talker speaking in the children's native language and a nonnative language and examined the temporal dynamics of their selective attention to the talker's eyes and mouth. When the talker spoke in the children's native language, monolinguals attended equally to the eyes and mouth throughout the trial, whereas close-language bilinguals first attended more to the mouth and then distributed attention equally between the eyes and mouth. In contrast, when the talker spoke in a nonnative language (English), both monolinguals and bilinguals initially attended more to the mouth and then gradually shifted to a pattern of equal attention to the eyes and mouth. These results indicate that specific early linguistic experience has differential effects on young children's deployment of selective attention to areas of a talker's face during the initial part of an audiovisual utterance.


Subject(s)
Multilingualism , Speech Perception , Humans , Child, Preschool , Language , Language Development , Linguistics
3.
Sci Adv ; 9(15): eade4083, 2023 04 14.
Article in English | MEDLINE | ID: mdl-37043570

ABSTRACT

In language, grammatical dependencies often hold between items that are not immediately adjacent to each other. Acquiring these nonadjacent dependencies is crucial for learning grammar. However, there are potentially infinitely many dependencies in the language input. How does the infant brain solve this computational learning problem? Here, we demonstrate that while rudimentary sensitivity to nonadjacent regularities may be present relatively early, robust and reliable learning can only be achieved when convergent statistical and perceptual (specifically prosodic) cues are both present, helping the infant brain detect the building blocks that form a nonadjacent dependency. This study contributes to our understanding of the neural foundations of rule learning that pave the way for language acquisition.


Subject(s)
Cues , Learning , Humans , Infant , Language Development , Language , Linguistics
4.
Infancy ; 27(5): 963-971, 2022 09.
Article in English | MEDLINE | ID: mdl-35833310

ABSTRACT

Infants start tracking auditory-only non-adjacent dependencies (NADs) between 15 and 18 months of age. Given that audiovisual speech, normally available in a talker's mouth, is perceptually more salient than auditory speech and that it facilitates speech processing and language acquisition, we investigated whether 15-month-old infants' NAD learning is modulated by attention to a talker's mouth. Infants performed an audiovisual NAD learning task during which we recorded their selective attention to the eyes, mouth, and face of an actress as she spoke an artificial language that followed an AXB structure (tis-X-bun; nal-X-gor) during familiarization. At test, the actress spoke the same language (grammatical trials; tis-X-bun; nal-X-gor) or a novel one that violated the AXB structure (ungrammatical trials; tis-X-gor; nal-X-bun). Overall, total duration of looking did not differ between the familiar and novel test trials, but the time course of selective attention to the talker's face and mouth revealed that the novel trials maintained infants' attention to the face more than the familiar trials did. Crucially, attention to the mouth increased during the novel test trials while it did not change during the familiar test trials. These results indicate that the multisensory redundancy of audiovisual speech facilitates infants' discrimination of non-adjacent dependencies.


Subject(s)
Language Development , Speech Perception , Female , Humans , Infant , Language , Speech
5.
J Exp Child Psychol ; 206: 105070, 2021 06.
Article in English | MEDLINE | ID: mdl-33601290

ABSTRACT

Temporal expectations critically influence perception and action. Previous research reports contradictory results regarding children's ability to endogenously orient attention in time, as well as regarding its developmental course. To reconcile this seemingly conflicting evidence, we put forward the hypothesis that expectancy violations-through the use of invalid trials-are the source of the mixed evidence reported in the literature. With the aim of offering new results that could reconcile previous findings, we tested a group of young children (4- to 7-year-olds), an older group (8- to 12-year-olds), and a group of adults. Temporal cues provided expectations about target onset time, and invalid trials were used such that the target appeared at the unexpected time in 25% of the trials. In both experiments, the younger children responded faster in valid trials than in invalid trials, showing that they benefited from the temporal cue. These results show that young children rely on temporal expectations to orient attention in time endogenously. Importantly, younger children exhibited greater validity effects than older children and adults, and these effects correlated positively with participants' performance in the invalid (unexpected) trials. We interpret the reduction of validity effects with age as an index of better adaptation to the invalid (unexpected) condition. By using invalid trials and testing three age groups, we demonstrate that previous findings are not inconsistent. Rather, evidence converges when considering the presence of expectancy violations that require executive control mechanisms, which develop progressively during childhood. We propose a distinction between rigid and flexible mechanisms of temporal orienting to accommodate all findings.


Subject(s)
Attention , Cues , Adolescent , Adult , Child , Child, Preschool , Executive Function , Humans , Reaction Time
6.
Infant Behav Dev ; 54: 80-84, 2019 02.
Article in English | MEDLINE | ID: mdl-30634137

ABSTRACT

We investigated whether attention to a talker's eyes in 12-month-old infants is related to their communication and social abilities. We measured infant attention to a talker's eyes and mouth with a Tobii eye-tracker and examined the correlation between attention to the talker's eyes and scores on the Adaptive Behavior Questionnaire from the Bayley Scales of Infant and Toddler Development (BSID-III). Results indicated a positive relationship between eye gaze and scores on the Social and Communication subscales of the BSID-III.


Subject(s)
Attention/physiology , Communication , Facial Expression , Fixation, Ocular/physiology , Social Skills , Speech Perception/physiology , Female , Humans , Infant , Male , Photic Stimulation/methods
7.
Dev Sci ; 22(3): e12755, 2019 05.
Article in English | MEDLINE | ID: mdl-30251757

ABSTRACT

Previous findings indicate that bilingual Catalan/Spanish-learning infants attend more to the highly salient audiovisual redundancy cues normally available in a talker's mouth than do monolingual infants. Presumably, greater attention to such cues renders the challenge of learning two languages easier. Spanish and Catalan are, however, rhythmically and phonologically close languages. This raises the possibility that bilinguals only rely on redundant audiovisual cues when their languages are close. To test this possibility, we exposed 15-month-old and 4- to 6-year-old close-language bilinguals (Spanish/Catalan) and distant-language bilinguals (Spanish/"other") to videos of a talker uttering Spanish or Catalan (native) and English (non-native) monologues and recorded eye-gaze to the talker's eyes and mouth. At both ages, the close-language bilinguals attended more to the talker's mouth than the distant-language bilinguals. This indicates that language proximity modulates selective attention to a talker's mouth during early childhood and suggests that reliance on the greater salience of audiovisual speech cues depends on the difficulty of the speech-processing task.


Subject(s)
Attention/physiology , Cues , Fixation, Ocular/physiology , Mouth/physiology , Multilingualism , Child , Child, Preschool , Eye , Female , Humans , Infant , Language , Language Development , Male , Speech Perception
8.
PLoS One ; 13(1): e0190734, 2018.
Article in English | MEDLINE | ID: mdl-29293669

ABSTRACT

[This corrects the article DOI: 10.1371/journal.pone.0184698.].

9.
PLoS One ; 12(9): e0184698, 2017.
Article in English | MEDLINE | ID: mdl-28886169

ABSTRACT

Anticipating both where and when an object will appear is a critical ability for adaptation. Research in the temporal domain in adults indicates that dissociable mechanisms underlie endogenous attention driven by the properties of the stimuli themselves (e.g., rhythmic, sequential, or trajectory cues) and endogenous attention driven by symbolic cues. In infancy, we know that the capacity to endogenously orient attention develops progressively. However, the above-mentioned distinction has not yet been explored, since previous studies involved stimulus-driven cues. The current study tested 12- and 15-month-olds in an adaptation of the anticipatory eye movement procedure to determine whether infants were able to anticipate a specific location and temporal interval predicted only by symbolic pre-cues. In the absence of stimulus-driven cues, results showed that only 15-month-olds exhibited anticipatory behavior based on the temporal information provided by the symbolic cues. Distinguishing stimulus-driven expectations from those driven by symbolic cues allowed a clearer dissection of the developmental progression of temporal endogenous attention.


Subject(s)
Cues , Analysis of Variance , Attention/physiology , Eye Movements/physiology , Female , Humans , Infant , Male , Reaction Time/physiology , Visual Perception/physiology
10.
Iperception ; 8(3): 2041669517716183, 2017.
Article in English | MEDLINE | ID: mdl-28694959

ABSTRACT

Higher frequency and louder sounds are associated with higher positions whereas lower frequency and quieter sounds are associated with lower locations. In English, "high" and "low" are used to label pitch, loudness, and spatial verticality. By contrast, different words are preferentially used, in Catalan and Spanish, for pitch (high: "agut/agudo"; low: "greu/grave") and for loudness/verticality (high: "alt/alto"; low: "baix/bajo"). Thus, English and Catalan/Spanish differ in the spatial connotations for pitch. To analyze the influence of language on these crossmodal associations, a task was conducted in which English and Spanish/Catalan speakers had to judge whether a tone was higher or lower (in pitch or loudness) than a reference tone. The response buttons were located at crossmodally congruent or incongruent positions with respect to the probe tone. Crossmodal correspondences were evidenced in both language groups. However, English speakers showed greater effects for pitch, suggesting an influence of linguistic background.

11.
Front Psychol ; 7: 44, 2016.
Article in English | MEDLINE | ID: mdl-26869953

ABSTRACT

Language is one of the most fascinating abilities that humans possess. Infants demonstrate an impressive repertoire of linguistic abilities from very early on, and their language reaches an adult-like form remarkably quickly. However, language is not acquired all at once but in an incremental fashion. In this article we propose that the attentional system may be one of the sources of this developmental trajectory in language acquisition. At birth, infants are endowed with an attentional system fully driven by salient stimuli in their environment, such as prosodic information (e.g., rhythm or pitch). Early stages of language acquisition could benefit from this readily available, stimulus-driven attention to simplify the complex speech input and allow word segmentation. At later stages of development, infants are progressively able to selectively attend to specific elements while disregarding others. This attentional ability could allow them to learn the distant non-adjacent rules needed for morphosyntactic acquisition. Because non-adjacent dependencies occur at distant moments in time, learning these dependencies may require correctly orienting attention in the temporal domain. Here, we gather evidence uncovering the intimate relationship between the development of attention and language. We aim to provide a novel approach to human development, bridging together temporal attention and language acquisition.

12.
Psychol Sci ; 26(4): 490-8, 2015 Apr.
Article in English | MEDLINE | ID: mdl-25767208

ABSTRACT

Infants growing up in bilingual environments succeed at learning two languages. What adaptive processes enable them to master the more complex nature of bilingual input? One possibility is that bilingual infants take greater advantage of the redundancy of the audiovisual speech that they usually experience during social interactions. Thus, we investigated whether bilingual infants' need to keep languages apart increases their attention to the mouth as a source of redundant and reliable speech cues. We measured selective attention to talking faces in 4-, 8-, and 12-month-old Catalan and Spanish monolingual and bilingual infants. Monolinguals looked more at the eyes than the mouth at 4 months and more at the mouth than the eyes at 8 months in response to both native and nonnative speech, but they looked more at the mouth than the eyes at 12 months only in response to nonnative speech. In contrast, bilinguals looked equally at the eyes and mouth at 4 months, more at the mouth than the eyes at 8 months, and more at the mouth than the eyes at 12 months, and these patterns of responses were found for both native and nonnative speech at all ages. Thus, to support their dual-language acquisition processes, bilingual infants exploit the greater perceptual salience of redundant audiovisual speech cues at an earlier age and for a longer time than monolingual infants.


Subject(s)
Attention , Language Development , Multilingualism , Speech Perception , Cues , Face/anatomy & histology , Female , Humans , Infant , Male , Mouth/anatomy & histology
13.
Infant Behav Dev ; 38: 126-9, 2015 Feb.
Article in English | MEDLINE | ID: mdl-25656953

ABSTRACT

This study investigated the sensitivity of 9-month-old infants to the alignment between prosodic and gesture prominences in pointing-speech combinations. Results revealed that the perception of prominence is multimodal and that infants are aware of the timing of gesture-speech combinations well before they can produce them.


Subject(s)
Language Development , Nonverbal Communication , Speech Acoustics , Speech Perception , Association Learning , Attention , Comprehension , Female , Gestures , Humans , Infant , Male , Speech , Time Factors
14.
Infant Behav Dev ; 38: 77-81, 2015 Feb.
Article in English | MEDLINE | ID: mdl-25617593

ABSTRACT

We examined 4- and 6-month-old infants' sensitivity to the perceptual association between pitch and object size. Crossmodal correspondence effects were observed in 6-month-old infants but not in younger infants, suggesting that experience and/or further maturation is needed to fully develop this crossmodal association.


Subject(s)
Association Learning , Pattern Recognition, Visual , Pitch Discrimination , Psychology, Child , Size Perception , Age Factors , Attention , Child Development , Color Perception , Female , Humans , Infant , Male , Motion Perception
15.
Acta Psychol (Amst) ; 149: 142-7, 2014 Jun.
Article in English | MEDLINE | ID: mdl-24576508

ABSTRACT

We investigated the effects of linguistic experience and language familiarity on the perception of audio-visual (A-V) synchrony in fluent speech. In Experiment 1, we tested a group of monolingual Spanish- and Catalan-learning 8-month-old infants with a video clip of a person speaking Spanish. Following habituation to the audiovisually synchronous video, infants saw and heard desynchronized clips of the same video in which the audio stream now preceded the video stream by 366, 500, or 666 ms. In Experiment 2, monolingual Catalan and Spanish infants were tested with a video clip of a person speaking English. Results indicated that in both experiments, infants detected a 666 and a 500 ms asynchrony. That is, their responsiveness to A-V synchrony was the same regardless of their specific linguistic experience or familiarity with the tested language. Compared to previous results from infant studies with isolated audiovisual syllables, these results show that infants are more sensitive to A-V temporal relations inherent in fluent speech. Furthermore, the absence of a language familiarity effect on the detection of A-V speech asynchrony at eight months of age is consistent with the broad perceptual tuning usually observed in infant responses to linguistic input at this age.


Subject(s)
Language , Recognition, Psychology/physiology , Speech Perception/physiology , Visual Perception/physiology , Auditory Perception/physiology , Female , Hearing/physiology , Humans , Infant , Male , Spain , Speech
16.
J Child Lang ; 40(3): 687-700, 2013 Jun.
Article in English | MEDLINE | ID: mdl-22874648

ABSTRACT

Speech perception involves the integration of auditory and visual articulatory information, and thus requires the perception of temporal synchrony between these sources of information. There is evidence that children with specific language impairment (SLI) have difficulty with auditory speech perception, but it is not known if this is also true for the integration of auditory and visual speech. Twenty Spanish-speaking children with SLI, twenty typically developing age-matched Spanish-speaking children, and twenty Spanish-speaking children matched for MLU-w participated in an eye-tracking study to investigate the perception of audiovisual speech synchrony. Results revealed that children with typical language development perceived an audiovisual asynchrony of 666 ms regardless of whether the auditory or visual speech attribute led the other one. Children with SLI only detected the 666 ms asynchrony when the auditory component preceded the visual component. None of the groups perceived an audiovisual asynchrony of 366 ms. These results suggest that the difficulty of speech processing in children with SLI also involves difficulties in integrating the auditory and visual aspects of speech perception.


Subject(s)
Speech Disorders/psychology , Speech Perception , Auditory Perception , Case-Control Studies , Child , Child, Preschool , Eye Movement Measurements , Female , Humans , Male , Spain , Video Recording , Visual Perception
17.
Int J Behav Dev ; 37(2): 90-94, 2013 Mar 01.
Article in English | MEDLINE | ID: mdl-24648601

ABSTRACT

Audiovisual speech consists of overlapping and invariant patterns of dynamic acoustic and optic articulatory information. Research has shown that infants can perceive a variety of basic audio-visual (A-V) relations but no studies have investigated whether and when infants begin to perceive higher order A-V relations inherent in speech. Here, we asked whether and when infants become capable of recognizing amodal language identity, a critical perceptual skill that is necessary for the development of multisensory communication. Because, at a minimum, such a skill requires the ability to perceive suprasegmental auditory and visual linguistic information, we predicted that this skill would not emerge before higher-level speech processing and multisensory integration skills emerge. Consistent with this prediction, we found that recognition of the amodal identity of language emerges at 10-12 months of age but that when it emerges it is restricted to infants' native language.

18.
Infant Behav Dev ; 35(4): 815-8, 2012 Dec.
Article in English | MEDLINE | ID: mdl-22982283

ABSTRACT

The present study explored the effects of short-term experience with audiovisual asynchronous stimuli in 6-month-old infants. Results revealed that, in contrast with adults (usually showing temporal recalibration under similar circumstances), a brief exposure to asynchrony increased infants' perceptual sensitivity to audiovisual synchrony.


Subject(s)
Auditory Perception/physiology , Recognition, Psychology/physiology , Visual Perception/physiology , Acoustic Stimulation , Female , Humans , Infant , Male , Photic Stimulation , Reaction Time/physiology
19.
Child Dev ; 83(3): 965-76, 2012.
Article in English | MEDLINE | ID: mdl-22364434

ABSTRACT

Vowels with extreme articulatory-acoustic properties act as natural referents. Infant perceptual asymmetries point to an underlying bias favoring these referent vowels. However, as language experience is gathered, distributional frequency of speech sounds could modify this initial bias. The perception of the /i/-/e/ contrast was explored in 144 Catalan- and Spanish-learning infants (2 languages with a different distribution of vowel frequency of occurrence) at 4, 6, and 12 months. The results confirmed an acoustic bias at 4 and 6 months in all infants. However, at 12 months, discrimination was not affected by the acoustic bias but by the frequency of occurrence of the vowel.


Subject(s)
Discrimination, Psychological/physiology , Phonetics , Speech Perception/physiology , Acoustic Stimulation , Analysis of Variance , Cues , Female , Habituation, Psychophysiologic/physiology , Humans , Infant , Male , Spain
20.
Psicológica (Valencia, Ed. impr.) ; 33(2): 175-207, 2012. ilus, tab
Article in English | IBECS | ID: ibc-100387

ABSTRACT

Using an artificial language learning manipulation, Maye, Werker, and Gerken (2002) demonstrated that infants' speech sound categories change as a function of the distributional properties of the input. In a recent study, Werker et al. (2007) showed that infant-directed speech (IDS) input contains reliable acoustic cues that support distributional learning of language-specific vowel categories: English cues are spectral and durational; Japanese cues are exclusively durational. In the present study we extend these results in two ways: (1) we examine a language, Catalan, which distinguishes vowels solely on the basis of spectral differences, and (2) because infants learn from overheard adult speech as well as IDS (Oshima-Takane, 1988), we analyze adult-directed speech (ADS) in all three languages. Analyses revealed robust differences in the cues of each language, and demonstrated that these cues alone are sufficient to yield language-specific vowel categories. This demonstration of language-specific differences in the distribution of cues to phonetic category structure found in ADS provides additional evidence for the types of cues available to infants to guide their establishment of native phonetic categories.


Subject(s)
Humans , Male , Female , Young Adult , Adult , Phonetics , Articulation Disorders/psychology , Lipreading , Speech/physiology , Evidence-Based Medicine/methods , Acoustic Impedance Tests/methods , Acoustic Impedance Tests , Acoustic Stimulation/psychology , Psychoacoustics , Analysis of Variance , Odds Ratio , Probability , Verbal Behavior/physiology