Results 1 - 18 of 18
1.
Dev Sci ; 27(4): e13483, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38470174

ABSTRACT

Impaired sensorimotor synchronization (SMS) to acoustic rhythm may be a marker of atypical language development. Here, motion capture was used to assess gross motor rhythmic movement at six time points between 5 and 11 months of age. Infants were recorded drumming to acoustic stimuli of varying linguistic and temporal complexity: drumbeats, repeated syllables and nursery rhymes. We show, for the first time, developmental change in infants' movement timing in response to auditory stimuli over the first year of life. Longitudinal analyses revealed that whilst infants could not yet reliably synchronize their movement to auditory rhythms, infant spontaneous motor tempo became faster with age, and by 11 months a subset of infants decelerated from their spontaneous motor tempo, bringing it closer to the incoming tempo. Further, infants became more regular drummers with age, with marked decreases in the variability of spontaneous motor tempo and in variability in response to drumbeats. This latter effect was subdued in response to linguistic stimuli. The current work lays the foundation for using individual differences in precursors of SMS in infancy to predict later language outcomes. RESEARCH HIGHLIGHTS: We present the first longitudinal investigation of infant rhythmic movement over the first year of life. Whilst infants generally move more quickly and with greater regularity over their first year, by 11 months infants begin to counter this pattern when hearing slower infant-directed song. Infant movement is more variable in response to speech than to non-speech stimuli. In the context of the larger Cambridge UK BabyRhythm Project, we lay the foundation for rhythmic movement in infancy to predict later language outcomes.


Subject(s)
Acoustic Stimulation , Language Development , Speech , Humans , Infant , Longitudinal Studies , Speech/physiology , Female , Male , Child Development/physiology , Movement/physiology , Periodicity , Auditory Perception/physiology
2.
Dev Sci ; 27(4): e13502, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38482775

ABSTRACT

It is known that the rhythms of speech are visible on the face, accurately mirroring changes in the vocal tract. These low-frequency visual temporal movements are tightly correlated with speech output, and both visual speech (e.g., mouth motion) and the acoustic speech amplitude envelope entrain neural oscillations. Low-frequency visual temporal information ('visual prosody') is known from behavioural studies to be perceived by infants, but oscillatory studies are currently lacking. Here we measure cortical tracking of low-frequency visual temporal information by 5- and 8-month-old infants using a rhythmic speech paradigm (repetition of the syllable 'ta' at 2 Hz). Eye-tracking data were collected simultaneously with EEG, enabling computation of cortical tracking and phase angle during visual-only speech presentation. Significantly higher power at the stimulus frequency indicated that cortical tracking occurred at both ages. Further, individual differences in preferred phase to visual speech related to subsequent measures of language acquisition. The difference in phase between visual-only speech and the same speech presented as auditory-visual at 6 and 9 months was also examined. These neural data suggest that individual differences in early language acquisition may be related to the phase of entrainment to visual rhythmic input in infancy. RESEARCH HIGHLIGHTS: Infant preferred phase to visual rhythmic speech predicts language outcomes. Significant cortical tracking of visual speech is present at 5 and 8 months. Phase angle to visual speech at 8 months predicted greater receptive and productive vocabulary at 24 months.
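The tracking measure described here (power and phase angle at the 2 Hz stimulus rate) can be illustrated with a minimal sketch. This is not the authors' analysis pipeline; it simply evaluates the Fourier coefficient of a synthetic, perfectly entrained signal at the stimulus frequency, with all values invented for the example.

```python
import numpy as np

def power_and_phase_at(freq, x, sfreq):
    """Spectral power and phase angle of signal `x` at `freq` Hz (plain DFT)."""
    n = len(x)
    t = np.arange(n) / sfreq
    # Complex Fourier coefficient at the frequency of interest
    coef = np.sum(x * np.exp(-2j * np.pi * freq * t)) / n
    return np.abs(coef) ** 2, np.angle(coef)

# Synthetic "entrained" response at the 2 Hz stimulus rate with a
# 45-degree phase lag; 10 s at 100 Hz sampling.
sfreq = 100.0
t = np.arange(0, 10, 1 / sfreq)
eeg = np.cos(2 * np.pi * 2.0 * t - np.pi / 4)

power, phase = power_and_phase_at(2.0, eeg, sfreq)
# power is 0.25 (unit-amplitude cosine), phase is -pi/4 (the imposed lag)
```

In a real analysis the power would be averaged over epochs and channels and compared against neighbouring frequencies to establish significance.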


Subject(s)
Language Development , Speech Perception , Speech , Humans , Infant , Male , Female , Speech Perception/physiology , Speech/physiology , Electroencephalography , Individuality , Visual Perception/physiology , Eye-Tracking Technology , Acoustic Stimulation , Photic Stimulation
3.
J Neurosci Methods ; 403: 110036, 2024 03.
Article in English | MEDLINE | ID: mdl-38128783

ABSTRACT

BACKGROUND: Computational models that successfully decode neural activity into speech are increasing in the adult literature, with convolutional neural networks (CNNs), backward linear models, and mutual information (MI) models all being applied to neural data in relation to speech input. This is not the case in the infant literature. NEW METHOD: Three different computational models, two novel for infants, were applied to decode low-frequency speech envelope information. Previously-employed backward linear models were compared to novel CNN and MI-based models. Fifty infants provided EEG recordings when aged 4, 7, and 11 months, while listening passively to natural speech (sung or chanted nursery rhymes) presented by video with a female singer. RESULTS: Each model computed speech information for these nursery rhymes in two different low-frequency bands, delta and theta, thought to provide different types of linguistic information. All three models demonstrated significant levels of performance for delta-band neural activity from 4 months of age, with two of three models also showing significant performance for theta-band activity. All models also demonstrated higher accuracy for the delta-band neural responses. None of the models showed developmental (age-related) effects. COMPARISONS WITH EXISTING METHODS: The data demonstrate that the choice of algorithm used to decode speech envelope information from neural activity in the infant brain determines the developmental conclusions that can be drawn. CONCLUSIONS: The modelling shows that better understanding of the strengths and weaknesses of each modelling approach is fundamental to improving our understanding of how the human brain builds a language system.
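As a rough illustration of the backward linear modelling approach mentioned above, the sketch below fits a ridge-regularized decoder that reconstructs a toy "speech envelope" from simulated multichannel EEG and scores it by correlation. The data, channel gains, and regularization value are all invented for the example; real mTRF-style decoders also include time-lagged features.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: a smooth "speech envelope" and 8 EEG channels that each
# carry the envelope at a different gain, plus sensor noise.
n_samples, n_channels = 2000, 8
envelope = np.convolve(rng.standard_normal(n_samples), np.ones(50) / 50, mode="same")
gains = np.linspace(0.5, 1.5, n_channels)
eeg = envelope[:, None] * gains + 0.2 * rng.standard_normal((n_samples, n_channels))

# Backward (decoding) model: ridge regression from EEG channels to envelope.
lam = 1.0  # regularization strength, arbitrary for this toy example
w = np.linalg.solve(eeg.T @ eeg + lam * np.eye(n_channels), eeg.T @ envelope)
reconstruction = eeg @ w

# Decoding performance: Pearson correlation between reconstructed
# and true envelopes.
r = np.corrcoef(reconstruction, envelope)[0, 1]
```

The correlation `r` is the usual accuracy score for such decoders; the CNN and mutual-information models compared in the paper replace the linear map while keeping the same decoding goal.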


Subject(s)
Speech Perception , Speech , Adult , Humans , Female , Infant , Speech/physiology , Electroencephalography , Linear Models , Brain , Neural Networks, Computer , Speech Perception/physiology
4.
Nat Commun ; 14(1): 7789, 2023 Dec 01.
Article in English | MEDLINE | ID: mdl-38040720

ABSTRACT

Even prior to producing their first words, infants are developing a sophisticated speech processing system, with robust word recognition present by 4-6 months of age. These emergent linguistic skills, observed with behavioural investigations, are likely to rely on increasingly sophisticated neural underpinnings. The infant brain is known to robustly track the speech envelope, however previous cortical tracking studies were unable to demonstrate the presence of phonetic feature encoding. Here we utilise temporal response functions computed from electrophysiological responses to nursery rhymes to investigate the cortical encoding of phonetic features in a longitudinal cohort of infants when aged 4, 7 and 11 months, as well as adults. The analyses reveal an increasingly detailed and acoustically invariant phonetic encoding emerging over the first year of life, providing neurophysiological evidence that the pre-verbal human cortex learns phonetic categories. By contrast, we found no credible evidence for age-related increases in cortical tracking of the acoustic spectrogram.


Subject(s)
Auditory Cortex , Speech Perception , Adult , Infant , Humans , Phonetics , Auditory Cortex/physiology , Speech Perception/physiology , Speech/physiology , Acoustics , Acoustic Stimulation
5.
Brain Lang ; 243: 105301, 2023 08.
Article in English | MEDLINE | ID: mdl-37399686

ABSTRACT

Atypical phase alignment of low-frequency neural oscillations to speech rhythm has been implicated in phonological deficits in developmental dyslexia. Atypical phase alignment to rhythm could thus also characterize infants at risk for later language difficulties. Here, we investigate phase-language mechanisms in a neurotypical infant sample. 122 two-, six- and nine-month-old infants were played speech and non-speech rhythms while EEG was recorded in a longitudinal design. The phase of infants' neural oscillations aligned consistently to the stimuli, with group-level convergence towards a common phase. Individual low-frequency phase alignment related to subsequent measures of language acquisition up to 24 months of age. Accordingly, individual differences in language acquisition are related to the phase alignment of cortical tracking of auditory and audiovisual rhythms in infancy, an automatic neural mechanism. Automatic rhythmic phase-language mechanisms could eventually serve as biomarkers, identifying at-risk infants and enabling intervention at the earliest stages of development.


Subject(s)
Speech Perception , Infant , Humans , Language , Speech , Language Development
6.
Front Neurosci ; 16: 842447, 2022.
Article in English | MEDLINE | ID: mdl-35495026

ABSTRACT

Here we replicate a neural tracking paradigm, previously published with infants (aged 4 to 11 months), with adult participants, in order to explore potential developmental similarities and differences in entrainment. Adults listened and watched passively as nursery rhymes were sung or chanted in infant-directed speech. Whole-head EEG (128 channels) was recorded, and cortical tracking of the sung speech in the delta (0.5-4 Hz), theta (4-8 Hz) and alpha (8-12 Hz) frequency bands was computed using linear decoders (multivariate Temporal Response Function models, mTRFs). Phase-amplitude coupling (PAC) was also computed to assess whether delta and theta phases temporally organize higher-frequency amplitudes for adults in the same pattern as found in the infant brain. Like the infants tested previously, the adults showed significant cortical tracking of the sung speech in both delta and theta bands. However, the frequencies associated with peaks in the stimulus-induced power spectral density (PSD) differed between the two populations. PAC also differed: it was stronger for theta-driven than for delta-driven coupling in adults, whereas it was equal for delta- and theta-driven coupling in infants. Adults also showed a stimulus-induced increase in low alpha power that was absent in infants. This may suggest adult recruitment of other cognitive processes, possibly related to comprehension or attention. The comparative data suggest that while infant and adult brains utilize essentially the same cortical mechanisms to track linguistic input, the operation of and interplay between these mechanisms may change with age and language experience.

7.
Dev Cogn Neurosci ; 54: 101075, 2022 04.
Article in English | MEDLINE | ID: mdl-35078120

ABSTRACT

Amplitude rise times play a crucial role in the perception of rhythm in speech, and reduced perceptual sensitivity to differences in rise time is related to developmental language difficulties. Amplitude rise times also play a mechanistic role in neural entrainment to the speech amplitude envelope. Using an ERP paradigm, here we examined for the first time whether infants at the ages of seven and eleven months exhibit an auditory mismatch response to changes in the rise times of simple repeating auditory stimuli. We found that infants exhibited a mismatch response (MMR) to all of the oddball rise times used for the study. The MMR was more positive at seven than eleven months of age. At eleven months, there was a shift to a mismatch negativity (MMN) that was more pronounced over left fronto-central electrodes. The MMR over right fronto-central electrodes was sensitive to the size of the difference in rise time. The results indicate that neural processing of changes in rise time is present at seven months, supporting the possibility that early speech processing is facilitated by neural sensitivity to these important acoustic cues.


Subject(s)
Evoked Potentials, Auditory , Speech Perception , Acoustic Stimulation/methods , Electroencephalography , Evoked Potentials, Auditory/physiology , Humans , Infant , Speech , Speech Perception/physiology
8.
Neuroimage ; 247: 118698, 2022 02 15.
Article in English | MEDLINE | ID: mdl-34798233

ABSTRACT

The amplitude envelope of speech carries crucial low-frequency acoustic information that assists linguistic decoding at multiple time scales. Neurophysiological signals are known to track the amplitude envelope of adult-directed speech (ADS), particularly in the theta band. Acoustic analysis of infant-directed speech (IDS) has revealed significantly greater modulation energy than ADS in an amplitude-modulation (AM) band centred on ∼2 Hz. Accordingly, cortical tracking of IDS by delta-band neural signals may be key to language acquisition. Speech also contains acoustic information within its higher-frequency bands (beta, gamma). Adult EEG and MEG studies reveal an oscillatory hierarchy, whereby low-frequency (delta, theta) neural phase dynamics temporally organize the amplitude of high-frequency signals (phase-amplitude coupling, PAC). Whilst consensus is growing around the role of PAC in the mature adult brain, its role in the development of speech processing is unexplored. Here, we examined the presence and maturation of low-frequency (<12 Hz) cortical speech tracking in infants by recording EEG longitudinally from 60 participants when aged 4, 7 and 11 months as they listened to nursery rhymes. After establishing stimulus-related neural signals in delta and theta, cortical tracking at each age was assessed in the delta, theta and alpha [control] bands using a multivariate temporal response function (mTRF) method. Delta-beta, delta-gamma, theta-beta and theta-gamma phase-amplitude coupling (PAC) was also assessed. Significant delta and theta but not alpha tracking was found. Significant PAC was present at all ages, with both delta- and theta-driven coupling observed.
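A minimal sketch of one common way to quantify phase-amplitude coupling, the mean-vector-length measure (Canolty et al., 2006), is shown below on a synthetic delta-gamma coupled signal. The study's actual PAC computation may differ in filter settings and statistics; everything here is invented for illustration.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

sfreq = 200.0
t = np.arange(0, 20, 1 / sfreq)

# Synthetic delta-gamma coupled signal: the amplitude of a 40 Hz "gamma"
# oscillation rides on the phase of a 2 Hz "delta" rhythm.
delta = np.sin(2 * np.pi * 2.0 * t)
gamma = (1 + delta) * np.sin(2 * np.pi * 40.0 * t)
signal = delta + 0.5 * gamma

def bandpass(x, lo, hi, fs, order=4):
    sos = butter(order, [lo, hi], btype="bandpass", fs=fs, output="sos")
    return sosfiltfilt(sos, x)

# Phase of the low-frequency band, amplitude envelope of the high band.
phase = np.angle(hilbert(bandpass(signal, 1.0, 4.0, sfreq)))
amp = np.abs(hilbert(bandpass(signal, 30.0, 50.0, sfreq)))

# Mean vector length: near 0 when amplitude is uniform across phases,
# larger when amplitude concentrates at a preferred phase.
pac = np.abs(np.mean(amp * np.exp(1j * phase))) / np.mean(amp)
```

In practice, PAC values are compared against surrogate distributions (e.g., phase-shuffled data) to assess significance.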


Subject(s)
Delta Rhythm/physiology , Speech Perception/physiology , Theta Rhythm/physiology , Acoustic Stimulation , Auditory Cortex/physiology , Brain/physiology , Electroencephalography , Humans , Infant , Longitudinal Studies , United Kingdom
9.
Front Psychol ; 12: 661479, 2021.
Article in English | MEDLINE | ID: mdl-34489784

ABSTRACT

While many studies have shown that toddlers are able to detect syntactic regularities in speech, the learning mechanism allowing them to do this is still largely unclear. In this article, we use computational modeling to assess the plausibility of a context-based learning mechanism for the acquisition of nouns and verbs. We hypothesize that infants can assign basic semantic features, such as "is-an-object" and/or "is-an-action," to the very first words they learn, then use these words, the semantic seed, to ground proto-categories of nouns and verbs. The contexts in which these words occur would then be exploited to bootstrap the noun and verb categories: unknown words are attributed to the class that has been observed most frequently in the corresponding context. To test our hypothesis, we designed a series of computational experiments using French corpora of child-directed speech and semantic seeds of different sizes. We partitioned these corpora into training and test sets: the model extracted the two-word contexts of the seed from the training sets, then used them to predict the syntactic category of content words in the test sets. This very simple algorithm proved highly efficient in a categorization task: even the smallest semantic seed (only 8 nouns and 1 verb known) yielded very high precision (~90% for new nouns; ~80% for new verbs). Recall, in contrast, was low for small seeds and increased with seed size. Interestingly, we observed that the contexts used most often by the model featured function words, which is in line with what we know about infants' language development. Crucially, for the learning method we evaluated here, all initialization hypotheses are plausible and fit the developmental literature (a semantic seed and the ability to analyse contexts). While this experiment cannot prove that this learning mechanism is indeed used by infants, it demonstrates the feasibility of a realistic learning hypothesis by using an algorithm that relies on very little computational and memory resources. Altogether, this supports the idea that a probabilistic, context-based mechanism can be very efficient for the acquisition of syntactic categories in infants.
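The context-based mechanism described above can be sketched in a few lines. The toy English corpus and seed below are invented stand-ins for the French child-directed corpora used in the study, but the algorithm is the same in spirit: collect the two-word (previous, next) contexts of seed words, then label unknown words by the majority class of their context.

```python
from collections import Counter, defaultdict

# Toy "child-directed" corpus (hypothetical English stand-in for the
# French corpora used in the study).
corpus = [
    "the dog eats the bone",
    "the baby eats the apple",
    "you see the dog",
    "the baby sleeps",
    "you eat the apple",
]

# Tiny semantic seed: words already tagged with a basic feature.
seed = {"dog": "noun", "apple": "noun", "eats": "verb"}

# Step 1: record which class each (previous, next) context votes for,
# based on the seed words observed in that context.
context_votes = defaultdict(Counter)
for utt in corpus:
    words = ["<s>"] + utt.split() + ["</s>"]
    for i in range(1, len(words) - 1):
        if words[i] in seed:
            context_votes[(words[i - 1], words[i + 1])][seed[words[i]]] += 1

# Step 2: label an unknown word by the majority class of its context
# (None when the context was never seen with a seed word).
def classify(prev_word, next_word):
    votes = context_votes.get((prev_word, next_word))
    return votes.most_common(1)[0][0] if votes else None

# "bone" occurs in the context ("the", "</s>"), like the seed noun "apple".
label = classify("the", "</s>")
```

The high precision / low recall pattern reported in the abstract follows naturally: classification is only attempted for contexts already seen with a seed word, so small seeds cover few contexts but classify those few reliably.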

10.
Brain Lang ; 220: 104968, 2021 09.
Article in English | MEDLINE | ID: mdl-34111684

ABSTRACT

Currently there are no reliable means of identifying infants at risk for later language disorders. Infant neural responses to rhythmic stimuli may offer a solution, as neural tracking of rhythm is atypical in children with developmental language disorders. However, infant brain recordings are noisy. As a first step toward developing accurate neural biomarkers, we investigate whether infant brain responses to rhythmic stimuli can be classified reliably using EEG from 95 eight-week-old infants listening to natural stimuli (repeated syllables or drumbeats). Both Convolutional Neural Network (CNN) and Support Vector Machine (SVM) approaches were employed. Applied to one infant at a time, the CNN discriminated syllables from drumbeats with a mean AUC of 0.87, against two levels of noise. The SVM classified with AUCs of 0.95 and 0.86 at the two noise levels, respectively, showing reduced performance as noise increased. Our proof-of-concept modelling opens the way to the development of clinical biomarkers for language disorders related to rhythmic entrainment.
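A hedged sketch of the SVM arm of such a classification analysis, using scikit-learn on simulated per-trial features rather than real infant EEG: two classes of trials differ by a small, invented mean shift, a linear SVM is trained, and performance is scored with the area under the ROC curve (AUC), the metric reported in the abstract.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)

# Simulated per-trial features for two stimulus classes ("syllable" vs
# "drumbeat"): identical noise, slightly shifted class means.
n_trials, n_features = 400, 20
X = rng.standard_normal((n_trials, n_features))
y = np.repeat([0, 1], n_trials // 2)
X[y == 1] += 0.5  # hypothetical class-dependent effect

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, stratify=y, random_state=0
)

clf = SVC(kernel="linear")
clf.fit(X_train, y_train)

# Continuous decision scores support ROC analysis without probabilities.
auc = roc_auc_score(y_test, clf.decision_function(X_test))
```

Increasing the noise (or shrinking the class shift) lowers the AUC, mirroring the noise-level effect described for the SVM in the abstract.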


Subject(s)
Machine Learning , Speech , Child , Electroencephalography , Humans , Infant , Neural Networks, Computer , Support Vector Machine
11.
eNeuro ; 6(5)2019.
Article in English | MEDLINE | ID: mdl-31551251

ABSTRACT

As evidence that predictive processes play a role in a wide variety of cognitive domains accumulates, the brain as a predictive machine has become a central idea in neuroscience. In auditory processing, considerable progress has been made using variations of the oddball design, but most of the existing work seems restricted to predictions based on physical features or conditional rules linking successive stimuli. To characterize the brain's capacity to form predictions from abstract rules, we present here two experiments that use speech-like stimuli to overcome limitations and avoid common confounds. Pseudowords were presented in isolation, intermixed with infrequent deviants that contained unexpected phoneme sequences. As hypothesized, the occurrence of unexpected sequences of phonemes reliably elicited an early prediction error signal. These prediction error signals did not seem to be modulated by attentional manipulations induced by different task instructions, suggesting that the predictions are deployed even when the task at hand does not volitionally involve error detection. In contrast, the number of syllables congruent with a standard pseudoword presented before the point of deviance exerted a strong modulation. The prediction error amplitude doubled when two congruent syllables were presented instead of one, despite local transitional probabilities being kept constant. This suggests that auditory predictions can be built by integrating information beyond the immediate past. In sum, the results presented here further contribute to the understanding of the predictive capabilities of the human auditory system when facing complex stimuli and abstract rules.


Subject(s)
Acoustic Stimulation/methods , Electroencephalography/methods , Phonetics , Speech Perception/physiology , Speech/physiology , Adult , Female , Humans , Male , Photic Stimulation/methods , Young Adult
12.
Dev Sci ; 22(4): e12802, 2019 07.
Article in English | MEDLINE | ID: mdl-30681763

ABSTRACT

Before infants can learn words, they must identify those words in continuous speech. Yet, the speech signal lacks obvious boundary markers, which poses a potential problem for language acquisition (Swingley, Philos Trans R Soc Lond. Series B, Biol Sci 364(1536), 3617-3632, 2009). By the middle of the first year, infants seem to have solved this problem (Bergelson & Swingley, Proc Natl Acad Sci 109(9), 3253-3258, 2012; Jusczyk & Aslin, Cogn Psychol 29, 1-23, 1995), but it is unknown if segmentation abilities are present from birth, or if they only emerge after sufficient language exposure and/or brain maturation. Here, in two independent experiments, we looked at two cues known to be crucial for the segmentation of human speech: the computation of statistical co-occurrences between syllables and the use of the language's prosody. After a brief familiarization of about 3 min with continuous speech, using functional near-infrared spectroscopy, neonates showed differential brain responses on a recognition test to words that violated either the statistical (Experiment 1) or prosodic (Experiment 2) boundaries of the familiarization, compared to words that conformed to those boundaries. Importantly, word recognition in Experiment 2 occurred even in the absence of prosodic information at test, meaning that newborns encoded the phonological content independently of its prosody. These data indicate that humans are born with operational language processing and memory capacities and can use at least two types of cues to segment otherwise continuous speech, a key first step in language acquisition.
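The statistical cue examined in Experiment 1 (co-occurrence between syllables) is classically formalized as forward transitional probability, P(next syllable | current syllable), which is high within words and drops at word boundaries. A toy sketch with invented trisyllabic "words":

```python
import random
from collections import Counter

# Three invented trisyllabic "words"; each syllable is two characters.
words = ["bidaku", "padoti", "golabu"]
sylls_of = {w: [w[i:i + 2] for i in range(0, 6, 2)] for w in words}

# Build a continuous stream of 300 randomly ordered word tokens.
random.seed(1)
stream = []
for _ in range(300):
    stream.extend(sylls_of[random.choice(words)])

# Forward transitional probability P(b | a), estimated from the stream.
pair_counts = Counter(zip(stream, stream[1:]))
syll_counts = Counter(stream[:-1])

def tp(a, b):
    return pair_counts[(a, b)] / syll_counts[a]

within = tp("bi", "da")   # inside the word "bidaku": always 1.0
between = tp("ku", "pa")  # across a word boundary: about 1/3
```

A learner sensitive to this statistic can posit word boundaries wherever the transitional probability dips, which is the computation the familiarization stream in Experiment 1 makes available.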


Subject(s)
Cues , Language Development , Speech Perception/physiology , Speech/physiology , Brain/physiology , Child Language , Female , Humans , Infant , Infant, Newborn , Learning , Linguistics , Male , Memory , Spectroscopy, Near-Infrared
13.
Dev Cogn Neurosci ; 26: 45-51, 2017 08.
Article in English | MEDLINE | ID: mdl-28499139

ABSTRACT

By the end of their first year of life, infants have become experts at discriminating the sounds of their native language, while losing the ability to discriminate non-native contrasts. This type of phonetic learning is referred to as perceptual attunement. In the present study, we investigated the emergence of a context-dependent form of perceptual attunement in infancy. Indeed, some native contrasts are not discriminated by adults in certain phonological contexts, due to the presence of a language-specific process that neutralizes the contrasts in those contexts. We used a mismatch design and recorded high-density electroencephalography (EEG) in French-learning 14-month-olds. Our results show that, similarly to French adults, infants fail to discriminate a native voicing contrast (e.g., [f] vs. [v]) when it occurs in a specific phonological context (e.g., [ofbe] vs. [ovbe]; no mismatch response), while they successfully detect it in other phonological contexts (e.g., [ofne] vs. [ovne]; mismatch response). The present results demonstrate for the first time that by the age of 14 months, infants' phonetic learning does not rely only on the processing of individual sounds, but also takes into account, in a language-specific manner, the phonological contexts in which these sounds occur.


Subject(s)
Electroencephalography/methods , Speech Perception/physiology , Female , Humans , Infant , Language Development , Male
14.
Neuropsychologia ; 98: 4-12, 2017 04.
Article in English | MEDLINE | ID: mdl-27544044

ABSTRACT

To comprehend language, listeners need to encode the relationship between words within sentences. This entails categorizing words into their appropriate word classes. Function words, which consistently precede words from specific categories (e.g., the ball [noun]; I speak [verb]), provide invaluable information for this task, and children's sensitivity to such adjacent relationships develops early in life. However, neighboring words are not the sole source of information regarding an item's word class. Here we examine whether young children also take into account preceding sentence context online during syntactic categorization. To address this question, we use the ambiguous French function word la which, depending on sentence context, can be used either as a determiner (the, preceding nouns) or as an object clitic (it, preceding verbs). French-learning 18-month-olds' event-related potentials (ERPs) were recorded while they listened to sentences featuring this ambiguous function word followed by either a noun or a verb (thus yielding a locally felicitous co-occurrence of la + noun or la + verb). Crucially, preceding sentence context rendered the sentence either grammatical or ungrammatical. Ungrammatical sentences elicited a late positivity (resembling a P600) that was not observed for grammatical sentences. Toddlers' analysis of the unfolding sentence was thus not limited to local co-occurrences, but took into account non-adjacent sentence context. These findings suggest that by 18 months of age, online word categorization is already surprisingly robust. This could be greatly beneficial for the acquisition of novel words.


Subject(s)
Comprehension/physiology , Evoked Potentials/physiology , Language Development , Semantics , Vocabulary , Acoustic Stimulation , Analysis of Variance , Brain Mapping , Electroencephalography , Female , Humans , Infant , Male
15.
Dev Cogn Neurosci ; 19: 164-73, 2016 06.
Article in English | MEDLINE | ID: mdl-27038839

ABSTRACT

Syntax allows human beings to build an infinite number of sentences from a finite number of words. How this unique, productive power of human language unfolds over the course of language development is still hotly debated. When they listen to sentences comprising newly-learned words, do children generalize from their knowledge of the legal combinations of word categories, or do they instead rely on strings of words stored in memory to detect syntactic errors? Using novel words taught in the lab, we recorded event-related potentials (ERPs) in two-year-olds and adults listening to grammatical and ungrammatical sentences containing syntactic contexts that had not been used during training. In toddlers, the ungrammatical use of words, even when they had just been learned, induced an early left anterior negativity (surfacing 100-400 ms after target word onset) followed by a late posterior positivity (surfacing 700-900 ms after target word onset) that was not observed in grammatical sentences. This late effect was remarkably similar to the P600 displayed by adults, suggesting that toddlers and adults perform similar syntactic computations. Our results thus show that toddlers build online expectations regarding the syntactic category of upcoming words in a sentence.


Subject(s)
Evoked Potentials, Auditory/physiology , Language Development , Semantics , Verbal Learning/physiology , Vocabulary , Adolescent , Adult , Auditory Perception/physiology , Brain Mapping/methods , Child, Preschool , Electroencephalography/methods , Female , Humans , Male , Memory/physiology , Photic Stimulation/methods , Young Adult
16.
Dev Sci ; 19(3): 488-503, 2016 May.
Article in English | MEDLINE | ID: mdl-26190466

ABSTRACT

To understand language, humans must encode information from rapid, sequential streams of syllables - tracking their order and organizing them into words, phrases, and sentences. We used Near-Infrared Spectroscopy (NIRS) to determine whether human neonates are born with the capacity to track the positions of syllables in multisyllabic sequences. After familiarization with a six-syllable sequence, the neonate brain responded to the change (as shown by an increase in oxy-hemoglobin) when the two edge syllables switched positions but not when two middle syllables switched positions (Experiment 1), indicating that they encoded the syllables at the edges of sequences better than those in the middle. Moreover, when a 25 ms pause was inserted between the middle syllables as a segmentation cue, neonates' brains were sensitive to the change (Experiment 2), indicating that subtle cues in speech can signal a boundary, with enhanced encoding of the syllables located at the edges of that boundary. These findings suggest that neonates' brains can encode information from multisyllabic sequences and that this encoding is constrained. Moreover, subtle segmentation cues in a sequence of syllables provide a mechanism with which to accurately encode positional information from longer sequences. Tracking the order of syllables is necessary to understand language and our results suggest that the foundations for this encoding are present at birth.


Subject(s)
Child Language , Language , Phonetics , Speech Perception/physiology , Brain/blood supply , Brain/physiology , Cues , Female , Humans , Infant, Newborn , Male , Oxyhemoglobins/analysis , Spectroscopy, Near-Infrared
17.
Front Psychol ; 6: 1841, 2015.
Article in English | MEDLINE | ID: mdl-26696917

ABSTRACT

Many experiments have shown that listeners actively build expectations about upcoming words, rather than simply waiting for information to accumulate. The online construction of a syntactic structure is one of the cues that listeners may use to form strong expectations about the possible words they will be exposed to. For example, speakers of verb-final languages use pre-verbal arguments to predict online the kind of arguments that are likely to occur next (e.g., Kamide, 2008, for a review). Although in SVO languages information about a verb's arguments typically follows the verb, some languages use pre-verbal object pronouns, potentially allowing listeners to build online expectations about the nature of the upcoming verb. For instance, if a pre-verbal direct object pronoun is heard, then the following verb has to be able to enter a transitive structure, thus excluding intransitive verbs. To test this, we used French, in which object pronouns have to appear pre-verbally, to investigate whether listeners use this cue to predict the occurrence of a transitive verb. In a word detection task, we measured the number of false alarms to sentences that contained a transitive verb whose first syllable was homophonous with the target monosyllabic verb (e.g., target "dort" /dɔʁ/ 'to sleep' and false-alarm verb "dorlote" /dɔʁlɔt/ 'to cuddle'). The crucial comparison involved two sentence types: one without a pre-verbal object clitic, for which an intransitive verb was temporarily a plausible option (e.g., "Il dorlote" / He cuddles), and one with a pre-verbal object clitic, which made the appearance of an intransitive verb impossible ("Il le dorlote" / He cuddles it). Results showed a lower rate of false alarms for sentences with a pre-verbal object pronoun (3%) compared to locally ambiguous sentences (about 20%). These results indicate that participants rapidly incorporate information about a verb's argument structure to constrain lexical access to verbs that match the expected subcategorization frame.

18.
Child Dev ; 85(3): 1168-1180, 2014.
Article in English | MEDLINE | ID: mdl-24117408

ABSTRACT

Previous work has shown that toddlers readily encode each noun in the sentence as a distinct argument of the verb. However, languages allow multiple mappings between form and meaning that do not fit this canonical format. Two experiments examined French 28-month-olds' interpretation of right-dislocated sentences (nounᵢ-verb, nounᵢ) where the presence of clear, language-specific cues should block such a canonical mapping. Toddlers (N = 96) interpreted novel verbs embedded in these sentences as transitive, disregarding prosodic cues to dislocation (Experiment 1), but correctly interpreted right-dislocated sentences containing well-known verbs (Experiment 2). These results suggest that toddlers can integrate multiple cues in ideal conditions, but default to canonical surface-to-meaning mapping when extracting structural information about novel verbs in semantically impoverished conditions.


Subject(s)
Language Development , Learning/physiology , Psycholinguistics , Child, Preschool , Cues , Female , Humans , Male