Results 1 - 13 of 13
1.
Front Hum Neurosci ; 18: 1380075, 2024.
Article in English | MEDLINE | ID: mdl-38756844

ABSTRACT

Introduction: Previous studies underscore the importance of speech input, particularly infant-directed speech (IDS) during one-on-one (1:1) parent-infant interaction, for child language development. We hypothesize that infants' attention to speech input, specifically IDS, supports language acquisition. In infants, attention and orienting responses are associated with heart rate deceleration. We examined whether individual differences in infants' heart rate measured during 1:1 mother-infant interaction are related to speech input and later language development scores in a longitudinal study. Methods: Using a sample of 31 3-month-olds, we assessed infant heart rate during mother-infant face-to-face interaction in a laboratory setting. Multiple measures of speech input were gathered at 3 months of age during naturally occurring interactions at home using the Language ENvironment Analysis (LENA) system. Language outcome measures were assessed in the same children at 30 months of age using the MacArthur-Bates Communicative Development Inventory (CDI). Results: Two novel findings emerged. First, we found that higher maternal IDS in a 1:1 context at home, as well as more mother-infant conversational turns at home, are associated with a lower heart rate measured during mother-infant social interaction in the laboratory. Second, we found significant associations between infant heart rate during mother-infant interaction in the laboratory at 3 months and prospective language development (CDI scores) at 30 months of age. Discussion: Considering the current results in conjunction with other converging theoretical and neuroscientific data, we argue that high IDS input in the context of 1:1 social interaction increases infants' attention to speech and that infants' attention to speech in early development fosters their prospective language growth.

2.
Curr Biol ; 34(8): 1731-1738.e3, 2024 04 22.
Article in English | MEDLINE | ID: mdl-38593800

ABSTRACT

In face-to-face interactions with infants, human adults exhibit a species-specific communicative signal. Adults present a distinctive "social ensemble": they use infant-directed speech (parentese), respond contingently to infants' actions and vocalizations, and react positively through mutual eye-gaze and smiling. Studies suggest that this social ensemble is essential for initial language learning. Our hypothesis is that the social ensemble attracts attentional systems to speech and that sensorimotor systems prepare infants to respond vocally, both of which advance language learning. Using infant magnetoencephalography (MEG), we measured 5-month-old infants' neural responses during live verbal face-to-face (F2F) interaction with an adult (social condition) and during a control (nonsocial condition) in which the adult turns away from the infant to speak to another person. Using a longitudinal design, we tested whether infants' brain responses to these conditions at 5 months of age predicted their language growth at five future time points. Brain areas involved in attention (right hemisphere inferior frontal, right hemisphere superior temporal, and right hemisphere inferior parietal) show significantly higher theta activity in the social versus nonsocial condition. Critical to theory, we found that infants' neural activity in response to F2F interaction in attentional and sensorimotor regions significantly predicted future language development into the third year of life, more than 2 years after the initial measurements. We develop a view of early language acquisition that underscores the centrality of the social ensemble, and we offer new insight into the neurobiological components that link infants' language learning to their early brain functioning during social interaction.


Subject(s)
Brain , Language Development , Magnetoencephalography , Social Interaction , Humans , Infant , Male , Female , Brain/physiology , Attention/physiology , Speech/physiology
3.
Infant Behav Dev ; 75: 101929, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38581728

ABSTRACT

Previous studies underscore the importance of social interactions for child language development, particularly interactions characterized by maternal sensitivity, infant-directed speech (IDS), and conversational turn-taking (CT) in one-on-one contexts. Although infants engage in such interactions from the third month after birth, the prospective link between speech input and maternal sensitivity in the first half year of life and later language development has been understudied. We hypothesized that social interactions embodying maternal sensitivity, IDS, and CTs in the first 3 months of life are significantly associated with later language development, and we tested this using a longitudinal design. Using a sample of 40 3-month-old infants, we assessed maternal sensitivity during a structured mother-infant one-on-one (1:1) interaction based on a well-validated scoring system (the Coding Interactive Behavior system). Language input (IDS, CT) was assessed during naturally occurring interactions at home using the Language ENvironment Analysis (LENA) system. Language outcome measures were obtained from 18 to 30 months of age using the MacArthur-Bates Communicative Development Inventory. Three novel findings emerged. First, maternal sensitivity at 3 months was significantly associated with infants' productive language scores at 18, 21, 24, 27, and 30 months of age. Second, LENA-recorded IDS during mother-infant 1:1 interaction in the home environment at 3 months of age was positively correlated with productive language scores at 24, 27, and 30 months of age. Third, mother-infant CTs during 1:1 interaction were significantly associated with infants' productive language scores at 27 and 30 months of age. We propose that infants' social attention to speech during this early period, enhanced by sensitive maternal one-on-one interactions and IDS, is a potent factor in advancing language development.


Subject(s)
Language Development , Mother-Child Relations , Humans , Female , Infant , Mother-Child Relations/psychology , Male , Longitudinal Studies , Child, Preschool , Speech/physiology , Adult , Mothers/psychology , Social Interaction , Maternal Behavior/psychology , Maternal Behavior/physiology
4.
Front Neurol ; 13: 827529, 2022.
Article in English | MEDLINE | ID: mdl-35401424

ABSTRACT

We discuss specific challenges and solutions in infant MEG, which is one of the most technically challenging areas of MEG studies. Our results can be generalized to a variety of challenging scenarios for MEG data acquisition, including clinical settings. We cover a wide range of steps in pre-processing, including movement compensation, suppression of magnetic interference from sources inside and outside the magnetically shielded room, and suppression of specific physiological artifact components such as cardiac artifacts. In assessing the outcome of the pre-processing algorithms, we focus on comparing signal representation before and after pre-processing and discuss the importance of the different components of the main processing steps. We discuss the importance of taking the noise covariance structure into account in inverse modeling and present the proper treatment of the noise covariance matrix to accurately reflect the processing that was applied to the data. Using example cases, we investigate the level of source localization error before and after processing. One of our main findings is that statistical metrics of source reconstruction may erroneously indicate that the results are reliable even in cases where the data are severely distorted by head movements. As a consequence, we stress the importance of proper signal processing in infant MEG.

5.
Hum Brain Mapp ; 43(12): 3609-3619, 2022 08 15.
Article in English | MEDLINE | ID: mdl-35429095

ABSTRACT

The excellent temporal resolution and advanced spatial resolution of magnetoencephalography (MEG) make it a powerful tool for studying the neural dynamics underlying cognitive processes in the developing brain. Nonetheless, a number of challenges exist when using MEG to image infant populations. There is a persistent belief that collecting MEG data with infants presents limitations and challenges that are difficult to overcome. Due to this notion, many researchers either avoid conducting infant MEG research or believe that, in order to collect high-quality data, they must impose limiting restrictions on the infant or the experimental paradigm. In this article, we discuss the various challenges unique to imaging awake infants and young children with MEG, and share general best-practice guidelines and recommendations for data collection, acquisition, preprocessing, and analysis. The current article is focused on methodology that allows investigators to test the sensory, perceptual, and cognitive capacities of awake and moving infants. We believe that such methodology opens the pathway for using MEG to provide mechanistic explanations for the complex behavior observed in awake, sentient, and dynamically interacting infants, thus addressing core topics in developmental cognitive neuroscience.


Subject(s)
Brain , Magnetoencephalography , Brain/diagnostic imaging , Brain Mapping/methods , Child , Child, Preschool , Head , Humans , Infant , Magnetoencephalography/methods
6.
Dev Cogn Neurosci ; 47: 100901, 2021 02.
Article in English | MEDLINE | ID: mdl-33360832

ABSTRACT

Word learning is a significant milestone in language acquisition. The second year of life marks a period of dramatic advances in infants' expressive and receptive word-processing abilities. Studies show that in adulthood, language processing is left-hemisphere dominant. However, adults learning a second language activate right-hemisphere brain functions. In infancy, acquisition of a first language involves recruitment of bilateral brain networks, and strong left-hemisphere dominance emerges by the third year. In the current study we focus on 14-month-old infants in the earliest stages of word learning, using infant magnetoencephalography (MEG) brain imaging to characterize neural activity in response to familiar and unfamiliar words. Specifically, we examine the relationship between right-hemisphere brain responses and prospective measures of vocabulary growth. As expected, MEG source modeling revealed a broadly distributed network in frontal, temporal, and parietal cortex that distinguished word classes between 150-900 ms after word onset. Importantly, brain activity in the right frontal cortex in response to familiar words was highly correlated with vocabulary growth at 18, 21, 24, and 27 months. Specifically, higher activation to familiar words in the 150-300 ms interval was associated with faster vocabulary growth, reflecting processing efficiency, whereas higher activation to familiar words in the 600-900 ms interval was associated with slower vocabulary growth, reflecting cognitive effort. These findings inform research and theory on the involvement of right frontal cortex in specific cognitive processes and individual differences related to attention that may play an important role in the development of left-lateralized word processing.


Subject(s)
Language , Magnetoencephalography , Brain Mapping , Child, Preschool , Humans , Infant , Prospective Studies , Vocabulary
7.
Dev Sci ; 20(6)2017 Nov.
Article in English | MEDLINE | ID: mdl-27747989

ABSTRACT

The ability to predict future events in the environment and learn from them is a fundamental component of adaptive behavior across species. Here we propose that inferring predictions facilitates speech processing and word learning in the early stages of language development. Twelve- and 24-month-olds' electrophysiological brain responses to heard syllables are faster and more robust when the preceding word context predicts the ending of a familiar word. For unfamiliar, novel word forms, however, word-expectancy violation generates a prediction error response, the strength of which significantly correlates with children's vocabulary scores at 12 months. These results suggest that predictive coding may accelerate word recognition and support early learning of novel words, including not only the learning of heard word forms but also their mapping to meanings. Prediction error may mediate learning via attention, since infants' attention allocation to the entire learning situation in natural environments could account for the link between prediction error and the understanding of word meanings. On the whole, the present results on predictive coding support the view that principles of brain function reported across domains in humans and non-human animals apply to language and its development in the infant brain. A video abstract of this article can be viewed at: http://hy.fi/unitube/video/e1cbb495-41d8-462e-8660-0864a1abd02c


Subject(s)
Attention/physiology , Language Development , Recognition, Psychology/physiology , Verbal Learning/physiology , Vocabulary , Acoustic Stimulation , Analysis of Variance , Brain/physiology , Child, Preschool , Electroencephalography , Evoked Potentials/physiology , Female , Humans , Infant , Male , Photic Stimulation
8.
PLoS One ; 11(9): e0162177, 2016.
Article in English | MEDLINE | ID: mdl-27617967

ABSTRACT

Statistical learning and the social contexts of language addressed to infants are hypothesized to play important roles in early language development. Previous behavioral work has found that the exaggerated prosodic contours of infant-directed speech (IDS) facilitate statistical learning in 8-month-old infants. Here we examined the neural processes involved in on-line statistical learning and investigated whether the use of IDS facilitates statistical learning in sleeping newborns. Event-related potentials (ERPs) were recorded while newborns were exposed to 12 pseudo-words, six spoken with the exaggerated pitch contours of IDS and six spoken without exaggerated pitch contours (adult-directed speech, ADS), in ten alternating blocks. We examined whether ERP amplitudes for syllable position within a pseudo-word (word-initial vs. word-medial vs. word-final, indicating statistical word learning) and speech register (ADS vs. IDS) would interact. The ADS and IDS registers elicited similar ERP patterns for syllable position in an early 0-100 ms component but elicited different ERP effects in both polarity and topographical distribution at 200-400 ms and 450-650 ms. These results provide the first evidence that the exaggerated pitch contours of IDS result in differences in brain activity linked to on-line statistical learning in sleeping newborns.


Subject(s)
Evoked Potentials , Learning , Speech , Electroencephalography , Female , Finland , Humans , Infant, Newborn , Male
9.
Proc Natl Acad Sci U S A ; 111(31): 11238-45, 2014 Aug 05.
Article in English | MEDLINE | ID: mdl-25024207

ABSTRACT

Historic theories of speech perception (Motor Theory and Analysis by Synthesis) invoked listeners' knowledge of speech production to explain speech perception. Neuroimaging data show that adult listeners activate motor brain areas during speech perception. In two experiments using magnetoencephalography (MEG), we investigated motor brain activation, as well as auditory brain activation, during discrimination of native and nonnative syllables in infants at two ages that straddle the developmental transition from language-universal to language-specific speech perception. Adults were also tested in Experiment 1. MEG data revealed that 7-mo-old infants activate auditory (superior temporal) as well as motor brain areas (Broca's area, cerebellum) in response to speech, and equivalently for native and nonnative syllables. However, in 11- and 12-mo-old infants, native speech activates auditory brain areas to a greater degree than nonnative speech, whereas nonnative speech activates motor brain areas to a greater degree than native speech. This double dissociation in 11- to 12-mo-old infants matches the pattern of results obtained in adult listeners. Our infant data are consistent with Analysis by Synthesis: auditory analysis of speech is coupled with synthesis of the motor plans necessary to produce the speech signal. The findings have implications for: (i) perception-action theories of speech perception, (ii) the impact of "motherese" on early language learning, and (iii) the "social-gating" hypothesis and humans' development of social understanding.


Subject(s)
Brain/physiology , Speech Perception/physiology , Speech/physiology , Adult , Algorithms , Auditory Perception/physiology , Frontal Lobe/physiology , Humans , Infant , Language , Magnetoencephalography , Motor Cortex/physiology , Temporal Lobe/physiology
10.
Front Psychol ; 4: 690, 2013.
Article in English | MEDLINE | ID: mdl-24130536

ABSTRACT

The development of speech perception shows a dramatic transition between infancy and adulthood. Between 6 and 12 months, infants' initial ability to discriminate all phonetic units across the world's languages narrows: native discrimination increases while non-native discrimination shows a steep decline. We used magnetoencephalography (MEG) to examine whether brain oscillations in the theta band (4-8 Hz), reflecting increases in attention and cognitive effort, would provide a neural measure of the perceptual narrowing phenomenon in speech. Using an oddball paradigm, we varied speech stimuli in two dimensions, stimulus frequency (frequent vs. infrequent) and language (native vs. non-native speech syllables), and tested 6-month-old infants, 12-month-old infants, and adults. We hypothesized that 6-month-old infants would show increased relative theta power (RTP) for frequent syllables, regardless of their status as native or non-native syllables, reflecting young infants' attention and cognitive effort in response to highly frequent stimuli ("statistical learning"). In adults, we hypothesized increased RTP for non-native stimuli, regardless of their presentation frequency, reflecting increased cognitive effort for non-native phonetic categories. The 12-month-old infants were expected to show a pattern in transition, but one more similar to adults than to 6-month-old infants. The MEG brain rhythm results supported these hypotheses. We suggest that perceptual narrowing in speech perception is governed by an implicit learning process. This learning process involves an implicit shift in attention from frequent events (infants) to learned categories (adults). Theta brain oscillatory activity may provide an index of perceptual narrowing beyond speech, and would offer a test of whether the early speech learning process is governed by domain-general or domain-specific processes.

11.
Autism ; 10(5): 495-510, 2006 Sep.
Article in English | MEDLINE | ID: mdl-16940315

ABSTRACT

A computer-animated tutor, Baldi, has been successful in teaching vocabulary and grammar to children with autism and those with hearing problems. The present study assessed to what extent the face facilitated this learning process relative to the voice alone. Baldi was implemented in a Language Wizard/Tutor, which allows easy creation and presentation of a vocabulary lesson involving the association of pictures and spoken words. The lesson plan included both the receptive identification of pictures and the production of spoken words. A within-subject design with five children with autism followed an alternating treatment in which each child continuously learned to criterion sets of words with and without the face. The rate of learning was significantly faster and the retention was better with the face. The research indicates that at least some children with autism benefit from the face in learning new language within an automated program.


Subject(s)
Autistic Disorder , Face , Lipreading , Teaching/methods , User-Computer Interface , Verbal Learning , Vocabulary , Adolescent , Child , Facial Expression , Female , Humans , Male
12.
Res Dev Disabil ; 25(6): 559-75, 2004.
Article in English | MEDLINE | ID: mdl-15541632

ABSTRACT

Children with autistic spectrum disorder (ASD) may have poor audio-visual integration, possibly reflecting dysfunctional 'mirror neuron' systems which have been hypothesised to be at the core of the condition. In the present study, a computer program, utilizing speech synthesizer software and a 'virtual' head (Baldi), delivered speech stimuli for identification in auditory, visual or bimodal conditions. Children with ASD were poorer than controls at recognizing stimuli in the unimodal conditions, but once performance on this measure was controlled for, no group difference was found in the bimodal condition. A group of participants with ASD were also trained to develop their speech-reading ability. Training improved visual accuracy and this also improved the children's ability to utilize visual information in their processing of speech. Overall results were compared to predictions from mathematical models based on integration and non-integration, and were most consistent with the integration model. We conclude that, whilst they are less accurate in recognizing stimuli in the unimodal condition, children with ASD show normal integration of visual and auditory speech stimuli. Given that training in recognition of visual speech was effective, children with ASD may benefit from multi-modal approaches in imitative therapy and language training.


Subject(s)
Autistic Disorder/therapy , Imitative Behavior , Language Development Disorders/therapy , Lipreading , Speech Perception , Therapy, Computer-Assisted , User-Computer Interface , Adolescent , Audiovisual Aids , Autistic Disorder/psychology , California , Child , Child, Preschool , Communication Aids for Disabled , Female , Fuzzy Logic , Humans , Language Development Disorders/psychology , Male , Models, Theoretical , Reference Values , Scotland , Software , Statistics as Topic , Treatment Outcome
13.
J Autism Dev Disord ; 33(6): 653-72, 2003 Dec.
Article in English | MEDLINE | ID: mdl-14714934

ABSTRACT

Using our theoretical framework of multimodal processing, we developed and evaluated a computer-animated tutor, Baldi, to teach vocabulary and grammar to children with autism. Baldi was implemented in a Language Wizard/Player, which allows easy creation and presentation of a language lesson involving the association of pictures and spoken words. The lesson plan includes both the identification of pictures and the production of spoken words. In Experiment 1, eight children were given initial assessment tests, tutorials, and reassessment tests 30 days following mastery of the vocabulary items. All of the students learned a significant amount of new vocabulary and grammar. A second within-subject experiment with six children followed a multiple-baseline design and documented that the program was responsible for the learning and generalization of new words. The research indicates that children with autism are capable of learning new language within an automated program centered around a computer-animated agent, multimedia, and active participation, and can transfer and use the language in a natural, untrained environment.


Subject(s)
Autistic Disorder , Computer-Assisted Instruction/standards , Language , Teaching/methods , Teaching/standards , Verbal Learning , Vocabulary , Child , Computer Simulation , Computer-Assisted Instruction/instrumentation , Computer-Assisted Instruction/methods , Female , Humans , Male