1.
eNeuro ; 7(6)2020.
Article in English | MEDLINE | ID: mdl-33199412

ABSTRACT

Children's sensitivity to regularities within the linguistic stream, such as the likelihood that syllables co-occur, is foundational to speech segmentation and language acquisition. Yet, little is known about the neurocognitive mechanisms underlying speech segmentation in typical development and in neurodevelopmental disorders that impact language acquisition such as autism spectrum disorder (ASD). Here, we investigate the neural signals of statistical learning in 15 human participants (children ages 8-12) with a clinical diagnosis of ASD and 14 age-matched and gender-matched typically developing peers. We tracked the evoked neural responses to syllable sequences in a naturalistic statistical learning corpus using magnetoencephalography (MEG) in the left primary auditory cortex, posterior superior temporal gyrus (pSTG), and inferior frontal gyrus (IFG), across three repetitions of the passage. In typically developing children, we observed a neural index of learning in all three regions of interest (ROIs), measured by the change in evoked response amplitude as a function of syllable surprisal across passage repetitions. As surprisal increased, the amplitude of the neural response increased; this sensitivity emerged after repeated exposure to the corpus. Children with ASD did not show this pattern of learning in all three regions. We discuss two possible hypotheses related to children's sensitivity to bottom-up sensory deficits and difficulty with top-down incremental processing.


Subject(s)
Auditory Cortex , Autism Spectrum Disorder , Child , Humans , Learning , Magnetoencephalography
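The "syllable surprisal" measure tracked in this study is a standard quantity in the statistical-learning literature: the negative log probability of a syllable given its predecessor, estimated from transitional probabilities in the stream. A minimal sketch of that computation (not the authors' MEG pipeline; the toy syllable stream and function name are illustrative) shows why word-internal transitions yield low surprisal while word boundaries yield high surprisal:

```python
from collections import Counter
from math import log2

def syllable_surprisal(stream):
    """Surprisal (-log2 p) of each syllable given its predecessor,
    estimated from bigram transitional probabilities in the stream."""
    bigrams = Counter(zip(stream, stream[1:]))
    unigrams = Counter(stream[:-1])  # count each syllable as a predecessor
    out = []
    for prev, cur in zip(stream, stream[1:]):
        p = bigrams[(prev, cur)] / unigrams[prev]  # transitional probability
        out.append(-log2(p))
    return out

# Toy corpus built from the "words" tupiro, golabu, bidaku:
# within-word transitions are deterministic, boundaries are not.
stream = ["tu", "pi", "ro", "go", "la", "bu",
          "tu", "pi", "ro", "bi", "da", "ku",
          "go", "la", "bu", "tu", "pi", "ro"]
s = syllable_surprisal(stream)
```

Here `s[0]` and `s[1]` (the tu→pi and pi→ro transitions inside "tupiro") come out to 0 bits, while `s[2]` (the ro→go boundary, where ro is followed by go or bi equally often) comes out to 1 bit.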
2.
Ann N Y Acad Sci ; 1337: 7-15, 2015 Mar.
Article in English | MEDLINE | ID: mdl-25773611

ABSTRACT

New approaches to understanding language and reading acquisition propose that the human brain's ability to synchronize its neural firing rate to syllable-length linguistic units may be important to children's ability to acquire human language. Yet, little evidence from brain imaging studies has been available to support this proposal. Here, we summarize three recent brain imaging (functional near-infrared spectroscopy (fNIRS), functional magnetic resonance imaging (fMRI), and magnetoencephalography (MEG)) studies from our laboratories with young English-speaking children (aged 6-12 years). In the first study (fNIRS), we used an auditory beat perception task to show that, in children, the left superior temporal gyrus (STG) responds preferentially to rhythmic beats at 1.5 Hz. In the second study (fMRI), we found correlations between children's amplitude rise-time sensitivity, phonological awareness, and brain activation in the left STG. In the third study (MEG), typically developing children outperformed children with autism spectrum disorder in extracting words from rhythmically rich foreign speech and displayed different brain activation during the learning phase. The overall findings suggest that the efficiency with which left temporal regions process slow temporal (rhythmic) information may be important for gains in language and reading proficiency. These findings carry implications for better understanding of the brain's mechanisms that support language and reading acquisition during both typical and atypical development.


Subject(s)
Child Development Disorders, Pervasive/physiopathology , Language Development , Multimodal Imaging/methods , Acoustic Stimulation , Auditory Perception/physiology , Brain/pathology , Brain Mapping , Child , Humans , Language , Learning , Magnetic Resonance Imaging , Magnetoencephalography , Music , Reading , Sound , Spectroscopy, Near-Infrared , Speech Perception/physiology , Time Factors
3.
Percept Psychophys ; 69(1): 113-22, 2007 Jan.
Article in English | MEDLINE | ID: mdl-17515221

ABSTRACT

This study was designed to test the iambic/trochaic law, which claims that elements contrasting in duration naturally form rhythmic groupings with final prominence, whereas elements contrasting in intensity form groupings with initial prominence. It was also designed to evaluate whether the iambic/trochaic law describes general auditory biases, or whether rhythmic grouping is speech or language specific. In two experiments, listeners were presented with sequences of alternating /ga/ syllables or square wave segments that varied in either duration or intensity and were asked to indicate whether they heard a trochaic (i.e., strong-weak) or an iambic (i.e., weak-strong) rhythmic pattern. Experiment 1 provided a validation of the iambic/trochaic law in English-speaking listeners; for both speech and nonspeech stimuli, variations in duration resulted in iambic grouping, whereas variations in intensity resulted in trochaic grouping. In Experiment 2, no significant differences were found between the rhythmic-grouping performances of English- and French-speaking listeners. The speech/nonspeech and cross-language parallels suggest that the perception of linguistic rhythm relies largely on general auditory mechanisms. The applicability of the iambic/trochaic law to speech segmentation is discussed.


Subject(s)
Attention , Speech Acoustics , Speech Perception , Time Perception , Auditory Perception , Communication Aids for Disabled , Humans , Language , Loudness Perception , Phonetics