Results 1 - 4 of 4
1.
Dev Cogn Neurosci ; 59: 101181, 2023 02.
Article in English | MEDLINE | ID: mdl-36549148

ABSTRACT

Humans' extraordinary ability to understand speech in noise relies on multiple processes that develop with age. Using magnetoencephalography (MEG), we characterize the underlying neuromaturational basis by quantifying how cortical oscillations in 144 participants (aged 5-27 years) track phrasal and syllabic structures in connected speech mixed with different types of noise. While the extraction of prosodic cues from clear speech was stable during development, its maintenance in a multi-talker background matured rapidly up to age 9 and was associated with speech comprehension. Furthermore, while the extraction of subtler information provided by syllables matured at age 9, its maintenance in noisy backgrounds progressively matured until adulthood. Altogether, these results highlight distinct behaviorally relevant maturational trajectories for the neuronal signatures of speech perception. In accordance with grain-size proposals, neuromaturational milestones are reached increasingly late for linguistic units of decreasing size, with further delays incurred by noise.
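The tracking analysis described above rests on comparing cortical signals with the slow temporal envelope of connected speech at phrase and syllable rates. A minimal sketch of the envelope-extraction step is given below; the sampling rate, band limits for the phrasal (delta) and syllabic (theta) rates, and the stand-in signal are all hypothetical assumptions, not the authors' actual pipeline:

```python
import numpy as np
from scipy.signal import hilbert, butter, sosfiltfilt

def speech_envelope(audio):
    """Broadband temporal envelope of a speech signal (Hilbert magnitude)."""
    return np.abs(hilbert(audio))

def bandpass(x, fs, lo, hi, order=3):
    """Zero-phase Butterworth band-pass filter (second-order sections)."""
    sos = butter(order, [lo, hi], btype="band", fs=fs, output="sos")
    return sosfiltfilt(sos, x)

fs = 1000                                                  # Hz, assumed sampling rate
audio = np.random.default_rng(0).standard_normal(10 * fs)  # stand-in for a speech recording

env = speech_envelope(audio)
phrasal = bandpass(env, fs, 0.2, 1.5)   # phrase-rate (delta) band, assumed limits
syllabic = bandpass(env, fs, 4.0, 8.0)  # syllable-rate (theta) band, assumed limits
```

The band-limited envelopes would then be compared with the MEG signals (e.g., via coherence or envelope-reconstruction models) to quantify cortical tracking of each linguistic unit.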


Subject(s)
Speech Perception , Speech , Humans , Adult , Child , Speech/physiology , Noise , Magnetoencephalography , Linguistics , Speech Perception/physiology
2.
Neuroimage ; 253: 119061, 2022 06.
Article in English | MEDLINE | ID: mdl-35259526

ABSTRACT

Dyslexia is a frequent developmental disorder in which reading acquisition is delayed and which is usually associated with difficulties understanding speech in noise. At the neuronal level, children with dyslexia have been reported to display abnormal cortical tracking of speech (CTS) at the phrasal rate. Here, we aimed to determine whether abnormal tracking relates to reduced reading experience, and whether it is modulated by the severity of dyslexia or the presence of acoustic noise. We included 26 school-age children with dyslexia, 26 age-matched controls and 26 reading-level-matched controls, all native French speakers. Children's brain activity was recorded with magnetoencephalography while they listened to continuous speech in noiseless and multiple noise conditions. CTS values were compared between groups, conditions and hemispheres, and also, within groups, between children with mild and severe dyslexia. Syllabic CTS was significantly reduced in the right superior temporal gyrus in children with dyslexia compared with controls matched for age but not for reading level. Severe dyslexia was characterized by lower rapid automatized naming (RAN) abilities than mild dyslexia, and phrasal CTS lateralized to the right hemisphere in children with mild dyslexia and in all control groups, but not in children with severe dyslexia. Finally, an alteration in phrasal CTS was uncovered in children with dyslexia compared with age-matched controls in babble-noise conditions but not in less challenging listening conditions (non-speech noise or no noise); no such effect was seen in comparison with reading-level-matched controls. Overall, our results confirm an altered neuronal basis of speech perception in noiseless and babble-noise conditions in dyslexia compared with age-matched peers. However, the absence of alteration in comparison with reading-level-matched controls demonstrates that such alterations are associated with reduced reading level, suggesting they are driven by reduced reading experience rather than being a cause of dyslexia. Finally, our finding of altered hemispheric lateralization of phrasal CTS in relation to altered RAN abilities in severe dyslexia is in line with a temporal sampling deficit of speech at the phrasal rate in dyslexia.


Subject(s)
Dyslexia , Speech Perception , Child , Humans , Magnetoencephalography , Noise , Phonetics , Speech/physiology , Speech Perception/physiology
3.
PLoS Biol ; 18(8): e3000840, 2020 08.
Article in English | MEDLINE | ID: mdl-32845876

ABSTRACT

Humans' propensity to acquire literacy relates to several factors, including the ability to understand speech in noise (SiN). Still, the nature of the relation between reading and SiN perception abilities remains poorly understood. Here, we dissect the interplay between (1) reading abilities, (2) classical behavioral predictors of reading (phonological awareness, phonological memory, and rapid automatized naming), and (3) electrophysiological markers of SiN perception in 99 elementary school children (26 with dyslexia). We demonstrate that, in typical readers, cortical representation of the phrasal content of SiN relates to the degree of development of the lexical (but not sublexical) reading strategy. In contrast, classical behavioral predictors of reading abilities and the ability to benefit from visual speech to represent the syllabic content of SiN account for global reading performance (i.e., speed and accuracy of lexical and sublexical reading). In individuals with dyslexia, we found preserved integration of visual speech information to optimize processing of syntactic information but not to sustain acoustic/phonemic processing. Finally, within children with dyslexia, measures of cortical representation of the phrasal content of SiN were negatively related to reading speed and positively related to the compromise between reading precision and reading speed, potentially owing to compensatory attentional mechanisms. These results clarify the nature of the relation between SiN perception and reading abilities in typical child readers and children with dyslexia and identify novel electrophysiological markers of emergent literacy.


Subject(s)
Cerebral Cortex/physiology , Noise , Reading , Speech/physiology , Behavior , Child , Dyslexia/physiopathology , Humans , Linear Models , Neuroimaging , Phonetics
4.
Neuroimage ; 184: 201-213, 2019 01 01.
Article in English | MEDLINE | ID: mdl-30205208

ABSTRACT

During connected speech listening, brain activity tracks speech rhythmicity at delta (∼0.5 Hz) and theta (4-8 Hz) frequencies. Here, we compared the potential of magnetoencephalography (MEG) and high-density electroencephalography (EEG) to uncover such speech brain tracking. Ten healthy right-handed adults listened to two different 5-min audio recordings, either without noise or mixed with a cocktail-party noise of equal loudness. Their brain activity was simultaneously recorded with MEG and EEG. We quantified speech brain tracking channel-by-channel using coherence, and with all channels at once by speech temporal envelope reconstruction accuracy. In both conditions, speech brain tracking was significant at delta and theta frequencies and peaked in the temporal regions with both modalities (MEG and EEG). However, in the absence of noise, speech brain tracking estimated from MEG data was significantly higher than that obtained from EEG. Furthermore, to uncover significant speech brain tracking, recordings needed to be ∼3 times longer in EEG than MEG, depending on the frequency considered (delta or theta) and the estimation method. In the presence of noise, both EEG and MEG recordings replicated the previous finding that speech brain tracking at delta frequencies is stronger with attended speech (i.e., the sound subjects are attending to) than with the global sound (i.e., the attended speech and the noise combined). Other previously reported MEG findings were replicated based on MEG but not EEG recordings: 1) speech brain tracking at theta frequencies is stronger with attended speech than with the global sound, 2) speech brain tracking at delta frequencies is stronger in noiseless than noisy conditions, and 3) when noise is added, speech brain tracking at delta frequencies dampens less in the left hemisphere than in the right hemisphere.
Finally, sources of speech brain tracking reconstructed from EEG data were systematically deeper and more posterior than those derived from MEG. The present study demonstrates that speech brain tracking is better seen with MEG than EEG. Quantitatively, EEG recordings need to be ∼3 times longer than MEG recordings to uncover significant speech brain tracking. As a consequence, MEG appears more suited than EEG to pinpoint subtle effects related to speech brain tracking in a given recording time.
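The channel-by-channel coherence quantification mentioned in this abstract can be illustrated with a toy example. Everything below (sampling rate, channel count, simulated signals, band limits) is a hypothetical stand-in, not the authors' pipeline:

```python
import numpy as np
from scipy.signal import coherence

fs = 200                      # Hz, assumed downsampled rate
n_channels = 5                # toy sensor count
n_samples = 300 * fs          # a 5-min recording, matching the stimulus length
rng = np.random.default_rng(0)

envelope = rng.standard_normal(n_samples)           # stand-in speech envelope
meg = rng.standard_normal((n_channels, n_samples))  # stand-in sensor noise
meg[0] += 0.5 * envelope                            # channel 0 "tracks" speech

# Magnitude-squared coherence per channel; 4-s windows give 0.25 Hz resolution,
# fine enough to resolve the delta band.
coh = []
for ch in meg:
    f, c = coherence(envelope, ch, fs=fs, nperseg=4 * fs)
    coh.append(c)
coh = np.array(coh)

delta = coh[:, (f >= 0.5) & (f <= 1.0)].mean(axis=1)  # delta-band tracking per channel
theta = coh[:, (f >= 4.0) & (f <= 8.0)].mean(axis=1)  # theta-band tracking per channel
```

With these simulated data, the channel that received a copy of the envelope shows markedly higher delta- and theta-band coherence than the pure-noise channels, mirroring how tracking peaks over temporal sensors in real MEG/EEG recordings.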


Subject(s)
Auditory Cortex/physiology , Electroencephalography , Magnetoencephalography , Speech Acoustics , Acoustic Stimulation , Adult , Brain Mapping/methods , Delta Rhythm , Female , Humans , Male , Noise , Theta Rhythm , Young Adult