Results 1 - 10 of 10
1.
Neurobiol Lang (Camb) ; 5(2): 432-453, 2024.
Article in English | MEDLINE | ID: mdl-38911458

ABSTRACT

Research points to neurofunctional differences underlying fluent speech between stutterers and non-stutterers. Considerably less work has focused on processes that underlie stuttered vs. fluent speech. Additionally, most of this research has focused on speech motor processes despite contributions from cognitive processes prior to the onset of stuttered speech. We used MEG to test the hypothesis that reactive inhibitory control is triggered prior to stuttered speech. Twenty-nine stutterers completed a delayed-response task that featured a cue (prior to a go cue) signaling the imminent requirement to produce a word that was either stuttered or fluent. Consistent with our hypothesis, we observed increased beta power, likely emanating from the right pre-supplementary motor area (R-preSMA), an area implicated in reactive inhibitory control, in response to the cue preceding stuttered vs. fluent productions. Beta power differences between stuttered and fluent trials correlated with stuttering severity, and participants' percentage of trials stuttered increased exponentially with beta power in the R-preSMA. Trial-by-trial beta power modulations in the R-preSMA following the cue predicted whether a trial would be stuttered or fluent. Stuttered trials were also associated with delayed speech onset, suggesting an overall slowing or freezing of the speech motor system that may be a consequence of inhibitory control. Post hoc analyses revealed that independently generated anticipated words were associated with greater beta power and more stuttering than researcher-assisted anticipated words, pointing to a relationship between self-perceived likelihood of stuttering (i.e., anticipation) and inhibitory control. This work offers a neurocognitive account of stuttering by characterizing cognitive processes that precede overt stuttering events.
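The reported exponential relationship between R-preSMA beta power and the percentage of stuttered trials can be illustrated with a simple log-linear least-squares fit. This is a rough sketch on made-up numbers; the function, variables, and values below are hypothetical and are not the study's data or analysis pipeline.

```python
import math

def fit_exponential(x, y):
    """Fit y = a * exp(b * x) by ordinary least squares on log(y).

    Returns (a, b). Assumes all y > 0.
    """
    n = len(x)
    ly = [math.log(v) for v in y]
    mx = sum(x) / n
    mly = sum(ly) / n
    var = sum((xi - mx) ** 2 for xi in x)
    b = sum((xi - mx) * (li - mly) for xi, li in zip(x, ly)) / var
    a = math.exp(mly - b * mx)
    return a, b

# Illustrative numbers only (not data from the study): beta power (a.u.)
# and a percentage-stuttered measure generated to grow as 2 * exp(x).
beta = [0.5, 1.0, 1.5, 2.0, 2.5]
pct_stuttered = [2.0 * math.exp(x) for x in beta]

a, b = fit_exponential(beta, pct_stuttered)  # recovers a ~= 2, b ~= 1
```

A positive fitted b under such a model would correspond to the reported pattern of stuttering probability rising steeply with beta power.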

2.
Adv Exp Med Biol ; 1455: 257-274, 2024.
Article in English | MEDLINE | ID: mdl-38918356

ABSTRACT

Speech can be defined as the human ability to communicate through a sequence of vocal sounds. Consequently, speech requires an emitter (the speaker) capable of generating the acoustic signal and a receiver (the listener) able to successfully decode the sounds produced by the emitter (i.e., the acoustic signal). Time plays a central role at both ends of this interaction. On the one hand, speech production requires precise and rapid coordination, typically within the order of milliseconds, of the upper vocal tract articulators (i.e., tongue, jaw, lips, and velum), their composite movements, and the activation of the vocal folds. On the other hand, the generated acoustic signal unfolds in time, carrying information at different timescales. This information must be parsed and integrated by the receiver for the correct transmission of meaning. This chapter describes the temporal patterns that characterize the speech signal and reviews research that explores the neural mechanisms underlying the generation of these patterns and the role they play in speech comprehension.


Subject(s)
Speech, Humans, Speech/physiology, Speech Perception/physiology, Speech Acoustics, Periodicity
3.
Cognition ; 245: 105737, 2024 04.
Article in English | MEDLINE | ID: mdl-38342068

ABSTRACT

Phonological statistical learning - our ability to extract meaningful regularities from spoken language - is considered critical in the early stages of language acquisition, in particular for helping to identify discrete words in continuous speech. Most phonological statistical learning studies use an experimental task introduced by Saffran et al. (1996), in which the syllables forming the words to be learned are presented continuously and isochronously. This raises the question of the extent to which this purportedly powerful learning mechanism is robust to the kinds of rhythmic variability that characterize natural speech. Here, we tested participants with arhythmic, semi-rhythmic, and isochronous speech during learning. In addition, we investigated how input rhythmicity interacts with two other factors previously shown to modulate learning: prior knowledge (syllable order plausibility with respect to participants' first language) and learners' speech auditory-motor synchronization ability. We show that words are extracted by all learners even when the speech input is completely arhythmic. Interestingly, high auditory-motor synchronization ability increases statistical learning when the speech input is temporally more predictable but only when prior knowledge can also be used. This suggests an additional mechanism for learning based on predictions not only about when but also about what upcoming speech will be.
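The Saffran-style segmentation cue described above rests on syllable-to-syllable transitional probabilities, which are high within words and drop at word boundaries. A minimal illustration of the computation (the syllable inventory and two-word "language" below are hypothetical, not the study's stimuli):

```python
import random
from collections import Counter

def transitional_probabilities(stream):
    """P(next | current) for each adjacent syllable pair in a stream."""
    pair_counts = Counter(zip(stream, stream[1:]))
    first_counts = Counter(stream[:-1])
    return {(a, b): c / first_counts[a] for (a, b), c in pair_counts.items()}

# Hypothetical two-word inventory, concatenated in random order to mimic
# a continuous familiarization stream.
random.seed(0)
words = [["tu", "pi", "ro"], ["go", "la", "bu"]]
stream = [syl for _ in range(100) for syl in random.choice(words)]

tp = transitional_probabilities(stream)
# Within-word transitions (tu->pi, go->la) have TP = 1.0; transitions
# spanning a word boundary (e.g. ro->go) are lower, which is the cue
# learners are thought to exploit for segmentation.
```

Note that this dip in transitional probability at boundaries survives even when the timing of the syllables is irregular, consistent with the finding that learning is robust to arhythmic input.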


Subject(s)
Individuality, Speech Perception, Humans, Learning, Linguistics, Language Development, Speech
4.
J Speech Lang Hear Res ; 66(5): 1631-1638, 2023 05 09.
Article in English | MEDLINE | ID: mdl-37059075

ABSTRACT

PURPOSE: Most neural and physiological research on stuttering focuses on the fluent speech of speakers who stutter due to the difficulty associated with eliciting stuttering reliably in the laboratory. We previously introduced an approach to elicit stuttered speech in the laboratory in adults who stutter. The purpose of this study was to determine whether that approach reliably elicits stuttering in school-age children and teenagers who stutter (CWS/TWS). METHOD: Twenty-three CWS/TWS participated. A clinical interview was used to identify participant-specific anticipated and unanticipated words in CWS and TWS. Two tasks were administered: (a) a delayed word reading task, in which participants read words and produced them after a 5-s delay, and (b) a delayed response question task, in which participants responded to examiner questions after a 5-s delay. Two CWS and eight TWS completed the reading task; six CWS and seven TWS completed the question task. Trials were coded as unambiguously fluent, ambiguous, or unambiguously stuttered. RESULTS: The method yielded, at a group level, a near-equal distribution of unambiguously stuttered and fluent utterances: 42.5% and 45.1%, respectively, in the reading task and 40.5% and 51.4%, respectively, in the question task. CONCLUSIONS: The method presented in this article elicited a comparable number of unambiguously stuttered and fluent trials in CWS and TWS, at a group level, during two different word production tasks. The inclusion of different tasks supports the generalizability of our approach, which can be used to elicit stuttering in studies that aim to unravel the neural and physiological bases of stuttered speech.


Subject(s)
Stuttering, Adult, Child, Humans, Adolescent, Speech/physiology, Schools, Speech Production Measurement, Reading
5.
PLoS Biol ; 20(7): e3001712, 2022 07.
Article in English | MEDLINE | ID: mdl-35793349

ABSTRACT

People of all ages display the ability to detect and learn from patterns in seemingly random stimuli. Referred to as statistical learning (SL), this process is particularly critical when learning a spoken language, helping in the identification of discrete words within a spoken phrase. Here, by considering individual differences in speech auditory-motor synchronization, we demonstrate that recruitment of a specific neural network supports behavioral differences in SL from speech. While independent component analysis (ICA) of fMRI data revealed that a network of auditory and superior pre/motor regions is universally activated in the process of learning, a frontoparietal network is additionally and selectively engaged by only some individuals (high auditory-motor synchronizers). Importantly, activation of this frontoparietal network is related to a boost in learning performance, and interference with this network via articulatory suppression (AS; i.e., producing irrelevant speech during learning) normalizes performance across the entire sample. Our work provides novel insights on SL from speech and reconciles previous contrasting findings. These findings also highlight a more general need to factor in fundamental individual differences for a precise characterization of cognitive phenomena.


Subject(s)
Speech Perception, Speech, Brain Mapping, Humans, Magnetic Resonance Imaging, Speech/physiology, Speech Perception/physiology
6.
STAR Protoc ; 3(2): 101248, 2022 06 17.
Article in English | MEDLINE | ID: mdl-35310080

ABSTRACT

The ability to synchronize a motor action to a rhythmic auditory stimulus is often considered an innate human skill. However, some individuals lack the ability to synchronize speech to a perceived syllabic rate. Here, we describe a simple and fast protocol to classify a single native English speaker as being or not being a speech synchronizer. This protocol consists of four parts: the pretest instructions and volume adjustment, the training procedure, the execution of the main task, and data analysis. For complete details on the use and execution of this protocol, please refer to Assaneo et al. (2019a).
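One common way to quantify how tightly produced speech tracks a perceived syllabic rate is a phase-locking value (PLV) between the two phase time series. The sketch below is a generic illustration under that assumption: the ~4.5 Hz rate is a typical syllable rate, but the fixed threshold and toy signals are hypothetical, not the protocol's actual analysis or cutoff (the published protocol derives its high/low split from the empirical distribution).

```python
import cmath
import math
import random

def phase_locking_value(phases_a, phases_b):
    """PLV = |mean(exp(i * (phi_a - phi_b)))| over a pair of phase series.

    1 means a perfectly constant phase relation; values near 0 mean no
    consistent relation.
    """
    n = len(phases_a)
    return abs(sum(cmath.exp(1j * (a - b))
                   for a, b in zip(phases_a, phases_b)) / n)

def is_synchronizer(plv, threshold=0.5):
    """Hypothetical fixed cutoff, for illustration only."""
    return plv >= threshold

# Toy signals: a stimulus phase ramping at ~4.5 Hz, a "high synchronizer"
# tracking it with small jitter, and a "low synchronizer" producing
# unrelated phases.
random.seed(1)
stim = [2 * math.pi * 4.5 * t / 100 for t in range(100)]
locked = [p + random.gauss(0, 0.2) for p in stim]
unlocked = [random.uniform(0, 2 * math.pi) for _ in stim]
```

On these toy signals, the jittered producer yields a PLV near 1 and the unrelated producer a PLV near 0, mirroring the high/low synchronizer split the protocol is designed to detect.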


Subject(s)
Acoustic Stimulation, Speech, Humans
7.
PLoS Biol ; 19(9): e3001119, 2021 09.
Article in English | MEDLINE | ID: mdl-34491980

ABSTRACT

Statistical learning (SL) is the ability to extract regularities from the environment. In the domain of language, this ability is fundamental to the learning of words and structural rules. In the absence of reliable online measures, statistical word and rule learning have been primarily investigated using offline (post-familiarization) tests, which give limited insight into the dynamics of SL and its neural basis. Here, we capitalize on a novel task that tracks the online SL of simple syntactic structures, combined with computational modeling, to show that online SL responds to reinforcement learning principles rooted in striatal function. Specifically, we demonstrate, in 2 different cohorts, that a temporal difference model, which relies on prediction errors, accounts for participants' online learning behavior. We then show that the trial-by-trial development of predictions through learning strongly correlates with activity in both ventral and dorsal striatum. Our results thus provide a detailed mechanistic account of language-related SL and an explanation for the oft-cited implication of the striatum in SL tasks. This work, therefore, bridges the long-standing gap between language learning and reinforcement learning phenomena.
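A temporal difference model of the kind invoked here updates a value estimate by a prediction error, delta = r + gamma * V(s') - V(s). A minimal TD(0) sketch of that principle follows; the parameters and toy trial loop are illustrative, not the model as fitted to participants' behavior in the study.

```python
def td_update(value, reward, next_value, alpha=0.1, gamma=0.9):
    """One TD(0) step: V <- V + alpha * delta, with prediction error
    delta = reward + gamma * V(next) - V(current)."""
    delta = reward + gamma * next_value - value
    return value + alpha * delta, delta

# Toy illustration: a state that reliably predicts a correct outcome
# acquires value across trials, and the prediction error shrinks as
# learning proceeds.
v = 0.0
deltas = []
for _ in range(200):
    v, delta = td_update(v, reward=1.0, next_value=0.0)
    deltas.append(delta)
```

The shrinking trial-by-trial delta is the quantity whose development the abstract reports as correlating with striatal activity.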


Subject(s)
Corpus Striatum/physiology, Language Development, Probability Learning, Reinforcement, Psychology, Corpus Striatum/diagnostic imaging, Female, Humans, Magnetic Resonance Imaging, Male, Pattern Recognition, Physiological, Young Adult
8.
Cereb Cortex ; 31(5): 2505-2522, 2021 03 31.
Article in English | MEDLINE | ID: mdl-33338212

ABSTRACT

Congenital blindness has been shown to result in behavioral adaptation and neuronal reorganization, but the underlying neuronal mechanisms are largely unknown. Brain rhythms are characteristic for anatomically defined brain regions and provide a putative mechanistic link to cognitive processes. In a novel approach, using magnetoencephalography resting state data of congenitally blind and sighted humans, deprivation-related changes in spectral profiles were mapped to the cortex using clustering and classification procedures. Altered spectral profiles in visual areas suggest changes in visual alpha-gamma band inhibitory-excitatory circuits. Remarkably, spectral profiles were also altered in auditory and right frontal areas showing increased power in theta-to-beta frequency bands in blind compared with sighted individuals, possibly related to adaptive auditory and higher cognitive processing. Moreover, occipital alpha correlated with microstructural white matter properties extending bilaterally across posterior parts of the brain. We provide evidence that visual deprivation selectively modulates spectral profiles, possibly reflecting structural and functional adaptation.


Subject(s)
Auditory Pathways/physiopathology, Blindness/physiopathology, Frontal Lobe/physiopathology, Visual Pathways/physiopathology, Adult, Auditory Pathways/diagnostic imaging, Auditory Pathways/physiology, Blindness/diagnostic imaging, Diffusion Tensor Imaging, Female, Frontal Lobe/diagnostic imaging, Frontal Lobe/physiology, Humans, Magnetic Resonance Imaging, Magnetoencephalography, Male, Middle Aged, Neuronal Plasticity/physiology, Occipital Lobe/diagnostic imaging, Occipital Lobe/physiology, Occipital Lobe/physiopathology, Visual Pathways/diagnostic imaging, Visual Pathways/physiology, White Matter/diagnostic imaging, White Matter/physiology, White Matter/physiopathology, Young Adult
9.
PLoS Biol ; 18(11): e3000895, 2020 11.
Article in English | MEDLINE | ID: mdl-33137084

ABSTRACT

A crucial aspect when learning a language is discovering the rules that govern how words are combined in order to convey meaning. Because rules are characterized by sequential co-occurrences between elements (e.g., "These cupcakes are unbelievable"), tracking the statistical relationships between these elements is fundamental. However, purely bottom-up statistical learning alone cannot fully account for the ability to create abstract rule representations that can be generalized, a paramount requirement of linguistic rules. Here, we provide evidence that, after the statistical relations between words have been extracted, the engagement of goal-directed attention is key to enabling rule generalization. Incidental learning performance during a rule-learning task on an artificial language revealed a progressive shift from statistical learning to goal-directed attention. In addition, and consistent with the recruitment of attention, functional MRI (fMRI) analyses of late learning stages showed left parietal activity within a broad bilateral dorsal frontoparietal network. Critically, repetitive transcranial magnetic stimulation (rTMS) on participants' peak of activation within the left parietal cortex impaired their ability to generalize learned rules to a structurally analogous new language. Neither the absence of stimulation nor rTMS over a nonrelevant brain region had the same interfering effect on generalization. Performance on an additional attentional task showed that rTMS on the parietal site hindered participants' ability to integrate "what" (stimulus identity) and "when" (stimulus timing) information about an expected target. The present findings suggest that learning rules from speech is a two-stage process: following statistical learning, goal-directed attention, involving left parietal regions, integrates "what" and "when" stimulus information to facilitate rapid rule generalization.


Subject(s)
Attention/physiology, Learning/physiology, Parietal Lobe/physiology, Adult, Brain/physiology, Brain Mapping/methods, Cognition/physiology, Female, Frontal Lobe/physiology, Functional Laterality/physiology, Humans, Language, Linguistics/methods, Magnetic Resonance Imaging/methods, Male, Photic Stimulation/methods, Reaction Time/physiology, Transcranial Magnetic Stimulation/methods, Young Adult
10.
Nat Neurosci ; 22(4): 627-632, 2019 04.
Article in English | MEDLINE | ID: mdl-30833700

ABSTRACT

We introduce a deceptively simple behavioral task that robustly identifies two qualitatively different groups within the general population. When presented with an isochronous train of random syllables, some listeners are compelled to align their own concurrent syllable production with the perceived rate, whereas others remain impervious to the external rhythm. Using both neurophysiological and structural imaging approaches, we show group differences with clear consequences for speech processing and language learning. When listening passively to speech, high synchronizers show increased brain-to-stimulus synchronization over frontal areas, and this localized pattern correlates with precise microstructural differences in the white matter pathways connecting frontal to auditory regions. Finally, the data expose a mechanism that underpins performance on an ecologically relevant word-learning task. We suggest that this task will help to better understand and characterize individual performance in speech processing and language learning.


Subject(s)
Brain/anatomy & histology, Brain/physiology, Language, Learning/physiology, Speech Perception/physiology, Speech, Acoustic Stimulation, Adult, Brain Mapping, Female, Humans, Individuality, Magnetic Resonance Imaging, Magnetoencephalography, Male, Middle Aged, Neural Pathways/anatomy & histology, Neural Pathways/physiology