Results 1 - 20 of 139
1.
Brain Lang ; 256: 105463, 2024 Sep 06.
Article in English | MEDLINE | ID: mdl-39243486

ABSTRACT

We investigated how neural oscillations code the hierarchical nature of stress rhythms in speech and how stress processing varies with language experience. By measuring the phase synchrony of multilevel EEG-acoustic tracking and intra-brain cross-frequency coupling, we show that the encoding of stress involves distinct neural signatures (delta rhythms = stress foot rate; theta rhythms = syllable rate), is stronger for amplitude than for duration stress cues, and induces nested delta-theta coherence mirroring the stress-syllable hierarchy in speech. Only native English speakers, but not Mandarin speakers, exhibited enhanced neural entrainment at the central stress (2 Hz) and syllable (4 Hz) rates intrinsic to natural English. English listeners with superior cortical stress-tracking capabilities also displayed stronger neural hierarchical coherence, highlighting a nuanced interplay between the internal nesting of brain rhythms and external entrainment rooted in language-specific speech rhythms. Our cross-language findings reveal that brain-speech synchronization is not a purely "bottom-up" process but benefits from "top-down" processing shaped by listeners' language-specific experience.
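The phase synchrony described here is typically quantified as a phase-locking value (PLV) between band-limited neural and acoustic signals. Below is a minimal Python sketch of that computation on synthetic stand-in signals; the sampling rate, band edges, and signal names are illustrative assumptions, not the authors' pipeline.

```python
# Minimal sketch of EEG-acoustic phase synchrony (phase-locking value, PLV)
# between a delta-band-filtered EEG channel and the speech amplitude envelope.
# Signals and parameters are illustrative, not the authors' pipeline.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

fs = 250  # sampling rate in Hz (assumed)
t = np.arange(0, 30, 1 / fs)

# Stand-ins for real recordings: an EEG channel and a speech envelope.
eeg = np.random.randn(t.size)
envelope = np.abs(np.sin(2 * np.pi * 2 * t)) + 0.1 * np.random.randn(t.size)

def bandpass(x, lo, hi, fs, order=4):
    b, a = butter(order, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    return filtfilt(b, a, x)

# Delta (~1-3 Hz) tracks the stress-foot rate; theta (~4-8 Hz) the syllable rate.
eeg_delta = bandpass(eeg, 1, 3, fs)
env_delta = bandpass(envelope, 1, 3, fs)

# Instantaneous phases via the analytic signal.
phi_eeg = np.angle(hilbert(eeg_delta))
phi_env = np.angle(hilbert(env_delta))

# PLV: magnitude of the mean phase-difference vector (0 = no locking, 1 = perfect).
plv = np.abs(np.mean(np.exp(1j * (phi_eeg - phi_env))))
print(f"delta-band EEG-envelope PLV: {plv:.3f}")
```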

2.
J Autism Dev Disord ; 2024 Aug 23.
Article in English | MEDLINE | ID: mdl-39177934

ABSTRACT

Research on the phonological development of children with autism spectrum disorder (ASD) has not yet reached consistent conclusions, and systematic studies across different language groups are needed. This study systematically investigated the characteristics of phonological development in 3- to 6-year-old Mandarin-speaking children with ASD. Based on Mandarin phonological features, we analyzed 10-minute speech samples recorded during semi-structured parent-child free play from 21 children with ASD, 18 developmental-level-matched children with developmental disorders (DD), and 15 chronological-age-matched typically developing (TD) children. The children with ASD had significantly smaller initial and final inventories than the TD children; compared with the DD children, they showed only a significantly smaller initial inventory, in Phases 2 and 4. Compared with TD children, children with ASD used a higher proportion of V1 and V1V2C syllables and a smaller proportion of V1V2V3, CV1C, and CV1V2C syllables. No significant differences existed between the ASD and DD children in the proportion of any syllable structure, but the ASD children produced significantly fewer V1V2V3, CV1, and CV1V2C syllables than the DD children. Children with ASD showed significantly greater diversity than TD children in V1V2, CV1, and overall syllables. Relative to DD children, ASD children produced significantly fewer distinct syllable types in V1V2C and CV1 but showed significantly greater diversity in CV1 and overall syllables. These preliminary data suggest that the gap between TD and ASD children's language abilities increased with age and was reflected in initials, finals, and syllable complexity and diversity. Children with DD and ASD showed broadly similar language abilities, with fine-grained differences between the groups in initials and in syllable complexity and diversity.
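For readers unfamiliar with inventory and diversity measures, the sketch below illustrates, in Python, the kind of tallying involved: proportions of each syllable structure and the number of distinct syllable types per structure. The coded sample is invented for illustration; real data would come from the transcribed play sessions.

```python
# Sketch of the tallying behind syllable-structure proportions and diversity
# (number of distinct surface forms per structure). The coded sample below is
# invented; real data would come from transcribed parent-child play sessions.
from collections import Counter, defaultdict

# Each syllable coded as (structure, surface form).
sample = [("CV1", "ma"), ("CV1", "ba"), ("CV1C", "man"),
          ("V1", "a"), ("CV1", "ma"), ("V1V2", "ai")]

structure_counts = Counter(s for s, _ in sample)
total = sum(structure_counts.values())
proportions = {s: n / total for s, n in structure_counts.items()}

types_per_structure = defaultdict(set)
for structure, form in sample:
    types_per_structure[structure].add(form)
diversity = {s: len(forms) for s, forms in types_per_structure.items()}

print(proportions)   # proportion of tokens per syllable structure
print(diversity)     # distinct syllable types per structure
```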

3.
Sci Rep ; 14(1): 20270, 2024 08 31.
Article in English | MEDLINE | ID: mdl-39217249

ABSTRACT

Dysphagia, a disorder affecting the ability to swallow, is highly prevalent among older adults and can lead to serious health complications, so early detection is important. This study evaluated the effectiveness of a newly developed deep learning model that analyzes syllable-segmented data for diagnosing dysphagia, an aspect not addressed in prior studies. Audio recordings of daily conversations were collected from 16 patients with dysphagia and 24 controls; the presence of dysphagia was determined by videofluoroscopic swallowing study. The recordings were segmented into syllables using a speech-to-text model and analyzed with a convolutional neural network to perform binary classification between the dysphagia patients and the control group. The proposed model was assessed in two respects. First, at the level of syllable-segmented analysis, it demonstrated a diagnostic accuracy of 0.794, a sensitivity of 0.901, a specificity of 0.687, a positive predictive value of 0.742, and a negative predictive value of 0.874. Second, at the individual level, it achieved an overall accuracy of 0.900 and an area under the curve of 0.953. This research highlights the potential of deep learning models as an early, non-invasive, and simple method for detecting dysphagia in everyday environments.
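As a rough illustration of the classification stage, here is a minimal 1-D convolutional network for binary classification of syllable-length audio segments, written in PyTorch. The architecture, input length, and use of raw waveforms are assumptions made for the sketch; the paper's actual model may differ.

```python
# Minimal sketch of a 1-D CNN binary classifier over syllable-length audio
# segments, in the spirit of the model described above. Architecture, input
# length, and feature choice (raw waveform) are assumptions, not the paper's.
import torch
import torch.nn as nn

class SyllableCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=9, stride=2), nn.ReLU(),
            nn.Conv1d(16, 32, kernel_size=9, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),   # pool over time to a fixed-size vector
        )
        self.classifier = nn.Linear(32, 2)  # dysphagia vs. control

    def forward(self, x):                # x: (batch, 1, n_samples)
        z = self.features(x).squeeze(-1)
        return self.classifier(z)

model = SyllableCNN()
logits = model(torch.randn(8, 1, 4000))  # 8 segments of 0.25 s at 16 kHz (assumed)
print(logits.shape)                      # torch.Size([8, 2])
```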


Subject(s)
Deep Learning , Deglutition Disorders , Speech , Humans , Deglutition Disorders/diagnosis , Deglutition Disorders/physiopathology , Male , Female , Aged , Speech/physiology , Aged, 80 and over , Middle Aged , Deglutition/physiology , Neural Networks, Computer
4.
Neuropsychologia ; 199: 108907, 2024 07 04.
Article in English | MEDLINE | ID: mdl-38734179

ABSTRACT

Studies of letter transposition effects in alphabetic scripts provide compelling evidence that letter position is encoded flexibly during reading, potentially during an early, perceptual stage of visual word recognition. Recent studies additionally suggest similar flexibility in the spatial encoding of syllabic information in the Korean Hangul script. With the present research, we conducted two experiments to investigate the locus of this syllabic transposition effect. In Experiment 1, lexical decisions for foveal stimulus presentations were less accurate and slower for four-syllable nonwords created by transposing two syllables in a base word as compared to control nonwords, replicating prior evidence for a transposed syllable effect in Korean word recognition. In Experiment 2, the same stimuli were presented to the right and left visual hemifields (i.e., RVF and LVF), which project both unilaterally and contralaterally to each participant's left and right cerebral hemisphere (i.e., LH and RH) respectively, using lateralized stimulus displays. Lexical decisions revealed a syllable transposition effect in the accuracy and latency of lexical decisions for both RVF and LVF presentations. However, response times for correct responses were longer in the LVF, and therefore the RH, as compared to the RVF/LH. As the LVF/RH appears to be selectively sensitive to the visual-perceptual attributes of words, the findings suggest that this syllable transposition effect partly finds its locus within a perceptual stage of processing. We discuss these findings in relation to current models of the spatial encoding of orthographic information during visual word recognition and accounts of visual word recognition in Korean.


Subject(s)
Reaction Time , Reading , Humans , Female , Male , Young Adult , Reaction Time/physiology , Functional Laterality/physiology , Pattern Recognition, Visual/physiology , Photic Stimulation , Adult , Visual Fields/physiology , Language
5.
Eur J Neurosci ; 60(3): 4244-4253, 2024 Aug.
Article in English | MEDLINE | ID: mdl-38816916

ABSTRACT

Studying ultrasonic vocalizations (USVs) plays a crucial role in understanding animal communication, particularly in ethology and neuropharmacology. Because communication is associated with social behaviour, the study of USVs is a valid assay for behavioural readout and monitoring in this context. This paper investigated ultrasonic communication in mice treated with Cannabis sativa oil (CS mice), which has been shown to have a prosocial effect on mouse behaviour, versus control mice (vehicle-treated, VH mice). To conduct this study, we created a dataset by recording audio-video files and annotating the time that test mice spent engaging in social activities, along with categorizing the types of emitted USVs. The analysis encompassed the frequency of individual sounds as well as more complex sequences of consecutive syllables (patterns). The primary goal was to examine the extent and nature of diversity in the ultrasonic communication patterns emitted by these two groups of mice. We observed statistically significant differences between the two groups of mice for each pattern length considered. Additionally, the study considered specific behaviours, aiming to ascertain whether dissimilarities in ultrasonic communication between CS and VH mice are more pronounced or subtle within distinct behavioural contexts. Our findings suggest that while there is variation in USV communication between the two groups of mice, the degree of this diversity may vary depending on the specific behaviour being observed.
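The pattern analysis described here amounts to counting n-grams of consecutive syllable labels within each group's recordings. A minimal sketch, with invented syllable labels and sequences:

```python
# Sketch of extracting syllable-pattern (n-gram) frequencies from annotated
# USV sequences, as a basis for comparing groups. The syllable labels and
# example sequences are invented.
from collections import Counter

def pattern_counts(sequence, n):
    """Count consecutive syllable patterns of length n."""
    return Counter(tuple(sequence[i:i + n]) for i in range(len(sequence) - n + 1))

cs_seq = ["up", "down", "up", "flat", "up", "down"]   # CS-treated mouse (invented)
vh_seq = ["flat", "flat", "up", "flat", "down"]       # vehicle control (invented)

for n in (1, 2, 3):
    print(n, pattern_counts(cs_seq, n), pattern_counts(vh_seq, n))
# Group comparisons (e.g., chi-squared tests on these counts) would follow.
```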


Subject(s)
Plant Oils , Vocalization, Animal , Animals , Mice , Vocalization, Animal/drug effects , Vocalization, Animal/physiology , Male , Plant Oils/pharmacology , Cannabis , Ultrasonics , Social Behavior , Behavior, Animal/drug effects , Behavior, Animal/physiology
6.
Physiol Behav ; 281: 114581, 2024 Jul 01.
Article in English | MEDLINE | ID: mdl-38734358

ABSTRACT

Bird song is a crucial feature for mate choice and reproduction. Song can potentially communicate information about mate quality through song complexity, structure, or finer changes in syllable characteristics. In zebra finches, these characteristics can be affected by various factors, including motivation, hormone levels, and extreme temperature. However, although the literature on zebra finch song is substantial, some factors have been neglected. In this paper, we recorded male zebra finches in two breeding contexts (before and after pairing) and under two ambient temperature conditions (stable and variable) to see how these factors influence song production. We found strong differences between the two breeding contexts: compared with their song before pairing, paired males showed lower song rate, syllable consistency, frequency, and entropy, while, surprisingly, the amplitude of their syllables increased. Temperature variability affected the extent of these differences but did not directly affect the song parameters we measured. Our results describe for the first time how breeding status and temperature variability can affect zebra finch song, and they give new insights into the subtleties of the acoustic communication of this model species.


Subject(s)
Finches , Sexual Behavior, Animal , Temperature , Vocalization, Animal , Animals , Male , Finches/physiology , Vocalization, Animal/physiology , Sexual Behavior, Animal/physiology , Sound Spectrography , Female
7.
bioRxiv ; 2024 Apr 05.
Article in English | MEDLINE | ID: mdl-38617227

ABSTRACT

Prior lesion, noninvasive-imaging, and intracranial-electroencephalography (iEEG) studies have documented hierarchical, parallel, and distributed characteristics of human speech processing. Yet, there have not been direct, intracranial observations of the latency with which regions outside the temporal lobe respond to speech, or how these responses are impacted by task demands. We leveraged human intracranial recordings via stereo-EEG to measure responses from diverse forebrain sites during (i) passive listening to /bi/ and /pi/ syllables, and (ii) active listening requiring /bi/-versus-/pi/ categorization. We find that neural response latency increases from a few tens of ms in Heschl's gyrus (HG) to several tens of ms in superior temporal gyrus (STG), superior temporal sulcus (STS), and early parietal areas, and hundreds of ms in later parietal areas, insula, frontal cortex, hippocampus, and amygdala. These data also suggest parallel flow of speech information dorsally and ventrally, from HG to parietal areas and from HG to STG and STS, respectively. Latency data also reveal areas in parietal cortex, frontal cortex, hippocampus, and amygdala that are not responsive to the stimuli during passive listening but are responsive during categorization. Furthermore, multiple regions-spanning auditory, parietal, frontal, and insular cortices, and hippocampus and amygdala-show greater neural response amplitudes during active versus passive listening (a task-related effect). Overall, these results are consistent with hierarchical processing of speech at a macro level and parallel streams of information flow in temporal and parietal regions. These data also reveal regions where the speech code is stimulus-faithful and those that encode task-relevant representations.
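One common way to estimate such response latencies is to find the first post-stimulus time at which the trial-averaged response exceeds a baseline-derived threshold. The sketch below illustrates this on a synthetic evoked response; the 3-SD threshold rule is an assumption, not necessarily the authors' criterion.

```python
# Sketch of estimating neural response latency from a trial-averaged evoked
# response: the first post-stimulus sample exceeding a baseline-derived
# threshold. The thresholding rule is an assumption, not the authors' method.
import numpy as np

fs = 1000                      # sampling rate in Hz (assumed)
t = np.arange(-0.2, 0.6, 1 / fs)
evoked = np.random.randn(t.size) * 0.1                          # baseline noise
evoked[t > 0.05] += np.exp(-(t[t > 0.05] - 0.1) ** 2 / 0.002)   # synthetic response

baseline = evoked[t < 0]
threshold = baseline.mean() + 3 * baseline.std()   # 3 SD above baseline
post = (t >= 0) & (evoked > threshold)
if post.any():
    print(f"estimated latency: {1000 * t[post][0]:.1f} ms")
else:
    print("no response detected")
```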

8.
Brain Sci ; 14(3)2024 Feb 21.
Article in English | MEDLINE | ID: mdl-38539585

ABSTRACT

Brain-Computer Interfaces (BCIs) aim to establish a pathway between the brain and an external device without involving the motor system, relying exclusively on neural signals. Such systems have the potential to provide a means of communication for patients who have lost the ability to speak due to a neurological disorder. Traditional methodologies for decoding imagined speech directly from brain signals often deploy static classifiers, that is, decoders that are computed once at the beginning of the experiment and remain unchanged throughout BCI use. However, this approach may be inadequate for handling the non-stationary nature of electroencephalography (EEG) signals and the learning that accompanies BCI use, as parameters are expected to change, all the more so in a real-time setting. To address this limitation, we developed an adaptive classifier that updates its parameters on the incoming data in real time. We first identified the optimal parameter (the update coefficient, UC) for an adaptive Linear Discriminant Analysis (LDA) classifier, using a previously recorded EEG dataset acquired while healthy participants controlled a binary BCI based on imagined syllable decoding. We then tested the effectiveness of this optimization in a real-time BCI control setting. Twenty healthy participants performed two BCI control sessions based on the imagery of two syllables, using a static LDA classifier and an adaptive LDA classifier in randomized order. As hypothesized, the adaptive classifier outperformed the static one in this real-time BCI control task. Furthermore, the optimal parameters for the adaptive classifier were closely aligned across the two datasets, which were acquired using the same syllable imagery task. These findings highlight the effectiveness and reliability of adaptive LDA classifiers for real-time imagined speech decoding. Such an improvement can shorten training time and favor the development of multi-class BCIs, a clear benefit for non-invasive systems, which are notably characterized by low decoding accuracies.
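A supervised adaptive LDA of this kind typically re-estimates its class means (and, optionally, the pooled covariance) after each labelled trial using an exponential forgetting factor, the update coefficient UC. The following is a minimal sketch under that assumption; the exact update rule the authors optimized may differ.

```python
# Minimal sketch of a supervised adaptive LDA: after each labelled trial, the
# class means and pooled covariance are updated with an exponential forgetting
# factor UC. This is an assumed update scheme, not necessarily the authors'.
import numpy as np

class AdaptiveLDA:
    def __init__(self, n_features, uc=0.05):
        self.uc = uc
        self.means = np.zeros((2, n_features))   # one mean per class
        self.cov = np.eye(n_features)            # pooled covariance estimate

    def predict(self, x):
        icov = np.linalg.inv(self.cov)
        # Linear discriminant scores; the higher-scoring class wins.
        scores = [m @ icov @ x - 0.5 * m @ icov @ m for m in self.means]
        return int(np.argmax(scores))

    def update(self, x, label):
        # Exponentially weighted running estimates (the "adaptive" part).
        self.means[label] = (1 - self.uc) * self.means[label] + self.uc * x
        d = x - self.means[label]
        self.cov = (1 - self.uc) * self.cov + self.uc * np.outer(d, d)

clf = AdaptiveLDA(n_features=8, uc=0.05)
for _ in range(100):                      # simulated online trials
    label = np.random.randint(2)
    x = np.random.randn(8) + 2 * label    # class 1 shifted by +2
    clf.update(x, label)
print(clf.predict(np.random.randn(8) + 2))  # most likely 1
```

In such a scheme, a larger UC tracks non-stationarity faster but makes the estimates noisier, which is why an optimal value has to be identified empirically, as the study describes.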

9.
Ann Dyslexia ; 74(2): 244-270, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38366193

ABSTRACT

Learning to read is a middle-distance race for children worldwide. Most of them succeed in this acquisition with "normal" difficulties that ensue from the progressive (re)structuring of the phonological and orthographic systems. Evidence accumulated on reading difficulties in children with developmental dyslexia (DYS children, henceforth) shows a pervasive phonological deficit. However, the phonological deficit may not be due to degraded phonological representations but rather to impaired access to them. This study focused on how and to what extent phonological syllables, which are essential reading units in French, were accessible to DYS children for segmenting and accessing words. We tested the assumption that DYS children do not simply have pervasively degraded phonological representations but also have impaired access to phonological and orthographic representations. We administered a visually adapted word-spotting paradigm, engaging both sublexical processing and lexical access, to French native-speaking DYS children (N = 25; Mage in months = 121.6, SD = 3.0) compared with chronological-age-matched peers (N = 25; Mage in months = 121.8, SD = 2.7; CA peers henceforth) and reading-level-matched peers (N = 25; Mage in months = 94.0, SD = 4.6; RL peers henceforth). Although DYS children were slower and less accurate than CA and RL peers, we found that they used phonological syllables to access and segment words. However, they exhibited neither the classical inhibitory syllable frequency effect nor the lexical frequency effect that is generally observed in typically developing children. Surprisingly, DYS children did not show strictly degraded phonological representations, because they demonstrated phonological syllable-based segmentation abilities, particularly with high-frequency syllables. Their difficulties are instead interpreted in terms of impaired access to orthographic and phonological representations, which could be a direct effect of difficulties in generalizing and consolidating low-frequency syllables. We discuss these results with regard to reading acquisition and the specificities of the French linguistic system.


Subject(s)
Dyslexia , Phonetics , Reading , Humans , Dyslexia/physiopathology , Child , Male , Female
10.
Br J Dev Psychol ; 42(2): 177-186, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38247209

ABSTRACT

Older adults have even greater difficulty learning name-face associations than young adults, although many variables reflecting properties of the names have been shown to affect young and older adults' name learning similarly. Older adults' name-face association learning was compared for names with high-frequency (HF) first syllables versus names with low-frequency (LF) first syllables. Twenty-eight adults ages 65 to 80 learned five names with HF first syllables and five names with LF first syllables in association with 10 new faces over repeated testing rounds with feedback. Participants learned more name-face associations when the names had HF first syllables than LF first syllables. Findings indicate that older adults benefit from increased frequency of phonological segments within a word on a task other than word retrieval and are consistent with a theoretical framework that accounts for learning new name-face associations, the effects of linguistic properties of the names, and ageing.


Subject(s)
Association Learning , Mental Recall , Young Adult , Humans , Aged , Face , Learning , Aging
11.
Brain Sci ; 14(1)2024 Jan 06.
Article in English | MEDLINE | ID: mdl-38248273

ABSTRACT

Apraxia of speech is a persistent speech motor disorder that affects speech intelligibility. Studies of speech motor disorders using transcranial direct current stimulation (tDCS) have mostly examined post-stroke aphasia. Only a few tDCS studies have focused on apraxia of speech or childhood apraxia of speech (CAS), and no study has investigated individuals with CAS and Trisomy 21 (T21, Down syndrome). This N-of-1 randomized trial examined the effects of tDCS combined with a motor learning task in developmental apraxia of speech co-existing with T21 (ReBEC RBR-5435x9). In a 20-year-old male with T21 presenting with moderate-severe CAS, the accuracy of speech sound production for nonsense words (NSWs) during Rapid Syllable Transition Training (ReST) over 10 sessions of anodal tDCS (1.5 mA, 25 cm²) over Broca's area, with the cathode over the contralateral region, was compared with 10 sessions of sham tDCS and four control sessions. The accuracy of NSW production progressively improved under tDCS (a gain of 40%; sham tDCS and control sessions showed < 20% gain). A decrease in speech severity from moderate-severe to mild-moderate indicated transfer effects to speech production. Speech accuracy under tDCS was correlated with Wernicke's area activation (P3 current source density), which in turn was correlated with activation of the left supramarginal gyrus and the Sylvian parietal-temporal junction. Repetitive bihemispheric tDCS paired with ReST may have facilitated speech sound acquisition in a young adult with T21 and CAS, possibly by activating brain regions required for phonological working memory.

12.
Cognition ; 244: 105663, 2024 03.
Article in English | MEDLINE | ID: mdl-38128322

ABSTRACT

Syllables are one of the fundamental building blocks of early language acquisition. From birth onwards, infants preferentially segment, process, and represent speech in syllable-sized units, raising the question of what types of computations infants can perform on these perceptual units. Syllables are abstract units structured in a way that allows phonemes to be grouped into sequences. The goal of this research was to investigate 4- to 5-month-old infants' ability to encode the internal structure of syllables, a target age at which the language system is not yet specialized for the sounds and phonotactics of the native language. We conducted two experiments in which infants were first familiarized with lists of syllables implementing either CVC (consonant-vowel-consonant) or CCV (consonant-consonant-vowel) structures and then presented with new syllables implementing both structures at test. The experiments differed in the degree of phonological similarity between the materials used at familiarization and test. Results show that infants were able to differentiate the syllabic structures at test, even when the test syllables were implemented by combinations of phonemes the infants had not heard before. Only infants familiarized with CVC syllables discriminated the structures at test, pointing to a processing advantage for CVC over CCV structures. This research shows that, in addition to preferentially processing speech in syllable-sized units, infants in the first months of life are also capable of performing fine-grained computations within such units.


Subject(s)
Language Development , Speech Perception , Infant , Humans , Language , Speech , Linguistics , Hearing , Phonetics
13.
bioRxiv ; 2023 Dec 05.
Article in English | MEDLINE | ID: mdl-38106017

ABSTRACT

We investigated how neural oscillations code the hierarchical nature of stress rhythms in speech and how stress processing varies with language experience. By measuring the phase synchrony of multilevel EEG-acoustic tracking and intra-brain cross-frequency coupling, we show that the encoding of stress involves distinct neural signatures (delta rhythms = stress foot rate; theta rhythms = syllable rate), is stronger for amplitude than for duration stress cues, and induces nested delta-theta coherence mirroring the stress-syllable hierarchy in speech. Only native English speakers, but not Mandarin speakers, exhibited enhanced neural entrainment at the central stress (2 Hz) and syllable (4 Hz) rates intrinsic to natural English. English listeners with superior cortical stress-tracking capabilities also displayed stronger neural hierarchical coherence, highlighting a nuanced interplay between the internal nesting of brain rhythms and external entrainment rooted in language-specific speech rhythms. Our cross-language findings reveal that brain-speech synchronization is not a purely "bottom-up" process but benefits from "top-down" processing shaped by listeners' language-specific experience.

14.
Psychon Bull Rev ; 2023 Sep 12.
Article in English | MEDLINE | ID: mdl-37700089

ABSTRACT

Syllable frequency effects in spoken word production have been interpreted as evidence that speakers store syllable-sized motor programmes for phonetic encoding in alphabetic languages such as English or Dutch. However, the cognitive mechanism underlying the syllable frequency effect in Chinese spoken word production remains unknown. To investigate the locus of the syllable frequency effect in spoken Chinese, this study used a picture-word interference (PWI) task in which participants were asked to name a picture while ignoring a distractor word. The design included two variables: the syllable frequency of the target words (high vs. low) and the phonological relationship between distractor and target words (shared atonic syllable or not; related vs. unrelated). We manipulated mixed token and type syllable frequency in Experiment 1, and manipulated token syllable frequency while controlling type syllable frequency in Experiment 2. The results showed a facilitation effect of mixed syllable frequency and a similar facilitation effect of token syllable frequency. Importantly, the syllable frequency effect was independent of the phonological facilitation effect. These results suggest that token syllable frequency played a dominant role in the observed facilitation, providing evidence that the syllable frequency effect arises during phonetic encoding in Chinese spoken word production.

15.
Proc Natl Acad Sci U S A ; 120(36): e2215710120, 2023 09 05.
Article in English | MEDLINE | ID: mdl-37639606

ABSTRACT

The beginnings of words are, in some informal sense, special. This intuition is widely shared, for example, when playing word games. Less apparent is whether the intuition is substantiated empirically and what the underlying organizational principle(s) might be. Here, we answer this seemingly simple question in a quantitatively clear way. Based on arguments about the interplay between lexical storage and speech processing, we examine whether the distribution of information among different speech sounds of words is governed by a critical computational unit for online speech perception and production: syllables. By analyzing lexical databases of twelve languages, we demonstrate that there is a compelling asymmetry between syllable beginnings (onsets) versus ends (codas) in their involvement in distinguishing words stored in the lexicon. In particular, we show that the functional advantage of syllable onset reflects an asymmetrical distribution of lexical informativeness within the syllable unit but not an effect of a global decay of informativeness from the beginning to the end of a word. The converging finding across languages from a range of typological families supports the conjecture that the syllable unit, while being a critical primitive for both speech perception and production, is also a key organizational constraint for lexical storage.
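One simplified way to see the onset/coda asymmetry is to count minimal pairs in a lexicon that are distinguished only by the syllable onset versus only by the coda. The toy analysis below illustrates the idea; the lexicon is invented, and the paper's informativeness measure is more sophisticated than this sketch.

```python
# Illustrative sketch of the onset/coda asymmetry: count minimal pairs in a
# toy lexicon distinguished only by the onset vs. only by the coda. This is a
# simplification of the paper's analysis, and the lexicon is invented.
from itertools import combinations

# CVC words represented as (onset, vowel, coda).
lexicon = [("k", "a", "t"), ("b", "a", "t"), ("k", "a", "p"),
           ("s", "i", "t"), ("b", "i", "t"), ("s", "i", "p")]

onset_pairs = coda_pairs = 0
for w1, w2 in combinations(lexicon, 2):
    if w1[1:] == w2[1:] and w1[0] != w2[0]:
        onset_pairs += 1          # the pair differs only in its onset
    elif w1[:2] == w2[:2] and w1[2] != w2[2]:
        coda_pairs += 1           # the pair differs only in its coda

print(f"onset-distinguished pairs: {onset_pairs}, coda-distinguished: {coda_pairs}")
```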


Subject(s)
Dissent and Disputes , Intuition , Humans , Databases, Factual , Language , Speech
16.
Biol Sex Differ ; 14(1): 49, 2023 08 01.
Article in English | MEDLINE | ID: mdl-37528473

ABSTRACT

BACKGROUND: Behavioral sex differences are widespread in the animal world. These differences can be qualitative (i.e., a behavior present in one sex but not the other, a true sex dimorphism) or quantitative (a behavior present at a higher rate or quality in one sex than the other). Singing in oscine songbirds is associated with both types of differences. In canaries, females rarely sing spontaneously, but they can be induced to do so by treatment with steroids. Song in these females is, however, not fully masculinized and exhibits relatively subtle differences in quality compared with male song. We analyzed here sex differences in syllable content and syllable use between singing male and female canaries. METHODS: Songs were recorded from three groups of castrated male and three groups of photoregressed female canaries that had received Silastic™ implants filled with testosterone (T), with T plus estradiol (E2), or left empty (control). After 6 weeks of hormone treatment, 30 songs were recorded from each of the 47 subjects. Songs were segmented, and each syllable was annotated. Various metrics of syllable diversity were extracted, and network analysis was employed to characterize syllable sequences. RESULTS: Male and female songs were characterized by marked sex differences in syllable use. Compared with females, males had a larger syllable-type repertoire, and their songs contained more syllable types. Network analysis of syllable sequences showed that males follow more fixed patterns of syllable transitions than females. Both sexes, however, produced songs of the same duration containing the same number of syllables produced at similar rates (number per second). CONCLUSIONS: Under the influence of T, canaries of both sexes are able to produce generally similar vocalizations that nevertheless differ in specific ways. The development of song during ontogeny appears to be a very sophisticated process that is presumably based on genetic and endocrine mechanisms but also on specific learning processes. These data highlight the importance of detailed behavioral analyses to identify the many dimensions of a behavior that can differ between males and females.
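The network analysis of syllable sequences can be thought of as building a directed transition graph and asking how deterministic each syllable's outgoing transitions are, for example via transition entropy. A minimal sketch with invented syllable labels:

```python
# Sketch of a network-style analysis of syllable sequences: build a directed
# transition graph from an annotated song and measure how "fixed" transitions
# are via the entropy of each syllable's outgoing transition probabilities.
# The syllable labels are invented.
import numpy as np
from collections import Counter, defaultdict

song = ["A", "B", "A", "B", "C", "A", "B", "C", "C"]  # annotated syllable sequence

transitions = defaultdict(Counter)
for a, b in zip(song, song[1:]):
    transitions[a][b] += 1

for syllable, nexts in transitions.items():
    total = sum(nexts.values())
    p = np.array([n / total for n in nexts.values()])
    entropy = -np.sum(p * np.log2(p))   # 0 bits = a fully fixed transition
    print(syllable, dict(nexts), f"entropy = {entropy:.2f} bits")
```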


Male canaries normally sing complex songs at a high rate, whereas females sing only rarely and produce very simple songs. Testosterone induces active singing in both male and female canaries, but female song is still not fully masculinized by these treatments, even though song duration does not differ between the sexes. We analyzed the syllable repertoire and the sequence in which different syllables were used in canaries of both sexes treated with testosterone, or with testosterone supplemented with estradiol. Compared with females, males had a larger syllable-type repertoire, and their songs contained more syllable types. Syllable transitions were also more fixed in males. Sex differences in the adult singing of canaries are thus a complex mixture of differences that result from the different endocrine conditions of males and females (and are thus partially reversed by administration of exogenous testosterone) and of more stable differences that presumably develop during ontogeny under the influence of endocrine and genetic differences and of differential learning processes. Canary song thus represents an outstanding model system for analyzing the interaction between nature and nurture in the acquisition of a sophisticated learned behavior, as well as the mechanisms controlling sex differences in vocal learning and production.


Subject(s)
Canaries , Testosterone , Animals , Female , Male , Testosterone/pharmacology , Sex Characteristics , Vocalization, Animal , Learning
17.
Dev Neurorehabil ; 26(5): 309-319, 2023 Jul.
Article in English | MEDLINE | ID: mdl-37401894

ABSTRACT

Cerebral palsy (CP) is a movement disorder, and the majority of children with CP have communication impairments that limit their participation. Rapid Syllable Transition Treatment (ReST) is a motor speech intervention developed primarily for children with Childhood Apraxia of Speech (CAS). A recent pilot study in which ReST was trialed with children with CP showed improved speech performance. Therefore, a single-blind randomized controlled trial comparing ReST to usual care was conducted with 14 children with moderate-to-severe CP and dysarthria. ReST was delivered via telehealth. ANCOVA with 95% confidence intervals indicated significant group differences in favor of ReST in speech accuracy (F = 5.1, p = .001), intelligibility (F = 2.8, p = .02), and communicative participation on both the FOCUS (F = 2, p = .02) and the Intelligibility in Context Scale (F = 2.4, p = .04). ReST was found to be more effective than usual care.
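For readers wanting to reproduce this style of analysis, an ANCOVA comparing post-treatment scores between groups with the pre-treatment score as a covariate can be run as below using statsmodels; the data frame, scores, and column names are illustrative, not the study's data.

```python
# Sketch of an ANCOVA comparing post-treatment speech accuracy between groups
# with the pre-treatment score as covariate. The data and column names are
# invented for illustration.
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

df = pd.DataFrame({
    "group": ["ReST"] * 7 + ["usual_care"] * 7,          # 14 children (invented)
    "pre":  [40, 42, 38, 45, 41, 39, 43, 40, 44, 37, 42, 41, 38, 40],
    "post": [55, 58, 50, 60, 54, 52, 57, 42, 45, 39, 44, 43, 40, 41],
})

model = smf.ols("post ~ group + pre", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))   # F and p for the group effect
```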


Subject(s)
Cerebral Palsy , Humans , Child , Pilot Projects , Single-Blind Method , Speech , Communication , Speech Intelligibility
18.
Res Dev Disabil ; 140: 104575, 2023 Sep.
Article in English | MEDLINE | ID: mdl-37515985

ABSTRACT

BACKGROUND: According to temporal sampling theory, deficits in rhythm processing contribute to both the language and the music difficulties of children with developmental language disorder (DLD). Evidence for this proposition derives mainly from studies conducted in stress-timed languages, but the results may differ in languages with different rhythm features (e.g., syllable-timed languages). AIMS: This research aimed to study a previously unexamined topic, namely the music skills of children with DLD who speak Spanish (a syllable-timed language), and to analyze the possible relationships between these children's language and music skills. METHODS AND PROCEDURES: Eighteen Spanish-speaking children with DLD and 19 typically developing peers matched for chronological age completed a set of language tests. Their rhythm discrimination, melody discrimination, and music memory skills were also assessed. OUTCOMES AND RESULTS: Children with DLD performed significantly worse than their typically developing peers on all three music subtests. Music and language skills were significantly related in both groups. CONCLUSIONS AND IMPLICATIONS: The results suggest that similar music difficulties may be found in children with DLD whether they speak stress-timed or syllable-timed languages. The relationships found between music and language skills may pave the way for the design of language intervention programs based on music stimuli.


Subject(s)
Language Development Disorders , Music , Humans , Child , Language , Cognition , Language Tests
19.
Psychophysiology ; 60(11): e14362, 2023 Nov.
Article in English | MEDLINE | ID: mdl-37350379

ABSTRACT

The most prominent acoustic features in speech are intensity modulations, represented by the amplitude envelope of speech. Synchronization of neural activity with these modulations supports speech comprehension. As the acoustic modulation of speech is related to the production of syllables, investigations of neural speech tracking commonly do not distinguish between lower-level acoustic (envelope modulation) and higher-level linguistic (syllable rate) information. Here we manipulated speech intelligibility using noise-vocoded speech and investigated the spectral dynamics of neural speech processing, across two studies at cortical and subcortical levels of the auditory hierarchy, using magnetoencephalography. Overall, cortical regions mostly track the syllable rate, whereas subcortical regions track the acoustic envelope. Furthermore, with less intelligible speech, tracking of the modulation rate becomes more dominant. Our study highlights the importance of distinguishing between envelope modulation and syllable rate and provides novel possibilities to better understand differences between auditory processing and speech/language processing disorders.
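The amplitude envelope referred to here is commonly extracted as the magnitude of the analytic signal, low-pass filtered to the modulation range of interest, and its relationship to neural data can be summarized with spectral coherence. A minimal sketch on synthetic signals, with assumed parameters:

```python
# Sketch of extracting the speech amplitude envelope (magnitude of the
# analytic signal, low-pass filtered) and computing its spectral coherence
# with a neural channel. Signals and parameters are illustrative.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert, coherence

fs = 200                              # Hz (assumed common sampling rate)
t = np.arange(0, 60, 1 / fs)
speech = np.random.randn(t.size)      # stand-in for a speech waveform
meg = np.random.randn(t.size)         # stand-in for a MEG channel

envelope = np.abs(hilbert(speech))    # amplitude envelope
b, a = butter(4, 10 / (fs / 2))       # keep modulations below ~10 Hz
envelope = filtfilt(b, a, envelope)

f, coh = coherence(meg, envelope, fs=fs, nperseg=2 * fs)
syllable_band = (f >= 4) & (f <= 8)   # theta / syllable-rate range
print(f"mean 4-8 Hz coherence: {coh[syllable_band].mean():.3f}")
```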


Subject(s)
Speech Perception , Speech , Humans , Magnetoencephalography , Noise , Cognition , Acoustic Stimulation , Speech Intelligibility
20.
Appl Neuropsychol Adult ; : 1-11, 2023 May 03.
Article in English | MEDLINE | ID: mdl-37134206

ABSTRACT

BACKGROUND: Patients with extensive left hemisphere damage frequently have ideational apraxia (IA) and transcortical sensory aphasia (TSA). Difficulty with action coordination, phonological processing, and complex motor planning may not be indicative of higher-order motor programming or higher-order complex formation. We report on the effects of IA and TSA on the visual and motor skills of stroke patients. PURPOSE: The study addresses whether IA and TSA in bilingual individuals result from an error of motor function alone or from a combined motor and cognitive dysfunction. METHOD: Twelve bilingual patients (seven males and five females) diagnosed with IA and TSA were divided into two groups of six patients, and 12 healthy bilingual controls were evaluated for comparison with both groups. The Bilingual Aphasia Test (BAT) and appropriate behavioral evaluations were used to assess motor skills, including coordination, visual-motor testing, and phonological processing. RESULTS: Pointing performance in both the L1 and L2 languages differed consistently and significantly (p < 0.001) between healthy individuals and the IA and TSA groups. Command skills in the L1 and L2 languages were significantly higher in healthy individuals than in the IA and TSA groups (p < 0.001). Further, orthographic skills were significantly reduced in the IA and TSA patients of both groups relative to controls (p < 0.01). Visual skills in the L1 language improved significantly (p < 0.05) in IA and TSA patients compared with healthy controls after 2 months. Unlike orthographic skills, which improved in IA and TSA patients, the two languages of the bilingual patients did not improve simultaneously. CONCLUSION: Dyspraxia affects both motor and visual cognitive functions, and patients who have it often show reduced motor skills. The current dataset shows that accurate visual cognition requires both cognitive-linguistic and sensory-motor processes. Motor issues should be highlighted, and skills and functionality should be reinforced, with treatment of IA and TSA tailored to age and education. This may be a useful indicator for treating semantic disorders.
