Results 1 - 5 of 5
1.
Sci Rep; 14(1): 15787, 2024 Jul 09.
Article in English | MEDLINE | ID: mdl-38982177

ABSTRACT

Diagnostic tests for Parkinsonism based on speech samples have shown promising results. Although abnormal auditory feedback integration during speech production and impaired rhythmic organization of speech are known in Parkinsonism, these aspects have not been incorporated into diagnostic tests. This study aimed to identify Parkinsonism using a novel speech behavioral test that involved rhythmically repeating syllables under different auditory feedback conditions. The study included 30 individuals with Parkinson's disease (PD) and 30 healthy subjects. Participants were asked to rhythmically repeat the PA-TA-KA syllable sequence, both whispering and speaking aloud, under various listening conditions. The results showed that, compared with controls, individuals with PD had difficulties whispering and articulating under altered auditory feedback conditions, exhibited delayed speech onset, and demonstrated inconsistent rhythmic structure across trials. These parameters were then fed into a supervised machine-learning algorithm to differentiate between the two groups. The algorithm achieved an accuracy of 85.4%, a sensitivity of 86.5%, and a specificity of 84.3%. This pilot study highlights the potential of the proposed behavioral paradigm as an objective and accessible (in both cost and time) test for identifying individuals with Parkinson's disease.
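To make the classification step concrete, the following is a minimal, hypothetical sketch of how speech-task features could be fed to a supervised classifier and evaluated with accuracy, sensitivity, and specificity. The feature names, the random-forest choice, and the placeholder data are illustrative assumptions, not the study's actual pipeline.

```python
# Hypothetical sketch: cross-validated classification of PD vs. controls
# from speech-task features (feature names are illustrative assumptions).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import StratifiedKFold, cross_val_predict
from sklearn.metrics import confusion_matrix, accuracy_score

rng = np.random.default_rng(0)
# Placeholder data: 60 participants x 3 features
# (speech onset delay, rhythm variability, articulation error under altered feedback).
X = rng.normal(size=(60, 3))
y = np.array([1] * 30 + [0] * 30)  # 1 = PD, 0 = healthy control

clf = RandomForestClassifier(n_estimators=200, random_state=0)
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
y_pred = cross_val_predict(clf, X, y, cv=cv)

tn, fp, fn, tp = confusion_matrix(y, y_pred).ravel()
print("accuracy   ", accuracy_score(y, y_pred))
print("sensitivity", tp / (tp + fn))  # true-positive rate (PD detected)
print("specificity", tn / (tn + fp))  # true-negative rate (controls)
```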


Subjects
Sensory Feedback, Parkinson Disease, Speech, Humans, Female, Male, Aged, Parkinson Disease/physiopathology, Parkinson Disease/diagnosis, Middle Aged, Speech/physiology, Sensory Feedback/physiology, Pilot Projects, Parkinsonian Disorders/physiopathology, Case-Control Studies
2.
medRxiv; 2024 Jun 01.
Article in English | MEDLINE | ID: mdl-38853969

ABSTRACT

Amyotrophic lateral sclerosis (ALS) is a neurodegenerative motor neuron disease that causes progressive muscle weakness. Progressive bulbar dysfunction causes dysarthria and thus social isolation, reducing quality of life. The Everything ALS Speech Study obtained longitudinal clinical information and speech recordings from 292 participants. In a subset of 120 participants, we measured speaking rate (SR) and listener effort (LE), a measure of dysarthria severity rated by speech pathologists from the recordings. LE intra- and inter-rater reliability was very high (ICC 0.88 to 0.92). LE correlated with other measures of dysarthria at baseline. LE changed over time in participants with ALS (slope 0.77 pts/month; p<0.001) but not in controls (slope 0.005 pts/month; p=0.807). The slope of LE progression was similar in all participants with ALS who had bulbar dysfunction at baseline, regardless of ALS site of onset. LE could serve as a remotely collected, clinically meaningful outcome assessment for ALS clinical trials.
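As an illustration of the kind of slope reported above (pts/month), here is a small sketch that fits a per-participant linear trend of LE over time. The column names and values are invented placeholders, not data from the Everything ALS Speech Study, and the per-subject least-squares fit is an assumption rather than the study's actual longitudinal model.

```python
# Illustrative sketch: per-participant listener-effort (LE) slope in pts/month.
import numpy as np
import pandas as pd

# Placeholder longitudinal data; columns and values are assumptions.
df = pd.DataFrame({
    "participant": ["p1"] * 4 + ["p2"] * 4,
    "group": ["ALS"] * 4 + ["control"] * 4,
    "months": [0, 3, 6, 9] * 2,
    "LE": [20, 23, 25, 28, 15, 15, 16, 15],
})

for (group, pid), g in df.groupby(["group", "participant"]):
    slope = np.polyfit(g["months"], g["LE"], 1)[0]  # least-squares slope
    print(f"{group} {pid}: LE slope = {slope:.2f} pts/month")
```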

3.
Sci Rep; 10(1): 3828, 2020 Mar 02.
Article in English | MEDLINE | ID: mdl-32123186

ABSTRACT

Silent reading is a cognitive operation that produces verbal content with no vocal output. One relevant question is the extent to which this verbal content is processed as overt speech in the brain. To address this, we acquired sound, eye trajectories, and lip dynamics during the reading of consonant-consonant-vowel (CCV) combinations that are infrequent in the language. We found that the duration of the first fixations on the CCVs during silent reading correlates with the duration of the transitions between consonants when the CCVs are actually uttered. With the aid of an articulatory model of the vocal system, we show that these transitions measure the articulatory effort required to produce the CCVs. This means that first fixations during silent reading are lengthened when the CCVs require a greater laryngeal and/or articulatory effort to be pronounced. Our results support the idea that a speech motor code is used for the recognition of infrequent text strings during silent reading.
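A minimal sketch of the central correlation described here, relating first-fixation durations during silent reading to consonant-transition durations in overt productions of the same strings. The numbers are invented placeholders and the choice of a Pearson correlation is an assumption, not necessarily the paper's exact statistic.

```python
# Hedged sketch: fixation duration (silent reading) vs. transition
# duration (overt speech) for the same CCV strings; placeholder values.
from scipy.stats import pearsonr

fixation_ms   = [210, 245, 190, 260, 230, 275, 205, 250]  # silent reading
transition_ms = [55, 70, 48, 78, 62, 85, 52, 73]          # overt production

r, p = pearsonr(fixation_ms, transition_ms)
print(f"Pearson r = {r:.2f}, p = {p:.3f}")
```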


Subjects
Eye Movements, Reading, Adult, Brain/physiology, Female, Humans, Male, Photic Stimulation, Young Adult
4.
Phys Rev E; 97(5-1): 052406, 2018 May.
Article in English | MEDLINE | ID: mdl-29906900

ABSTRACT

Speech requires programming the sequence of vocal gestures that produce the sounds of words. Here we explored the timing of this program by asking our participants to pronounce, as quickly as possible, a sequence of consonant-consonant-vowel (CCV) structures appearing on screen. We measured the delay between visual presentation and voice onset. In the case of plosive consonants, which are produced by sharp and well-defined movements of the vocal tract, we found that delays are positively correlated with the duration of the transition between consonants. We then used a battery of statistical tests and mathematical vocal models to show that the delays reflect the motor planning of the CCVs and that the transitions are proxy indicators of the vocal effort needed to produce them. These results support the idea that the effort required to produce the sequence of movements of a vocal gesture modulates the onset of the motor plan.
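Since the abstract mentions a battery of statistical tests, here is one hedged example of how the delay-transition relationship could be probed with a simple permutation test. The values and the specific test are illustrative assumptions, not the paper's actual analysis.

```python
# Illustrative permutation test: is the correlation between voice-onset
# delay and consonant-transition duration larger than expected by chance?
import numpy as np

rng = np.random.default_rng(1)
delay_ms      = np.array([480, 510, 455, 530, 495, 560, 470, 520])  # placeholder
transition_ms = np.array([55, 70, 48, 78, 62, 85, 52, 73])          # placeholder

observed = np.corrcoef(delay_ms, transition_ms)[0, 1]
null = [np.corrcoef(delay_ms, rng.permutation(transition_ms))[0, 1]
        for _ in range(10_000)]
p = np.mean(np.abs(null) >= abs(observed))
print(f"observed r = {observed:.2f}, permutation p = {p:.4f}")
```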


Subjects
Movement, Speech/physiology, Adult, Female, Humans, Male, Young Adult
5.
PLoS One; 13(3): e0193466, 2018.
Article in English | MEDLINE | ID: mdl-29561853

ABSTRACT

Sound-symbolic word classes are found in different cultures and languages worldwide. These words are continuously produced to encode complex information about events. Here we explore the capacity of creative language to convey complex multisensory information in a controlled experiment in which our participants improvised onomatopoeias from noisy moving objects presented in audio, visual, and audiovisual formats. We found that consonants communicate movement types (slide, hit, or ring) mainly through the manner of articulation in the vocal tract. Vowels communicate shapes in visual stimuli (spiky or rounded) and sound frequencies in auditory stimuli through the configuration of the lips and tongue. A machine-learning model was trained to classify movement types and used to validate generalizations of our results across formats. When we applied the classifier to a list of cross-linguistic onomatopoeias, simple actions were correctly classified, while different aspects were selected to build onomatopoeias of complex actions. These results show how the different aspects of complex sensory information are coded and how they interact in the creation of novel onomatopoeias.
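As a rough illustration of the classification step, the following hypothetical sketch trains a multiclass classifier to separate the three movement types from simple phonetic descriptors of onomatopoeias. The features, model, and data are assumptions for demonstration, not the study's actual feature set or model.

```python
# Hypothetical sketch: classifying movement types (slide / hit / ring)
# from illustrative phonetic descriptors of improvised onomatopoeias.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)
# Placeholder features, e.g. fraction of fricatives, fraction of plosives,
# mean vowel height; values here are random stand-ins.
X = rng.random((90, 3))
y = np.repeat(["slide", "hit", "ring"], 30)

clf = LogisticRegression(max_iter=1000)
scores = cross_val_score(clf, X, y, cv=5)
print("mean cross-validated accuracy:", round(scores.mean(), 3))
```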


Subjects
Auditory Perception/physiology, Phonetics, Physics, Sound, Visual Perception/physiology, Voice/physiology, Adult, Female, Humans, Language, Male, Middle Aged, Models, Theoretical, Speech Perception/physiology, Young Adult