Results 1 - 2 of 2
1.
J Speech Lang Hear Res; 67(7): 2053-2076, 2024 Jul 09.
Article in English | MEDLINE | ID: mdl-38924389

ABSTRACT

PURPOSE: This study explores speech motor planning in adults who stutter (AWS) and adults who do not stutter (ANS) by applying machine learning algorithms to electroencephalographic (EEG) signals. In this study, we developed a technique to holistically examine neural activity differences in speaking and silent reading conditions across the entire cortical surface. This approach allows us to test the hypothesis that AWS will exhibit lower separability of the speech motor planning condition.

METHOD: We used the silent reading condition as a control condition to isolate speech motor planning activity. We classified EEG signals from AWS and ANS individuals into speaking and silent reading categories using kernel support vector machines. We used relative complexities of the learned classifiers to compare speech motor planning discernibility for both classes.

RESULTS: AWS group classifiers require a more complex decision boundary to separate speech motor planning and silent reading classes.

CONCLUSIONS: These findings indicate that the EEG signals associated with speech motor planning are less discernible in AWS, which may result from altered neuronal dynamics in AWS. Our results support the hypothesis that AWS exhibit lower inherent separability of the silent reading and speech motor planning conditions. Further investigation may identify and compare the features leveraged for speech motor classification in AWS and ANS. These observations may have clinical value for developing novel speech therapies or assistive devices for AWS.


Subjects
Electroencephalography , Speech , Stuttering , Humans , Stuttering/physiopathology , Stuttering/classification , Electroencephalography/methods , Adult , Speech/physiology , Male , Female , Young Adult , Reading , Support Vector Machine , Machine Learning
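The method in this entry (kernel SVMs separating speaking from silent-reading EEG trials, with decision-boundary complexity as the comparison metric) can be sketched as follows. This is a minimal illustration, not the study's pipeline: the synthetic 16-dimensional features, the class separations, and the use of the support-vector count as a complexity proxy are all assumptions made for the example.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

def make_trials(n, separation):
    # Hypothetical stand-in for per-trial EEG feature vectors; the study's
    # real features come from cortical EEG recordings, not Gaussian noise.
    silent = rng.normal(0.0, 1.0, size=(n, 16))        # silent-reading trials
    speak = rng.normal(separation, 1.0, size=(n, 16))  # speaking trials
    X = np.vstack([silent, speak])
    y = np.array([0] * n + [1] * n)
    return X, y

def boundary_complexity(X, y):
    # Fit a kernel SVM and use the total number of support vectors as a
    # rough proxy for how complex the learned decision boundary is.
    clf = SVC(kernel="rbf", C=1.0, gamma="scale").fit(X, y)
    return int(clf.n_support_.sum())

# Well-separated conditions (ANS-like) vs. less separable ones (AWS-like):
# the harder classification problem needs more support vectors.
ans_complexity = boundary_complexity(*make_trials(100, separation=1.5))
aws_complexity = boundary_complexity(*make_trials(100, separation=0.5))
print(ans_complexity, aws_complexity)
```

The directional result (more support vectors when the two conditions overlap more) mirrors the paper's finding that AWS classifiers require a more complex decision boundary.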
2.
PLoS One; 18(2): e0281306, 2023.
Article in English | MEDLINE | ID: mdl-36800358

ABSTRACT

The DIVA model is a computational model of speech motor control that combines a simulation of the brain regions responsible for speech production with a model of the human vocal tract. The model is currently implemented in Matlab Simulink; however, this is less than ideal as most of the development in speech technology research is done in Python. This means there is a wealth of machine learning tools which are freely available in the Python ecosystem that cannot be easily integrated with DIVA. We present TorchDIVA, a full rebuild of DIVA in Python using PyTorch tensors. DIVA source code was directly translated from Matlab to Python, and built-in Simulink signal blocks were implemented from scratch. After implementation, the accuracy of each module was evaluated via systematic block-by-block validation. The TorchDIVA model is shown to produce outputs that closely match those of the original DIVA model, with a negligible difference between the two. We additionally present an example of the extensibility of TorchDIVA as a research platform. Speech quality enhancement in TorchDIVA is achieved through an integration with an existing PyTorch generative vocoder called DiffWave. A modified DiffWave mel-spectrum upsampler was trained on human speech waveforms and conditioned on the TorchDIVA speech production. The results indicate improved speech quality metrics in the DiffWave-enhanced output as compared to the baseline. This enhancement would have been difficult or impossible to accomplish in the original Matlab implementation. This proof-of-concept demonstrates the value TorchDIVA can bring to the research community. Researchers can download the new implementation at: https://github.com/skinahan/DIVA_PyTorch.


Subjects
Ecosystem , Speech , Humans , Software , Computer Simulation , Machine Learning
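The block-by-block validation described for TorchDIVA can be illustrated with a minimal sketch: a translated signal block is run in PyTorch and checked against a reference implementation on the same input. The block used here (a first-order low-pass filter) is a hypothetical stand-in for one translated Simulink block, not an actual DIVA component; the reference is computed in NumPy, standing in for output exported from the original Matlab model.

```python
import numpy as np
import torch

def ref_lowpass(x, alpha=0.9):
    # NumPy reference, standing in for the original block's exported output.
    out = np.zeros_like(x)
    state = 0.0
    for i, sample in enumerate(x):
        state = alpha * state + (1 - alpha) * sample
        out[i] = state
    return out

def torch_lowpass(x: torch.Tensor, alpha: float = 0.9) -> torch.Tensor:
    # PyTorch translation of the same block, operating on tensors.
    out = torch.empty_like(x)
    state = torch.zeros(())
    for i in range(x.shape[0]):
        state = alpha * state + (1 - alpha) * x[i]
        out[i] = state
    return out

# Validate the translated block against the reference on a shared input.
x_np = np.random.default_rng(0).normal(size=256).astype(np.float32)
x_t = torch.from_numpy(x_np)
ok = torch.allclose(torch_lowpass(x_t),
                    torch.from_numpy(ref_lowpass(x_np)), atol=1e-4)
print(ok)
```

Repeating this check for every translated module is the systematic block-by-block validation the abstract describes, with small tolerances absorbing floating-point differences between the two runtimes.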