1.
Comput Biol Med ; 159: 106909, 2023 Jun.
Article in English | MEDLINE | ID: mdl-37071937

ABSTRACT

Speech imagery has been successfully employed in developing Brain-Computer Interfaces because it is a novel mental strategy that generates brain activity more intuitively than evoked potentials or motor imagery. There are many methods for analyzing speech imagery signals, but those based on deep neural networks achieve the best results. However, more research is needed to understand the properties and features that describe imagined phonemes and words. In this paper, we analyze the statistical properties of speech imagery EEG signals from the KaraOne dataset to design a method that classifies imagined phonemes and words. Based on this analysis, we propose a Capsule Neural Network that categorizes speech imagery patterns into bilabial, nasal, consonant-vowel, and the vowels /iy/ and /uw/. The method is called Capsules for Speech Imagery Analysis (CapsK-SI). The input to CapsK-SI is a set of statistical features of EEG speech imagery signals. The architecture of the Capsule Neural Network is composed of a convolution layer, a primary capsule layer, and a class capsule layer. The average accuracy reached is 90.88% ± 7 for bilabial, 90.15% ± 8 for nasal, 94.02% ± 6 for consonant-vowel, 89.70% ± 8 for word-phoneme, 94.33% ± for /iy/ vowel, and 94.21% ± 3 for /uw/ vowel detection. Finally, using the activity vectors of the CapsK-SI capsules, we generated brain maps representing brain activity during the production of bilabial, nasal, and consonant-vowel signals.
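The abstract names the three building blocks of CapsK-SI (a convolution layer, a primary capsule layer, and a class capsule layer) but gives no dimensions or training details. The sketch below shows one way such a stack can be wired in PyTorch, using the standard squash nonlinearity and dynamic routing by agreement from the original capsule-network literature; the input length, capsule counts, and binary detection setup (e.g. bilabial vs. non-bilabial) are illustrative assumptions, not the paper's actual configuration.

```python
# Minimal capsule-network sketch: conv -> primary capsules -> class capsules.
# All hyperparameters below are assumptions for illustration only.
import torch
import torch.nn as nn
import torch.nn.functional as F

def squash(s, dim=-1):
    # Squash nonlinearity: preserves direction, maps vector length into [0, 1).
    sq = (s ** 2).sum(dim=dim, keepdim=True)
    return (sq / (1.0 + sq)) * s / torch.sqrt(sq + 1e-8)

class CapsKSketch(nn.Module):
    def __init__(self, in_len=128, n_classes=2,
                 prim_channels=8, prim_dim=8, class_dim=16, routing_iters=3):
        super().__init__()
        self.conv = nn.Conv1d(1, 64, kernel_size=9)              # convolution layer
        self.primary = nn.Conv1d(64, prim_channels * prim_dim,   # primary capsules
                                 kernel_size=9, stride=2)
        prim_len = ((in_len - 8) - 9) // 2 + 1                   # output length after both convs
        self.n_primary = prim_channels * prim_len
        self.prim_dim = prim_dim
        self.routing_iters = routing_iters
        # One learned transform per (primary capsule, class capsule) pair.
        self.W = nn.Parameter(0.01 * torch.randn(1, self.n_primary, n_classes,
                                                 class_dim, prim_dim))

    def forward(self, x):                      # x: (batch, 1, in_len) feature vectors
        u = self.primary(F.relu(self.conv(x)))
        u = squash(u.view(x.size(0), self.n_primary, self.prim_dim))
        u_hat = (self.W @ u[:, :, None, :, None]).squeeze(-1)    # predictions per class
        b = torch.zeros(x.size(0), self.n_primary, u_hat.size(2), 1, device=x.device)
        for _ in range(self.routing_iters):    # dynamic routing by agreement
            c = b.softmax(dim=2)               # coupling coefficients
            v = squash((c * u_hat).sum(dim=1)) # class-capsule activity vectors
            b = b + (u_hat * v[:, None]).sum(-1, keepdim=True)
        return v.norm(dim=-1)                  # capsule length ~ class presence

model = CapsKSketch(in_len=128, n_classes=2)   # hypothetical bilabial-vs-rest detector
scores = model(torch.randn(4, 1, 128))         # -> (4, 2) class-capsule lengths
```

The class-capsule activity vectors (`v` above) are the quantities the abstract says were used to generate the brain maps; here they are only exposed implicitly through their norms.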


Subjects
Brain-Computer Interfaces; Speech; Speech/physiology; Capsules; Electroencephalography/methods; Neural Networks, Computer; Brain/physiology; Imagination/physiology; Algorithms
2.
IEEE Trans Serv Comput ; 15(3): 1220-1232, 2022 May.
Article in English | MEDLINE | ID: mdl-35936760

ABSTRACT

In an attempt to reduce the infection rate of the COrona VIrus Disease-19 (Covid-19), countries around the world have echoed the need for an economical, accessible, point-of-need diagnostic test to identify Covid-19 carriers, so that individuals who test positive, rather than the entire community, can be advised to self-isolate. The availability of a diagnostic test with a quick turnaround time would essentially mean that life in general could return to normality. In this regard, studies concurrent in time with ours have investigated different respiratory sounds, including cough, to recognize potential Covid-19 carriers. However, these studies lack clinical control and rely on Internet users confirming their test results in a web questionnaire (crowdsourcing), which renders their analysis inadequate. We evaluate the detection performance of a primary screening tool for Covid-19 based solely on the cough sound, using 8,380 clinically validated samples with laboratory molecular tests (2,339 Covid-19 positive and 6,041 Covid-19 negative) under quantitative RT-PCR (qRT-PCR) from certified laboratories. All collected samples were clinically labelled as Covid-19 positive or negative according to the test results, in addition to disease severity based on the qRT-PCR threshold cycle (Ct) and the patients' lymphocyte counts. Our proposed generic method is an algorithm based on Empirical Mode Decomposition (EMD) for cough sound detection, with subsequent classification based on a tensor of audio sonographs and a deep artificial neural network classifier with convolutional layers, called 'DeepCough'. Two versions of DeepCough, differing in the number of tensor dimensions, DeepCough2D and DeepCough3D, have been investigated. These methods have been deployed in a multi-platform prototype web app, 'CoughDetect'. Covid-19 recognition achieved a promising AUC (Area Under the Curve) of 98.80% ± 0.83%, a sensitivity of 96.43% ± 1.85%, and a specificity of 96.20% ± 1.74%, with an average AUC of 81.08% ± 5.05% for the recognition of the three severity levels. Our proposed web tool, as a point-of-need primary diagnostic test for Covid-19, facilitates the rapid detection of the infection. We believe it has the potential to significantly hamper the Covid-19 pandemic across the world.
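The abstract describes a two-stage pipeline: EMD-based detection of cough events, followed by classification of audio sonograph tensors with a convolutional network. Below is a minimal Python sketch of that shape using the PyEMD and librosa packages; the IMF-energy thresholding rule, the log-mel front end, and the small CNN are illustrative assumptions and do not reproduce the actual DeepCough2D/DeepCough3D architectures, which the abstract does not specify.

```python
# Hedged sketch: EMD-based cough detection, then a 2-D CNN over sonographs.
import numpy as np
import torch
import torch.nn as nn
from PyEMD import EMD          # pip install EMD-signal
import librosa

def detect_cough_segments(y, sr, frame_s=0.5, energy_ratio=0.2):
    """Flag high-energy frames of the first IMFs as candidate cough events."""
    imfs = EMD()(y)                              # intrinsic mode functions
    band = imfs[:3].sum(axis=0)                  # high-frequency content (assumption)
    hop = int(frame_s * sr)
    energy = np.array([np.sum(band[i:i + hop] ** 2)
                       for i in range(0, len(band) - hop, hop)])
    mask = energy > energy_ratio * energy.max()  # simple threshold rule (assumption)
    return [(i * hop, (i + 1) * hop) for i, m in enumerate(mask) if m]

def sonograph_tensor(y, sr, n_mels=64):
    """Log-mel sonograph of one segment, shaped (1, n_mels, time) for the CNN."""
    s = librosa.feature.melspectrogram(y=y, sr=sr, n_mels=n_mels)
    return torch.from_numpy(librosa.power_to_db(s)).float().unsqueeze(0)

class Cough2DSketch(nn.Module):
    """Minimal 2-D convolutional classifier (Covid-19 positive vs. negative)."""
    def __init__(self, n_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(4))
        self.head = nn.Linear(32 * 4 * 4, n_classes)

    def forward(self, x):                        # x: (batch, 1, n_mels, time)
        return self.head(self.features(x).flatten(1))

if __name__ == "__main__":
    sr = 16000
    y = np.random.randn(sr * 5).astype(np.float32)       # stand-in for a recording
    model = Cough2DSketch()
    for start, end in detect_cough_segments(y, sr):
        x = sonograph_tensor(y[start:end], sr).unsqueeze(0)   # (1, 1, n_mels, time)
        print(model(x).softmax(dim=-1))                       # positive/negative scores
```

A 3-D variant in the spirit of DeepCough3D would stack several sonograph channels (for example, multiple time-frequency resolutions) into the tensor and use `nn.Conv3d`; the abstract only states that the two versions differ in tensor dimensionality.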
