1.
Annu Int Conf IEEE Eng Med Biol Soc. 2021: 1631-1635, 2021 Nov.
Article in English | MEDLINE | ID: mdl-34891598

ABSTRACT

While the psychological Stroop color test has frequently been used to analyze response delays in temporal cognitive processing, minimal research has examined the differences in incorrect/correct verbal test response patterns exhibited by healthy control and clinically depressed populations. Further, the development of speech error features with an emphasis on sequential Stroop test responses has been unexplored for automatic depression classification. In this study, which uses speech recorded via a smart device, an analysis of n-gram error sequence distributions shows that participants with clinical depression produce more Stroop color test errors, especially sequential errors, than healthy controls. Using n-gram error features derived from multisession manual transcripts, experimentation shows that trigram error features generate up to 95% depression classification accuracy, whereas an acoustic feature baseline achieves only up to 75%. Moreover, n-gram error features derived from ASR transcripts produced up to 90% depression classification accuracy.


Subject(s)
Depressive Disorder, Major; Speech; Depression/diagnosis; Humans; Stroop Test
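
The following is a minimal, self-contained sketch of the kind of n-gram error-sequence featurization this abstract describes, assuming each participant's Stroop responses have been reduced to a binary correct/error sequence. The data, function names, and choice of classifier are illustrative placeholders, not the authors' implementation.

# Hypothetical sketch of n-gram error-sequence features for depression
# classification. Assumes each session transcript has been reduced to a
# per-trial correct(0)/error(1) sequence; names, toy data, and the SVM
# classifier are illustrative, not the paper's code.
from collections import Counter
from itertools import product

import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def ngram_error_features(trial_outcomes, n=3):
    """Relative frequencies of all length-n patterns over a 0/1 response sequence."""
    vocab = ["".join(p) for p in product("01", repeat=n)]
    grams = Counter(
        "".join(str(x) for x in trial_outcomes[i:i + n])
        for i in range(len(trial_outcomes) - n + 1)
    )
    total = max(sum(grams.values()), 1)
    return np.array([grams[v] / total for v in vocab])

# Synthetic example only: each row is one participant's Stroop response
# sequence, label 1 = clinically depressed group, 0 = healthy control.
sequences = [
    [0, 0, 1, 1, 0, 1, 1, 1, 0, 0],   # more sequential (back-to-back) errors
    [0, 0, 0, 1, 0, 0, 0, 0, 0, 0],
    [1, 1, 1, 0, 1, 1, 0, 1, 1, 0],
    [0, 0, 0, 0, 0, 1, 0, 0, 0, 0],
]
labels = [1, 0, 1, 0]

X = np.vstack([ngram_error_features(s, n=3) for s in sequences])
scores = cross_val_score(SVC(kernel="linear"), X, labels, cv=2)
print("toy trigram-feature accuracy:", scores.mean())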
2.
J Healthc Inform Res. 5(2): 201-217, 2021.
Article in English | MEDLINE | ID: mdl-33723525

ABSTRACT

Currently, there is an increasing global need for COVID-19 screening to help reduce the rate of infection and the at-risk patient workload at hospitals. Smartphone-based screening for COVID-19 and other respiratory illnesses offers excellent potential due to its rapid-rollout remote platform, user convenience, symptom tracking, comparatively low cost, and prompt result processing timeframe. In particular, speech-based analysis embedded in smartphone app technology can measure physiological effects relevant to COVID-19 screening that are not yet digitally available at scale in the healthcare field. Using a selection of the Sonde Health COVID-19 2020 dataset, this study examines the speech of COVID-19-negative participants exhibiting mild and moderate COVID-19-like symptoms as well as that of COVID-19-positive participants with mild to moderate symptoms. Our study investigates the classification potential of acoustic features (e.g., glottal, prosodic, spectral) from short-duration speech segments (e.g., held vowel, pataka phrase, nasal phrase) for automatic COVID-19 classification using machine learning. Experimental results indicate that certain feature-task combinations produce COVID-19 classification accuracy of up to 80%, compared with 68% for the all-acoustic feature baseline. Further, with brute-forced n-best feature selection and speech task fusion, automatic COVID-19 classification accuracy of 82-86% was achieved, depending on whether the COVID-19-negative participants had mild or moderate COVID-19-like symptoms.
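
Below is a hedged sketch of the brute-forced n-best feature selection idea mentioned in this abstract, using a synthetic feature matrix in place of the Sonde Health acoustic features; the feature dimensions, labels, and logistic-regression classifier are assumptions for illustration only.

# Hypothetical illustration of brute-forced n-best acoustic feature selection
# for COVID-19 classification. The feature matrix, labels, and classifier
# here are synthetic placeholders, not the authors' pipeline or dataset.
from itertools import combinations

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(40, 8))          # 40 recordings x 8 acoustic features (synthetic)
y = rng.integers(0, 2, size=40)       # 1 = COVID-19-positive (synthetic labels)

def best_feature_subset(X, y, n_best=3, cv=5):
    """Exhaustively score every feature subset of size n_best and keep the winner."""
    best_score, best_idx = -np.inf, None
    for idx in combinations(range(X.shape[1]), n_best):
        score = cross_val_score(LogisticRegression(max_iter=1000),
                                X[:, idx], y, cv=cv).mean()
        if score > best_score:
            best_score, best_idx = score, idx
    return best_idx, best_score

idx, score = best_feature_subset(X, y, n_best=3)
print("selected feature indices:", idx, "cross-validated accuracy:", round(score, 3))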
