Results 1 - 2 of 2
1.
J Affect Disord; 325: 627-632, 2023 Mar 15.
Article in English | MEDLINE | ID: mdl-36586600

ABSTRACT

BACKGROUND: Variations in speech intonation are known to be associated with changes in mental state over time. Behavioral vocal analysis is an algorithmic method of determining individuals' behavioral and emotional characteristics from their vocal patterns. It can provide biomarkers for psychiatric assessment and monitoring, especially when remote assessment is needed, as during the COVID-19 pandemic. The objective of this study was to design and validate a prototype of automatic speech analysis based on algorithms that classify speech features related to major depressive disorder (MDD), using a remote assessment system that combines a mobile app for speech recording with central cloud processing of prosodic vocal patterns.

METHODS: Machine learning was used to compare the vocal patterns of 40 patients diagnosed with MDD to those of 104 non-clinical participants. The vocal patterns of the 40 patients in the acute phase were also compared to those of 14 of these patients in the remission phase of MDD.

RESULTS: A vocal depression predictive model was successfully generated. The vocal depression scores of MDD patients were significantly higher than those of the non-patient participants (p < 0.0001). The vocal depression scores of MDD patients in the acute phase were significantly higher than their scores in remission (p < 0.02).

LIMITATIONS: The main limitation of this study is its relatively small sample size, since machine-learning validity improves with larger datasets.

CONCLUSIONS: Computerized analysis of prosodic changes may be used to generate biomarkers for the early detection of MDD, remote monitoring, and the evaluation of responses to treatment.


Subject(s)
COVID-19; Depressive Disorder, Major; Humans; Depressive Disorder, Major/diagnosis; Depressive Disorder, Major/epidemiology; Pandemics; Speech; Machine Learning
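
The pipeline described in the abstract above (mobile speech recording, prosodic feature extraction, and a machine-learned classifier that yields a vocal depression score) can be illustrated with a minimal sketch. This is not the authors' implementation: the feature set, the libraries (librosa, scikit-learn), and the logistic-regression classifier are assumptions chosen purely for illustration.

```python
# Minimal sketch (not the published system): extract a few prosodic features
# from a speech recording and fit a classifier whose predicted probability
# serves as a "vocal depression" score. All choices here are illustrative.
import numpy as np
import librosa
from sklearn.linear_model import LogisticRegression

def prosodic_features(wav_path: str) -> np.ndarray:
    """Summarize pitch, loudness, and pausing behavior for one recording."""
    y, sr = librosa.load(wav_path, sr=16000)
    f0, voiced_flag, _ = librosa.pyin(y, fmin=65, fmax=400, sr=sr)
    f0 = f0[~np.isnan(f0)]                      # keep voiced frames only
    rms = librosa.feature.rms(y=y)[0]
    pause_ratio = 1.0 - np.mean(voiced_flag)    # rough proxy for pausing
    return np.array([
        np.mean(f0) if f0.size else 0.0,        # mean pitch
        np.std(f0) if f0.size else 0.0,         # pitch variability (monotony)
        np.mean(rms),                           # overall loudness
        np.std(rms),                            # loudness variability
        pause_ratio,
    ])

# Hypothetical usage: wav_paths and labels (1 = MDD, 0 = non-clinical control)
# would come from the study data, which is not available here.
# X = np.stack([prosodic_features(p) for p in wav_paths])
# model = LogisticRegression().fit(X, labels)
# vocal_depression_score = model.predict_proba(X)[:, 1]
```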
2.
JMIR Res Protoc; 9(5): e13852, 2020 May 14.
Article in English | MEDLINE | ID: mdl-32406862

ABSTRACT

BACKGROUND: The prevalence of mental disorders worldwide is very high. Guideline-oriented care depends on early diagnosis and on regular, valid evaluation of treatment so that clinicians can intervene quickly if a patient's mental health deteriorates. The experience of the physician or therapist matters for effective treatment, both in the initial diagnosis and in the treatment of mental illnesses. Nevertheless, experienced physicians and psychotherapists are not available in sufficient numbers everywhere, especially in rural areas and in less developed countries. Human speech can reveal a speaker's mental state through changes in its noncontent aspects (speech melody, intonation, speech rate, etc). Such changes are noticeable in both the clinic and everyday life, given prior knowledge of the affected person's normal speech patterns and enough time spent listening to the patient. However, this time and experience are often unavailable, leaving opportunities to capture linguistic, noncontent information unused. To improve the care of patients with mental disorders, we have developed a concept for assessing their most important mental parameters through a noncontent analysis of their active speech. Using speech analysis for the assessment and tracking of mental health patients opens up the possibility of remote, automatic, and ongoing evaluation when used with patients' smartphones, in line with the current trend toward digital and mobile health tools.

OBJECTIVE: The primary objective of this study is to evaluate measurements of participants' mental state by comparing the analysis of noncontent speech parameters to the results of several psychological questionnaires (Symptom Checklist-90 [SCL-90], the Patient Health Questionnaire [PHQ], and the Big 5 Test).

METHODS: In this paper, we describe a case-control study (one case group and one control group). Participants will be recruited in an outpatient neuropsychiatric treatment center. Inclusion criteria are a neurological or psychiatric diagnosis made by a specialist, no terminal or life-threatening illness, and fluent use of the German language. Exclusion criteria include psychosis, dementia, speech or language disorders in neurological diseases, a history of addiction, a suicide attempt within the last 12 months, or insufficient language skills. The measuring instrument will be the VoiceSense digital voice analysis tool, which enables the analysis of 200 specific speech parameters, and the assessment of findings using psychometric instruments and questionnaires (SCL-90, PHQ, Big 5 Test).

RESULTS: The study is ongoing; as of September 2019, 254 participants have been enrolled. A total of 161 measurements have been completed at timepoint 1, and 62 participants have completed every psychological and speech analysis measurement.

CONCLUSIONS: The tone and modulation of speech appear to be as important as, if not more important than, the content, and should not be underestimated. This is particularly evident in the interpretation of the psychological findings acquired thus far. Therefore, applying a software analysis tool could increase the accuracy of clinical assessments and improve patient care.

TRIAL REGISTRATION: ClinicalTrials.gov NCT03700008; https://clinicaltrials.gov/ct2/show/NCT03700008.

INTERNATIONAL REGISTERED REPORT IDENTIFIER (IRRID): PRR1-10.2196/13852.
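
The planned comparison of noncontent speech parameters with questionnaire results can be sketched as follows. The VoiceSense tool and its 200 parameters are proprietary, so the sketch assumes generic, hypothetical speech-parameter columns and correlates each against a PHQ total score; the column layout and the choice of Spearman's rank correlation are illustrative assumptions, not the study's actual analysis plan.

```python
# Illustrative sketch only: correlate hypothetical speech parameters with a
# questionnaire total (e.g., PHQ), mirroring the kind of comparison the
# protocol describes. Data frames and column names are placeholders.
import pandas as pd
from scipy.stats import spearmanr

def correlate_speech_with_questionnaire(speech_df: pd.DataFrame,
                                        phq_total: pd.Series) -> pd.DataFrame:
    """Rank-correlate each speech parameter with the questionnaire total."""
    rows = []
    for col in speech_df.columns:
        rho, p = spearmanr(speech_df[col], phq_total, nan_policy="omit")
        rows.append({"parameter": col, "spearman_rho": rho, "p_value": p})
    # Sort by absolute correlation to surface the most informative parameters
    return (pd.DataFrame(rows)
              .assign(abs_rho=lambda d: d["spearman_rho"].abs())
              .sort_values("abs_rho", ascending=False)
              .drop(columns="abs_rho"))

# Hypothetical usage: speech_df has one row per participant and one column per
# extracted speech parameter; phq_total holds the matching questionnaire scores.
# report = correlate_speech_with_questionnaire(speech_df, phq_total)
```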
