Results 1 - 4 of 4
1.
Stud Health Technol Inform; 316: 924-928, 2024 Aug 22.
Article in English | MEDLINE | ID: mdl-39176943

ABSTRACT

In recent years, artificial intelligence and machine learning (ML) models have advanced significantly, offering transformative solutions across diverse sectors. Emotion recognition in speech has particularly benefited from ML techniques, which have substantially improved its accuracy and applicability. This article proposes a method for emotion detection in Romanian speech by combining two distinct approaches: semantic analysis using a GPT Transformer and acoustic analysis using openSMILE. The results showed an accuracy of 74% and a precision of almost 82%. Several system limitations were observed due to the limited, low-quality dataset. However, the work also opened a new horizon in our research: analyzing emotions to identify mental health disorders.


Subjects
Emotions, Speech Recognition Software, Humans, Romania, Machine Learning, Semantics, Artificial Intelligence
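The paper's two-stream design, fusing a text-derived semantic representation with acoustic descriptors before classification, could be sketched as below. This is a minimal illustration using synthetic arrays as stand-ins for GPT sentence embeddings and openSMILE functionals; the dimensions, the fused logistic-regression classifier, and all names are assumptions for demonstration, not the authors' implementation.

```python
# Early (feature-level) fusion of a "semantic" and an "acoustic" stream:
# concatenate the two vectors per utterance and train one classifier.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, precision_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 400
semantic = rng.normal(size=(n, 32))  # stand-in for GPT text embeddings
acoustic = rng.normal(size=(n, 16))  # stand-in for openSMILE functionals
# Synthetic binary emotion label loosely tied to both streams.
y = (semantic[:, 0] + acoustic[:, 0]
     + rng.normal(scale=0.5, size=n) > 0).astype(int)

X = np.hstack([semantic, acoustic])  # fused representation
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
pred = clf.predict(X_te)
print(f"accuracy={accuracy_score(y_te, pred):.2f} "
      f"precision={precision_score(y_te, pred):.2f}")
```

Late fusion (separate classifiers per stream, combined at the decision level) is an equally plausible reading of "combining two distinct approaches"; the abstract does not say which was used.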
2.
Healthcare (Basel); 10(5), 2022 May 18.
Article in English | MEDLINE | ID: mdl-35628071

ABSTRACT

Background: Depression and insomnia are highly related: insomnia is a common symptom among depression patients, and insomnia can result in depression. Although depression patients and insomnia patients should be treated with different approaches, the lack of practical biological markers makes it difficult to discriminate between depression and insomnia effectively. Purpose: This study aimed to disclose critical vocal features for discriminating between depression and insomnia. Methods: Four groups of patients, comprising six severe-depression patients, four moderate-depression patients, ten insomnia patients, and four patients with chronic pain disorder (CPD), participated in this preliminary study, in which their speaking voices were recorded. The open-source software openSMILE was applied to extract 384 voice features. Analysis of variance was used to analyze the effects of the four patient statuses on these voice features. Results: Statistical analyses showed significant relationships between patient status and voice features. Patients with severe depression, moderate depression, insomnia, and CPD differed on certain voice features. Critical voice features were reported based on these statistical relationships. Conclusions: This preliminary study shows the potential of developing models that discriminate between depression and insomnia using voice features. Future studies should recruit an adequate number of patients to confirm these voice features and collect more data for developing a quantitative method.
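The per-feature analysis of variance described above can be sketched with a one-way ANOVA over the four patient groups. The feature values here are synthetic; only the group sizes (6, 4, 10, 4) come from the abstract, and the group means are arbitrary assumptions.

```python
# One-way ANOVA testing whether one extracted voice feature differs
# across the four patient groups. In the study this would be repeated
# for each of the 384 openSMILE features.
import numpy as np
from scipy.stats import f_oneway

rng = np.random.default_rng(1)
severe   = rng.normal(loc=0.0, scale=1.0, size=6)   # severe depression
moderate = rng.normal(loc=0.5, scale=1.0, size=4)   # moderate depression
insomnia = rng.normal(loc=2.0, scale=1.0, size=10)  # insomnia
cpd      = rng.normal(loc=0.5, scale=1.0, size=4)   # chronic pain disorder

f_stat, p_value = f_oneway(severe, moderate, insomnia, cpd)
print(f"F={f_stat:.2f}, p={p_value:.4f}")
# Features whose p-value falls below a chosen threshold would be
# reported as candidate discriminating features.
```

With 384 tests, some correction for multiple comparisons (e.g. Bonferroni or FDR) would normally be applied; the abstract does not state which, if any, was used.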

3.
JMIR Ment Health; 9(2): e31724, 2022 Feb 11.
Article in English | MEDLINE | ID: mdl-35147507

ABSTRACT

BACKGROUND: Emotions and mood are important for overall well-being. Therefore, the search for continuous, effortless emotion prediction methods is an important field of study. Mobile sensing provides a promising tool and can capture one of the most telling signs of emotion: language. OBJECTIVE: The aim of this study is to examine the separate and combined predictive value of mobile-sensed language data sources for detecting both momentary emotional experience and global individual differences in emotional traits and depression. METHODS: In a 2-week experience sampling method study, we collected self-reported emotion ratings and voice recordings 10 times a day, continuous keyboard activity, and trait depression severity. We correlated state and trait emotions and depression with language, distinguishing between speech content (spoken words), speech form (voice acoustics), writing content (written words), and writing form (typing dynamics). We also investigated how well these features predicted state and trait emotions, using cross-validation to select features and a hold-out set for validation. RESULTS: Overall, the reported emotions and mobile-sensed language showed weak correlations. The strongest correlations were found between speech content and state emotions and between speech form and state emotions, reaching 0.25. Speech content provided the best predictions for state emotions. None of the trait emotion-language correlations remained significant after correction. Among the emotions studied, valence and happiness displayed the strongest correlations and the highest predictive performance. CONCLUSIONS: Although using mobile-sensed language as an emotion marker shows some promise, correlations and predictive R² values are low.
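The evaluation scheme in the methods (feature selection by cross-validation, then validation on a hold-out set) can be sketched as follows. The data, feature counts, and the ridge model are illustrative assumptions; the point is that the selector's hyperparameter is tuned only on the training portion, so the hold-out R² is an unbiased estimate.

```python
# Cross-validated feature selection inside a pipeline, scored on a
# held-out set. Synthetic regression data stands in for the
# language-feature-to-emotion-rating prediction task.
from sklearn.datasets import make_regression
from sklearn.feature_selection import SelectKBest, f_regression
from sklearn.linear_model import Ridge
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.pipeline import Pipeline

X, y = make_regression(n_samples=300, n_features=50, n_informative=5,
                       noise=10.0, random_state=0)
X_tr, X_hold, y_tr, y_hold = train_test_split(X, y, random_state=0)

pipe = Pipeline([("select", SelectKBest(f_regression)),
                 ("model", Ridge())])
# Choose how many features to keep via cross-validation on X_tr only.
search = GridSearchCV(pipe, {"select__k": [5, 10, 20]}, cv=5)
search.fit(X_tr, y_tr)
print("held-out R^2:", round(search.score(X_hold, y_hold), 3))
```

Putting selection inside the pipeline matters: selecting features on the full dataset before splitting would leak hold-out information into the model and inflate R².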

4.
Int J Bipolar Disord; 9(1): 38, 2021 Dec 01.
Article in English | MEDLINE | ID: mdl-34850296

ABSTRACT

BACKGROUND: Voice features have been suggested as objective markers of bipolar disorder (BD). AIMS: To investigate whether voice features from naturalistic phone calls could discriminate between (1) patients with BD, unaffected first-degree relatives (UR), and healthy control individuals (HC), and (2) affective states within BD. METHODS: Voice features were collected daily during naturalistic phone calls for up to 972 days. A total of 121 patients with BD, 21 UR, and 38 HC were included, yielding 107,033 voice data entries [BD (n = 78,733), UR (n = 8004), and HC (n = 20,296)]. Patients evaluated their symptoms daily using a smartphone-based system, and affective states were defined according to these evaluations. Data were analyzed using random forest machine learning algorithms. RESULTS: Compared with HC, BD was classified with a sensitivity of 0.79 (SD 0.11) and AUC of 0.76 (SD 0.11), and UR with a sensitivity of 0.53 (SD 0.21) and AUC of 0.72 (SD 0.12). Within BD, compared with euthymia, mania was classified with a specificity of 0.75 (SD 0.16) and AUC of 0.66 (SD 0.11), and depression with a specificity of 0.70 (SD 0.16) and AUC of 0.66 (SD 0.12). In all analyses, the user-dependent models outperformed the user-independent models. Models combining periods of increased mood, increased activity, and insomnia, compared with periods without these symptoms, performed best, with a specificity of 0.78 (SD 0.16) and AUC of 0.67 (SD 0.11). CONCLUSIONS: Voice features from naturalistic phone calls may represent a supplementary objective marker discriminating BD from HC and a state marker within BD.
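The classification setup in this study — a random forest separating two groups of voice-feature entries, scored by sensitivity and AUC — can be sketched as below. The data are synthetic stand-ins; feature counts, class balance, and forest size are assumptions, not the study's configuration.

```python
# Random forest on synthetic "voice feature" data, reporting the two
# metrics used in the abstract: sensitivity (recall on the patient
# class) and the area under the ROC curve.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import recall_score, roc_auc_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=500, n_features=20, n_informative=5,
                           weights=[0.3, 0.7], random_state=0)  # 1 = "BD"
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
sensitivity = recall_score(y_te, rf.predict(X_te))           # recall on class 1
auc = roc_auc_score(y_te, rf.predict_proba(X_te)[:, 1])      # threshold-free
print(f"sensitivity={sensitivity:.2f} AUC={auc:.2f}")
```

The user-dependent versus user-independent contrast in the abstract corresponds to whether a given speaker's entries may appear in both training and test splits; a grouped split (e.g. scikit-learn's GroupKFold keyed on speaker ID) would give the user-independent estimate.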
