Using Acoustic Speech Patterns From Smartphones to Investigate Mood Disorders: Scoping Review.
Flanagan, Olivia; Chan, Amy; Roop, Partha; Sundram, Frederick.
  • Flanagan O; Department of Psychological Medicine, Faculty of Medical and Health Sciences, University of Auckland, Auckland, New Zealand.
  • Chan A; School of Pharmacy, Faculty of Medical and Health Sciences, University of Auckland, Auckland, New Zealand.
  • Roop P; Faculty of Engineering, University of Auckland, Auckland, New Zealand.
  • Sundram F; Department of Psychological Medicine, Faculty of Medical and Health Sciences, University of Auckland, Auckland, New Zealand.
JMIR Mhealth Uhealth; 9(9): e24352, 2021 Sep 17.
Article in English | MEDLINE | ID: covidwho-1443933
ABSTRACT

BACKGROUND:

Mood disorders are commonly underrecognized and undertreated, as diagnosis relies on self-reporting and clinical assessments that are often not timely. The speech characteristics of people with mood disorders differ from those of healthy individuals. With the widespread use of smartphones and the emergence of machine learning approaches, smartphones can be used to monitor speech patterns and thereby support the diagnosis and monitoring of mood disorders.
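As a rough illustration of the kind of pipeline such approaches involve (not the method of any study included in this review), the sketch below computes a few acoustic features commonly used in speech-based mood monitoring (MFCCs, pitch, frame-level energy) and summarizes them as a fixed-length vector that a classifier could consume. It assumes librosa and NumPy; the signal is synthetic and all choices are illustrative.

```python
# Purely illustrative: compute common acoustic features (MFCCs, pitch, energy)
# from an audio signal and summarise them as a fixed-length vector. The signal
# here is synthetic; in practice it would be a smartphone voice recording.
import numpy as np
import librosa

def extract_features(y, sr=16000):
    """Summarise one recording as a fixed-length acoustic feature vector."""
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)     # spectral shape
    f0 = librosa.yin(y, fmin=65, fmax=400, sr=sr)          # pitch contour
    rms = librosa.feature.rms(y=y)                         # frame-level energy
    return np.concatenate([
        mfcc.mean(axis=1), mfcc.std(axis=1),               # 26 MFCC summaries
        [np.nanmean(f0), np.nanstd(f0)],                   # pitch mean / spread
        [rms.mean(), rms.std()],                           # loudness summaries
    ])

# Stand-in for a real recording: 3 seconds of a noisy 150 Hz tone.
sr = 16000
t = np.linspace(0, 3.0, 3 * sr, endpoint=False)
y = 0.5 * np.sin(2 * np.pi * 150 * t) + 0.01 * np.random.randn(t.size)

features = extract_features(y.astype(np.float32), sr=sr)
print(features.shape)   # (30,) -> one input row for a downstream classifier
```

In the studies considered here, feature vectors of this general kind would be derived from scripted or in-call smartphone recordings and passed to classifiers that predict mood state.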

OBJECTIVE:

The aim of this review is to synthesize research on using speech patterns from smartphones to diagnose and monitor mood disorders.

METHODS:

Literature searches of major databases (MEDLINE, PsycINFO, EMBASE, and CINAHL) initially identified 832 relevant articles using the search terms "mood disorders", "smartphone", "voice analysis", and their variants. Only 13 studies met the inclusion criteria: use of a smartphone for capturing voice data, a focus on diagnosing or monitoring a mood disorder, prospectively recruited clinical populations, and publication in English. Articles were assessed by 2 reviewers, and data extracted included data type, classifiers used, methods of capture, and study results. Studies were analyzed using a narrative synthesis approach.

RESULTS:

Studies showed that voice data alone had reasonable accuracy in predicting mood states and mood fluctuations based on objectively monitored speech patterns. Although a fusion of different sensor modalities yielded the highest accuracy (97.4%), nearly 80% of the included studies were pilot trials or feasibility studies without control groups and had small sample sizes, ranging from 1 to 73 participants. Studies were also carried out over short or varying timeframes and showed significant methodological heterogeneity in the types of audio data captured, environmental contexts, classifiers, and measures to control for privacy and ambient noise.
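The 97.4% figure above came from one study's own multimodal pipeline. Purely to illustrate what "fusion of different sensor modalities" can mean in the simplest case, the sketch below concatenates hypothetical voice-derived and phone-sensor-derived feature arrays (early fusion) and cross-validates a generic scikit-learn classifier; all values are randomly generated placeholders and nothing here reproduces any included study.

```python
# Toy illustration of feature-level (early) fusion: concatenate voice features
# and phone-sensor features before classification. All values are random
# placeholders; expect chance-level accuracy.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_sessions = 60                                   # hypothetical monitoring sessions
voice_feats = rng.normal(size=(n_sessions, 30))   # e.g. MFCC / pitch summaries
sensor_feats = rng.normal(size=(n_sessions, 10))  # e.g. activity, screen use
labels = rng.integers(0, 2, size=n_sessions)      # 0 = euthymic, 1 = mood episode

X_fused = np.hstack([voice_feats, sensor_feats])  # simple modality concatenation
clf = RandomForestClassifier(n_estimators=200, random_state=0)
scores = cross_val_score(clf, X_fused, labels, cv=5)
print(scores.mean())
```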

CONCLUSIONS:

Approaches that allow smartphone-based monitoring of speech patterns in mood disorders are growing rapidly. The current body of evidence supports the value of speech patterns for monitoring, classifying, and predicting mood states in real time. However, many challenges remain around the robustness, cost-effectiveness, and acceptability of such approaches. Further work is required to build on current research, reduce the heterogeneity of methodologies, and clinically evaluate the benefits and risks of these approaches.

Full text: Available | Collection: International databases | Database: MEDLINE | Main subject: Speech / Smartphone | Type of study: Diagnostic study / Experimental studies / Prognostic study / Randomized controlled trials / Reviews | Limits: Humans | Language: English | Journal: JMIR Mhealth Uhealth | Year: 2021 | Document type: Article
