Results 1 - 4 of 4
1.
JMIR Form Res ; 7: e42792, 2023 Jan 13.
Article in English | MEDLINE | ID: mdl-36637896

ABSTRACT

BACKGROUND: The rising number of patients with dementia has become a serious social problem worldwide. To help detect dementia at an early stage, many studies have attempted to detect signs of cognitive decline from prosodic and acoustic features. However, many of these methods are not suitable for everyday use because they rely on speech elicited during cognitive or neuropsychological examinations. In contrast, conversational humanoid robots are expected to be used in the care of older people, helping to reduce the burden of care and monitoring through interaction.

OBJECTIVE: This study focuses on the early detection of mild cognitive impairment (MCI) through conversations between patients and humanoid robots, without a specific examination such as a neuropsychological examination.

METHODS: This was an exploratory study involving patients with MCI and cognitively normal (CN) older people. We collected conversation data during a neuropsychological examination (Mini-Mental State Examination [MMSE]) and during everyday conversation between a humanoid robot and 94 participants (n=47, 50%, patients with MCI and n=47, 50%, CN older people). We extracted 17 types of prosodic and acoustic features, such as the duration of response time and jitter, from these conversations. We conducted a statistical significance test for each feature to identify the speech features useful for classifying people as CN or MCI. Furthermore, we conducted an automatic classification experiment using a support vector machine (SVM) to verify whether these 2 groups can be automatically classified by the features identified in the significance tests.

RESULTS: We obtained significant differences in 5 (29%) of the 17 types of features obtained from the MMSE conversational speech. The duration of response time, the duration of silent periods, and the proportion of silent periods showed a significant difference (P<.001) and met the reference value r=0.1 (small) of the effect size. Additionally, filler periods (P<.01) and the proportion of fillers (P=.02) showed a significant difference; however, these did not meet the reference value of the effect size. In contrast, we obtained significant differences in 16 (94%) of the 17 types of features obtained from the everyday conversations with the humanoid robot. The duration of response time, the duration of speech periods, jitter (local, relative average perturbation [rap], 5-point period perturbation quotient [ppq5], difference of differences of periods [ddp]), shimmer (local, 3-point amplitude perturbation quotient [apq3], apq5, apq11, average absolute differences between the amplitudes of consecutive periods [dda]), and F0cov (coefficient of variation of the fundamental frequency) showed a significant difference (P<.001). In addition, the duration of response time, the duration of silent periods, the filler period, and the proportion of fillers showed significant differences (P<.05). However, only jitter (local) met the reference value r=0.1 (small) of the effect size. In the automatic classification experiment, the results showed 66.0% accuracy for the MMSE conversational speech and 68.1% accuracy for the everyday conversations with the humanoid robot.

CONCLUSIONS: This study shows the possibility of early and simple screening for patients with MCI using prosodic and acoustic features from everyday conversations with a humanoid robot, with the same level of accuracy as the MMSE.
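The classification step described above can be sketched as follows. This is a minimal illustration, not the study's pipeline: the feature matrix is synthetic stand-in data (the study's 17 prosodic/acoustic features per participant are not available here), and the exact SVM kernel and hyperparameters are assumptions.

```python
# Sketch of SVM classification of CN vs. MCI from prosodic/acoustic features.
# Data are synthetic placeholders shaped like the study (94 participants,
# 17 features); kernel choice and C are illustrative assumptions.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n_cn, n_mci, n_features = 47, 47, 17

# Synthetic stand-in: the MCI group shifted slightly, mimicking e.g.
# longer response times and silent periods
X = np.vstack([rng.normal(0.0, 1.0, (n_cn, n_features)),
               rng.normal(0.4, 1.0, (n_mci, n_features))])
y = np.array([0] * n_cn + [1] * n_mci)  # 0 = CN, 1 = MCI

# RBF-kernel SVM with feature scaling, evaluated by 5-fold cross-validation
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
scores = cross_val_score(clf, X, y, cv=5)
print(f"mean CV accuracy: {scores.mean():.3f}")
```

Scaling before the SVM matters here because prosodic features (durations in seconds, jitter as a ratio) live on very different scales.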

2.
Stud Health Technol Inform ; 264: 343-347, 2019 Aug 21.
Article in English | MEDLINE | ID: mdl-31437942

ABSTRACT

Behavioral analysis for identifying changes in cognitive and physical functioning is expected to help detect dementia, including mild cognitive impairment (MCI), at an early stage. Speech and gait features in particular have been recognized as behavioral biomarkers for dementia that can emerge early in its course, including in MCI. However, no studies had investigated whether combining multimodal behavioral data could improve detection accuracy. In this study, we collected speech and gait behavioral data from Japanese seniors consisting of cognitively healthy adults and patients with MCI. Compared with models using single-modality behavioral data, the model using multimodal behavioral data improved detection by up to 5.9%, achieving 82.4% accuracy (chance: 55.9%). Our results suggest that combining multimodal behavioral features that capture different functional changes resulting from dementia might improve accuracy and support timely diagnosis at an early stage.


Subject(s)
Alzheimer Disease , Cognitive Dysfunction , Gait , Humans , Speech
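The multimodal combination described in this abstract amounts to early fusion: concatenating per-participant speech and gait feature vectors and training one classifier on the combined matrix. The sketch below illustrates that idea only; all arrays and feature counts are synthetic assumptions, and the classifier is a stand-in, not the model used in the study.

```python
# Early-fusion sketch: concatenate speech and gait feature vectors and
# compare single-modality vs. combined models. All data are synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n = 60
speech = rng.normal(0, 1, (n, 10))  # e.g. jitter, shimmer, pause ratios
gait = rng.normal(0, 1, (n, 8))     # e.g. stride time, gait speed
y = rng.integers(0, 2, n)           # 0 = cognitively healthy, 1 = MCI

multimodal = np.hstack([speech, gait])  # fusion by simple concatenation

for name, X in [("speech", speech), ("gait", gait), ("both", multimodal)]:
    acc = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=5).mean()
    print(f"{name}: {acc:.3f}")
```

With real data, the "both" row is where a gain like the reported 5.9% would show up, since the two modalities capture different functional changes.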
3.
Article in English | MEDLINE | ID: mdl-31258954

ABSTRACT

Early detection of dementia, as well as improvement in diagnosis coverage, has become increasingly important. Previous studies extracted speech features during neuropsychological assessments administered by humans, such as medical professionals, and succeeded in detecting patients with dementia and mild cognitive impairment (MCI). Enabling such assessment in an automated fashion using computer devices would extend the range of application. In this study, we developed a tablet-based application for neuropsychological assessments and collected speech data from 44 native Japanese speakers, including healthy controls (HCs) and people with MCI or dementia. We first extracted acoustic and phonetic features and showed that several features exhibited significant differences between HC vs. MCI and HC vs. dementia. We then constructed classification models using these features and demonstrated that they could differentiate MCI and dementia from HC with up to 82.4% and 92.6% accuracy, respectively.
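The per-feature group comparisons reported in these abstracts (significance plus an effect size r against the r=0.1 reference) can be sketched as below. The test choice (Mann-Whitney U, with r recovered from its normal approximation) is a common approach for this kind of comparison, stated here as an assumption rather than the papers' exact method, and the two samples are synthetic.

```python
# Per-feature group comparison: Mann-Whitney U test plus an effect size
# r = |Z| / sqrt(N). Samples are synthetic placeholders for one feature
# (e.g. response-time duration) in two groups.
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(2)
hc = rng.normal(1.0, 0.3, 30)   # HC group values for one feature
mci = rng.normal(1.3, 0.3, 30)  # MCI group, slightly larger on average

u, p = mannwhitneyu(hc, mci, alternative="two-sided")

# Recover Z from the normal approximation of U, then r = |Z| / sqrt(N)
n1, n2 = len(hc), len(mci)
mu_u = n1 * n2 / 2
sigma_u = np.sqrt(n1 * n2 * (n1 + n2 + 1) / 12)
z = (u - mu_u) / sigma_u
r = abs(z) / np.sqrt(n1 + n2)

print(f"P = {p:.4g}, effect size r = {r:.2f}")
```

A feature would "meet the reference value r=0.1 (small)" in the abstracts' sense when the computed r is at least 0.1 alongside a significant P value.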

4.
Stud Health Technol Inform ; 247: 301-305, 2018.
Article in English | MEDLINE | ID: mdl-29677971

ABSTRACT

Health monitoring in everyday situations has become important due to the rapid aging of many societies. Speech changes have been suggested as a means of measuring an individual's state, such as emotion and stress, and of screening for neurodegenerative diseases. However, how speech features are associated with daily physical conditions remains unknown. In this study, we investigated whether acoustic features collected in everyday situations could be used to infer the daily physical conditions of older adults. We analyzed speech data collected in two health-monitoring settings: phone calls within an actual service for regularly monitoring older adults, and a tablet-based monitoring system we developed. Through these analyses, we suggest that acoustic features extracted from speech data in everyday situations may be usable for detecting poor physical conditions.


Subject(s)
Acoustics , Monitoring, Physiologic , Speech , Aged , Humans
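Two of the acoustic features recurring across these abstracts, local jitter and F0cov, have simple definitions once glottal periods are available. The sketch below computes both from a synthetic period sequence; a real pipeline would first extract periods from recorded speech (e.g. with a pitch tracker, not shown here), and the formulas follow the common Praat-style definitions as an assumption.

```python
# Illustrative computation of jitter (local) and F0cov from a sequence of
# glottal periods. Periods are synthetic; real ones come from pitch tracking.
import numpy as np

rng = np.random.default_rng(3)
periods = 0.008 + rng.normal(0, 0.0001, 200)  # periods in seconds (~125 Hz)

# Jitter (local): mean absolute difference between consecutive periods,
# normalized by the mean period
jitter_local = np.mean(np.abs(np.diff(periods))) / np.mean(periods)

# F0cov: coefficient of variation of the fundamental frequency
f0 = 1.0 / periods
f0_cov = np.std(f0) / np.mean(f0)

print(f"jitter (local) = {jitter_local:.4f}, F0cov = {f0_cov:.4f}")
```

Both quantities are dimensionless ratios, which is why they can be compared across speakers with different baseline pitch.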