1.
JMIR Form Res ; 7: e42792, 2023 Jan 13.
Article in English | MEDLINE | ID: mdl-36637896

ABSTRACT

BACKGROUND: The rising number of patients with dementia has become a serious social problem worldwide. To help detect dementia at an early stage, many studies have sought to detect signs of cognitive decline from prosodic and acoustic features of speech. However, many of these methods are not suitable for everyday use because they rely on speech collected during cognitive examinations. In contrast, conversational humanoid robots are expected to be used in the care of older people, reducing the burden of care and enabling monitoring through interaction. OBJECTIVE: This study focuses on the early detection of mild cognitive impairment (MCI) through conversations between patients and humanoid robots, without a dedicated examination such as a neuropsychological test. METHODS: This was an exploratory study involving patients with MCI and cognitively normal (CN) older people. We collected conversation data during a neuropsychological examination (Mini-Mental State Examination [MMSE]) and during everyday conversation between a humanoid robot and 94 participants (n=47, 50%, patients with MCI and n=47, 50%, CN older people). We extracted 17 types of prosodic and acoustic features, such as the duration of response time and jitter, from these conversations. We conducted a statistical significance test for each feature to identify the speech features useful for distinguishing patients with MCI from CN older people. Furthermore, we conducted an automatic classification experiment using a support vector machine (SVM) to verify whether these 2 groups could be automatically classified by the features identified in the statistical significance tests. RESULTS: We obtained significant differences in 5 (29%) of the 17 types of features from the MMSE conversational speech. The duration of response time, the duration of silent periods, and the proportion of silent periods showed significant differences (P<.001) and met the reference value for a small effect size (r=0.1). Additionally, filler periods (P<.01) and the proportion of fillers (P=.02) showed significant differences; however, these did not meet the reference value for effect size. In contrast, we obtained significant differences in 16 (94%) of the 17 types of features from the everyday conversations with the humanoid robot. The duration of response time, the duration of speech periods, jitter (local; relative average perturbation [rap]; 5-point period perturbation quotient [ppq5]; difference of differences of periods [ddp]), shimmer (local; amplitude perturbation quotients [apq3, apq5, apq11]; average absolute differences between the amplitudes of consecutive periods [dda]), and F0cov (coefficient of variation of the fundamental frequency) showed significant differences (P<.001). In addition, the duration of response time, the duration of silent periods, the filler period, and the proportion of fillers showed significant differences (P<.05). However, only jitter (local) met the reference value for a small effect size (r=0.1). In the automatic classification experiment separating participants into CN and MCI groups, the results showed 66.0% accuracy for the MMSE conversational speech and 68.1% accuracy for everyday conversations with the humanoid robot. CONCLUSIONS: This study shows the possibility of early and simple screening of patients with MCI using prosodic and acoustic features from everyday conversations with a humanoid robot, with the same level of accuracy as the MMSE-based approach.
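For illustration, a minimal sketch of the SVM classification step described above, assuming the prosodic and acoustic features have already been extracted into a table. The file name, column names, and SVM settings are hypothetical, not taken from the paper:

```python
# Sketch of the classification step: an RBF-kernel SVM separating MCI
# from CN using pre-extracted prosodic/acoustic features.
import pandas as pd
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score, StratifiedKFold

df = pd.read_csv("robot_conversation_features.csv")   # hypothetical file
feature_cols = ["response_time", "silent_period", "jitter_local",
                "shimmer_local", "f0_cov"]            # hypothetical subset
X = df[feature_cols].to_numpy()
y = df["label"].to_numpy()                            # 0 = CN, 1 = MCI

# Scale features before the SVM, then estimate accuracy with
# stratified cross-validation.
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
scores = cross_val_score(clf, X, y, cv=cv)
print(f"CV accuracy: {scores.mean():.3f} +/- {scores.std():.3f}")
```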

2.
Front Digit Health ; 3: 653904, 2021.
Article in English | MEDLINE | ID: mdl-34713127

ABSTRACT

Health-monitoring technologies for automatically detecting the early signs of Alzheimer's disease (AD) have become increasingly important. Speech responses to neuropsychological tasks have been used to quantify changes resulting from AD and to differentiate AD and mild cognitive impairment (MCI) from cognitively normal (CN) individuals. However, whether and how other types of speech tasks that place less burden on older adults could be used to detect early signs of AD remains unexplored. In this study, we developed a tablet-based application and compared speech responses to daily life questions with those to neuropsychological tasks in terms of differentiating MCI from CN. We found that, for daily life questions, around 80% of the speech features showing significant differences between CN and MCI overlapped with those showing significant differences both in our study and in other studies using neuropsychological tasks, although the number of significantly different features and their effect sizes were smaller for the life questions than for the neuropsychological tasks. On the other hand, classification models for detecting MCI from these speech features showed that daily life questions could achieve high accuracy, 86.4%, comparable to the neuropsychological tasks, using eight questions against all five neuropsychological tasks. Our results indicate that, while daily life questions may elicit weaker but statistically discernible differences in speech responses resulting from MCI than neuropsychological tasks, combining them could be useful for detecting MCI with performance comparable to neuropsychological tasks, which could help develop less burdensome health-monitoring technologies for the early detection of AD.
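As an illustration of the per-feature group comparison implied above, here is a hedged sketch using a Mann-Whitney U test with the effect size r recovered from the normal approximation (r = |z| / sqrt(n)). The paper's exact test is not stated here, so this is one common choice, and the toy data are invented:

```python
# Per-feature comparison between CN and MCI groups with p-value and
# effect size r.
import numpy as np
from scipy.stats import mannwhitneyu, norm

def compare_feature(cn_values, mci_values):
    """Return (p-value, effect size r) for one speech feature."""
    u, p = mannwhitneyu(cn_values, mci_values, alternative="two-sided")
    n = len(cn_values) + len(mci_values)
    z = norm.isf(p / 2)          # z from the two-sided p-value
    return p, z / np.sqrt(n)

rng = np.random.default_rng(0)              # toy data, illustration only
cn = rng.normal(1.0, 0.3, size=40)          # e.g., response times (s), CN
mci = rng.normal(1.2, 0.3, size=40)         # e.g., response times (s), MCI
p, r = compare_feature(cn, mci)
print(f"P={p:.4f}, r={r:.2f}")
```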

3.
Sensors (Basel) ; 21(10)2021 May 12.
Article in English | MEDLINE | ID: mdl-34066269

ABSTRACT

A series of eating behaviors, including chewing and swallowing, is considered crucial to the maintenance of good health. However, most such behaviors occur inside the human body, and highly invasive methods such as X-rays and fiberscopes must be used to collect accurate behavioral data. A simpler method of measurement is needed in the healthcare and medical fields; hence, the present study concerns the development of a method to automatically recognize a series of eating behaviors from the sounds produced during eating. The automatic detection of left chewing, right chewing, front biting, and swallowing was tested by deploying a hybrid CTC/attention model, which uses sound recorded through 2-channel microphones under the ears and weakly labeled data as training data to detect the balance of chewing and swallowing. N-gram-based data augmentation was first performed on the weakly labeled data to generate many weakly labeled eating sounds for augmenting the training data. Detection performance was improved through the use of the hybrid CTC/attention model, which can learn context. In addition, the study confirmed similar detection performance for open and closed foods. A sketch of the hybrid objective follows the subject terms below.


Subject(s)
Deglutition , Mastication , Attention , Feeding Behavior , Humans , Sound
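A rough sketch of the hybrid CTC/attention objective named above: the training loss interpolates a CTC loss on encoder outputs with a cross-entropy loss on attention-decoder outputs. The weight `lam`, tensor shapes, and padding handling are illustrative assumptions, not the paper's implementation:

```python
# Hybrid CTC/attention loss: total = lam * CTC + (1 - lam) * attention CE.
import torch
import torch.nn.functional as F

def hybrid_ctc_attention_loss(ctc_log_probs, input_lengths,
                              dec_logits, targets, target_lengths,
                              blank=0, lam=0.3):
    """ctc_log_probs: (T, B, C) log-softmax encoder outputs.
    dec_logits: (B, L, C) attention-decoder outputs aligned to targets.
    targets: (B, L) label ids. For simplicity the same padded targets
    feed both branches; in practice padded positions in the CE targets
    would be set to ignore_index."""
    ctc = F.ctc_loss(ctc_log_probs, targets, input_lengths,
                     target_lengths, blank=blank, zero_infinity=True)
    att = F.cross_entropy(dec_logits.transpose(1, 2), targets,
                          ignore_index=-100)
    return lam * ctc + (1.0 - lam) * att
```

The interpolation weight trades the monotonic-alignment prior of CTC against the context modeling of the attention decoder; values around 0.2-0.5 are common starting points in hybrid setups.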
4.
Stud Health Technol Inform ; 264: 343-347, 2019 Aug 21.
Article in English | MEDLINE | ID: mdl-31437942

ABSTRACT

Behavioral analysis for identifying changes in cognitive and physical functioning is expected to help detect dementia at an early stage, including mild cognitive impairment (MCI). Speech and gait features in particular have been recognized as behavioral biomarkers for dementia that may appear early in its course, including in MCI. However, no studies have investigated whether combining multimodal behavioral data could improve detection accuracy. In this study, we collected speech and gait behavioral data from Japanese older adults comprising cognitively healthy adults and patients with MCI. Compared with models using single-modality behavioral data, we showed that the model using multimodal behavioral data improved detection accuracy by up to 5.9%, achieving 82.4% accuracy (chance level 55.9%). Our results suggest that combining multimodal behavioral features that capture different functional changes resulting from dementia might improve accuracy and support timely diagnosis at an early stage. A sketch of this combination follows the subject terms below.


Subject(s)
Alzheimer Disease , Cognitive Dysfunction , Gait , Humans , Speech
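A minimal sketch of the multimodal combination described above, using early fusion (feature concatenation) before a single classifier. The file names and the choice of logistic regression are assumptions for illustration, not the paper's model:

```python
# Compare single-modality models against an early-fusion multimodal model.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

speech = np.load("speech_features.npy")   # hypothetical (n_subjects, d1)
gait = np.load("gait_features.npy")       # hypothetical (n_subjects, d2)
labels = np.load("labels.npy")            # 0 = healthy, 1 = MCI

fused = np.concatenate([speech, gait], axis=1)   # early fusion
for name, X in [("speech only", speech), ("gait only", gait),
                ("speech + gait", fused)]:
    acc = cross_val_score(LogisticRegression(max_iter=1000),
                          X, labels, cv=5).mean()
    print(f"{name}: {acc:.3f}")
```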
5.
Article in English | MEDLINE | ID: mdl-31258954

ABSTRACT

Early detection of dementia, as well as improvement in diagnostic coverage, has become increasingly important. Previous studies extracted speech features during neuropsychological assessments administered by humans, such as medical professionals, and succeeded in detecting patients with dementia and mild cognitive impairment (MCI). Enabling such assessment in an automated fashion using computer devices would extend its range of application. In this study, we developed a tablet-based application for neuropsychological assessments and collected speech data from 44 native Japanese speakers, including healthy controls (HCs) and those with MCI and dementia. We first extracted acoustic and phonetic features and showed that several features exhibited significant differences between HC and MCI and between HC and dementia. We then constructed classification models using these features and demonstrated that they could differentiate MCI and dementia from HC with up to 82.4% and 92.6% accuracy, respectively.
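As one example of the kind of acoustic feature referred to above, a small sketch estimating the proportion of silence in a recorded response by energy thresholding with librosa; the file name, sampling rate, and threshold are illustrative assumptions:

```python
# Estimate the fraction of a recording that is silent.
import librosa

def silence_ratio(path, top_db=30):
    """Fraction of the recording below an energy threshold."""
    y, sr = librosa.load(path, sr=16000)
    intervals = librosa.effects.split(y, top_db=top_db)  # non-silent spans
    voiced = sum(end - start for start, end in intervals)
    return 1.0 - voiced / len(y)

print(silence_ratio("response_task1.wav"))  # hypothetical file
```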

6.
J Affect Disord ; 225: 214-220, 2018 Jan 01.
Article in English | MEDLINE | ID: mdl-28841483

ABSTRACT

BACKGROUND: The voice carries various kinds of information produced by vibration of the vocal cords and the vocal tract. Although many studies have reported relationships between vocal acoustic features and depression, including mel-frequency cepstrum coefficients (MFCCs), which are also applied in speech recognition, few studies have shown that acoustic features allow discrimination of patients with depressive disorder. Vocal acoustic features as biomarkers of depression could enable differential diagnosis of patients in a depressive state. Toward the differential diagnosis of depression, in this preliminary study, we examined whether vocal acoustic features could allow discrimination between depressive patients and healthy controls. METHODS: Subjects were 36 patients who met the criteria for major depressive disorder and 36 healthy controls with no current or past psychiatric disorders. Voices reading out digits before and after a verbal fluency task were recorded and analyzed using OpenSMILE. The extracted acoustic features, including MFCCs, were used for group comparison and discriminant analysis between patients and controls. RESULTS: The second dimension of the MFCCs (MFCC 2) was significantly different between groups and allowed discrimination between patients and controls with a sensitivity of 77.8% and a specificity of 86.1%. The difference in MFCC 2 between the two groups reflected an energy difference at frequencies around 2000-3000 Hz. CONCLUSIONS: MFCC 2 was significantly different between depressive patients and controls. This feature could be a useful biomarker for detecting major depressive disorder. LIMITATIONS: The sample size was relatively small, and psychotropic medication could have a confounding effect on voice. A sketch of the discrimination step follows the subject terms below.


Subject(s)
Depressive Disorder, Major/diagnosis , Speech Acoustics , Voice Disorders/diagnosis , Adult , Aged , Discriminant Analysis , Female , Humans , Male , Middle Aged , Young Adult
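A hedged sketch of the discrimination step described above. The paper used OpenSMILE; librosa is substituted here as a stand-in MFCC extractor, and the file names, sampling rate, and MFCC indexing convention are assumptions:

```python
# Discriminate patients from controls using the per-recording mean of
# the 2nd MFCC coefficient and linear discriminant analysis.
import numpy as np
import librosa
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def mean_mfcc2(path):
    """Per-recording mean of the 2nd MFCC coefficient."""
    y, sr = librosa.load(path, sr=16000)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)  # (13, frames)
    return mfcc[1].mean()   # index 1 = MFCC 2 if coefficient 0 is index 0

patients = [mean_mfcc2(f"patient_{i}.wav") for i in range(36)]   # hypothetical
controls = [mean_mfcc2(f"control_{i}.wav") for i in range(36)]   # hypothetical
X = np.array(patients + controls).reshape(-1, 1)
y = np.array([1] * 36 + [0] * 36)
lda = LinearDiscriminantAnalysis().fit(X, y)
print(f"training accuracy: {lda.score(X, y):.3f}")
```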