Results 1 - 5 of 5
1.
J Alzheimers Dis ; 97(1): 179-191, 2024.
Article in English | MEDLINE | ID: mdl-38108348

ABSTRACT

BACKGROUND: Previous research has shown that verbal memory accurately measures cognitive decline in the early phases of neurocognitive impairment. Automatic speech recognition of the verbal learning task (VLT) can potentially be used to differentiate between people with and without cognitive impairment.

OBJECTIVE: To investigate whether automatic speech recognition (ASR) of the VLT is reliable and able to differentiate between subjective cognitive decline (SCD) and mild cognitive impairment (MCI).

METHODS: The VLT was recorded and processed via a mobile application, and verbal memory features were then automatically extracted. The diagnostic performance of the automatically derived features was investigated by training machine learning classifiers to distinguish between participants with SCD versus MCI/dementia.

RESULTS: The ICC for inter-rater reliability between the clinical and automatically derived features was 0.87 for total immediate recall and 0.94 for delayed recall. The full model, including total immediate recall, delayed recall, recognition count, and the novel verbal memory features, had an AUC of 0.79 for distinguishing between participants with SCD versus MCI/dementia. The ten best differentiating VLT features correlated weakly to moderately with other cognitive tests, such as logical memory tasks, semantic verbal fluency, and executive functioning.

CONCLUSIONS: The VLT with automatically derived verbal memory features showed generally high agreement with the clinical scoring and distinguished well between participants with SCD and MCI/dementia. This may be of added value in screening for cognitive impairment.


Subject(s)
Alzheimer Disease , Cognitive Dysfunction , Dementia , Humans , Reproducibility of Results , Cognitive Dysfunction/diagnosis , Cognitive Dysfunction/psychology , Memory , Mental Recall , Neuropsychological Tests , Alzheimer Disease/psychology , Verbal Learning
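The inter-rater reliability figures in this abstract are intraclass correlations between the clinical and automatically derived scores. As an illustrative sketch only (not the authors' code), a two-way random-effects ICC(2,1) for an n-subjects-by-k-raters score matrix can be written in plain NumPy; the function name and example data are hypothetical:

```python
import numpy as np

def icc2_1(scores):
    """Two-way random-effects, absolute-agreement ICC(2,1).

    scores: n_subjects x k_raters array of ratings
    (e.g. manual vs. ASR recall counts per participant).
    """
    scores = np.asarray(scores, dtype=float)
    n, k = scores.shape
    grand = scores.mean()
    row_means = scores.mean(axis=1)   # per-subject means
    col_means = scores.mean(axis=0)   # per-rater means

    # Two-way ANOVA sums of squares
    ss_total = ((scores - grand) ** 2).sum()
    ss_rows = k * ((row_means - grand) ** 2).sum()   # between subjects
    ss_cols = n * ((col_means - grand) ** 2).sum()   # between raters
    ss_err = ss_total - ss_rows - ss_cols

    ms_rows = ss_rows / (n - 1)
    ms_cols = ss_cols / (k - 1)
    ms_err = ss_err / ((n - 1) * (k - 1))

    # Shrout & Fleiss ICC(2,1)
    return (ms_rows - ms_err) / (
        ms_rows + (k - 1) * ms_err + k * (ms_cols - ms_err) / n
    )
```

With perfectly agreeing raters the statistic is 1.0; systematic rater disagreement pulls it toward 0.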
2.
Digit Biomark ; 7(1): 115-123, 2023.
Article in English | MEDLINE | ID: mdl-37901366

ABSTRACT

Introduction: We studied the accuracy of automatic speech recognition (ASR) software by comparing ASR scores with manual scores from a verbal learning test (VLT) and a semantic verbal fluency (SVF) task in a semiautomated phone assessment in a memory clinic population. We also examined the differentiating value of these tests between participants with subjective cognitive decline (SCD) and mild cognitive impairment (MCI), and investigated whether the automatically calculated speech and linguistic features added value beyond the commonly used total scores.

Methods: We included 94 participants from the memory clinic of the Maastricht University Medical Center+ (SCD: N = 56; MCI: N = 38). The test leader guided each participant through the semiautomated phone assessment. The VLT and SVF were audio recorded and processed via a mobile application, and the recall count and speech and linguistic features were automatically extracted. Machine learning classifiers were trained to differentiate between SCD and MCI participants.

Results: The intraclass correlation for inter-rater reliability between the manual and ASR total word counts was 0.89 (95% CI 0.09-0.97) for VLT immediate recall, 0.94 (95% CI 0.68-0.98) for VLT delayed recall, and 0.93 (95% CI 0.56-0.97) for the SVF. The full model, including the total word count and the speech and linguistic features, had an area under the curve of 0.81 and 0.77 for VLT immediate and delayed recall, respectively, and 0.61 for the SVF.

Conclusion: There was high agreement between the ASR and manual scores, although the broad confidence intervals should be kept in mind. The phone-based VLT was able to differentiate between SCD and MCI and may offer opportunities for clinical trial screening.

3.
Arch Clin Neuropsychol ; 38(5): 667-676, 2023 Jul 25.
Article in English | MEDLINE | ID: mdl-36705583

ABSTRACT

OBJECTIVE: To investigate whether automatic analysis of the Semantic Verbal Fluency test (SVF) is reliable and can extract additional information of value for identifying neurocognitive disorders. In addition, the associations between the automatically derived speech and linguistic features and other cognitive domains were explored.

METHOD: We included 135 participants from the memory clinic of the Maastricht University Medical Center+ (Subjective Cognitive Decline [SCD], N = 69; Mild Cognitive Impairment [MCI]/dementia, N = 66). The SVF task (one minute, category: animals) was recorded and processed via a mobile application, and speech and linguistic features were automatically extracted. The diagnostic performance of the automatically derived features was investigated by training machine learning classifiers to differentiate between SCD and MCI/dementia participants.

RESULTS: The intraclass correlation for inter-rater reliability between the clinical total score (gold standard) and the automatically derived total word count was 0.84. The full model, including the total word count and the automatically derived speech and linguistic features, had an Area Under the Curve (AUC) of 0.85 for differentiating between people with SCD and MCI/dementia. The model with total word count only and the model with total word count corrected for age had AUCs of 0.75 and 0.81, respectively. Semantic switching correlated moderately with memory as well as with executive functioning.

CONCLUSION: The one-minute SVF task with automatically derived speech and linguistic features was as reliable as manual scoring and differentiated well between SCD and MCI/dementia. It can be a valuable addition to the screening of neurocognitive disorders in clinical practice.


Subject(s)
Alzheimer Disease , Cognitive Dysfunction , Dementia , Humans , Speech , Reproducibility of Results , Neuropsychological Tests , Linguistics , Cognitive Dysfunction/diagnosis , Cognitive Dysfunction/psychology , Dementia/diagnosis , Alzheimer Disease/psychology
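The AUC values reported across these abstracts summarize how well a classifier's scores separate the two diagnostic groups. As an illustrative sketch only (not the authors' pipeline), AUC can be computed directly from scores and labels via the rank-based Mann-Whitney formulation; the function name and example data below are made up:

```python
import numpy as np

def auc_score(y_true, y_score):
    """Area under the ROC curve via the Mann-Whitney U statistic.

    AUC equals the probability that a randomly chosen positive case
    (e.g. MCI/dementia) receives a higher score than a randomly chosen
    negative case (e.g. SCD), with ties counted as half.
    """
    y_true = np.asarray(y_true)
    y_score = np.asarray(y_score, dtype=float)
    pos = y_score[y_true == 1]
    neg = y_score[y_true == 0]
    # Compare every positive score against every negative score
    wins = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (wins + 0.5 * ties) / (len(pos) * len(neg))
```

An AUC of 0.5 corresponds to chance-level separation; 1.0 means the two groups' scores do not overlap at all.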
4.
J Alzheimers Dis ; 91(3): 1165-1171, 2023.
Article in English | MEDLINE | ID: mdl-36565116

ABSTRACT

BACKGROUND: Modern prodromal Alzheimer's disease (AD) clinical trials may extend outreach to the general population, causing high screen-out rates and thereby increasing study time and costs. Screening tools that cost-effectively detect mild cognitive impairment (MCI) at scale are therefore needed.

OBJECTIVE: To develop a screening algorithm that can differentiate between healthy and MCI participants in different clinically relevant populations.

METHODS: Two screening algorithms based on the remote ki:e speech biomarker for cognition (ki:e SB-C) were designed on a Dutch memory clinic cohort (N = 121) and a Swedish birth cohort (N = 404). Each algorithm's MCI classification was evaluated on its training cohort as well as on the unrelated validation cohort.

RESULTS: The algorithms achieved AUCs of 0.73 and 0.77 in their respective training cohorts, and an AUC of 0.81 in the unseen validation cohorts.

CONCLUSION: The results indicate that a ki:e SB-C-based algorithm robustly detects MCI across different cohorts and languages, which has the potential to make current trials more efficient and to improve future primary health care.


Subject(s)
Alzheimer Disease , Cognitive Dysfunction , Humans , Speech , Alzheimer Disease/diagnosis , Cognitive Dysfunction/diagnosis , Machine Learning , Cognition , Biomarkers
5.
Digit Biomark ; 6(3): 107-116, 2022.
Article in English | MEDLINE | ID: mdl-36466952

ABSTRACT

Introduction: Progressive cognitive decline is the cardinal behavioral symptom of most dementia-causing diseases, such as Alzheimer's disease. As most well-established measures of cognition may not fit tomorrow's decentralized remote clinical trials, digital cognitive assessments will gain importance. We present the evaluation of a novel digital speech biomarker for cognition (SB-C) following the Digital Medicine Society's V3 framework: verification, analytical validation, and clinical validation.

Methods: The evaluation was done in two independent clinical samples: the Dutch DeepSpA dataset (N = 69 subjective cognitive impairment [SCI], N = 52 mild cognitive impairment [MCI], and N = 13 dementia) and the Scottish SPeAk dataset (N = 25 healthy controls). Two anchor scores were used for validation: the Mini-Mental State Examination (MMSE) and the Clinical Dementia Rating (CDR) scale.

Results: Verification: the SB-C could be reliably extracted for both languages using an automatic speech processing pipeline. Analytical validation: in both languages, the SB-C was strongly correlated with MMSE scores. Clinical validation: the SB-C differed significantly between clinical groups (including MCI and dementia), was strongly correlated with the CDR, and could track clinically meaningful decline.

Conclusion: Our results suggest that the ki:e SB-C is an objective, scalable, and reliable indicator of cognitive decline, fit for purpose as a remote assessment in early dementia clinical trials.
