Adapting to the COVID-19 pandemic in cohort studies: Validation of online assessments of cognition and neuropsychiatric symptoms in an aging population
Alzheimer's & Dementia ; 17(S6):e056606, 2021.
Article in English | Wiley | ID: covidwho-1589203
ABSTRACT
Background: The COVID-19 pandemic has had a significant impact on cohort studies, particularly those whose subjects are at higher risk of developing complications from the virus. Assessment methods must therefore be adapted to minimize COVID-19 exposure risk. The TRIAD (Translational Biomarkers of Aging and Dementia) cohort assessed N=292 individuals during initial COVID-19 lockdown measures by telephone interview to rate cognition, neuropsychiatric symptoms, and impact of the pandemic. To increase the speed and efficiency of data collection, we aim to follow these individuals via an online survey. Here, we present a validation of our online assessment tools by comparing data obtained through both methods (phone interview and online survey) in the same subjects.

Methods: Ten subjects (4 elderly CN / 3 MCI / 3 AD) and their informants participated in this study. Subjects varied in assessment language (English/French) and first assessment method (phone/online). Eighteen instruments were administered (listed in Table 1). Instrument scores were first compared by computing individual differences (phone minus online), then by pooling all scores by assessment type and calculating an effect size. The Pearson correlation coefficient between phone and online scores was also computed.

Results: The mean interval between assessments was 8.8±4.8 days. Mean length of the online assessment (63.7±20.7 min) was comparable to mean phone interview length (72.6±32.4 min). Instrument scores from phone interviews had a total mean of 102.60, while scores from online surveys had a total mean of 103.93, with a pooled SD of 716.09; the resulting effect size was -0.00186. Correlation of phone and online scores yielded a Pearson's R of 0.85 (p<0.05). Pearson's R was also computed by bootstrapping, using 1000 resamples without replacement with a sample size of 50. The Pearson R coefficient after bootstrapping was 0.91 (95% CI [0.7699-0.998]).
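The effect size and bootstrap procedure described in the Results can be sketched as follows. This is a minimal illustration, not the study's analysis code: the function names, the random seed, and the synthetic usage data are our own assumptions, and the study's actual instrument scores are not reproduced here.

```python
import random
import statistics

def cohens_d(mean_a, mean_b, pooled_sd):
    """Effect size as the standardized mean difference (Cohen's d)."""
    return (mean_a - mean_b) / pooled_sd

def pearson_r(xs, ys):
    """Pearson correlation coefficient of two equal-length samples."""
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

def bootstrap_pearson(pairs, n_resamples=1000, sample_size=50, seed=0):
    """Repeatedly subsample (phone, online) score pairs without replacement,
    compute Pearson's r for each subsample, and return the mean r with a
    percentile 95% confidence interval."""
    rng = random.Random(seed)  # fixed seed for reproducibility (our choice)
    rs = []
    for _ in range(n_resamples):
        sample = rng.sample(pairs, min(sample_size, len(pairs)))
        xs, ys = zip(*sample)
        rs.append(pearson_r(xs, ys))
    rs.sort()
    lo = rs[int(0.025 * len(rs))]
    hi = rs[int(0.975 * len(rs)) - 1]
    return statistics.fmean(rs), (lo, hi)

# Reproducing the reported effect size from the pooled summary statistics:
# (102.60 - 103.93) / 716.09 ≈ -0.00186
d = cohens_d(102.60, 103.93, 716.09)
```

Note that with ten subjects and eighteen instruments, the pooled scores form the pairs being resampled; subsampling without replacement (as reported) shrinks each resample rather than drawing a full-size resample with replacement, which is the more common bootstrap variant.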
Conclusion: Our results suggest that instrument scores from phone and online assessments are comparable and not significantly different from each other. The observed variance in scores between phone and online assessments may be due in part to the normal test-retest variability associated with re-administering instruments. This validation of online assessment tools in an aging population is particularly important for human studies conducted in the context of COVID-19.

Full text: Available Collection: Databases of international organizations Database: Wiley Type of study: Cohort study / Observational study / Prognostic study Language: English Journal: Alzheimer's & Dementia Year: 2021 Document Type: Article
