Results 1 - 8 of 8
1.
Behav Res Methods ; 2024 Mar 25.
Article in English | MEDLINE | ID: mdl-38528247

ABSTRACT

Questionnaires are ever present in survey research. In this study, we examined whether an indirect indicator of general cognitive ability could be developed based on response patterns in questionnaires. We drew on two established phenomena characterizing connections between cognitive ability and people's performance on basic cognitive tasks, and examined whether they apply to questionnaire responses. (1) The worst performance rule (WPR) states that people's worst performance on multiple sequential tasks is more indicative of their cognitive ability than their average or best performance. (2) The task complexity hypothesis (TCH) suggests that relationships between cognitive ability and performance increase with task complexity. We conceptualized items of a questionnaire as a series of cognitively demanding tasks. A graded response model was used to estimate respondents' performance for each item based on the difference between the observed and model-predicted response ("response error" scores). Analyzing data from 102 items (21 questionnaires) collected from a large-scale nationally representative sample of people aged 50+ years, we found robust associations of cognitive ability with a person's largest but not with their smallest response error scores (supporting the WPR), and stronger associations of cognitive ability with response errors for more complex than for less complex questions (supporting the TCH). Results replicated across two independent samples and six assessment waves. A latent variable of response errors estimated for the most complex items correlated .50 with a latent cognitive ability factor, suggesting that response patterns can be utilized to extract a rough indicator of general cognitive ability in survey research.
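
As a rough illustration of the response-error idea described in this abstract, the sketch below computes the gap between an observed item response and the response expected under a logistic graded response model. The item parameters, the respondent's ability value, and the error definition (observed minus expected category) are illustrative assumptions, not the study's estimated quantities.

```python
# Sketch: "response error" scores under a graded response model (GRM).
# Hypothetical item parameters and data; the study's actual estimation
# (and its exact response-error definition) may differ.
import numpy as np

def grm_category_probs(theta, a, b):
    """P(X = k | theta) for a logistic GRM with discrimination a and
    ordered thresholds b (K-1 values for K categories)."""
    # P(X >= k) for k = 1..K-1; boundaries 1 and 0 pad the extremes.
    p_geq = 1.0 / (1.0 + np.exp(-a * (theta - np.asarray(b))))
    p_geq = np.concatenate(([1.0], p_geq, [0.0]))
    return p_geq[:-1] - p_geq[1:]  # P(X = k) for k = 0..K-1

def response_error(observed, theta, a, b):
    """Observed response minus the model-implied expected response."""
    probs = grm_category_probs(theta, a, b)
    expected = np.sum(np.arange(len(probs)) * probs)
    return observed - expected

# Example: a 5-category item, moderate discrimination, average respondent.
a, b = 1.2, [-1.5, -0.5, 0.5, 1.5]
print(response_error(observed=4, theta=0.0, a=a, b=b))
```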

2.
Article in English | MEDLINE | ID: mdl-38460115

ABSTRACT

OBJECTIVES: Self-reported survey data are essential for monitoring the health and well-being of the population as it ages. For studies of aging to provide precise and unbiased results, it is necessary that the self-reported information meets high psychometric standards. In this study, we examined whether the quality of survey responses in panel studies of aging depends on respondents' cognitive abilities. METHODS: Over 17 million survey responses from 157,844 participants aged 50 years and older in 10 epidemiological studies of aging were analyzed. We derived 6 common statistical indicators of response quality from each participant's data and estimated the correlations with participants' cognitive test scores at each study wave. Effect sizes (correlations) were synthesized across studies, cognitive tests, and waves using individual participant data meta-analysis methods. RESULTS: Respondents with lower cognitive scores showed significantly more missing item responses (overall effect size ρ̂ = -0.144), random measurement error (ρ̂ = -0.192), Guttman errors (ρ̂ = -0.233), multivariate outliers (ρ̂ = -0.254), and acquiescent responses (ρ̂ = -0.078); the overall effect for extreme responses (ρ̂ = -0.045) was not significant. Effect sizes were consistent across studies, modes of survey administration, and different cognitive functioning domains, although some cognitive domain specificity was also observed. DISCUSSION: Lower-quality responses among respondents with lower cognitive abilities add random and systematic errors to survey measures, reducing the reliability, validity, and reproducibility of survey study results in aging research.
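
One of the six quality indicators named above, Guttman errors, can be illustrated with a short sketch: for dichotomous items ordered from easiest to hardest, it counts item pairs where the harder item is endorsed but the easier one is not. The scoring rule and data below are simplified assumptions; the studies' exact operationalizations may differ.

```python
# Sketch: counting Guttman errors for one respondent on dichotomous items.
import numpy as np

def guttman_errors(responses, endorsement_rates):
    """Count pairs (easy, hard) where the harder (less endorsed) item is
    endorsed but the easier one is not -- a Guttman scale violation."""
    order = np.argsort(endorsement_rates)[::-1]  # easiest item first
    r = np.asarray(responses)[order]
    errors = 0
    for i in range(len(r)):
        for j in range(i + 1, len(r)):
            if r[j] == 1 and r[i] == 0:  # harder endorsed, easier not
                errors += 1
    return errors

# Example: 5 items, listed from most to least frequently endorsed.
print(guttman_errors([1, 0, 1, 0, 1], [0.9, 0.7, 0.5, 0.3, 0.1]))
```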


Subjects
Aging, Cognition, Humans, Middle Aged, Aged, Reproducibility of Results, Aging/psychology, Surveys and Questionnaires, Cognition/physiology, Epidemiologic Studies
3.
Field Methods ; 35(2): 87-99, 2023 May.
Article in English | MEDLINE | ID: mdl-37799827

ABSTRACT

Researchers have become increasingly interested in response times to survey items as a measure of cognitive effort. We used machine learning to develop a prediction model of response times based on 41 attributes of survey items (e.g., question length, response format, linguistic features) collected in a large, general population sample. The developed algorithm can be used to derive reference values for expected response times for most commonly used survey items.
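
A minimal sketch of the modeling setup described above, assuming a gradient-boosted regressor and a handful of invented item attributes standing in for the paper's 41; the actual algorithm and feature set are not reproduced here.

```python
# Sketch: predicting item-level response times from item attributes.
# Feature names and data are invented placeholders.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_items = 500

# Toy item-attribute matrix: question length (words), number of response
# options, a readability score, and a binary open-ended flag.
X = np.column_stack([
    rng.integers(3, 40, n_items),    # question_length
    rng.integers(2, 11, n_items),    # n_response_options
    rng.normal(50, 10, n_items),     # readability
    rng.integers(0, 2, n_items),     # is_open_ended
])
# Toy target: median log response time per item.
y = 0.04 * X[:, 0] + 0.1 * X[:, 1] + 0.3 * X[:, 3] + rng.normal(0, 0.3, n_items)

model = GradientBoostingRegressor(random_state=0)
print(cross_val_score(model, X, y, cv=5, scoring="r2").mean())
```

The cross-validated fit gives the kind of reference values the abstract mentions: a new item's attributes can be fed to the trained model to obtain an expected response time.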

4.
J Gerontol B Psychol Sci Soc Sci ; 78(8): 1278-1283, 2023 08 02.
Article in English | MEDLINE | ID: mdl-36879431

ABSTRACT

OBJECTIVES: With the increase in web-based data collection, response times (RTs) for survey items have become a readily available byproduct in most online studies. We examined whether RTs in online questionnaires can prospectively discriminate between cognitively normal respondents and those with cognitive impairment, no dementia (CIND). METHOD: Participants were 943 members of a nationally representative internet panel, aged 50 and older. We analyzed RTs that were passively recorded as paradata for 37 surveys (1,053 items) administered online over 6.5 years. A multilevel location-scale model derived 3 RT parameters for each survey: (1) a respondent's average RT and 2 components of intraindividual RT variability addressing (2) systematic RT adjustments and (3) unsystematic RT fluctuations. CIND status was determined at the end of the 6.5-year period. RESULTS: All 3 RT parameters were significantly associated with CIND, with a combined predictive accuracy of area under the receiver-operating characteristic curve = 0.74. Slower average RTs, smaller systematic RT adjustments, and greater unsystematic RT fluctuations prospectively predicted a greater likelihood of CIND over periods of up to 6.5, 4.5, and 1.5 years, respectively. DISCUSSION: RTs for survey items are a potential early indicator of CIND, which may enhance analyses of predictors, correlates, and consequences of cognitive impairment in online survey research.
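
The sketch below approximates the multilevel location-scale approach in two stages on synthetic data: per respondent it computes a mean log RT, the slope of log RT on item time intensity (systematic adjustments), and the residual SD (unsystematic fluctuations), then feeds these into a logistic model scored by ROC AUC. The joint model in the paper is more elaborate; everything here is an illustrative assumption.

```python
# Sketch: simplified two-stage stand-in for a multilevel location-scale
# model of RTs, followed by CIND prediction. All data are synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
n_people, n_items = 300, 200
item_intensity = rng.normal(0, 1, n_items)   # item time intensities
cind = rng.integers(0, 2, n_people)          # synthetic CIND labels

features = np.empty((n_people, 3))
for p in range(n_people):
    # Synthetic log RTs: slower, flatter, noisier profiles for CIND cases.
    slope = 0.8 - 0.4 * cind[p]
    log_rt = (1.0 + 0.5 * cind[p] + slope * item_intensity
              + rng.normal(0, 0.3 + 0.2 * cind[p], n_items))
    beta = np.polyfit(item_intensity, log_rt, 1)  # [slope, intercept]
    resid = log_rt - np.polyval(beta, item_intensity)
    features[p] = [log_rt.mean(), beta[0], resid.std()]  # (1), (2), (3)

proba = LogisticRegression().fit(features, cind).predict_proba(features)[:, 1]
print(f"in-sample AUC: {roc_auc_score(cind, proba):.2f}")
```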


Subjects
Cognitive Disorders, Cognitive Dysfunction, Humans, Middle Aged, Aged, Cognitive Disorders/diagnosis, Reaction Time, Cognitive Dysfunction/diagnosis, Cognitive Dysfunction/complications, Surveys and Questionnaires
5.
JMIR Res Protoc ; 12: e44627, 2023 Feb 21.
Article in English | MEDLINE | ID: mdl-36809337

ABSTRACT

BACKGROUND: Accumulating evidence shows that subtle alterations in daily functioning are among the earliest and strongest signals that predict cognitive decline and dementia. A survey is a small slice of everyday functioning; nevertheless, completing a survey is a complex and cognitively demanding task that requires attention, working memory, executive functioning, and short- and long-term memory. Examining older people's survey response behaviors, which focus on how respondents complete surveys irrespective of the content being sought by the questions, may represent a valuable but often neglected resource that can be leveraged to develop behavior-based early markers of cognitive decline and dementia that are cost-effective, unobtrusive, and scalable for use in large population samples. OBJECTIVE: This paper describes the protocol of a multiyear research project funded by the US National Institute on Aging to develop early markers of cognitive decline and dementia derived from survey response behaviors at older ages. METHODS: Two types of indices summarizing different aspects of older adults' survey response behaviors are created. Indices of subtle reporting mistakes are derived from questionnaire answer patterns in a number of population-based longitudinal aging studies. In parallel, para-data indices are generated from computer use behaviors recorded on the backend server of a large web-based panel study known as the Understanding America Study (UAS). In-depth examinations of the properties of the created questionnaire answer pattern and para-data indices will be conducted for the purpose of evaluating their concurrent validity, sensitivity to change, and predictive validity. We will synthesize the indices using individual participant data meta-analysis and conduct feature selection to identify the optimal combination of indices for predicting cognitive decline and dementia. RESULTS: As of October 2022, we have identified 15 longitudinal aging studies as eligible data sources for creating questionnaire answer pattern indices and obtained para-data from 15 UAS surveys that were fielded from mid-2014 to 2015. A total of 20 questionnaire answer pattern indices and 20 para-data indices have also been identified. We have conducted a preliminary investigation to test the utility of the questionnaire answer patterns and para-data indices for the prediction of cognitive decline and dementia. These early results are based on only a subset of indices but are suggestive of the findings that we anticipate will emerge from the planned analyses of multiple behavioral indices derived from many diverse studies. CONCLUSIONS: Survey response behaviors are a relatively inexpensive data source, but they are seldom used directly for epidemiological research on cognitive impairment at older ages. This study is anticipated to develop an innovative yet unconventional approach that may complement existing approaches aimed at the early detection of cognitive decline and dementia. INTERNATIONAL REGISTERED REPORT IDENTIFIER (IRRID): DERR1-10.2196/44627.
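
As one hedged example of the planned feature-selection step, the sketch below uses L1-penalized logistic regression to retain a sparse subset of synthetic behavioral indices; the protocol does not commit to this estimator, and all indices and data are placeholders.

```python
# Sketch: selecting a parsimonious combination of behavioral indices for
# predicting cognitive decline via L1-penalized logistic regression.
import numpy as np
from sklearn.linear_model import LogisticRegressionCV
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(2)
n, n_indices = 1000, 40   # e.g., 20 answer-pattern + 20 para-data indices
X = rng.normal(size=(n, n_indices))
# Synthetic outcome driven by a handful of the indices.
logits = X[:, [0, 5, 12]] @ np.array([0.8, -0.6, 0.5]) - 1.0
y = rng.random(n) < 1 / (1 + np.exp(-logits))

model = LogisticRegressionCV(penalty="l1", solver="saga", Cs=10, cv=5,
                             max_iter=5000)
model.fit(StandardScaler().fit_transform(X), y)
selected = np.flatnonzero(model.coef_[0])  # indices with nonzero weights
print("indices retained:", selected)
```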

6.
J Gerontol B Psychol Sci Soc Sci ; 78(2): 201-209, 2023 02 19.
Article in English | MEDLINE | ID: mdl-36308489

ABSTRACT

OBJECTIVES: The Health and Retirement Study Telephone Interview for Cognitive Status (HRS TICS) score and its associated Langa-Weir cutoffs are widely used as indicators of cognitive status for research purposes in population-based studies. The classification is based on in-person and phone interviews of older individuals. Our purpose was to develop a corresponding classification for web-based self-administered assessments. METHODS: Participants were 925 members of a nationally representative internet panel, all aged 50 and older. We conducted (a) a phone interview comprising cognitive items used to construct the HRS TICS score, and (b) a web counterpart with self-administered cognitive items, while also considering (c) other already administered web-based cognitive tests and instrumental activities of daily living survey questions, all from the same respondents. RESULTS: The web-administered HRS TICS items have only modest correlations with the same phone items, although neither mode showed universally higher scores than the other. Using latent variable modeling, we created a probability of cognitive impairment score for the web-based battery that achieved good correspondence to the phone Langa-Weir classification. DISCUSSION: The results permit analyses of predictors, correlates, and consequences of cognitive impairment in web surveys where relevant cognitive test and functional abilities items are available. We discuss challenges and caveats that may affect the findings.
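
A loose stand-in for the latent variable modeling described above: logistic regression calibrating web battery scores to a (synthetic) phone-based Langa-Weir impairment label, yielding a probability-of-impairment score. Variable names and data are hypothetical; the paper's measurement model is not reproduced.

```python
# Sketch: calibrating a web-based battery to a phone-based classification.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(3)
n = 925
web_scores = rng.normal(size=(n, 3))   # e.g., memory, fluency, IADL items
# Synthetic phone-based Langa-Weir impairment labels tied to the scores.
logits = web_scores @ np.array([-1.2, -0.8, -0.5]) - 1.5
impaired = (rng.random(n) < 1 / (1 + np.exp(-logits))).astype(int)

model = LogisticRegression().fit(web_scores, impaired)
p_impaired = model.predict_proba(web_scores)[:, 1]  # impairment probability
print("correspondence (AUC):", round(roc_auc_score(impaired, p_impaired), 2))
```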


Subjects
Cognitive Disorders, Cognitive Dysfunction, Dementia, Humans, Middle Aged, Aged, Dementia/psychology, Cognitive Disorders/psychology, Activities of Daily Living, Neuropsychological Tests, Internet
7.
J Med Internet Res ; 24(5): e34347, 2022 05 09.
Article in English | MEDLINE | ID: mdl-35532966

ABSTRACT

BACKGROUND: Cognitive testing in large population surveys is frequently used to describe cognitive aging and determine the incidence rates, risk factors, and long-term trajectories of the development of cognitive impairment. As these surveys are increasingly administered on internet-based platforms, web-based and self-administered cognitive testing calls for close investigation. OBJECTIVE: Web-based, self-administered versions of 2 age-sensitive cognitive tests, the Stop and Go Switching Task for executive functioning and the Figure Identification test for perceptual speed, were developed and administered to adult participants in the Understanding America Study. We examined differences in cognitive test scores across internet device types and the extent to which the scores were associated with self-reported distractions in everyday environments in which the participants took the tests. In addition, national norms were provided for the US population. METHODS: Data were collected from a probability-based internet panel representative of the US adult population-the Understanding America Study. Participants with access to both a keyboard- and mouse-based device and a touch screen-based device were asked to complete the cognitive tests twice in a randomized order across device types, whereas participants with access to only 1 type of device were asked to complete the tests twice on the same device. At the end of each test, the participants answered questions about interruptions and potential distractions that occurred during the test. RESULTS: Of the 7410 (Stop and Go) and 7216 (Figure Identification) participants who completed the device ownership survey, 6129 (82.71% for Stop and Go) and 6717 (93.08% for Figure Identification) participants completed the first session and correctly responded to at least 70% of the trials. On average, the standardized differences across device types were small, with the absolute value of Cohen d ranging from 0.05 (for the switch score in Stop and Go and the Figure Identification score) to 0.13 (for the nonswitch score in Stop and Go). Poorer cognitive performance was moderately associated with older age (the absolute value of r ranged from 0.32 to 0.61), and this relationship was comparable across device types (the absolute value of Cohen q ranged from 0.01 to 0.17). Approximately 12.72% (779/6123 for Stop and Go) and 12.32% (828/6721 for Figure Identification) of participants were interrupted during the test. Interruptions predicted poorer cognitive performance (P<.01 for all scores). Specific distractions (eg, watching television and listening to music) were inconsistently related to cognitive performance. National norms, calculated as weighted average scores using sampling weights, suggested poorer cognitive performance as age increased. CONCLUSIONS: Cognitive scores assessed by self-administered web-based tests were sensitive to age differences in cognitive performance and were comparable across the keyboard- and touch screen-based internet devices. Distraction in everyday environments, especially when interrupted during the test, may result in a nontrivial bias in cognitive testing.
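
The descriptive quantities reported above (survey-weighted norms, Cohen's d across device types, and Cohen's q for comparing correlations) can be computed as in the sketch below; the data are synthetic placeholders, and the formulas are the standard textbook definitions rather than the paper's exact pipeline.

```python
# Sketch: weighted norms and effect sizes for device-type comparisons.
import numpy as np

def weighted_mean_sd(x, w):
    """Survey-weighted mean and SD using sampling weights."""
    m = np.average(x, weights=w)
    return m, np.sqrt(np.average((x - m) ** 2, weights=w))

def cohens_d(x1, x2):
    """Standardized mean difference with a pooled SD."""
    s = np.sqrt((np.var(x1, ddof=1) + np.var(x2, ddof=1)) / 2)
    return (np.mean(x1) - np.mean(x2)) / s

def cohens_q(r1, r2):
    """Difference between two correlations on the Fisher z scale."""
    return np.arctanh(r1) - np.arctanh(r2)

rng = np.random.default_rng(4)
scores_kb = rng.normal(100, 15, 500)   # keyboard/mouse device
scores_ts = rng.normal(99, 15, 500)    # touch screen device
weights = rng.uniform(0.5, 2.0, 500)   # sampling weights

print(weighted_mean_sd(scores_kb, weights))
print("d across devices:", round(cohens_d(scores_kb, scores_ts), 3))
print("q for r=-0.45 vs r=-0.40:", round(cohens_q(-0.45, -0.40), 3))
```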


Subjects
Cognitive Dysfunction, Humans, Internet, Neuropsychological Tests, Probability, Surveys and Questionnaires
8.
J Intell ; 11(1), 2022 Dec 23.
Article in English | MEDLINE | ID: mdl-36662133

ABSTRACT

Monitoring of cognitive abilities in large-scale survey research is receiving increasing attention. Conventional cognitive testing, however, is often impractical on a population level, highlighting the need for alternative means of cognitive assessment. We evaluated whether response times (RTs) to online survey items could be useful to infer cognitive abilities. We analyzed >5 million survey item RTs from >6000 individuals administered over 6.5 years in an internet panel together with cognitive tests (numerical reasoning, verbal reasoning, task switching/inhibitory control). We derived measures of mean RT and intraindividual RT variability from a multilevel location-scale model as well as an expanded version that separated intraindividual RT variability into systematic RT adjustments (variation of RTs with item time intensities) and residual intraindividual RT variability (residual error in RTs). RT measures from the location-scale model showed weak associations with cognitive test scores. However, RT measures from the expanded model explained 22-26% of the variance in cognitive scores and had prospective associations with cognitive assessments over lag periods of at least 6.5 years (mean RTs), 4.5 years (systematic RT adjustments) and 1 year (residual RT variability). Our findings suggest that RTs in online surveys may be useful for gaining information about cognitive abilities in large-scale survey research.
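
The expanded RT decomposition described above can be approximated in two stages, as in the hedged sketch below: per-person mean log RT, a slope of log RT on item time intensities (systematic RT adjustments), and a residual SD (residual RT variability), followed by the share of variance in a synthetic cognitive score that these measures explain. The paper instead fits a joint multilevel location-scale model; all data and effect sizes here are invented.

```python
# Sketch: two-stage approximation of the expanded RT decomposition and the
# variance in cognitive scores explained by the derived RT measures.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(5)
n_people, n_items = 400, 150
intensity = rng.normal(0, 1, n_items)   # item time intensities
ability = rng.normal(0, 1, n_people)    # latent cognitive ability

rt_measures = np.empty((n_people, 3))
for p in range(n_people):
    log_rt = (1.5 - 0.2 * ability[p]                  # mean level
              + (0.6 + 0.2 * ability[p]) * intensity  # systematic adjustment
              + rng.normal(0, 0.4 * np.exp(-0.2 * ability[p]), n_items))
    slope, intercept = np.polyfit(intensity, log_rt, 1)
    resid = log_rt - (slope * intensity + intercept)
    rt_measures[p] = [log_rt.mean(), slope, resid.std()]

cognitive = ability + rng.normal(0, 0.7, n_people)    # observed test score
r2 = LinearRegression().fit(rt_measures, cognitive).score(rt_measures, cognitive)
print(f"variance in cognitive scores explained: {r2:.0%}")
```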
