1.
Behav Res Methods ; 56(2): 577-599, 2024 Feb.
Article in English | MEDLINE | ID: mdl-36737580

ABSTRACT

It is common to model responses to surveys within latent variable frameworks (e.g., item response theory [IRT], confirmatory factor analysis [CFA]) and use model fit indices to evaluate model-data congruence. Unfortunately, research shows that people occasionally engage in careless responding (CR) when completing online surveys. While CR has the potential to negatively impact model fit, this issue has not been systematically explored. To better understand the CR-fit linkage, two studies were conducted. In study 1, participants' response behaviors were experimentally shaped and used to embed aspects of a comprehensive simulation (study 2) with empirically informed data. For this simulation, 144 unique conditions (which varied the sample size, number of items, CR prevalence, CR severity, and CR type), two latent variable models (IRT, CFA), and six model fit indices (χ2, RMSEA, SRMSR [CFA] and M2, RMSEA, SRMSR [IRT]), were examined. The results indicated that CR deteriorates model fit under most circumstances, though these effects are nuanced, variable, and contingent on many factors. These findings can be leveraged by researchers and practitioners to improve survey methods, obtain more accurate survey results, develop more precise theories, and enable more justifiable data-driven decisions.


Subject(s)
Surveys and Questionnaires , Humans , Factor Analysis, Statistical , Computer Simulation , Psychometrics/methods , Reproducibility of Results
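The mechanism the abstract describes can be illustrated with a small numpy-only simulation. This is not the study's actual code: the sample size, loading of 0.7, noise level, and 10% careless-responding prevalence are all illustrative values chosen here. Attentive responses are generated from a single-factor model (the classic CFA setup), then a fraction of rows is replaced with random responses, which attenuates the inter-item correlations that fit indices like RMSEA and SRMSR are sensitive to.

```python
import numpy as np

rng = np.random.default_rng(0)

n, k = 500, 12          # sample size, number of items (illustrative)
cr_prevalence = 0.10    # fraction of careless respondents (illustrative)

# Attentive responses: one latent factor drives all items
theta = rng.normal(size=(n, 1))
loadings = np.full((1, k), 0.7)
data = theta @ loadings + rng.normal(scale=0.5, size=(n, k))

# Careless respondents: replace their rows with random responses
n_cr = int(n * cr_prevalence)
contaminated = data.copy()
contaminated[:n_cr] = rng.uniform(data.min(), data.max(), size=(n_cr, k))

def mean_offdiag_corr(x):
    """Average inter-item correlation (off-diagonal of the correlation matrix)."""
    r = np.corrcoef(x, rowvar=False)
    return r[np.triu_indices(k, 1)].mean()

# Careless rows dilute the factor structure, one route by which
# CR degrades model-data fit in both CFA and IRT models
print(mean_offdiag_corr(data))          # clean data
print(mean_offdiag_corr(contaminated))  # with careless responding mixed in
```

Fitting an actual CFA or IRT model to the two datasets and comparing χ2/M2, RMSEA, and SRMSR would show the fit degradation directly; the correlation drop above is the upstream cause.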
2.
Front Psychol ; 13: 784471, 2022.
Article in English | MEDLINE | ID: mdl-35282217

ABSTRACT

Procrastination is a chronic and widespread problem; however, emerging work raises questions regarding the strength of the relationship between self-reported procrastination and behavioral measures of task engagement. This study assessed the internal reliability, concurrent validity, predictive validity, and psychometric properties of 10 self-report procrastination assessments using responses collected from 242 students. Participants' scores on each self-report instrument were compared to each other using correlations and cluster analysis. Lasso estimation was used to test the self-report scores' ability to predict two behavioral measures of delay (days to study completion; pacing style). The self-report instruments exhibited strong internal reliability and moderate levels of concurrent validity. Some self-report measures were predictive of days to study completion. No self-report measures were predictive of deadline action pacing, the pacing style most commonly associated with procrastination. Many of the self-report measures of procrastination exhibited poor fit. These results suggest that researchers should exercise caution in selecting self-report measures and that further study is necessary to determine the factors that drive misalignment between self-reports and behavioral measures of delay.
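The lasso screening step described above can be sketched with a minimal coordinate-descent implementation. This is an illustration, not the study's analysis: the data here are synthetic, with ten hypothetical self-report scale scores of which only two truly relate to the behavioral outcome, standing in for the real predictors of days to study completion. The lasso penalty shrinks the coefficients of unrelated scales to zero, which is how it identifies which self-report measures carry predictive signal.

```python
import numpy as np

def soft_threshold(z, t):
    """Lasso soft-thresholding operator."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def lasso_cd(X, y, alpha, n_iter=200):
    """Coordinate-descent lasso for (1/2n)||y - Xb||^2 + alpha*||b||_1.
    Assumes the columns of X are roughly standardized."""
    n, p = X.shape
    beta = np.zeros(p)
    col_sq = (X ** 2).sum(axis=0)
    for _ in range(n_iter):
        for j in range(p):
            # Partial residual excluding feature j
            resid = y - X @ beta + X[:, j] * beta[j]
            beta[j] = soft_threshold(X[:, j] @ resid, n * alpha) / col_sq[j]
    return beta

# Hypothetical data: 10 self-report scale scores; only scales 0 and 2
# actually relate to the behavioral outcome (e.g., days to completion)
rng = np.random.default_rng(1)
n, p = 300, 10
X = rng.normal(size=(n, p))
y = 2.0 * X[:, 0] - 1.5 * X[:, 2] + rng.normal(size=n)

# Unrelated scales get coefficients of (near) zero; the two informative
# scales survive with shrunken but clearly nonzero coefficients
beta = lasso_cd(X, y, alpha=0.1)
```

A null result like the one reported for deadline action pacing corresponds to the lasso zeroing out every self-report predictor for that outcome.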

3.
Adv Physiol Educ ; 45(3): 626-633, 2021 Sep 01.
Article in English | MEDLINE | ID: mdl-34379488

ABSTRACT

The National Institute of General Medical Sciences (NIGMS) mandates that its Centers of Biomedical Research Excellence (COBRE) and Institutional Development Award Networks of Biomedical Research Excellence (INBRE) institute formal mentoring programs to promote the core program objective of junior investigator development. Despite this NIGMS requirement, and the many career-related benefits associated with mentoring, few tools exist for purposes of rigorously evaluating COBRE and INBRE mentoring programs. The purpose of this project was to develop a mentoring assessment tool to aid in the evaluation of COBRE and INBRE mentoring programs. In study 1, a list of items comprising the tool was created via a multiphase item generation process based on input received from subject matter experts within the Cognitive and Neurobiological Approaches to Plasticity Center. In study 2, feedback about this tool was solicited from 78 grant directors, mentees, and mentors representing 21 unique COBRE programs and 8 unique INBRE programs from across the United States. The results provide initial evidence that this tool possesses suitable psychometric properties, is a flexible instrument with many potential uses, and represents a valuable resource for helping evaluate COBRE and INBRE mentoring programs. Having a tool for evaluating mentoring can help promote the grant success and career development of junior investigators in COBRE and INBRE programs and help program directors develop more sustainable research centers.


Subject(s)
Biomedical Research , Mentoring , Humans , Mentors , National Institute of General Medical Sciences (U.S.) , Program Evaluation , Research Personnel , United States