Results 1 - 4 of 4
1.
Assessment ; 10731911231213849, 2023 Dec 31.
Article in English | MEDLINE | ID: mdl-38160401

ABSTRACT

Traditional validation processes for psychological surveys tend to focus on analyzing item responses rather than the cognitive processes participants use to generate those responses. When screening for invalid responses, researchers typically focus on participants who manipulate their answers for personal gain or who respond carelessly. In this paper, we introduce a new invalid response process, discordant responding, which arises when participants disagree with the intended use of the survey, and we discuss similarities and differences between this response style and protective responding. Results show that nearly all participants reflect on the intended uses of an assessment when responding to items, and may decline to respond, or may modify their responses, if they are uncomfortable with how the results will be used. Incidentally, we also find that participants may misread survey instructions when the instructions are not interactive. We introduce a short screener to detect invalid responses, the discordant response identifiers (DRI), which provides researchers with a simple validity tool to use when validating surveys. Finally, we provide recommendations for designing surveys that reduce this kind of response manipulation in the first place.

2.
Multivariate Behav Res ; 58(1): 189-194, 2023.
Article in English | MEDLINE | ID: mdl-36787513

ABSTRACT

To evaluate the fit of a confirmatory factor analysis model, researchers often rely on fit indices such as SRMR, RMSEA, and CFI. These indices are frequently compared to benchmark values of .08, .06, and .96, respectively, established by Hu and Bentler (Cutoff criteria for fit indexes in covariance structure analysis: Conventional criteria versus new alternatives. Structural Equation Modeling, 6(1), 1-55). However, these indices are affected by model characteristics, and their sensitivity to misfit can change across models. Decisions about model fit can therefore be improved by tailoring cutoffs to each model. The methodological literature has proposed methods for deriving customized cutoffs, although doing so can require knowledge of linear algebra and Monte Carlo simulation. Because many empirical researchers lack training in these technical areas, empirical studies largely continue to rely on fixed benchmarks, even though these are known to generalize poorly and can be poor arbiters of fit. To address this, this paper introduces the R package dynamic to make computation of dynamic fit index cutoffs (which are tailored to the user's model) more accessible to empirical researchers. dynamic heavily automates this process: it requires only a lavaan object, automatically conducts several custom Monte Carlo simulations, and outputs fit index cutoffs that are sensitive to misfit given the user's model characteristics.
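The benchmarks above are applied to scalar indices computed from the residuals between a sample covariance matrix and the model-implied covariance matrix. As a minimal sketch (in Python rather than R, with small hypothetical matrices), SRMR is simply the root mean square of the standardized residuals over the unique elements of the two matrices:

```python
import numpy as np

def srmr(S, Sigma):
    """Standardized root mean square residual between a sample covariance
    matrix S and a model-implied covariance matrix Sigma (both p x p)."""
    d = np.sqrt(np.outer(np.diag(S), np.diag(S)))  # standardize by sample SDs
    resid = (S - Sigma) / d
    i, j = np.tril_indices(S.shape[0])             # unique elements, incl. diagonal
    return np.sqrt(np.mean(resid[i, j] ** 2))

# Hypothetical 2-item example: a perfect fit gives SRMR = 0
S = np.array([[1.0, 0.5], [0.5, 1.0]])
print(srmr(S, S))                # 0.0
Sigma = np.array([[1.0, 0.4], [0.4, 1.0]])
print(round(srmr(S, Sigma), 4))  # sqrt(0.01 / 3) ~ 0.0577
```

The value would then be compared against a benchmark such as .08, which is exactly the step the tailored-cutoff approach replaces.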


Subjects
Statistical Models , Computer Simulation , Latent Class Analysis , Factor Analysis , Monte Carlo Method
3.
Psychol Methods ; 28(1): 61-88, 2023 Feb.
Article in English | MEDLINE | ID: mdl-34694832

ABSTRACT

Model fit assessment is a central component of evaluating confirmatory factor analysis models and the validity of psychological assessments. Fit indices remain popular, and researchers often judge fit with fixed cutoffs derived by Hu and Bentler (1999). Despite their overwhelming popularity, methodological studies have cautioned against fixed cutoffs, noting that the meaning of fit indices varies with a complex interaction of model characteristics such as factor reliability, number of items, and number of factors. Criticism of fixed cutoffs stems primarily from the fact that they were derived from one specific confirmatory factor analysis model and lack generalizability. To address this, we propose a simulation-based method, dynamic fit index cutoffs, in which the derivation of cutoffs is adaptively tailored to the specific model and data characteristics being evaluated. Unlike previously proposed simulation-based techniques, our method removes existing barriers to implementation by providing an open-source, Web-based Shiny software application that automates the entire process, so that users need neither write any software code nor be knowledgeable about the foundations of Monte Carlo simulation. Additionally, we extend fit index cutoff derivations to include sets of cutoffs for multiple levels of misspecification. In doing so, fit indices can more closely resemble their originally intended purpose as effect sizes quantifying misfit, rather than improperly functioning as ad hoc hypothesis tests. We also provide an approach specifically designed for the nuances of one-factor models, which have received surprisingly little attention in the literature despite frequent substantive interest in unidimensionality. (PsycInfo Database Record (c) 2023 APA, all rights reserved).
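The simulation logic behind such tailored cutoffs can be sketched compactly. The Python sketch below is not the authors' Shiny/R implementation; the population loadings (.7), sample size, replication count, and the iterated principal-axis estimator are all illustrative assumptions. It simulates data from a correctly specified one-factor model, refits the model to each replication, and takes the 95th percentile of the resulting SRMR distribution as a cutoff tailored to that design:

```python
import numpy as np

def srmr(S, Sigma):
    # root mean square of standardized residuals over unique matrix elements
    d = np.sqrt(np.outer(np.diag(S), np.diag(S)))
    i, j = np.tril_indices(S.shape[0])
    return np.sqrt(np.mean((((S - Sigma) / d)[i, j]) ** 2))

def fit_one_factor(S, iters=100):
    # iterated principal-axis factoring: a simple stand-in for an ML/ULS fit
    lam = np.sqrt(0.5 * np.diag(S))              # start communalities at half variance
    for _ in range(iters):
        Sc = S.copy()
        np.fill_diagonal(Sc, lam ** 2)           # replace variances with communalities
        w, V = np.linalg.eigh(Sc)
        lam = np.sqrt(max(w[-1], 0.0)) * np.abs(V[:, -1])
    Sigma = np.outer(lam, lam)
    np.fill_diagonal(Sigma, np.diag(S))          # free uniquenesses absorb the diagonal
    return Sigma

rng = np.random.default_rng(1)
p, n, reps = 6, 200, 300
lam = np.full(p, 0.7)                            # assumed population loadings
Sigma_pop = np.outer(lam, lam)
np.fill_diagonal(Sigma_pop, 1.0)

vals = []
for _ in range(reps):
    X = rng.multivariate_normal(np.zeros(p), Sigma_pop, size=n)
    S = np.corrcoef(X, rowvar=False)
    vals.append(srmr(S, fit_one_factor(S)))

cutoff = np.quantile(vals, 0.95)                 # tailored SRMR cutoff for this design
print(round(float(cutoff), 3))
```

The resulting cutoff reflects how much SRMR to expect from sampling error alone under this particular model and sample size, rather than borrowing the fixed .08 benchmark derived from a different model.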


Subjects
Software , Humans , Reproducibility of Results , Computer Simulation , Factor Analysis , Monte Carlo Method
4.
Behav Res Methods ; 55(3): 1157-1174, 2023 Apr.
Article in English | MEDLINE | ID: mdl-35585278

ABSTRACT

Assessing whether a multiple-item scale can be represented with a one-factor model is a frequent interest in behavioral research. Often, this is done in a factor analysis framework with approximate fit indices like RMSEA, CFI, or SRMR. These fit indices are continuous measures, so which values indicate acceptable fit is a matter of interpretation. Cutoffs suggested by Hu and Bentler (1999) are a common guideline in empirical research. However, these cutoffs were derived with the intent of detecting omitted cross-loadings or omitted factor covariances in multifactor models. Because these types of misspecification cannot exist in one-factor models, the appropriateness of applying these guidelines to one-factor models is uncertain. This paper uses a simulation study to address whether traditional fit index cutoffs are sensitive to the types of misspecification common in one-factor models. The results showed that traditional cutoffs have very poor sensitivity to misspecification in one-factor models and therefore generalize poorly to one-factor contexts. As an alternative, we investigate the accuracy and stability of the recently introduced dynamic fit index cutoff approach for one-factor models. Simulation results showed that dynamic fit index cutoffs classify correct and misspecified one-factor models with excellent accuracy, making them a promising approach for more accurate assessment of model fit in one-factor contexts.
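The sensitivity problem can be illustrated at the population level, with no sampling error at all. In the sketch below (Python; the loadings of .7, the omitted residual covariance of .30 between the last two items, and the iterated principal-axis estimator are illustrative assumptions, not the paper's design), a one-factor model is fit to a population matrix that genuinely violates unidimensionality, yet the resulting SRMR can still fall below the traditional .08 benchmark:

```python
import numpy as np

def srmr(S, Sigma):
    # root mean square of standardized residuals over unique matrix elements
    d = np.sqrt(np.outer(np.diag(S), np.diag(S)))
    i, j = np.tril_indices(S.shape[0])
    return np.sqrt(np.mean((((S - Sigma) / d)[i, j]) ** 2))

def fit_one_factor(S, iters=500):
    # iterated principal-axis factoring as a simple one-factor estimator
    lam = np.sqrt(0.5 * np.diag(S))
    for _ in range(iters):
        Sc = S.copy()
        np.fill_diagonal(Sc, lam ** 2)
        w, V = np.linalg.eigh(Sc)
        lam = np.sqrt(max(w[-1], 0.0)) * np.abs(V[:, -1])
    Sigma = np.outer(lam, lam)
    np.fill_diagonal(Sigma, np.diag(S))
    return Sigma

# Population: one factor with loadings .7, plus an omitted residual
# covariance of .30 between the last two items -- a misspecification that
# is not an omitted cross-loading or factor covariance.
p = 6
lam = np.full(p, 0.7)
Sigma_pop = np.outer(lam, lam)
np.fill_diagonal(Sigma_pop, 1.0)
Sigma_pop[4, 5] = Sigma_pop[5, 4] = 0.49 + 0.30

value = srmr(Sigma_pop, fit_one_factor(Sigma_pop))
print(round(float(value), 3))  # nonzero misfit, yet below the .08 benchmark
```

Because the fitted loadings partially absorb the omitted covariance and spread the remaining misfit thinly over many residuals, the population SRMR understates a real violation of unidimensionality, which is the insensitivity the simulation study documents.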


Subjects
Behavioral Research , Statistical Models , Humans , Computer Simulation , Factor Analysis , Empirical Research