Results 1 - 6 of 6
1.
Assessment ; 31(2): 363-376, 2024 Mar.
Article in English | MEDLINE | ID: mdl-37012706

ABSTRACT

OBJECTIVE: To replicate a seven-factor model previously reported for the Delis-Kaplan Executive Function System (D-KEFS). METHOD: This study used the D-KEFS standardization sample including 1,750 non-clinical participants. Several seven-factor models previously reported for the D-KEFS were re-evaluated using confirmatory factor analysis (CFA). Previously published bi-factor models were also tested. These models were compared with a three-factor a priori model based on Cattell-Horn-Carroll (CHC) theory. Measurement invariance was examined across three age cohorts. RESULTS: All previously reported models failed to converge when tested with CFA. None of the bi-factor models converged after large numbers of iterations, suggesting that bi-factor models are ill-suited to represent the D-KEFS scores as reported in the test manual. Although poor fit was initially observed for the three-factor CHC model, inspection of modification indices showed potential for improvement by including method effects via correlated residuals for scores derived from similar tests. The final CHC model showed good to excellent fit and strong metric measurement invariance across the three age cohorts with minor exceptions for a subset of Fluency parameters. CONCLUSIONS: CHC theory extends to the D-KEFS, supporting findings from previous studies that executive functions can be integrated into CHC theory.
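As a concrete illustration of the modeling approach this abstract describes, the sketch below fits a three-factor CFA with one correlated residual (a method effect between two scores assumed to derive from the same test) using the third-party semopy package's lavaan-style syntax. The factor and indicator names are hypothetical stand-ins, not the actual D-KEFS scores or the published model.

```python
# Minimal CFA sketch, assuming a semopy-compatible environment.
# Indicator names and the factor structure are illustrative only.
import pandas as pd
import semopy

# Three CHC-style factors; the `~~` line adds a correlated residual
# ("method effect") between two scores assumed to share a source test.
desc = """
Gf  =~ sort_score + twenty_questions + word_context
Gs  =~ trails_score + color_word + fluency_score
Glr =~ fluency_switch + category_fluency
fluency_score ~~ fluency_switch
"""

data = pd.read_csv("dkefs_scores.csv")  # hypothetical wide-format file

model = semopy.Model(desc)
model.fit(data)
print(semopy.calc_stats(model).T)  # chi-square, CFI, RMSEA, AIC, etc.
```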


Subject(s)
Executive Function , Humans , Factor Analysis, Statistical , Neuropsychological Tests
2.
Neuropsychol Rev ; 33(3): 643-652, 2023 Sep.
Article in English | MEDLINE | ID: mdl-37594692

ABSTRACT

Dr. Leonhard presents a comprehensive and insightful critique of the existing malingering research literature and its implications for neuropsychological practice. The statistical critique focuses primarily on the crucial issue of diagnostic inference when multiple tests are involved. While Leonhard effectively addresses certain misunderstandings, some misconceptions in the literature are overlooked and a few new confusions are introduced. To provide a balanced commentary, this evaluation considers both Leonhard's critiques and the malingering research literature. Furthermore, a concise introduction to Bayesian diagnostic inference using the results of multiple tests is provided. Misunderstandings regarding Bayesian inference are clarified, and a valid approach to Bayesian inference is elucidated. The assumptions underlying the simple Bayes model are discussed in detail, and it is demonstrated that the chained likelihood ratios method is an inappropriate application of this model for one reason identified by Leonhard and another that has not previously been recognized. Leonhard's conclusions regarding the primary dependence of incremental validity on unconditional correlations and the alleged mathematical incorrectness of the simple Bayes model are refuted. Finally, potential directions for future research and practice in this field are explored.
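The "chained likelihood ratios" idea at issue can be made concrete with a short sketch: under the simple Bayes model, posterior odds equal prior odds multiplied by each test's likelihood ratio, which is valid only if the tests are conditionally independent given the condition being diagnosed. The function and numbers below are illustrative assumptions, not values from the commentary.

```python
# Hedged sketch of chained likelihood-ratio updating under the simple
# Bayes model. Multiplying LRs like this is only valid if the tests are
# conditionally independent given the condition -- the assumption whose
# violation is at issue in the commentary above.

def posterior_probability(prior: float, likelihood_ratios: list[float]) -> float:
    """Convert a prior probability to a posterior via chained LRs."""
    odds = prior / (1.0 - prior)
    for lr in likelihood_ratios:
        odds *= lr
    return odds / (1.0 + odds)

# Illustrative numbers only: a 40% base rate and three positive
# validity tests with assumed LR+ values of 3, 2.5, and 4.
print(posterior_probability(0.40, [3.0, 2.5, 4.0]))  # ~0.952
```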

3.
Appl Psychol Meas ; 43(8): 579-596, 2019 Nov.
Article in English | MEDLINE | ID: mdl-31551637

ABSTRACT

Criterion-related validation of diagnostic test scores is complicated by the fact that the construct of interest cannot be observed directly. The standard method, Known Group Validation, substitutes an infallible reference test for the construct, but infallible reference tests are rare. In contrast, Mixed Group Validation allows a fallible reference test but has been found to make strong assumptions that are inappropriate for most diagnostic test validation studies. The Neighborhood model, which makes alternative but also strong assumptions, is adapted here for diagnostic test validation. Its statistical properties are evaluated and its assumptions are reviewed in the context of diagnostic test validation. Alternatively, strong assumptions may be avoided by estimating intervals for the validity estimates rather than point estimates. The Method of Bounds is also adapted for diagnostic test validation, and an extension, the Method of Bounds-Test Validation, is introduced here for the first time. All three point-estimate methods were found to make strong assumptions about the conditional relationships between the tests and the construct of interest, and all three lack robustness to assumption violation. The Method of Bounds-Test Validation performed well across a range of plausible simulated datasets where the point-estimate methods failed. The point-estimate methods are recommended in special cases where their assumptions can be justified, while the interval methods are appropriate more generally.
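The contrast between known-group and mixed-group designs rests on a simple identity worth writing out: in a group with base rate p, the expected positive rate on a test is r = p*Se + (1-p)*(1-Sp). The sketch below, with invented numbers, shows how two mixed groups with different assumed base rates let one solve for sensitivity and specificity; this is the basic mixed-group computation, not the Neighborhood or Method of Bounds procedures themselves.

```python
# Sketch of the core mixed-group identity, assuming base rates are
# known for two groups. All numbers are invented for illustration.
import numpy as np

p = np.array([0.70, 0.20])  # assumed base rates in two mixed groups
r = np.array([0.62, 0.27])  # observed positive rates on the test

# r = p*Se - (1-p)*Sp + (1-p)  ->  A @ [Se, Sp] = r - (1-p)
A = np.column_stack([p, -(1 - p)])
se, sp = np.linalg.solve(A, r - (1 - p))
print(f"sensitivity={se:.3f}, specificity={sp:.3f}")
```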

4.
J Exp Psychol Gen ; 145(2): 220-45, 2016 Feb.
Article in English | MEDLINE | ID: mdl-26569128

ABSTRACT

Executive function is an important concept in neuropsychological and cognitive research and is often viewed as central to effective clinical assessment of cognition. However, the construct validity of executive function tests is controversial. The switching, inhibition, and updating model is the most empirically supported and replicated factor model of executive function (Miyake et al., 2000). To evaluate the relation between executive function constructs and non-explicitly executive cognitive constructs, we used confirmatory factor reanalysis guided by the comprehensive Cattell-Horn-Carroll (CHC) model of cognitive abilities. Data from 7 of the best studies supporting the executive function model were reanalyzed, contrasting executive function models with CHC models. Where possible, we examined the effect of specifying executive function factors in addition to the CHC factors. The results suggested that little evidence is available to support updating as a factor separate from general memory factors; that inhibition does not separate from general speed; and that switching is supported as a narrow factor under general speed, but with a more restricted definition than some clinicians and researchers have conceptualized. The replicated executive function factor structure was thus integrated with the larger body of research on individual differences in cognition, as represented by the CHC model.
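A hedged sketch of the model-contrast logic described here: fit a correlated-factors executive function model and a CHC-style alternative to the same indicators, then compare their fit indices. The task names, file name, and both factor structures below are invented placeholders, and the third-party semopy package stands in for whatever software the original studies used.

```python
# Compare two CFA specifications on the same (hypothetical) dataset.
import pandas as pd
import semopy

ef_desc = """
Updating   =~ keep_track + letter_memory + spatial_2back
Inhibition =~ antisaccade + stop_signal + stroop
Shifting   =~ plus_minus + number_letter + local_global
"""

chc_desc = """
Gwm =~ keep_track + letter_memory + spatial_2back
Gs  =~ antisaccade + stop_signal + stroop + plus_minus + number_letter + local_global
"""

data = pd.read_csv("ef_tasks.csv")  # hypothetical task-score dataset

for name, desc in [("EF model", ef_desc), ("CHC model", chc_desc)]:
    model = semopy.Model(desc)
    model.fit(data)
    stats = semopy.calc_stats(model)
    print(name)
    print(stats[["chi2", "CFI", "RMSEA", "AIC"]])  # lower AIC = preferred
```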


Subject(s)
Aptitude/physiology , Executive Function/physiology , Inhibition, Psychological , Models, Psychological , Adult , Humans , Reproducibility of Results , Young Adult
5.
Assessment ; 21(2): 170-80, 2014 Apr.
Article in English | MEDLINE | ID: mdl-23362309

ABSTRACT

Mixed group validation (MGV) is a statistical model for estimating the diagnostic accuracy of tests. Unlike the more common approach to estimating criterion-related validity, known group validation (KGV), MGV does not require a perfect external validity criterion. The present article describes MGV by (a) specifying both the standard error associated with MGV validity estimates and the effect of assumption violation, (b) recommending required sample sizes under various study conditions, (c) evaluating whether assumption violation can be identified, and (d) providing a simulated example of MGV with imperfect base rate estimates. It is concluded that MGV will always have a wider margin of error than KGV, that MGV performs best when the research design approximates a KGV design, that the effect of assumption violation depends on both the severity of the violation and the values of the base rates, and that assumption violation may be detected only in severe cases.
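The simulated-example idea in point (d) can be sketched directly: because MGV solves for sensitivity and specificity from assumed base rates, any error in those base rates propagates into the validity estimates. The Monte Carlo below uses invented parameters and a deliberately mis-specified pair of base rates to illustrate the resulting bias; it is a sketch of the concept, not the article's actual simulation.

```python
# Monte Carlo sketch: bias in MGV estimates when assumed base rates
# are wrong. All parameter values are invented for illustration.
import numpy as np

rng = np.random.default_rng(0)
true_se, true_sp = 0.85, 0.90
true_p = np.array([0.70, 0.20])  # true base rates in the two groups
n = 200                          # participants per group

def mgv_solve(p, r):
    """Solve r = p*Se + (1-p)*(1-Sp) for (Se, Sp) given two groups."""
    A = np.column_stack([p, -(1 - p)])
    return np.linalg.solve(A, r - (1 - p))

estimates = []
for _ in range(2000):
    # Sampling error in the observed positive rates...
    r = rng.binomial(n, true_p * true_se + (1 - true_p) * (1 - true_sp)) / n
    # ...plus systematic error: the assumed base rates are off by 0.10.
    assumed_p = true_p + np.array([-0.10, 0.10])
    estimates.append(mgv_solve(assumed_p, r))

se_hat, sp_hat = np.mean(estimates, axis=0)
print(f"true Se={true_se}, mean estimate={se_hat:.3f}")
print(f"true Sp={true_sp}, mean estimate={sp_hat:.3f}")
```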


Subject(s)
Data Interpretation, Statistical , Models, Statistical , Humans , Sample Size
6.
Psychol Assess ; 25(1): 204-15, 2013 Mar.
Article in English | MEDLINE | ID: mdl-23025461

ABSTRACT

Mixed Group Validation (MGV) is an approach for estimating the diagnostic accuracy of tests and a promising alternative to the more commonly used Known Groups Validation (KGV) approach. The advantage of MGV is that it does not require a perfect external validity criterion or gold standard. However, the research designs for which MGV is most appropriate have not been thoroughly explored. We give a brief description of the ideal research design for minimizing error in MGV studies, test whether the MGV assumptions hold with clinical data, evaluate whether there is evidence of assumption violation among published MGV studies, give a practical description of the MGV assumptions, and describe an example of an optimal use of MGV. Ultimately, we conclude that MGV is not generally superior to KGV but may be used in cases where the assumptions and standard error have been considered appropriately.


Subject(s)
Data Interpretation, Statistical , Research Design/standards , Humans