Results 1 - 8 of 8
1.
Arch Clin Neuropsychol; 31(3): 231-45, 2016 May.
Article in English | MEDLINE | ID: mdl-26795609

ABSTRACT

The Victoria Symptom Validity Test (VSVT) is one of the most accurate performance validity tests. Previous research has recommended several cutoffs for classifying performance invalidity on the VSVT, but only one of those studies used a known-groups design, and none investigated the cutoffs in an exclusively mild traumatic brain injury (mTBI) medico-legal sample. The current study used a known-groups design to validate VSVT cutoffs among mTBI litigants and explored the best approach for using the test's multiple recommended cutoffs. Cutoffs of <18 Hard items correct, <41 Total items correct, an Easy minus Hard items correct difference of >6, and <5 items correct on any block yielded the strongest classification accuracy. Using multiple cutoffs in conjunction reduced classification accuracy. Given convergence across studies, a cutoff of <18 Hard items correct is the most appropriate for use with mTBI litigants.
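
A minimal sketch of how these cutoffs would be applied in practice (Python; the function name, example scores, and per-block scoring unit are illustrative assumptions, while the cutoff values come from the abstract). The flags are deliberately kept separate because requiring multiple cutoffs to fail jointly reduced accuracy in the study:

```python
def vsvt_flags(easy_correct, hard_correct, block_scores):
    """Apply each recommended VSVT cutoff independently.

    easy_correct / hard_correct: items correct on the 24 Easy and
    24 Hard items; block_scores: items correct per block.
    Returns one boolean flag per cutoff.
    """
    total = easy_correct + hard_correct
    return {
        "hard_lt_18": hard_correct < 18,
        "total_lt_41": total < 41,
        "easy_minus_hard_gt_6": (easy_correct - hard_correct) > 6,
        "any_block_lt_5": any(s < 5 for s in block_scores),
    }

# Hypothetical protocol: weak Hard-item performance trips the
# best-supported cutoff (<18 Hard items correct).
print(vsvt_flags(easy_correct=23, hard_correct=15, block_scores=[16, 12, 10]))
```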


Subject(s)
Brain Injuries, Traumatic/complications; Brain Injuries, Traumatic/diagnosis; Malingering/diagnosis; Neuropsychological Tests; Adult; Chi-Square Distribution; Female; Humans; Male; Middle Aged; Psychology, Clinical; Reproducibility of Results; Sensitivity and Specificity
2.
Appl Neuropsychol Adult; 22(6): 399-406, 2015.
Article in English | MEDLINE | ID: mdl-25785544

ABSTRACT

Davis, Axelrod, McHugh, Hanks, and Millis (2013) documented that in a battery of 25 tests, producing 15, 10, and 5 abnormal scores at 1, 1.5, and 2 standard deviations below the norm-referenced mean, respectively, and an overall test battery mean (OTBM) of T ≤ 38 accurately identifies performance invalidity. However, the generalizability of these findings to other samples and test batteries remains unclear. This study evaluated the use of abnormal scores and the OTBM as performance validity measures in a different sample administered a 25-test battery that minimally overlapped with Davis et al.'s. An archival analysis of 48 examinees with mild traumatic brain injury seen for medico-legal purposes was conducted. Producing 18 or more, 7 or more, and 5 or more abnormal scores at 1, 1.5, and 2 standard deviations below the norm-referenced mean, respectively, and an OTBM of T ≤ 40 most accurately classified examinees; however, applying Davis et al.'s proposed cutoffs to the current sample maintained specificity at or near acceptable levels. Given convergence across studies, producing ≥5 abnormal scores at 2 standard deviations below the norm-referenced mean is the most appropriate cutoff for clinical implementation; for batteries consisting of more or fewer than 25 tests, however, an OTBM of T ≤ 38 is more appropriate.
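
As a rough illustration of the indicators involved (Python; the thresholds of T < 40, 35, and 30 follow from T-scores having mean 50 and SD 10, but the strict-inequality convention and the example scores are assumptions, not details given in the abstract):

```python
from statistics import mean

def validity_indicators(t_scores):
    """Count abnormal scores at 1, 1.5, and 2 SDs below the normative
    mean and compute the overall test battery mean (OTBM). Assumes all
    scores are on the T metric (mean 50, SD 10), so the thresholds are
    T < 40, T < 35, and T < 30."""
    return {
        "abnormal_1sd": sum(t < 40 for t in t_scores),
        "abnormal_1_5sd": sum(t < 35 for t in t_scores),
        "abnormal_2sd": sum(t < 30 for t in t_scores),
        "otbm": mean(t_scores),
    }

# Hypothetical 25-test battery: the convergent cutoff is >= 5 abnormal
# scores at 2 SDs below the mean; OTBM <= 38 applies for other battery sizes.
ind = validity_indicators([28, 29, 27, 26, 25] + [44] * 20)
invalid = ind["abnormal_2sd"] >= 5 or ind["otbm"] <= 38
```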


Subject(s)
Brain Injuries/complications; Cognition Disorders/diagnosis; Cognition Disorders/etiology; Neuropsychological Tests; Adult; Data Curation/statistics & numerical data; Disability Evaluation; Female; Humans; Male; Middle Aged; Psychometrics; Reference Values; Reproducibility of Results; Sensitivity and Specificity
3.
Appl Neuropsychol Adult; 22(5): 335-47, 2015.
Article in English | MEDLINE | ID: mdl-25584812

ABSTRACT

Several studies have documented improvements in the classification accuracy of performance validity tests (PVTs) when they are combined to form aggregated models. Fewer studies have evaluated the impact of aggregating additional PVTs and changing the classification threshold within these models. A recent Monte Carlo simulation demonstrated that to maintain a false-positive rate (FPR) of ≤.10, only 1, 4, 8, 10, and 15 PVTs should be analyzed at classification thresholds of failing at least 1, at least 2, at least 3, at least 4, and at least 5 PVTs, respectively. The current study sought to evaluate these findings with embedded PVTs in a sample of real-life litigants and to highlight a potential danger in analytic flexibility with embedded PVTs. Results demonstrated that to maintain an FPR of ≤.10, only 3, 7, 10, 14, and 15 PVTs should be analyzed at classification thresholds of failing at least 1, at least 2, at least 3, at least 4, and at least 5 PVTs, respectively. Analyzing more than these numbers of PVTs resulted in a dramatic increase in the FPR. In addition, in the most extreme case, flexibility in analyzing and reporting embedded PVTs increased the FPR by 67%. Given these findings, a more objective approach to analyzing and reporting embedded PVTs should be introduced.
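
The threshold/battery-size trade-off can be sketched with an idealized binomial model (Python; it assumes independent PVT failures and a uniform per-test false-positive rate of .10, so it illustrates the underlying logic rather than reproducing the exact limits from the cited simulation or this study, since real embedded PVTs are correlated and vary in their per-test FPRs):

```python
from math import comb

def fpr_at_least_k(n, k, p=0.10):
    """P(a valid examinee fails >= k of n PVTs), assuming each PVT has
    per-test false-positive rate p and that failures are independent."""
    return sum(comb(n, j) * p**j * (1 - p)**(n - j) for j in range(k, n + 1))

# For each failure threshold k, find the largest battery size n that
# keeps the aggregate false-positive rate at or below .10.
for k in range(1, 6):
    n = k
    while n < 40 and fpr_at_least_k(n + 1, k) <= 0.10:
        n += 1
    print(f"fail >= {k} of up to {n} PVTs keeps FPR <= .10")
```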


Subject(s)
Data Interpretation, Statistical; Malingering/diagnosis; Neuropsychological Tests/statistics & numerical data; Task Performance and Analysis; Adult; Female; Humans; Male; Middle Aged; Monte Carlo Method
4.
Appl Neuropsychol Adult; 22(4): 271-81, 2015.
Article in English | MEDLINE | ID: mdl-25402434

ABSTRACT

Embedded performance validity tests (PVTs) have been criticized for their poor specificity and sensitivity. Aggregated models of embedded PVTs have been proposed to improve their classification accuracy; however, the limits of aggregation-based improvement have yet to be explored. The current study evaluated the classification accuracy of 3 types of models of embedded PVTs in the Halstead-Reitan Neuropsychological Battery for Adults (HRNB): a single-, a pairwise-, and a triple-failure model. This study also evaluated the impact of aggregating between 1 and 6 embedded PVTs in each of these models. Classification accuracy was maximized by analyzing only the 2, 4, and 6 most discriminating embedded PVTs in the single-, pairwise-, and triple-failure models, respectively. Comparisons across these models indicated that the single-failure model including only the two most discriminating embedded PVTs performed best; however, its classification accuracy was only minimally better than that of analyzing Reliable Digit Span alone. These results suggest that aggregation of embedded PVTs from the HRNB does not substantially improve their classification accuracy and that the benefits of aggregating PVTs may emerge only when the PVTs entered into the aggregated models have sufficient classification accuracy on their own.
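
A minimal sketch of the three model types (Python; the failure indicators and their ordering are hypothetical, while the thresholds and PVT counts follow the abstract):

```python
def model_flag(failures, threshold):
    """Aggregated embedded-PVT model: flag performance invalidity when
    the number of failed PVTs reaches the model's threshold (1 =
    single-failure, 2 = pairwise-failure, 3 = triple-failure)."""
    return sum(failures) >= threshold

# Hypothetical failure indicators for embedded HRNB PVTs, ordered from
# most to least discriminating (values are illustrative only).
ranked = [True, False, False, True, False, False]

single   = model_flag(ranked[:2], 1)  # 2 best PVTs, fail >= 1
pairwise = model_flag(ranked[:4], 2)  # 4 best PVTs, fail >= 2
triple   = model_flag(ranked[:6], 3)  # 6 best PVTs, fail >= 3
```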


Subject(s)
Concept Formation/physiology; Models, Psychological; Neuropsychological Tests; Adult; Female; Humans; Logistic Models; Male; Middle Aged; Sensitivity and Specificity
5.
Arch Clin Neuropsychol; 29(5): 415-21, 2014 Aug.
Article in English | MEDLINE | ID: mdl-25034265

ABSTRACT

Neuropsychological research frequently uses non-clinical undergraduate participants to evaluate neuropsychological tests. However, a recent study by An and colleagues (2012, Archives of Clinical Neuropsychology, 27, 849-857) called into question the extent to which these participants' performance on neuropsychological tests can be interpreted as valid. That study found that in a sample of 36 participants, 55.6% exhibited performance invalidity at an initial session and 30.8% exhibited performance invalidity at a follow-up session. The current study attempted to replicate these findings in a larger, more representative sample using a more rigorous methodology. Archival data from 133 non-clinical undergraduate research participants were analyzed. Participants were classified as performance invalid if they failed any one performance validity test (PVT). In the current sample, only 2.26% of participants exhibited performance invalidity. Thus, concerns regarding insufficient effort and performance invalidity among undergraduate research participants appear to be overstated.
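
The classification rule reduces to a one-line check per participant; a small sketch (Python; the matrix values and function name are hypothetical):

```python
def invalidity_base_rate(failure_matrix):
    """failure_matrix: one row per participant, one bool per PVT
    administered. Following the study's rule, a participant is
    classified performance invalid on failing any single PVT.
    Returns the percentage of participants flagged."""
    flagged = [any(row) for row in failure_matrix]
    return 100.0 * sum(flagged) / len(flagged)

# Three hypothetical participants, two PVTs each: one failure -> 33.3%.
print(invalidity_base_rate([[False, False], [True, False], [False, False]]))
```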


Subject(s)
Malingering/diagnosis; Memory; Neuropsychological Tests; Female; Humans; Male
6.
Appl Neuropsychol Adult; 21(1): 9-13, 2014.
Article in English | MEDLINE | ID: mdl-24826490

ABSTRACT

The Halstead Category Test is a popular measure of abstraction, concept formation, and logical analysis skills. Because of its bulky apparatus, however, the standard Category Test is cumbersome to administer. For this reason, a number of computer versions of the Category Test have been developed to facilitate its administration. The current study evaluated the equivalence of a new computer version to the standard Category Test in a sample of undergraduate students. Analyses revealed that the two versions did not differ significantly on subtest error scores, total error scores, or Neuropsychological Deficit Scale scores. These results support the equivalence of the new computer version and the standard version of the Category Test.
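
A sketch of the kind of comparison described (Python with SciPy; the scores are illustrative, and an independent-groups design is assumed since the abstract does not state whether the comparison was within-subjects):

```python
from scipy import stats

# Hypothetical total error scores for two independent groups, one
# administered the standard apparatus and one the computer version.
standard_errors = [42, 55, 38, 61, 47, 50, 44, 58]
computer_errors = [40, 57, 41, 59, 45, 52, 43, 60]

t, p = stats.ttest_ind(standard_errors, computer_errors)
print(f"t = {t:.2f}, p = {p:.3f}")  # a nonsignificant p is consistent
                                    # with equivalence of the versions
```

Strictly, a nonsignificant difference test is weaker evidence than a formal equivalence procedure such as TOST, but it mirrors the analysis the abstract reports.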


Subject(s)
Cognition/physiology; Concept Formation/physiology; Electronic Data Processing; Logic; Neuropsychological Tests; Weights and Measures; Adolescent; Female; Humans; Male; Reproducibility of Results; Young Adult
7.
Appl Neuropsychol Adult; 20(4): 243-248, 2013.
Article in English | MEDLINE | ID: mdl-23530574

ABSTRACT

Accurate determination of performance validity is paramount in any neuropsychological assessment. Numerous freestanding symptom validity tests, like the Test of Memory Malingering (TOMM), have been developed to assist in this process; however, research and clinical experience suggest that they do not all function with the same classification accuracy. In an effort to increase the TOMM's ability to accurately classify performance validity, recent research has investigated the use of nonstandard cutoff scores. The purpose of this study was to validate two nonstandard cutoff scores for the TOMM (<49 on Trial 2 or the Retention Trial, or ≤39 on Trial 1) in a medico-legal sample of mild traumatic brain injury litigants. Both descriptive and inferential analyses indicated that the cutoff of <49 on Trial 2 or the Retention Trial was the most sensitive indicator of performance invalidity compared with both the standard TOMM criteria and the cutoff of ≤39. These findings support the use of nonstandard cutoffs to increase the TOMM's classification accuracy.
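
A minimal sketch of the two cutoffs under evaluation (Python; the example scores and function name are hypothetical, while the out-of-50 scoring of TOMM trials is standard):

```python
def tomm_flags(trial1, trial2, retention):
    """The two nonstandard TOMM cutoffs examined in the study (scores
    are items correct out of 50). The flags are kept separate because
    the study compared the cutoffs' accuracy rather than combining them."""
    return {
        "t2_or_retention_lt_49": trial2 < 49 or retention < 49,
        "t1_le_39": trial1 <= 39,
    }

# Hypothetical protocol: passes the Trial 1 cutoff but trips the more
# sensitive <49 Trial 2 / Retention Trial cutoff.
print(tomm_flags(trial1=43, trial2=47, retention=48))
```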

8.
Arch Clin Neuropsychol; 28(3): 213-21, 2013 May.
Article in English | MEDLINE | ID: mdl-23507448

ABSTRACT

Conation has been defined as the ability to focus and maintain intellectual energy over time. Prior research has shown that conation contributes to the magnitude of differences in test scores between brain-damaged and non-brain-damaged examinees. The purpose of the current investigation was to determine whether conation might similarly account for differences in test scores between performance-valid and performance-invalid examinees. An archival analysis was therefore carried out on 52 examinees administered the Halstead-Reitan Neuropsychological Battery (HRNB) and several performance validity tests in a medico-legal context. Analyses revealed that conation had no impact on the magnitude of test score differences between groups and that performance-invalid examinees scored worse than performance-valid examinees on all but one test of the HRNB. These results support the idea that identified performance invalidity calls into question the reliability and validity of all test score interpretations in an evaluation, even those with less conative load.


Subject(s)
Brain Injuries/psychology; Malingering/psychology; Neuropsychological Tests; Volition; Adult; Female; Humans; Jurisprudence; Male; Middle Aged; Psychomotor Performance