1.
J Clin Exp Neuropsychol ; 46(1): 6-15, 2024 02.
Article in English | MEDLINE | ID: mdl-38299800

ABSTRACT

INTRODUCTION: Performance validity test (PVT) failures occur in clinical practice and at higher rates with external incentives. However, little PVT research has been applied to the Long COVID population. This study aims to address this gap. METHODS: Participants were 247 consecutive individuals with Long COVID seen for neuropsychological evaluation who completed 4 PVTs and a standardized neuropsychological battery. The sample was 84.2% White and 66% female. The mean age was 51.16 years and mean education was 14.75 years. Medical records were searched for external incentives (e.g., disability claims). Three groups were created based on PVT failures (Pass [no failures], Intermediate [1 failure], and Fail [2+ failures]). RESULTS: A total of 8.9% of participants failed 2+ PVTs, 6.4% failed one PVT, and 85% passed all PVTs. Of the full sample, 25.1% were identified as having an external incentive. However, there was a significant difference between the rates of external incentives in the Fail group (54.5%) compared to the Pass (22.1%) and Intermediate (20%) groups. Further, the Fail group had lower cognitive scores and a higher frequency of impaired-range scores, consistent with PVT research in other populations. External incentives were uncorrelated with cognitive performance. CONCLUSIONS: Consistent with other populations, results suggest Long COVID cases are not immune to PVT failure, and external incentives are associated with PVT failure. Results indicated that individuals in the Pass and Intermediate groups showed no evidence of significant cognitive deficits, but the Fail group had significantly poorer cognitive performance. Thus, PVTs should be routinely administered in Long COVID cases and research.


Subject(s)
COVID-19 , Motivation , Neuropsychological Tests , Humans , Female , Male , Middle Aged , Neuropsychological Tests/standards , COVID-19/complications , Motivation/physiology , Adult , Aged , Post-Acute COVID-19 Syndrome , Cognitive Dysfunction/etiology , Cognitive Dysfunction/diagnosis , Cognitive Dysfunction/physiopathology , Cognition/physiology , Reproducibility of Results
2.
J Int Neuropsychol Soc ; 30(4): 410-419, 2024 May.
Article in English | MEDLINE | ID: mdl-38014547

ABSTRACT

OBJECTIVE: Performance validity tests (PVTs) and symptom validity tests (SVTs) are necessary components of neuropsychological testing to identify suboptimal performances and response bias that may impact diagnosis and treatment. The current study examined the clinical and functional characteristics of veterans who failed PVTs and the relationship between PVT and SVT failures. METHOD: Five hundred and sixteen post-9/11 veterans participated in clinical interviews, neuropsychological testing, and several validity measures. RESULTS: Veterans who failed 2+ PVTs performed significantly worse than veterans who failed one PVT in verbal memory (Cohen's d = .60-.69), processing speed (Cohen's d = .68), working memory (Cohen's d = .98), and visual memory (Cohen's d = .88-1.10). Individuals with 2+ PVT failures had greater posttraumatic stress (PTS; β = 0.16; p = .0002), and worse self-reported depression (β = 0.17; p = .0001), anxiety (β = 0.15; p = .0007), sleep (β = 0.10; p = .0233), and functional outcomes (β = 0.15; p = .0009) compared to veterans who passed PVTs. 7.8% of veterans failed the SVT (Validity-10; ≥19 cutoff); multiple PVT failures were significantly associated with Validity-10 failure at the ≥19 and ≥23 cutoffs (p's < .0012). The Validity-10 had moderate correspondence in predicting 2+ PVT failures (AUC = 0.83; 95% CI = 0.76, 0.91). CONCLUSION: PVT failures are associated with psychiatric factors, but not traumatic brain injury (TBI). PVT failures predict SVT failure and vice versa. Standard care should include SVTs and PVTs in all clinical assessments, not just neuropsychological assessments, particularly in clinically complex populations.
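Several abstracts in this list report group differences as Cohen's d. As a minimal sketch of how a pooled-SD Cohen's d is computed (the score lists below are invented for illustration, not data from any study above):

```python
from statistics import mean, variance  # variance() is the sample (n-1) variance

def cohens_d(group_a, group_b):
    """Cohen's d for two independent groups, using the pooled sample SD."""
    na, nb = len(group_a), len(group_b)
    pooled_var = ((na - 1) * variance(group_a)
                  + (nb - 1) * variance(group_b)) / (na + nb - 2)
    return (mean(group_a) - mean(group_b)) / pooled_var ** 0.5
```

For example, `cohens_d([2, 4, 6], [1, 3, 5])` gives 0.5, a medium effect by the common 0.2/0.5/0.8 rule of thumb, which puts the .88-1.10 visual memory effects above in the large range.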


Subject(s)
Brain Injuries, Traumatic , Veterans , Humans , Veterans/psychology , Neuropsychological Tests , Anxiety/diagnosis , Anxiety/etiology , Memory, Short-Term , Reproducibility of Results , Malingering/diagnosis
3.
Appl Neuropsychol Adult ; : 1-12, 2023 Dec 08.
Article in English | MEDLINE | ID: mdl-38065580

ABSTRACT

There appears to be a lack of consensus regarding how best to interpret cognitive test findings when there is a failure on only one Performance Validity Test (PVT). The current study examined the impact of failing one freestanding, forced-choice, memory-based (Fr-FC-MB) PVT across two memory measures in a large sample of veterans (N = 1,353). The impact of failing zero, one, or two Fr-FC-MB PVTs (Test of Memory Malingering Trial 1 or the Medical Symptom Validity Test) on subsequent memory measures was examined (California Verbal Learning Test-II [CVLT-II], Brief Visuospatial Memory Test-R [BVMT-R]). Compared to those failing zero PVTs, those failing one PVT showed significant declines across all memory indices with large average effect sizes (BVMT-R, d = -0.9, CVLT-II, d = -1.0). Those failing one PVT had memory scores more similar to those failing two PVTs. There is a need for greater nuance and flexibility when determining invalid test performance. The current findings, along with a brief review of the literature, find that failing even one Fr-FC-MB PVT dramatically (negatively) impacts memory performance. Results suggest that including individuals failing one Fr-FC-MB PVT into a credible group should be more closely scrutinized.

4.
J Intellect Disabil ; : 17446295231208399, 2023 Oct 28.
Article in English | MEDLINE | ID: mdl-37897741

ABSTRACT

The purpose was to test the applicability of the Positive and Negative Affect Scale (PANAS) to Chinese children with intellectual disabilities. The questionnaire was distributed online to parents through their children's teachers, and parents were asked to fill out the scale based on their observations of their children's daily life. The correlation coefficients between each item and the total score of the corresponding dimension ranged from 0.52 to 0.77. Factor analysis confirmed the two-factor (PA-NA) structure of affect. A significant positive correlation existed between NA and challenging behavior. The Cronbach's α coefficient and split-half reliability of the PA scale were 0.87 and 0.85, and those of the NA scale were 0.85 and 0.83, respectively, all higher than 0.80. It was concluded that the PANAS has good applicability to Chinese children with intellectual disabilities.
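The reliability coefficients reported in this abstract (Cronbach's α and split-half reliability with the Spearman-Brown correction) follow standard formulas. A minimal sketch, using a small illustrative item-by-respondent matrix rather than the PANAS data:

```python
from statistics import mean, pvariance  # pvariance() is the population variance

def cronbach_alpha(items):
    """Cronbach's alpha. `items` is a list of k item-score lists,
    each covering the same respondents in the same order."""
    k = len(items)
    totals = [sum(scores) for scores in zip(*items)]  # total score per respondent
    return k / (k - 1) * (1 - sum(pvariance(i) for i in items) / pvariance(totals))

def pearson_r(x, y):
    mx, my = mean(x), mean(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = (sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y)) ** 0.5
    return num / den

def split_half_reliability(items):
    """Odd-even split-half reliability with the Spearman-Brown step-up correction."""
    odd_half = [sum(scores) for scores in zip(*items[0::2])]
    even_half = [sum(scores) for scores in zip(*items[1::2])]
    r = pearson_r(odd_half, even_half)
    return 2 * r / (1 + r)
```

The odd-even split is one convention among several; the abstract does not state which split the authors used.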

5.
Neuropsychol Rev ; 33(3): 581-603, 2023 Sep.
Article in English | MEDLINE | ID: mdl-37612531

ABSTRACT

Forensic neuropsychological examinations with determination of malingering have tremendous social, legal, and economic consequences. Thousands of studies have been published aimed at developing and validating methods to diagnose malingering in forensic settings, based largely on approximately 50 validity tests, including embedded and stand-alone performance validity tests. This is the first part of a two-part review. Part I explores three statistical issues related to the validation of validity tests as predictors of malingering, including (a) the need to report a complete set of classification accuracy statistics, (b) how to detect and handle collinearity among validity tests, and (c) how to assess the classification accuracy of algorithms for aggregating information from multiple validity tests. In the Part II companion paper, three closely related research methodological issues will be examined. Statistical issues are explored through conceptual analysis, statistical simulations, and through reanalysis of findings from prior validation studies. Findings suggest extant neuropsychological validity tests are collinear and contribute redundant information to the prediction of malingering among forensic examinees. Findings further suggest that existing diagnostic algorithms may miss diagnostic accuracy targets under most realistic conditions. The review makes several recommendations to address these concerns, including (a) reporting of full confusion table statistics with 95% confidence intervals in diagnostic trials, (b) the use of logistic regression, and (c) adoption of the consensus model on the "transparent reporting of multivariate prediction models for individual prognosis or diagnosis" (TRIPOD) in the malingering literature.
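The review's first recommendation, reporting a complete set of classification accuracy statistics with 95% confidence intervals, can be sketched as follows. This is an illustrative implementation using Wilson score intervals; the review does not prescribe a specific interval method, and the function names are this sketch's own:

```python
from math import sqrt

def wilson_ci(successes, n, z=1.96):
    """Wilson score 95% interval for a binomial proportion."""
    p = successes / n
    denom = 1 + z * z / n
    centre = (p + z * z / (2 * n)) / denom
    half = z * sqrt(p * (1 - p) / n + z * z / (4 * n * n)) / denom
    return centre - half, centre + half

def confusion_stats(tp, fp, fn, tn):
    """Point estimates and 95% CIs for the standard confusion-table statistics."""
    return {
        "sensitivity": (tp / (tp + fn), wilson_ci(tp, tp + fn)),
        "specificity": (tn / (tn + fp), wilson_ci(tn, tn + fp)),
        "ppv":         (tp / (tp + fp), wilson_ci(tp, tp + fp)),
        "npv":         (tn / (tn + fn), wilson_ci(tn, tn + fn)),
        "accuracy":    ((tp + tn) / (tp + fp + fn + tn),
                        wilson_ci(tp + tn, tp + fp + fn + tn)),
    }
```

Note that PPV and NPV shift with the base rate of malingering in the sample, which is one reason the review asks for the full table rather than sensitivity and specificity alone.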

6.
Neuropsychol Rev ; 33(3): 604-623, 2023 Sep.
Article in English | MEDLINE | ID: mdl-37594690

ABSTRACT

Forensic neuropsychological examinations to detect malingering in patients with neurocognitive, physical, and psychological dysfunction have tremendous social, legal, and economic importance. Thousands of studies have been published to develop and validate methods to forensically detect malingering based largely on approximately 50 validity tests, including embedded and stand-alone performance and symptom validity tests. This is Part II of a two-part review of statistical and methodological issues in the forensic prediction of malingering based on validity tests. The Part I companion paper explored key statistical issues. Part II examines related methodological issues through conceptual analysis, statistical simulations, and reanalysis of findings from prior validity test validation studies. Methodological issues examined include the distinction between analog simulation and forensic studies, the effect of excluding too-close-to-call (TCTC) cases from analyses, the distinction between criterion-related and construct validation studies, and the application of the Revised Quality Assessment of Diagnostic Accuracy Studies tool (QUADAS-2) to all Test of Memory Malingering (TOMM) validation studies published within approximately the first 20 years following its initial publication to assess risk of bias. Findings include that analog studies are commonly confused with forensic validation studies, and that construct validation studies are routinely presented as if they were criterion-referenced validation studies. After accounting for the exclusion of TCTC cases, actual classification accuracy was found to be well below claimed levels. QUADAS-2 results revealed that extant TOMM validation studies all had a high risk of bias, with not a single TOMM validation study showing low risk of bias. Recommendations include adoption of well-established guidelines from the biomedical diagnostics literature for good-quality criterion-referenced validation studies and examination of implications for malingering determination practices. Design of future studies may hinge on the availability of an incontrovertible reference standard of the malingering status of examinees.

7.
Neuropsychol Rev ; 33(3): 653-657, 2023 Sep.
Article in English | MEDLINE | ID: mdl-37594691

ABSTRACT

The thoughtful commentaries by Drs. Bush, Jewsbury, and Faust add to the impact of the two reviews in this volume of statistical and methodological issues in the forensic neuropsychological determination of malingering based on performance and symptom validity tests (PVTs and SVTs). In his commentary, Dr. Bush raises, among others, the important question of whether such malingering determinations can still be considered as meeting the legal Daubert standard, which is the basis for neuropsychological expert testimony. Dr. Jewsbury focuses mostly on statistical issues and agrees with two key points of the statistical review: positive likelihood chaining is not a mathematically tenable method to combine findings of multiple PVTs and SVTs, and the Simple Bayes method is not applicable to malingering determinations. Dr. Faust adds important narrative texture to the implications for forensic neuropsychological practice and points to a need for research into factors other than malingering that may explain PVT and SVT failures. These commentaries put into even sharper focus the serious questions raised in the reviews about the scientific basis of present practices in the forensic neuropsychological determination of malingering.

8.
J Clin Exp Neuropsychol ; : 1-9, 2023 Aug 09.
Article in English | MEDLINE | ID: mdl-37555316

ABSTRACT

BACKGROUND: Although studies have shown unique variance contributions from performance invalidity, it is difficult to interpret the meaning of cognitive data in the setting of performance validity test (PVT) failure. The current study aimed to examine cognitive outcomes in this context. METHOD: Two hundred and twenty-two veterans with a history of mild traumatic brain injury referred for clinical evaluation completed cognitive and performance validity measures. Standardized scores were characterized as Within Normal Limits (≥16th normative percentile) and Below Normal Limits (<16th percentile). Cognitive outcomes were examined across four commonly used PVTs. Self-reported employment and student status were used as indicators of "productivity" to assess potential functional differences related to lower cognitive performance. RESULTS: Among participants who performed in the invalid range on Test of Memory Malingering trial 1, Word Memory Test, Wechsler Adult Intelligence Scale-Fourth Edition Digit Span age-corrected scaled score, and the California Verbal Learning Test-Second Edition Forced Choice index, 16-88% earned scores broadly within normal limits across cognitive testing. Depending on which PVT measure was applied, the average number of cognitive performances below the 16th percentile ranged from 5 to 7 of 14 tasks. There were no differences in the total number of below normal limits performances on cognitive measures between "productive" and "non-productive" participants (T = 1.65, p = 1.00). CONCLUSIONS: Results of the current study suggest that the range of within normal limits cognitive performance in the context of failed PVTs varies greatly. Importantly, our findings indicate that neurocognitive data may still provide important practical information regarding cognitive abilities, despite poor PVT outcomes. Further, given that rates of below normal limits cognitive performance did not differ among "productivity" groups, results have important implications for functional abilities and recommendations in a clinical setting.

9.
Eur J Neurol ; 30(4): 806-812, 2023 04.
Article in English | MEDLINE | ID: mdl-36692870

ABSTRACT

BACKGROUND AND PURPOSE: Performance validity tests (PVTs) are used in neuropsychological assessments to detect patterns of performance suggesting that the broader evaluation may be an invalid reflection of an individual's abilities. Data on functional motor disorder (FMD) are currently scarce and conflicting. We aimed to examine the rates of failure on three different PVTs among nonlitigant, non-compensation-seeking FMD patients, and we compared their performance to that of healthy controls and controls asked to simulate malingering (healthy simulators). METHODS: We enrolled 29 nonlitigant, non-compensation-seeking patients with a clinical diagnosis of FMD, 29 healthy controls, and 29 healthy simulators. Three PVTs, the Coin in the Hand Test (CIH), the Rey 15-Item Test (REY), and the Finger Tapping Test (FTT), were employed. RESULTS: Functional motor disorder patients showed low rates of failure on the CIH and REY (7% and 10%, respectively) and slightly higher rates on the FTT (15%, n = 26), which involves a motor task. Their performance was statistically comparable to that of healthy controls but statistically different from that of healthy simulators (p < 0.001). Ninety-three percent of FMD patients, 7% of healthy simulators, and 100% of healthy controls passed at least two of the three tests. CONCLUSIONS: PVT failure rates of nonlitigant, non-compensation-seeking patients with FMD ranged from 7% to 15%. Patients' performance was comparable to that of controls and significantly differed from that of simulators. This simple battery of three PVTs could be of practical utility and routinely used in clinical practice.


Subject(s)
Malingering , Humans , Reproducibility of Results , Neuropsychological Tests , Malingering/diagnosis , Malingering/psychology
10.
BMC Musculoskelet Disord ; 24(1): 26, 2023 Jan 12.
Article in English | MEDLINE | ID: mdl-36631834

ABSTRACT

OBJECTIVE: To translate and culturally adapt the Profile Fitness Mapping neck questionnaire (ProFitMap-neck) into the Chinese version and evaluate its psychometric properties. METHODS: The procedure of translation and cross-cultural adaptation was performed according to the recommended guidelines. A total of 220 patients with chronic neck pain (CNP) and 100 individuals without neck pain participated in the study. Internal consistency, test-retest reliability, content validity and construct validity were investigated. RESULTS: The Chinese version of ProFitMap-neck (CHN-ProFitMap-neck) showed adequate internal consistency (Cronbach's α = 0.88-0.95). A good test-retest reliability was proven by the intraclass correlation coefficient (ICC(3A,1) = 0.78-0.86). Floor-ceiling effects were absent. Exploratory factor analysis revealed 6 factors for the symptom scale and 4 factors for the function scale. The CHN-ProFitMap-neck showed a moderate to high negative correlation with NDI (r = -0.46 to -0.60, P < 0.01), a small to moderate negative correlation with VAS (r = -0.29 to -0.36, P < 0.01), and a small to high positive correlation with SF-36 (r = 0.21-0.52, P < 0.01). No significant correlation between the CHN-ProFitMap-neck function scale and VAS (P > 0.05) or the mental health domain of the SF-36 was found (P > 0.05). The CONCLUSIONS: The CHN-ProFitMap-neck had acceptable psychometric properties and could be used as a reliable and valid instrument in the assessment of patients with chronic neck pain in mainland China.


Subject(s)
Chronic Pain , Neck Pain , Humans , Cross-Cultural Comparison , Reproducibility of Results , Disability Evaluation , Surveys and Questionnaires , Chronic Pain/diagnosis , Psychometrics
11.
Clin Neuropsychol ; 37(8): 1608-1628, 2023 Nov.
Article in English | MEDLINE | ID: mdl-36646463

ABSTRACT

Objective: Performance Validity Tests (PVTs) have been used to identify non-credible performance in clinical, medicolegal, forensic, and, more recently, academic settings. The inclusion of PVTs when administering psychoeducational assessments is essential given that specific accommodations such as flexible deadlines and extended writing time can provide an external incentive for students without disabilities to feign symptoms. Method: The present study used archival data to establish base rates of non-credible performance in a sample of post-secondary students (n = 1045) who underwent a comprehensive psychoeducational evaluation for the purposes of obtaining academic accommodations. In accordance with current guidelines, non-credible performance was determined by failure on two or more freestanding or embedded PVTs. Results: 9.4% of participants failed at least two of the PVTs they were administered, of whom 8.5% failed exactly two PVTs and approximately 1% failed three PVTs. Base rates of failure for specific PVTs ranged from 11.2% (TOVA) to 25% (b Test). Conclusions: The present study found a lower base rate of non-credible performance than previously observed in comparable populations. This likely reflects the utilization of conservative criteria in detecting non-credible performance to avoid false positives. By contrast, inconsistent base rates previously found in the literature may reflect inconsistent methodologies. These results further emphasize the importance of administering multiple PVTs during psychoeducational assessments. The implications of these findings can further inform clinicians administering assessments in academic settings and aid in the appropriate utilization of PVTs in psychoeducational evaluation to determine accessibility accommodations.

12.
Appl Neuropsychol Adult ; 30(5): 483-491, 2023.
Article in English | MEDLINE | ID: mdl-34428386

ABSTRACT

OBJECTIVE: The present study investigated demographic differences in performance validity test (PVT) failure in a Veteran sample. METHOD: Data were extracted from clinical neuropsychological evaluations. Only veterans who identified as men, as either European American/White (EA) or African American/Black (AA) were included (n = 1261). We investigated whether performance on two frequently used PVTs, the Test of Memory Malingering (TOMM), and the Medical Symptom Validity Test (MSVT), differed by age, education, and race using separate logistic regressions. RESULTS: Veterans with younger age, less education, and Veterans Affairs (VA) service-connected disability were significantly more likely to fail both PVTs. Race was not a significant predictor of MSVT failure, but AA patients were significantly more likely than EA patients to fail the TOMM. For all significant demographic predictors in the models, effects were small. In a subsample of patients who were given both PVTs (n = 461), the effects of race on performance remained. CONCLUSIONS: Performance on the TOMM and MSVT differed by age and level of education. Performance on the TOMM differed between EA and AA patients, whereas performance on the MSVT did not. These results suggest that demographic factors may play a small but measurable role in performance on specific PVTs.


Subject(s)
Malingering , Memory and Learning Tests , Male , Humans , Neuropsychological Tests , Malingering/diagnosis , Malingering/psychology , Educational Status , Demography , Reproducibility of Results
13.
Appl Neuropsychol Adult ; 30(6): 671-679, 2023.
Article in English | MEDLINE | ID: mdl-34491851

ABSTRACT

Performance validity tests (PVTs) are an integral part of neuropsychological assessments. Yet no studies have examined how Spanish-speaking forensic inpatients perform on PVTs, making it difficult to interpret these tests in this population. The present study examined archival data collected from monolingual Spanish-speaking forensic inpatients (n = 55; mean age = 49.6 years, SD = 12.0; 84.9% male; 93.5% diagnosed with a Psychotic Spectrum Disorder) to determine how this population performs on several PVTs. Most participants' scores on the Dot Counting Test (DCT; 82.2%; n = 45), Repeatable Battery for Assessment of Neuropsychological Status-Effort Index (RBANS EI; 84.4%; n = 33), and Test of Memory Malingering (TOMM; 79.1%; n = 43) were indicative of valid performance. Few participants, however, had Rey 15-Item Test (FIT) scores in the valid range (24.5% to 48.0%; Recall n = 50 and Combined n = 49, respectively); although FIT Recall specificity was improved when cutoff scores were lowered. Total years of education, but not other educational factors, were significantly associated with performance on PVTs (r = .33-.40, p = .01-.03). Study results suggest the DCT, TOMM, and RBANS EI may be more appropriate PVTs for Spanish-speaking forensic inpatients compared to the FIT.

14.
Clin Neuropsychol ; 37(2): 387-401, 2023 02.
Article in English | MEDLINE | ID: mdl-35387574

ABSTRACT

Objective: This study examined disability-related factors as predictors of PVT performance in Veterans who underwent neuropsychological evaluation for clinical purposes, not for determination of disability benefits. Method: Participants were 1,438 Veterans who were seen for clinical evaluation in a VA Medical Center's Neuropsychology Clinic. All were administered the TOMM, MSVT, or both. Predictors of PVT performance included (1) whether Veterans were receiving VA disability benefits ("service connection") for psychiatric or neurological conditions at the time of evaluation, and (2) whether Veterans reported on clinical interview that they were in the process of applying for disability benefits. Data were analyzed using binary logistic regression, with PVT performance as the dependent variable in separate analyses for the TOMM and MSVT. Results: Veterans who were already receiving VA disability benefits for psychiatric or neurological conditions were significantly more likely to fail both the TOMM and the MSVT, compared to Veterans who were not receiving benefits for such conditions. Independently of receiving such benefits, Veterans who reported that they were applying for disability benefits were significantly more likely to fail the TOMM and MSVT than were Veterans who denied applying for benefits at the time of evaluation. Conclusions: These findings demonstrate that simply being in the process of applying for disability benefits increases the likelihood of noncredible performance. The presence of external incentives can predict the validity of neuropsychological performance even in clinical, non-forensic settings.


Subject(s)
Veterans , Humans , Veterans/psychology , Neuropsychological Tests , Self Report , Malingering/diagnosis , Malingering/psychology , Reproducibility of Results
15.
Front Psychiatry ; 13: 981475, 2022.
Article in English | MEDLINE | ID: mdl-36311526

ABSTRACT

Malingering of cognitive difficulties constitutes a major issue in psychiatric forensic settings. Here, we present a selective literature review related to the topic of cognitive malingering, psychopathology and their possible connections. Furthermore, we report a single case study of a 60-year-old man with a long and ongoing judicial history who exhibits a suspicious multi-domain neurocognitive disorder with significant reduction of autonomy in daily living, alongside a longtime history of depressive symptoms. Building on this, we suggest the importance of evaluating malingering conditions through both psychiatric and neuropsychological assessment tools. More specifically, the use of Performance Validity Tests (PVTs)-commonly but not quite correctly considered as tests of "malingering"-alongside the collection of clinical history and the use of routine psychometric testing, seems to be crucial in order to detect discrepancies among patients' self-reported symptoms, embedded validity indicators, and psychometric results.

16.
Appl Neuropsychol Child ; : 1-7, 2022 Sep 14.
Article in English | MEDLINE | ID: mdl-36103363

ABSTRACT

The Memory Validity Profile (MVP) and Medical Symptom Validity Test (MSVT) are performance validity tests (PVTs) used to identify potential noncredible test performance during psychological evaluations. This study sought to examine the agreement between MVP and MSVT pass rates, as well as to determine if there are differences in MVP pass rates when using the cutoff score in the MVP professional manual compared with the experimental cutoff score of <31. Via retrospective review of records, 106 clients at a private neuropsychological clinic who had been given the MVP and the MSVT were identified. Results indicated that only one client met the manual cutoff scores, compared to 20 clients who failed the MSVT, raising concerns regarding the sensitivity of the MVP. Receiver operating characteristic (ROC) curve analyses indicated fair discriminability of the MVP for the 106 participants (AUC = .717), with acceptable sensitivity (.50) and specificity (.92) for an MVP total score cutoff of <31. These findings support the utility of the experimental cut score in improving the sensitivity while maintaining adequate specificity in a clinically mixed population.
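The ROC analysis described in this abstract can be sketched with the Mann-Whitney formulation of AUC plus sensitivity/specificity at a candidate cutoff, oriented so that lower scores flag invalid performance (as with the MVP total-score rule). The score values below are invented for illustration, not MVP data:

```python
def auc_lower_positive(credible_scores, noncredible_scores):
    """AUC via the Mann-Whitney formulation, oriented so that LOWER scores
    indicate the positive (noncredible) class: the probability that a random
    noncredible score falls below a random credible score (ties count half)."""
    pairs = len(credible_scores) * len(noncredible_scores)
    wins = sum((p < c) + 0.5 * (p == c)
               for p in noncredible_scores for c in credible_scores)
    return wins / pairs

def sens_spec_at_cutoff(credible_scores, noncredible_scores, cutoff):
    """Sensitivity and specificity when scores <= cutoff are flagged invalid."""
    sens = sum(s <= cutoff for s in noncredible_scores) / len(noncredible_scores)
    spec = sum(s > cutoff for s in credible_scores) / len(credible_scores)
    return sens, spec
```

For instance, with `credible = [32, 31, 29, 33]` and `noncredible = [25, 28, 30]`, `auc_lower_positive` gives 11/12 ≈ 0.92, and a ≤30 cutoff yields sensitivity 1.0 with specificity 0.75; sweeping the cutoff over all observed scores traces the full ROC curve.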

17.
Front Psychol ; 13: 989432, 2022.
Article in English | MEDLINE | ID: mdl-36033073

ABSTRACT

The rising demographic of older adults worldwide has led to an increase in dementia cases. In order to ensure the proper allocation of care and resources to this clinical group, it is necessary to correctly distinguish between simulated versus bona fide cognitive deficits typical of dementia. Performance Validity Tests (PVTs) are specifically designed to assess a lack of effort and the possible simulation of cognitive impairment. Previous research demonstrates that PVTs may be sensitive to dementia, thus inaccurately classifying real memory impairment as simulation. Here, we analyzed the sensitivity of PVTs in discriminating between dementia and simulation using receiver operating characteristic (ROC) curve analyses. Further, we examined the potential need for adjusting cut-off scores for three stand-alone (Test of Memory Malingering, Rey-15 Item Memory Test, and Coin in Hand-Extended Version) and one embedded (Reliable Digit Span) PVT for Portuguese older adults with dementia. The results showed that (1) all measures, except for the Coin in Hand-Extended Version (CIH-EV), were sensitive to one or more sociodemographic and/or cognitive variables, and (2) it was necessary to adjust cut-off points for all measures. Additionally, the Rey-15 Item Memory Test did not demonstrate sufficient discriminating capacity for dementia. These results present important implications for clinical practice and the daily life of patients, as the use of incorrect cut-off points could impede patients from getting the resources they need.

18.
Appl Neuropsychol Adult ; : 1-8, 2022 Aug 08.
Article in English | MEDLINE | ID: mdl-35940176

ABSTRACT

Dandachi-FitzGerald et al. (2022) published the article "Cry for help as a root cause of poor symptom validity: A critical note" in Applied Neuropsychology: Adult [Advance Online], arguing that the cry for help in forensic disability and related assessments is not a valid interpretation of poor symptom validity test results. This rebuttal contests the criticisms of the use of the cry for help in this context, as presented in Young (2019), "The cry for help in psychological injury and law: Concepts and review," which appeared in Psychological Injury and Law, Vol. 12, pp. 225-237. It calls for more programmatic research, for example, based on the cry-for-help questionnaire suggested by the author. In particular, it indicates that one SVT failure in a test battery constitutes an assessment result that could allow for attributing the cry for help, everything else being equal. It suggests that the adaptational theory explains the cry for help as much as malingering does. It offers practice and court recommendations that will allow better rebuttals of unethical assessors who overuse/misuse/abuse the cry for help interpretation of poor symptom validity test results in forensic disability and related assessments.

19.
J Clin Exp Neuropsychol ; 44(1): 31-41, 2022 02.
Article in English | MEDLINE | ID: mdl-35670549

ABSTRACT

OBJECTIVE: The purpose of the present study was to compare performance on a wide range of PVTs in a neuropsychology clinic sample of African Americans and White Americans to determine if there are differences in mean scores or cut-off failure rates between the two groups, and to identify factors that may account for false positive PVT results in African American patients. METHOD: African American and White American non-compensation-seeking neuropsychology clinic patients were compared on a wide range of standalone and embedded PVTs: Dot Counting Test, b Test, Warrington Recognition Memory Test, Rey 15-item plus recognition, Rey Word Recognition Test, Digit Span (ACSS, RDS, 3-digit time, 4-digit time), WAIS-III Picture Completion (Most discrepant index), WAIS-III Digit Symbol/Coding (recognition equation), Rey Auditory Verbal Learning Test, Rey Complex figure, WMS-III Logical Memory, Comalli Stroop Test, Trails A, and Wisconsin Card Sorting Test. RESULTS: When groups were equated for age and education, African Americans obtained mean performances significantly worse than White Americans on only four of 25 PVT scores across the 14 different measures (Stroop Word Reading and Color Naming, Trails A, Digit Span 3-digit time); however, FSIQ was also significantly higher in White American patients. When subjects with borderline IQ (FSIQ = 70 to 79) were excluded (resulting in 74 White Americans and 25 African Americans), groups no longer differed in IQ and only continued to differ on a single PVT cutoff (Trails A). Further, specificity rates in African Americans were comparable to those of White Americans with the exception of the b Test, the Dot Counting Test, and Stroop B. CONCLUSIONS: PVT performance generally does not differ as a function of Black versus White race once the impact of intellectual level is controlled, and most PVT cutoffs appear appropriate for use in African Americans of low average IQ or higher.


Subject(s)
Black or African American , Neuropsychology , Humans , Neuropsychological Tests , Reproducibility of Results , Stroop Test , White People
20.
Appl Neuropsychol Adult ; 29(5): 1060-1067, 2022.
Article in English | MEDLINE | ID: mdl-33197371

ABSTRACT

OBJECTIVE: The objective of this study was to identify specific cutoff scores for three commonly used embedded performance validity tests (PVTs) for a Spanish-speaking population. Culturally adapted cutoff scores for embedded PVTs were established using an analog study design. In addition, the psychometric properties of these measures when applying culturally adapted scores as compared to non-adapted scores were analyzed. METHOD: Participants (N = 114) were administered three embedded PVTs (Reliable Digit Span, Phonetic Fluency Test, and Animal Semantic Fluency Test) in a randomized order. Following an analog design, control participants were instructed to perform to the best of their abilities and the analog group was instructed to simulate cognitive impairment. RESULTS: In keeping with guidelines for specificity and sensitivity, the most culturally appropriate scores of ≤6, ≤27, and ≤16 were determined for the Reliable Digit Span, Phonetic Fluency Test, and the Semantic Fluency Test, respectively. CONCLUSIONS: This is the first study to address culturally sensitive cutoffs for commonly used embedded validity measures using a European Spanish population. While these findings cannot be generalized to forensic or clinical populations at the present time, they support the claim that specific cutoff scores that are sensitive to cultural variables are necessary in addressing embedded validity measures of the Reliable Digit Span, Phonetic Fluency Test, and Semantic Fluency Test.


Subject(s)
Cognitive Dysfunction , Humans , Neuropsychological Tests , Reproducibility of Results , Sensitivity and Specificity , Universities