Results 1 - 20 of 29
1.
Clin Neuropsychol ; : 1-15, 2023 Oct 26.
Article in English | MEDLINE | ID: mdl-37881944

ABSTRACT

Objective: We examined work relative value units (wRVUs) and associated revenue of current procedural terminology (CPT) codes for evaluation and management (E&M) services, neuropsychological evaluation (NPE), psychological evaluation (PE), and psychotherapy. Method: CPT code wRVUs were aggregated for E&M (99202-99215), NPE (96116, 96132, 96133, 96136, and 96137), PE (90791, 96130, 96131, 96136, and 96137), and psychotherapy (90791 and 90832-90837 with and without the complexity modifier, 90785). Per-minute wRVUs were calculated for each CPT code. The Centers for Medicare and Medicaid Services 2023 conversion factor ($33.8872) was multiplied by wRVUs to examine reimbursement per hour and per prototypical four-hour clinic slot. Results: The wRVUs per minute showed the following ranges: 0.032-0.07 for E&M services, 0.015-0.063 for NPE, 0.015-0.124 for PE, and 0.043-0.135 for psychotherapy. Average hourly revenue ranged from $72 for NPE to $132 for psychotherapy with the complexity modifier. Revenue for prototypical four-hour clinics ranged from $283 for NPE to $493 for psychotherapy with the complexity modifier. PE and psychotherapy services were valued at 124-184% of NPE. Conclusions: E&M code wRVUs increase with case complexity, reflecting greater work intensity, and a complexity modifier for PE and psychotherapy captures the additional effort needed in complex cases. In contrast, NPE codes lack a complexity modifier, and NPE wRVUs are lower than those of PE and psychotherapy, the latter of which can be billed by master's-level providers. NPE is undervalued compared to PE and psychotherapy based on the wRVUs currently assigned to the CPT codes used for the respective services.
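The revenue arithmetic described in the Method is straightforward to reproduce. A minimal sketch follows, assuming only the 2023 CMS conversion factor cited above; the per-minute wRVU value is an illustrative placeholder, not a published value for any specific CPT code.

# Sketch of the abstract's revenue calculation: a per-minute wRVU rate scaled
# to an hour and to a prototypical four-hour clinic slot, then converted to
# dollars with the 2023 CMS conversion factor. The per-minute wRVU below is a
# hypothetical placeholder.
CONVERSION_FACTOR = 33.8872  # dollars per wRVU (CMS 2023, cited in the abstract)

def revenue(wrvu_per_minute: float, minutes: float) -> float:
    """Convert a per-minute wRVU rate into dollars for a block of time."""
    return wrvu_per_minute * minutes * CONVERSION_FACTOR

wrvu_per_min = 0.065  # hypothetical per-minute wRVU for one service
print(f"Hourly revenue:   ${revenue(wrvu_per_min, 60):.2f}")
print(f"Four-hour clinic: ${revenue(wrvu_per_min, 240):.2f}")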

2.
Appl Neuropsychol Adult ; : 1-8, 2023 Jan 10.
Article in English | MEDLINE | ID: mdl-36628434

ABSTRACT

BACKGROUND: Cognitive deficits contribute to disability in Parkinson's disease (PD). Cognitive intra-individual variability (IIV) is associated with cognitive decline in age-related disorders, but IIV has not been related to functional ability in PD. We examined IIV in predicting functional ability in participants with PD. METHODS: De-identified National Alzheimer's Coordinating Center data (N = 1,228) from baseline and follow-up visits included participants with PD who were propensity score matched to control participants at baseline on age (M = 72), education (M = 15), and gender (28% female). PD symptom duration averaged 6 years. Outcome measures included the Functional Ability Questionnaire (FAQ), the overall test battery mean (OTBM) of ten cognitive variables, IIV calculated as the standard deviation of each participant's cognitive scores, the Geriatric Depression Scale (GDS), and Unified PD Rating Scale gait and posture items. Baseline FAQ status in the PD group was predicted using logistic regression with age, education, cognition, GDS, and motor function as predictors. We compared baseline characteristics of PD participants with and without functional impairment at follow-up. RESULTS: PD participants showed lower OTBM and greater IIV, GDS, and motor dysfunction than controls (p < .0001). Education, OTBM, IIV, GDS, and gait predicted functional status (77% overall classification; AUC = .84). PD participants with functional impairment at follow-up showed significantly lower OTBM and greater IIV, GDS, and motor dysfunction at baseline (p < .001). CONCLUSION: IIV independently predicts functional status in participants with PD while controlling for other variables. PD participants with functional impairment at follow-up showed greater IIV than those without functional impairment.
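The two cognitive summary measures defined in the Methods can be illustrated for a single participant: the OTBM as the mean of the participant's standardized scores, and IIV as the standard deviation of those same scores. A minimal sketch follows; the score values are hypothetical, not study data.

# Sketch of the OTBM and IIV definitions given above, computed for one
# participant. The ten scores are hypothetical standardized values.
import statistics

scores = [48.0, 52.0, 39.0, 55.0, 44.0, 61.0, 47.0, 50.0, 42.0, 58.0]

otbm = statistics.mean(scores)   # overall test battery mean
iiv = statistics.stdev(scores)   # IIV as the SD across the participant's own scores

print(f"OTBM = {otbm:.1f}, IIV = {iiv:.1f}")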

3.
Clin Neuropsychol ; 37(3): 475-490, 2023 04.
Article in English | MEDLINE | ID: mdl-35414332

ABSTRACT

Objective: This study presents data on the time cost and associated charges for common performance validity tests (PVTs). It also applies an approach from cost-effectiveness research to the comparison of tests that incorporates both cost and classification accuracy. Method: A recent test usage survey was used to identify PVTs in common use among adult neuropsychologists. Data on test administration and scoring time were aggregated. Charges per test were calculated. A cost-effectiveness approach was applied to compare pairs of tests from three studies using data on test administration time and classification accuracy, operationalized as improvement in posterior probability beyond base rate. Charges per unit increase in posterior probability over base rate were calculated for base rates of invalidity ranging from 10 to 40%. Results: Ten commonly used PVTs showed a wide range in administration and scoring time, from 1-3 minutes to over 40 minutes, with associated charge estimates from $4 to $284. Cost-effectiveness comparisons illustrated the nuance in test selection and the benefit of considering cost in relation to outcome rather than prioritizing time (i.e., cost minimization) or classification accuracy alone. Conclusions: Findings extend recent research efforts to fill knowledge gaps related to the cost of neuropsychological evaluation. The cost-effectiveness approach warrants further study in other samples with different neuropsychological and outcome measures.
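The cost-effectiveness metric described in the Method can be sketched as follows, assuming the improvement in posterior probability is obtained from a positive test result via Bayes' rule; the charge, sensitivity, and specificity values are hypothetical, not figures from the study.

# Sketch: charge per unit increase in posterior probability over base rate,
# for base rates of invalidity from 10% to 40%. All inputs are hypothetical.
def posterior(sensitivity: float, specificity: float, base_rate: float) -> float:
    """Probability of invalid performance given a positive PVT (Bayes' rule)."""
    true_pos = sensitivity * base_rate
    false_pos = (1 - specificity) * (1 - base_rate)
    return true_pos / (true_pos + false_pos)

def charge_per_unit_gain(charge, sensitivity, specificity, base_rate):
    gain = posterior(sensitivity, specificity, base_rate) - base_rate
    return charge / gain

for br in (0.10, 0.20, 0.30, 0.40):
    cost = charge_per_unit_gain(charge=50.0, sensitivity=0.60,
                                specificity=0.90, base_rate=br)
    print(f"base rate {br:.0%}: ${cost:.0f} per unit gain in posterior probability")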


Subject(s)
Neuropsychological Tests , Adult , Humans , Reproducibility of Results
4.
Appl Neuropsychol Adult ; : 1-9, 2022 Nov 23.
Article in English | MEDLINE | ID: mdl-36416227

ABSTRACT

INTRODUCTION: Embedded performance validity tests (PVTs) may show increased positive findings in racially diverse examinees. This study examined positive findings in an older adult sample of African American (AA) and European American (EA) individuals recruited as part of a study on aging and cognition. METHOD: The project involved secondary analysis of deidentified National Alzheimer's Coordinating Center data (N = 22,688). Exclusion criteria included diagnosis of dementia (n = 5,550), mild cognitive impairment (MCI; n = 5,160), impaired but not MCI (n = 1,126), other race (n = 864), and abnormal Mini Mental State Examination (MMSE < 25; n = 135). The initial sample included 9,853 participants (16.4% AA). Propensity score matching was used to match AA and EA participants on age, education, sex, and MMSE score. The final sample included 3,024 individuals, with 50% of participants identifying as AA. Premorbid ability estimates were calculated based on demographics. Failure rates on five raw score and six age-adjusted scaled score PVTs were examined by race. RESULTS: Age, education, sex, MMSE, and premorbid ability estimate were not significantly different by race. Thirteen percent of AA and 3.8% of EA participants failed two or more raw score PVTs (p < .0001). On age-adjusted PVTs, 20.6% of AA and 5.9% of EA participants failed two or more (p < .0001). CONCLUSIONS: PVT failure rates were significantly higher among AA participants. Findings indicate a need for cautious interpretation of embedded PVTs with underrepresented groups. Adjustments to embedded PVT cutoffs may need to be considered to improve diagnostic accuracy.
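The group comparison of failure rates reported in the Results could be run as a chi-square test of independence on the two-by-two table of failure counts; a minimal sketch follows, with illustrative counts rather than the study's data (the abstract does not name the specific test used).

# Sketch: comparing the proportion failing two or more PVTs across two groups
# with a chi-square test of independence. Counts are illustrative placeholders.
from scipy.stats import chi2_contingency

#            failed >=2 PVTs, did not fail
observed = [[197, 1315],   # group 1
            [ 57, 1455]]   # group 2

chi2, p, dof, expected = chi2_contingency(observed)
print(f"chi2({dof}) = {chi2:.1f}, p = {p:.2e}")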

5.
Appl Neuropsychol Adult ; 29(4): 810-815, 2022.
Article in English | MEDLINE | ID: mdl-32841074

ABSTRACT

The Boston Naming Test (BNT) has multiple short forms that omit the noose item, but these forms have been examined primarily in dementia populations. This study compared BNT short forms with the standard administration (BNT-S) in physical medicine and rehabilitation patients who underwent outpatient evaluation. The sample (N = 480) was 34% female and 91% white with an average age of 46 years (SD = 15) and average education of 14 years (SD = 3). Five 15-item short forms were calculated: Consortium to Establish a Registry for Alzheimer's Disease (CERAD-15); Lansing; and Mack 1, 2, and 4 (Mack-15.1, -15.2, and -15.4). Three 30-item short forms were calculated: Mack A, Saxon A, and BNT odd items. BNT-S and short forms were compared with Spearman correlations. Cronbach's alpha was calculated for all BNT forms. Impaired BNT scores were determined with norm-referenced scores (T < 36 and T < 40). Area under the curve (AUC) values were compared across short forms with impaired BNT-S as the criterion. BNT-S showed strong correlations with 30-item (rho = 0.92-0.93) and 15-item short forms (rho = 0.80-0.87) except for CERAD-15 (rho = 0.69). Internal consistency was acceptable for all short forms (alpha = 0.72-0.86). BNT-S was impaired in 17% and 33% of participants at 35 T and 39 T cutoffs, respectively. BNT short forms showed excellent to outstanding classification accuracy in predicting impairment at both cutoffs. BNT short forms warrant further study in rehabilitation settings.
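Two of the psychometric summaries used above, the Spearman correlation between a short-form total and the full-form total and Cronbach's alpha for a short form, are sketched below on simulated item responses; the data-generation model and the 15-item subset are arbitrary placeholders, not the BNT items or its published short forms.

# Sketch of a short-form vs. full-form Spearman correlation and Cronbach's
# alpha. Item responses are simulated from a simple latent-ability model so
# the items are correlated; nothing here is BNT data.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
ability = rng.normal(0, 1, size=(100, 1))
difficulty = rng.normal(0, 1, size=(1, 60))
p_correct = 1 / (1 + np.exp(-(ability - difficulty)))
items = (rng.random((100, 60)) < p_correct).astype(int)   # 100 examinees x 60 items

full_total = items.sum(axis=1)
short = items[:, np.arange(0, 60, 4)]                     # hypothetical 15-item subset
short_total = short.sum(axis=1)

rho, _ = spearmanr(short_total, full_total)

k = short.shape[1]
alpha = k / (k - 1) * (1 - short.var(axis=0, ddof=1).sum() / short_total.var(ddof=1))

print(f"rho = {rho:.2f}, Cronbach's alpha = {alpha:.2f}")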


Subject(s)
Alzheimer Disease , Female , Humans , Language Tests , Male , Middle Aged , Neuropsychological Tests
6.
Clin Neuropsychol ; 36(8): 2061-2072, 2022 11.
Article in English | MEDLINE | ID: mdl-34524072

ABSTRACT

OBJECTIVE: This study empirically examined if neuropsychological evaluation (NPE) is expensive compared to other diagnostic procedures such as neuroimaging. METHOD: We aggregated data on charges for NPE and common neuroimaging procedures (e.g., head CT and brain MRI) from hospitals in the U.S. Charges for five-hour NPE and eight-hour NPE were compared to charges for head CT and brain MRI, respectively. Difference scores were calculated between five-hour NPE and head CT and between eight-hour NPE and brain MRI. A charge difference of $250 or less was considered minimal. NPE and neuroimaging charges were compared across U.S. regions. RESULTS: Median head CT charges were $1942 to $2699. Median brain MRI charges were $3103 to $4487. Median five-hour NPE charges were $1855 to $1977. Median eight-hour NPE charges were $2757 to $2917. Head CT and five-hour NPE charges were not significantly different. Eight-hour NPE and brain MRI charges were not significantly different. Charge differences between five-hour NPE and head CT were minimal in 32.3% of hospitals. Charge differences between eight-hour NPE and brain MRI were minimal in 21.2% of hospitals. U.S. regions were not significantly different in charges for NPE or neuroimaging. CONCLUSIONS: Findings provide preliminary data on charges for NPE in relation to charges for common imaging procedures. NPE does not appear to be more expensive than neuroimaging and, in fact, appears comparable. Future research might expand the information on NPE charges to include additional settings.
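The difference-score logic in the Method reduces to simple arithmetic per hospital; a minimal sketch follows, with hypothetical charges rather than the hospital data analyzed in the study.

# Sketch: per-hospital difference between a five-hour NPE charge and a head CT
# charge, flagging differences of $250 or less as minimal. Charges are
# hypothetical placeholders.
npe_5hr = [1900, 2100, 1750, 2400]
head_ct = [1950, 2600, 1800, 2050]

diffs = [npe - ct for npe, ct in zip(npe_5hr, head_ct)]
minimal = [abs(d) <= 250 for d in diffs]
print(f"Minimal charge difference in {sum(minimal)} of {len(minimal)} hospitals")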


Subject(s)
Magnetic Resonance Imaging , Neuroimaging , Humans , Neuropsychological Tests , Magnetic Resonance Imaging/methods
7.
Assessment ; 28(3): 994-1003, 2021 04.
Article in English | MEDLINE | ID: mdl-31718236

ABSTRACT

Objective: This study examined premorbid ability estimate concordance using the Test of Premorbid Functioning-predicted Full Scale Intelligence Quotient (TOPF-IQ) and Wide Range Achievement Test-Fourth Edition Word Reading (WRAT4-WR). Method: The sample (N = 145) was 28% female with average age and education of 40.6 and 13.2 years, respectively. Outpatient neuropsychological evaluations were conducted in a rehabilitation setting. Measures included the TOPF, WRAT4-WR, Wechsler Adult Intelligence Scale-Fourth Edition, and other neuropsychological tests. Non-WAIS measures defined impairment groups. Analyses included t tests, pairwise correlations, concordance correlation coefficients, and root mean square differences. Results: TOPF-IQ, WRAT4-WR, and Full Scale Intelligence Quotient scores were not significantly different but were lower than the normative mean. TOPF-IQ and WRAT4-WR showed acceptable agreement (concordance correlation coefficient = .92; root mean square difference = 5.9). Greater premorbid-current ability differences were observed in the impaired group. TOPF-IQ and WRAT4-WR showed lower but similar agreement with Full Scale Intelligence Quotient in the unimpaired group. Conclusions: Findings support the WRAT4-WR in predicting premorbid ability in rehabilitation settings.
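The two agreement statistics reported above can be computed as sketched below; the paired premorbid estimates are hypothetical, and the concordance correlation coefficient follows one common (Lin's) formulation.

# Sketch: Lin's concordance correlation coefficient and the root mean square
# difference between two premorbid ability estimates. Scores are placeholders.
import numpy as np

topf_iq  = np.array([95.0, 102.0, 88.0, 110.0, 97.0, 104.0])
wrat4_wr = np.array([93.0, 105.0, 90.0, 108.0, 99.0, 101.0])

def concordance_ccc(x, y):
    sxy = np.cov(x, y)[0, 1]   # sample covariance of the two estimates
    return 2 * sxy / (x.var(ddof=1) + y.var(ddof=1) + (x.mean() - y.mean()) ** 2)

rmsd = np.sqrt(np.mean((topf_iq - wrat4_wr) ** 2))
print(f"CCC = {concordance_ccc(topf_iq, wrat4_wr):.2f}, RMSD = {rmsd:.1f}")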


Subject(s)
Reading , Adult , Educational Status , Female , Humans , Intelligence Tests , Male , Neuropsychological Tests , Wechsler Scales
8.
Appl Neuropsychol Adult ; 28(5): 573-582, 2021.
Article in English | MEDLINE | ID: mdl-31530025

ABSTRACT

This study examined the Wechsler Adult Intelligence Scale-Fourth Edition (WAIS-IV) Cognitive Proficiency Index (CPI) in relation to other WAIS-IV indices, the overall test battery mean (OTBM), and impairment (IMP) in an outpatient rehabilitation setting. Participants (N = 329) were 35% female and 88% Caucasian with average age and education of 42.9 (SD = 13.5) and 13.6 (SD = 2.4) years, respectively. Participants were grouped by diagnosis and validity: traumatic brain injury (TBI; n = 176; 39% mild), cerebrovascular accident (CVA; n = 52), other neurologic and psychiatric conditions (OTH; n = 49), and questionable performance validity (QPV; n = 52). OTBM was calculated from non-WAIS-IV tests; IMP was dichotomously defined as four or more non-WAIS-IV scores below cutoff (≤35 T). Significant group differences were observed on CPI, WAIS-IV indices, OTBM, and IMP. CPI significantly contributed (β = .51) to a linear regression model predicting OTBM (R2 = .63) with education and the General Ability Index (GAI) as covariates. A logistic regression model with IMP as the outcome and education, GAI, and CPI as predictors correctly classified 80% of cases with an area under the curve of .86. A previously identified cutoff (CPI < 84) correctly classified 65-78% of clinical groups categorized by IMP. A novel cutoff (CPI ≤ 80) differentiated clinical participants with a history of mild TBI from the QPV group with sensitivity of 44.2% and specificity of 89.7%. CPI showed incremental validity in predicting OTBM and IMP and warrants further study as a useful clinical addition to other WAIS-IV indices.
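The incremental-validity analysis described above pairs a logistic regression with classification accuracy and AUC; a minimal sketch of that workflow follows on randomly generated placeholder data (the predictor distributions and outcome rule are assumptions, not the study sample).

# Sketch: logistic regression predicting dichotomous impairment from
# education, GAI, and CPI, summarized with classification accuracy and AUC.
# All data are simulated placeholders.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
n = 300
education = rng.normal(13.6, 2.4, n)
gai = rng.normal(95, 15, n)
cpi = rng.normal(90, 15, n)
X = np.column_stack([education, gai, cpi])
impaired = (cpi + rng.normal(0, 10, n) < 85).astype(int)   # placeholder outcome

model = LogisticRegression(max_iter=1000).fit(X, impaired)
prob = model.predict_proba(X)[:, 1]
accuracy = (model.predict(X) == impaired).mean()
print(f"accuracy = {accuracy:.2f}, AUC = {roc_auc_score(impaired, prob):.2f}")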


Subject(s)
Brain Injuries, Traumatic , Adult , Cognition , Female , Humans , Male , Neuropsychological Tests , Wechsler Scales
9.
Clin Neuropsychol ; 35(1): 115-132, 2021 01.
Article in English | MEDLINE | ID: mdl-32615854

ABSTRACT

Objective: The Covid-19 pandemic disrupted instructional activity in neuropsychology training programs. In response, the Association of Postdoctoral Programs in Clinical Neuropsychology (APPCN) launched a multisite didactic initiative (MDI). This manuscript describes the development and implementation of the MDI and presents findings from a recently conducted online survey concerning MDI participation. Methods: Faculty and trainees at APPCN member programs were recruited to complete the MDI survey, administered using the Qualtrics platform, through email announcements and via a website link and on-screen quick response code shared at online didactic sessions. The MDI survey instrument was designed to capture basic demographics and professional role; information regarding level of site participation, benefits of participation, barriers to participation, online conference platform(s) used, and interest in continued participation; as well as anxiety and work engagement ratings. Results: The response rate was estimated to be 21-29%. A transition to videoconferencing for didactics due to Covid-19 was noted by 80% of respondents, with 17% experiencing cancellation or reduction in didactic activities. About 79% endorsed that participation in MDI activities was always or nearly always beneficial. Barriers to participation included not having time, difficulty accessing didactic information, and not knowing about the MDI. Interestingly, trainees at nonparticipating sites reported greater anxiety than trainees at participating sites. Conclusion: It is hoped that these findings will inform future efforts to develop and implement online training activities. The benefits reported by respondents suggest that this work is warranted, while reported barriers to participation identify areas for improvement.


Subject(s)
COVID-19 , Education, Distance , Neuropsychology/education , Telecommunications , Adult , Education, Distance/organization & administration , Education, Distance/standards , Education, Distance/statistics & numerical data , Humans , Neuropsychology/statistics & numerical data , Surveys and Questionnaires , Telecommunications/organization & administration , Telecommunications/standards , Telecommunications/statistics & numerical data
10.
Clin Neuropsychol ; 34(7-8): 1251-1266, 2020.
Article in English | MEDLINE | ID: mdl-32723158

ABSTRACT

Objective: In light of the COVID-19 pandemic, a majority of clinicians have had to quickly and dramatically alter their clinical practices. Two surveys were administered on 3/26/2020 and 3/30/2020, respectively, to document immediate changes and challenges in clinical practice. Method: The two surveys were administered via SurveyMonkey and Google Forms, asking clinicians questions pertaining to practice issues during the early stages of the COVID-19 pandemic. Quantitative responses from the second survey were stratified by clinical setting (Medical Hospital vs. Private Practice) prior to analysis. Qualitative, free-response items were coded by the authors to better understand immediate changes in practice and other concerns. Results: 266 neuropsychologists completed Survey 1 and 230 completed Survey 2. Results suggest that practices immediately moved towards remote service provision. A meaningful proportion of clinicians and their staff were immediately affected economically by the pandemic, with clinicians in private practice differentially affected. Furthermore, a small but significant minority of respondents faced ethical dilemmas related to service provision and expressed concerns with initial communication from their employment organizations. Respondents requested clear best-practice guidelines from neuropsychological practice organizations. Conclusions: It is clear that the field of neuropsychology has drastically shifted clinical practices in response to COVID-19 and is likely to continue to evolve. While these responses were collected in the early stages of stay-at-home orders, policy changes continue to occur, and it is paramount that practice organizations consider the initial challenges expressed by clinicians when formulating practice recommendations and evaluating the clinical utility of telehealth services.


Subject(s)
Betacoronavirus , Coronavirus Infections/epidemiology , Coronavirus Infections/therapy , Neuropsychology/trends , Pandemics , Pneumonia, Viral/epidemiology , Pneumonia, Viral/therapy , Surveys and Questionnaires , Adolescent , Adult , COVID-19 , Child , Communication , Coronavirus Infections/psychology , Employment/methods , Employment/trends , Female , Humans , Male , Neuropsychological Tests , Neuropsychology/methods , Pneumonia, Viral/psychology , SARS-CoV-2 , Young Adult
11.
J Clin Exp Neuropsychol ; 40(10): 1013-1021, 2018 12.
Article in English | MEDLINE | ID: mdl-29779432

ABSTRACT

INTRODUCTION: This study examined false positive rates on embedded performance validity tests (PVTs) in older adults grouped by cognitive status. METHOD: The research design involved secondary analysis of data from the National Alzheimer's Coordinating Center database. Participants (N = 22,688) were grouped by cognitive status: normal (n = 10,319), impaired (n = 1,194), amnestic or nonamnestic mild cognitive impairment (MCI; n = 5,414), and dementia (n = 5,761). Neuropsychological data were used to derive 5 PVTs. RESULTS: False positive rates on individual PVTs ranged from 3.3 to 26.3% with several embedded PVTs showing acceptable specificity across groups. The proportion of participants failing two or more PVTs varied by cognitive status: normal (1.9%), impaired (6.6%), MCI (13.2%), and dementia (52.8%). Comparison of observed and predicted false positive rates at different specificity levels (.85 or .90) demonstrated significant differences in all comparisons. In normal and impaired groups, predicted rates were higher than observed rates. In the MCI group, predicted and observed comparisons varied: Predicted rates were higher with specificity at .85 and lower with specificity at .90. In the dementia group, predicted rates underestimated observed rates. CONCLUSIONS: Despite elevated false positives in conditions involving severe cognitive compromise, several measures retain acceptable specificity regardless of cognitive status. Predicted false positive rates based on the number of PVTs administered were not observed empirically. These findings do not support the utility of simulated data in predicting false positive rates in older adults.
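One simple way to generate the kind of predicted false positive rates discussed above is to assume independent PVTs that share a common specificity and apply the binomial distribution; the sketch below does exactly that for a two-failure cutoff. The independence assumption is an illustration, not the simulation approach evaluated in the study.

# Sketch: predicted probability that a valid examinee fails two or more of k
# independent PVTs, each with the same specificity.
from math import comb

def p_fail_two_or_more(k: int, specificity: float) -> float:
    p_fail = 1 - specificity
    p_at_most_one = sum(comb(k, i) * p_fail**i * specificity**(k - i) for i in (0, 1))
    return 1 - p_at_most_one

for spec in (0.85, 0.90):
    rates = ", ".join(f"k={k}: {p_fail_two_or_more(k, spec):.3f}" for k in (5, 6, 7, 8))
    print(f"specificity {spec}: {rates}")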


Subject(s)
Neuropsychological Tests/standards , Psychomotor Performance , Aged , Aged, 80 and over , Alzheimer Disease/diagnosis , Alzheimer Disease/psychology , Cognitive Dysfunction/diagnosis , Cognitive Dysfunction/psychology , Dementia/diagnosis , Dementia/psychology , False Positive Reactions , Female , Humans , Male , Mental Status and Dementia Tests , Middle Aged , Monte Carlo Method , Reproducibility of Results , Trail Making Test
12.
Arch Clin Neuropsychol ; 31(7): 802-810, 2016 Nov 22.
Article in English | MEDLINE | ID: mdl-27538439

ABSTRACT

OBJECTIVE: This study examined relationships among traumatic brain injury (TBI) severity, the Word Memory Test (WMT), and the California Verbal Learning Test-Second Edition (CVLT-II). METHOD: Participants (N = 104) passed WMT validity indices and were categorized by TBI severity on the basis of medical records. Outcome measures included norm-referenced scores on the CVLT-II and WMT. RESULTS: Participants grouped by TBI severity significantly differed on the CVLT-II but not the WMT. Post-traumatic amnesia (PTA) significantly correlated with the CVLT-II but not the WMT. In a non-medicolegal sample subset (N = 61), TBI severity groups significantly differed on the CVLT-II and WMT Free Recall (FR); PTA significantly correlated with the CVLT-II and WMT FR. CVLT-II impairment groups differed on all WMT variables. Participants grouped by neuroimaging findings differed on the CVLT-II but not the WMT. WMT FR predicted two-level TBI severity using logistic regression but did not contribute in a model including the CVLT-II. CONCLUSION: Overall, WMT memory subtests appeared less sensitive to TBI severity than the CVLT-II.

13.
Arch Clin Neuropsychol ; 30(2): 130-8, 2015 Mar.
Article in English | MEDLINE | ID: mdl-25599723

ABSTRACT

This study examined the ability of the Rey Auditory Verbal Learning Test non-credible score (RAVLT NC) and the RAVLT recognition score (RAVLT Recog) to predict credible versus non-credible neuropsychological test performance. Credible versus non-credible group membership was determined according to diagnostic criteria with consideration of performance on two stand-alone performance validity tests. Findings from this retrospective data analysis of outpatients seen for neuropsychological testing within a Veterans Affairs Medical Center (N = 175) showed that RAVLT Recog demonstrated better classification accuracy than RAVLT NC in predicting credible versus non-credible neuropsychological test performance. Specifically, an RAVLT Recog cutoff of ≤9 resulted in reasonable sensitivity (48%) and acceptable specificity (91%) in predicting non-credible neuropsychological test performance. Implications for clinical practice are discussed. Note: The views contained herein are those of the authors and not representative of the institutions with which they are associated.


Subject(s)
Memory Disorders/diagnosis , Neuropsychological Tests , Verbal Learning/physiology , Adult , Age Factors , Aged , Dementia/complications , Dementia/diagnosis , Educational Status , Female , Hospitals, Veterans , Humans , Male , Malingering/diagnosis , Memory Disorders/etiology , Mental Recall/physiology , Middle Aged , Outpatients , Predictive Value of Tests , ROC Curve , Young Adult
14.
Clin Neuropsychol ; 28(8): 1224-9, 2014.
Article in English | MEDLINE | ID: mdl-25491099

ABSTRACT

Bilder, Sugar, and Helleman (2014, this issue) have criticized recent publications on performance validity test (PVT) failure in clinical samples. Bilder and colleagues appear to make an idiosyncratic interpretation of recent research and inconsistently apply principles of null hypothesis significance testing. Overall, their position seems to propose that PVTs should be held to a higher psychometric standard than conventional neuropsychological tests. Problematic aspects of these criticisms are discussed. Additional consideration is given to research aims and findings.


Subject(s)
Brain Injuries/diagnosis , Brain Injuries/psychology , Malingering/diagnosis , Neuropsychological Tests/statistics & numerical data , Female , Humans , Male
15.
Clin Neuropsychol ; 28(8): 1278-94, 2014.
Article in English | MEDLINE | ID: mdl-25372961

ABSTRACT

Word Choice (WC), a test in the Advanced Clinical Solutions package for Wechsler measures, was examined in two studies. The first study compared WC to the Recognition Memory Test-Words (RMT-W) in a clinical sample (N = 46). WC scores were significantly higher than RMT-W scores overall and in sample subsets grouped by separate validity indicators. In item-level analyses, WC items demonstrated lower frequency, greater imageability, and higher concreteness than RMT-W items. The second study explored WC classification accuracy in a different clinical sample grouped by separate validity indicators into Pass (n = 54), Fail-1 (n = 17), and Fail-2 (n = 8) groups. WC scores were significantly higher in the Pass group (M = 49.1, SD = 1.9) than in the Fail-1 (M = 46.0, SD = 5.3) and Fail-2 (M = 44.1, SD = 4.8) groups. WC demonstrated area under the curve of .81 in classifying Pass and Fail-2 participants. Using the test manual cutoff associated with a 10% false positive rate, sensitivity was 38% and specificity was 96% in Pass and Fail-2 groups with 24% of Fail-1 participants scoring below cutoff. WC may be optimally used in combination with other measures given observed sensitivity.


Subject(s)
Neuropsychological Tests , Recognition, Psychology , Vocabulary , Wechsler Scales , Adult , Female , Humans , Male , Middle Aged , Reproducibility of Results , Sensitivity and Specificity
16.
Arch Clin Neuropsychol ; 29(8): 747-53, 2014 Dec.
Article in English | MEDLINE | ID: mdl-25064762

ABSTRACT

This study compared the Word Memory Test (WMT) and California Verbal Learning Test-Second Edition (CVLT-II) in a sample (N = 76) of outpatient physiatry referrals who passed WMT validity indices. WMT and CVLT-II raw scores showed moderate to strong correlations. WMT scores were more likely to be below expectation than CVLT-II scores using norms from the respective test manuals. With impaired scores defined as 2 SDs below normative mean, the WMT and CVLT-II showed 67% overall agreement and kappa of 0.34. Forty percent of participants who scored within normal limits on the CVLT-II demonstrated an impaired score on the WMT. Despite evidence of utility, WMT memory subtests appear limited by current normative data.
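The agreement statistics reported above (percent agreement and kappa) can be computed from a two-by-two classification table, as sketched below; the counts are illustrative placeholders, not the study's cross-tabulation.

# Sketch: overall percent agreement and Cohen's kappa for two tests'
# impaired/not-impaired classifications. Counts are hypothetical.
def agreement_and_kappa(a, b, c, d):
    """a = both impaired, b = test 1 only, c = test 2 only, d = neither."""
    n = a + b + c + d
    observed = (a + d) / n
    expected = ((a + b) * (a + c) + (c + d) * (b + d)) / n ** 2
    return observed, (observed - expected) / (1 - expected)

obs, kappa = agreement_and_kappa(a=20, b=23, c=2, d=31)
print(f"agreement = {obs:.0%}, kappa = {kappa:.2f}")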


Subject(s)
Brain Diseases/diagnosis , Memory/physiology , Neuropsychological Tests/standards , Verbal Learning/physiology , Adult , Female , Humans , Male , Middle Aged , Reference Values
17.
Clin Neuropsychol ; 28(5): 876-88, 2014.
Article in English | MEDLINE | ID: mdl-24738938

ABSTRACT

The Finger Tapping Test (FTT), administered in most standard neuropsychological evaluations, has been presented as an embedded measure of performance validity. The present study evaluated the utility of three different scoring systems intended to detect invalid performance on the FTT. The scoring systems were evaluated in neuropsychology cases from clinical and independent practices, in which credible versus non-credible performance was determined based on passing all performance validity measures or failing two or more validity indices, respectively. Each FTT scoring method showed specificity of approximately 90% and sensitivity of slightly more than 40%. When suboptimal performance was defined as failure of any of the three scoring methods, specificity was unchanged and sensitivity improved to 50%. The results are discussed in terms of the utility of combining multiple scoring measures for the same test as well as the benefits of embedded measures administered over the duration of the evaluation.


Subject(s)
Cognition Disorders/diagnosis , Disability Evaluation , Fingers/physiology , Movement/physiology , Neuropsychological Tests/standards , Psychomotor Performance/physiology , Adult , Female , Humans , Male , Middle Aged , Reference Values , Regression Analysis , Sensitivity and Specificity , Young Adult
18.
Clin Neuropsychol ; 28(2): 199-214, 2014.
Article in English | MEDLINE | ID: mdl-24528190

ABSTRACT

This study examined the relationship among performance validity test (PVT) failure, number of PVTs administered, and participant characteristics including demographic, diagnostic, functional, and contextual factors in a clinical sample (N = 158) of outpatient physiatry referrals. The number of PVTs failed and the number administered showed a small non-significant correlation (rs = .13, p = .10). Participant characteristics showed associations with PVT failure consistent with prior research. A negative binomial regression model was fitted using number of PVTs failed as outcome and age, education, number of PVTs administered, clinical versus medico-legal context, and functional status as predictors. Although education and functional status were significant predictors of number of PVTs failed, the number of PVTs administered was not. A second analytic approach focused on observed false positive rates in a neurologic no-incentive (NNI) sample subset (n = 87). In contrast to a recent proposal based on statistical simulation, observed false positive rates were lower than predicted rates in NNI participants administered six, seven, or eight PVTs using a two-PVT failure cutoff. These results are interpreted as mitigating concerns that increased PVT failure is necessarily the outcome of increased PVT administration.


Subject(s)
Neuropsychological Tests/statistics & numerical data , Adult , Aged , Educational Status , Female , Humans , Male , Middle Aged , Psychomotor Performance , Referral and Consultation , Reproducibility of Results , United States , White People/statistics & numerical data
19.
J Clin Exp Neuropsychol ; 35(4): 413-20, 2013.
Article in English | MEDLINE | ID: mdl-23514206

ABSTRACT

This study examined embedded performance validity indicators (PVI) based on the number of impaired scores in an evaluation and the overall test battery mean (OTBM). Adult participants (N = 175) reporting traumatic brain injury were grouped using eight PVI. Participants who passed all PVI (n = 67) demonstrated fewer impaired scores and higher OTBM than those who failed two or more PVI (n = 66). Impairment was defined at three levels: T scores < 40, 35, and 30. With specificity ≥.90, sensitivity ranged from .51 to .71 for number of impaired scores and .74 for OTBM.


Subject(s)
Brain Injuries/physiopathology , Cognition Disorders/physiopathology , Language Tests/statistics & numerical data , Neuropsychological Tests , Psychomotor Performance/physiology , Adult , Brain Injuries/complications , Cognition Disorders/etiology , Female , Humans , Male , Middle Aged , Neuropsychological Tests/statistics & numerical data , Reproducibility of Results , Sensitivity and Specificity , Severity of Illness Index , Wechsler Scales/statistics & numerical data
20.
Appl Neuropsychol Adult ; 20(2): 83-94, 2013.
Article in English | MEDLINE | ID: mdl-23397994

ABSTRACT

The Digit Span (DS) task in the Wechsler Adult Intelligence Scale-Fourth Edition differs substantially from earlier versions of the measure, with one of the major changes being the addition of a sequencing component. The present investigation examined the usefulness of the new sequencing task and other DS variables (i.e., DS Age-Scaled Score, DS Forward Total, DS Backward Total, and Reliable DS) in predicting negative response bias. Negative response bias was first defined and examined using below-cutoff performance on the Test of Memory Malingering (TOMM; N = 99). Then, for comparison purposes, negative response bias was examined using below-cutoff performance on the Medical Symptom Validity Test (MSVT; N = 95). Study participants were primarily middle-aged outpatients at a Veterans Affairs medical center. Findings from this retrospective analysis showed that, regardless of whether the TOMM or the MSVT was used as the negative response bias criterion, DS Sequencing Total showed the best classification accuracy of all the DS variables examined. Yet, due to its relatively low positive and negative predictive power, DS Sequencing Total is not recommended for use in isolation to identify negative response bias.


Subject(s)
Malingering/diagnosis , Neuropsychological Tests , Veterans/psychology , Wechsler Scales , Adult , Aged , Aged, 80 and over , Female , Humans , Male , Middle Aged , Predictive Value of Tests , Retrospective Studies , Sensitivity and Specificity