Results 1 - 2 of 2
1.
Clin Neuropsychol ; 38(3): 738-762, 2024 Apr.
Article in English | MEDLINE | ID: mdl-37615421

ABSTRACT

Objective: The present study evaluates the classification accuracy and resistance to coaching of the Inventory of Problems-29 (IOP-29) and the IOP-Memory (IOP-M) in a Spanish sample of patients diagnosed with mild traumatic brain injury (mTBI) and healthy participants instructed to feign.

Method: Using a simulation design, 37 outpatients with mTBI (clinical control group) and 213 non-clinical instructed feigners assigned to several coaching conditions completed the Spanish versions of the IOP-29, the IOP-M, the Structured Inventory of Malingered Symptomatology (SIMS), and the Rivermead Post Concussion Symptoms Questionnaire.

Results: The IOP-29 discriminated well between clinical patients and instructed feigners, with excellent classification accuracy at the recommended cutoff score (FDS ≥ .50; sensitivity = 87.10% for the coached group and 89.09% for the uncoached group; specificity = 95.12%). The IOP-M also showed excellent classification accuracy (cutoff ≤ 29; sensitivity = 87.27% for the coached group and 93.55% for the uncoached group; specificity = 97.56%). Both instruments proved resistant to symptom-information coaching and performance warnings.

Conclusions: The results confirm that the two IOP measures offer a similarly valid but distinct perspective relative to the SIMS when assessing the credibility of mTBI symptom presentations. These encouraging findings indicate that both tests are a valuable addition to the symptom validity practices of forensic professionals. Additional research in multiple contexts and with diverse conditions is warranted.


Subject(s)
Brain Concussion , Mentoring , Humans , Brain Concussion/complications , Brain Concussion/diagnosis , Neuropsychological Tests , Sensitivity and Specificity , Malingering/diagnosis , Reproducibility of Results
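
For readers less familiar with symptom-validity metrics, the minimal Python sketch below shows how sensitivity and specificity follow once a cutoff such as the IOP-29's FDS ≥ .50 or the IOP-M's ≤ 29 is fixed. The scores and group labels are invented for illustration; this is not the instruments' scoring algorithm and not the study's data.

```python
# Illustrative sketch only: computing sensitivity/specificity at a fixed validity-test cutoff.
# Scores and ground-truth labels below are hypothetical, not IOP-29/IOP-M study data.

def sensitivity_specificity(scores, is_feigner, flag):
    """flag(score) returns True when the score is classified as non-credible."""
    tp = sum(1 for s, f in zip(scores, is_feigner) if f and flag(s))
    fn = sum(1 for s, f in zip(scores, is_feigner) if f and not flag(s))
    tn = sum(1 for s, f in zip(scores, is_feigner) if not f and not flag(s))
    fp = sum(1 for s, f in zip(scores, is_feigner) if not f and flag(s))
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical example values
fds_scores   = [0.72, 0.61, 0.18, 0.55, 0.09, 0.44, 0.83]
ground_truth = [True, True, False, True, False, False, True]

# IOP-29-style rule: scores at or above the recommended cutoff (FDS >= .50) are flagged
sens, spec = sensitivity_specificity(fds_scores, ground_truth, lambda s: s >= 0.50)
print(f"IOP-29 sketch: sensitivity={sens:.2f}, specificity={spec:.2f}")

# IOP-M-style rule: performance scores at or below the cutoff (<= 29) are flagged
iopm_scores = [24, 27, 33, 28, 34, 31, 22]
sens, spec = sensitivity_specificity(iopm_scores, ground_truth, lambda s: s <= 29)
print(f"IOP-M sketch: sensitivity={sens:.2f}, specificity={spec:.2f}")
```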
2.
Behav Res Methods ; 56(4): 3242-3258, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38129734

ABSTRACT

It is common for some participants in self-report surveys to respond carelessly, inattentively, or with little effort. Data quality can be severely compromised by responses that are not based on item content (non-content-based [nCB] responses), leading to strong bias in the results of data analysis and to misinterpretation of individual scores. In this study, we propose a specification of factor mixture analysis (FMA) to detect nCB responses. We investigated the usefulness and effectiveness of the FMA model in detecting nCB responses using both simulated data (Study 1) and real data (Study 2). In Study 1, FMA showed reasonably robust sensitivity (.60 to .86) and excellent specificity (.96 to .99) on mixed-worded scales, suggesting that FMA has superior properties as a screening tool under different sample conditions. However, FMA performed poorly on scales composed only of positively worded items, because acquiescent patterns are difficult to distinguish from valid responses representing high levels of the trait. In Study 2 (real data), FMA detected a minority of cases (6.5%) with highly anomalous response patterns. Removing these cases produced a large increase in the fit of the unidimensional model and a substantial reduction in spurious multidimensionality.


Subject(s)
Self Report , Humans , Factor Analysis, Statistical , Surveys and Questionnaires , Data Interpretation, Statistical , Models, Statistical
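
The screening logic evaluated in Study 1 can be illustrated with a much-simplified sketch. The Python code below is not the authors' FMA specification: it simulates a mixed-worded Likert scale with valid, random, and acquiescent responders, computes a per-person inconsistency index between positively and reverse-keyed items, fits an off-the-shelf two-component Gaussian mixture (scikit-learn) to flag a high-inconsistency class, and then scores sensitivity and specificity against the simulated ground truth. All data and parameter choices are invented for illustration.

```python
# Simplified sketch of mixture-based screening for non-content-based (nCB) responding.
# NOT the paper's factor mixture analysis: a plain two-component Gaussian mixture is fit
# to a per-person inconsistency index computed from a simulated mixed-worded scale.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)
n_valid, n_ncb, k = 900, 100, 5          # respondents and Likert categories (1..k)

def likert(x, k=5):
    """Map continuous values to 1..k Likert responses."""
    return np.clip(np.rint(x), 1, k).astype(int)

# Valid responders: 5 positively worded and 5 reverse-keyed items driven by one trait.
theta = rng.normal(0, 1, n_valid)
pos_v = likert(3 + theta[:, None] + rng.normal(0, 0.7, (n_valid, 5)))
rev_v = likert(3 - theta[:, None] + rng.normal(0, 0.7, (n_valid, 5)))

# nCB responders: half random, half acquiescent ("agree with everything").
pos_r = rng.integers(1, 6, (n_ncb // 2, 5)); rev_r = rng.integers(1, 6, (n_ncb // 2, 5))
pos_a = likert(rng.normal(4.5, 0.5, (n_ncb // 2, 5))); rev_a = likert(rng.normal(4.5, 0.5, (n_ncb // 2, 5)))

pos = np.vstack([pos_v, pos_r, pos_a]); rev = np.vstack([rev_v, rev_r, rev_a])
is_ncb = np.r_[np.zeros(n_valid, bool), np.ones(n_ncb, bool)]

# Inconsistency index: for a content-driven responder, a positive item and its
# reverse-keyed counterpart should roughly sum to k + 1, whatever the trait level.
incon = np.abs(pos + rev - (k + 1)).mean(axis=1).reshape(-1, 1)

gmm = GaussianMixture(n_components=2, random_state=0).fit(incon)
flag_class = np.argmax(gmm.means_.ravel())          # the high-inconsistency component
flagged = gmm.predict(incon) == flag_class

sens = (flagged & is_ncb).sum() / is_ncb.sum()
spec = (~flagged & ~is_ncb).sum() / (~is_ncb).sum()
print(f"sensitivity={sens:.2f}, specificity={spec:.2f}")
```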