Results 1 - 6 of 6
1.
Meas Eval Couns Dev ; 56(3): 254-264, 2023.
Article in English | MEDLINE | ID: mdl-37744422

ABSTRACT

We investigated the validity and screening effectiveness of PHQ-2 and PHQ-9 scores in 229 college students using a cross-sectional design. Associations between the PHQ scales and Minnesota Multiphasic Personality Inventory-3 internalizing scales suggest that PHQ scores are effective screening tools for college students and may aid in triage and the identification of service needs.

2.
J Clin Psychol ; 79(2): 374-390, 2023 02.
Article in English | MEDLINE | ID: mdl-35869855

ABSTRACT

OBJECTIVE: Attaining competence in assessment is a necessary step in graduate training and has been defined to include multiple training domains. While it is important to ensure trainees meet these training standards, it is also critical to understand how, and whether, competence shapes a trainee's professional identity, thereby promoting lifelong competency. METHODS: The current study assessed enrolled graduate trainees' knowledge and perceptions of their assessment capabilities to determine whether self-reported and performance-based competence incrementally predicted their intention to use assessment in the future, above and beyond basic training characteristics and intended career interests. RESULTS: Self-reported competence, but not performance-based competence, played an incremental role in trainees' intention to use assessments in their careers. Multiple graduate training characteristics and practice experiences were nonsignificant predictors after accounting for other relevant predictors (i.e., intended career settings, integrated reports). CONCLUSION: Findings are discussed in terms of the importance of incorporating a hybrid competency-capability assessment training framework that emphasizes trainee self-efficacy, in hopes of promoting lifelong competence in the continued use of assessment.
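Incremental prediction of this kind is typically tested by comparing nested regression models. The sketch below is purely illustrative and is not the study's analysis; the variable names and simulated data are hypothetical.

```python
# Illustrative sketch of an incremental (hierarchical) regression, NOT the study's code.
# All variable names and data below are hypothetical.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 200
years_training = rng.normal(3, 1, n)          # hypothetical training characteristic
reports_written = rng.poisson(5, n)           # hypothetical practice experience
self_reported_comp = rng.normal(0, 1, n)      # hypothetical self-reported competence
intent_to_use = (0.2 * years_training + 0.1 * reports_written
                 + 0.5 * self_reported_comp + rng.normal(0, 1, n))

# Baseline model: training characteristics only.
X_base = sm.add_constant(np.column_stack([years_training, reports_written]))
# Full model: adds self-reported competence.
X_full = sm.add_constant(np.column_stack([years_training, reports_written,
                                          self_reported_comp]))

base = sm.OLS(intent_to_use, X_base).fit()
full = sm.OLS(intent_to_use, X_full).fit()

# R-squared change and F-test for the incremental contribution.
print(f"Delta R^2 = {full.rsquared - base.rsquared:.3f}")
f_stat, p_value, df_diff = full.compare_f_test(base)
print(f"F change ({df_diff:.0f} df) = {f_stat:.2f}, p = {p_value:.4f}")
```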


Subject(s)
Intention , Physicians , Humans , Self Report , Clinical Competence , Students
3.
Clin Neuropsychol ; 37(6): 1154-1172, 2023 08.
Article in English | MEDLINE | ID: mdl-35980751

ABSTRACT

Objective: To investigate the utility of the validity scales of the Minnesota Multiphasic Personality Inventory-2-Restructured Form (MMPI-2-RF) for detecting feigned Attention-Deficit/Hyperactivity Disorder (ADHD), we utilized a simulation design. Method: We examined group differences across the Restructured Clinical (RC) and validity scales as well as the classification ability of the validity scales across three cut scores. Analyses were conducted across five simulation groups (N = 177) and a standard instruction group (N = 32). Results: Across most of the RC and validity scales, those feigning ADHD produced significantly higher scores than the standard instruction group, but generally no significant differences emerged between the feigning groups. The most promising scales for detecting feigned ADHD were F-r, Fp-r, and Fs at cut scores in the 70T to 80T range. Conclusions: Results support the use of the MMPI-2-RF in ADHD evaluations, with scores on F-r, Fs, and Fp-r being particularly useful for detecting feigned ADHD in college students. However, there was no evidence to support the feigning of distinct ADHD symptom presentations.
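As a minimal sketch of what "classification ability across cut scores" means in practice, the snippet below computes sensitivity and specificity at several T-score cuts. The simulated scores and group sizes are hypothetical stand-ins, not the study's data.

```python
# Illustrative sketch: classifying feigners vs. standard-instruction responders
# at several T-score cut scores. Simulated data, NOT the study's data.
import numpy as np

rng = np.random.default_rng(1)
feigners = rng.normal(95, 15, 177)   # hypothetical validity-scale T-scores, feigning groups
controls = rng.normal(55, 10, 32)    # hypothetical T-scores under standard instructions

for cut in (70, 80, 90):
    sensitivity = np.mean(feigners >= cut)   # proportion of feigners flagged at this cut
    specificity = np.mean(controls < cut)    # proportion of controls correctly passed
    print(f"cut >= {cut}T: sensitivity = {sensitivity:.2f}, specificity = {specificity:.2f}")
```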


Subject(s)
Attention Deficit Disorder with Hyperactivity , MMPI , Humans , Attention Deficit Disorder with Hyperactivity/diagnosis , Malingering/diagnosis , Neuropsychological Tests , Students , Reproducibility of Results
4.
Clin Neuropsychol ; 36(8): 2361-2369, 2022 11.
Article in English | MEDLINE | ID: mdl-34470583

ABSTRACT

OBJECTIVE: We examined the utility of the Minnesota Multiphasic Personality Inventory-3 (MMPI-3) to detect feigned over-reporting using a symptom-based coaching simulation design across a control group and three diagnostic conditions: posttraumatic stress disorder (PTSD), mild traumatic brain injury (mTBI), and comorbid PTSD and mTBI. METHOD: Participants were 310 college students who were randomly assigned to one of the four conditions. Participants in the feigning conditions were provided with a description of their respective disorder throughout the session and asked to feign according to their condition while completing the MMPI-3. RESULTS: The MMPI-3 over-reporting scales performed well at classifying feigning, demonstrating low sensitivity, high specificity, and effect magnitudes in the medium to large range (1.12-2.47). There were no differences on the over-reporting scales between diagnostic conditions with dissimilar symptoms. CONCLUSIONS: Findings provide initial support for the use of the MMPI-3 over-reporting scales for detecting feigned PTSD, mTBI, and comorbid PTSD and mTBI. Further, individuals feigning different disorders, namely PTSD, mTBI, and comorbid PTSD and mTBI, feigned predominantly general psychopathological symptoms, making Fp the strongest scale for detecting these feigned disorders. Future research will benefit from establishing relevant diagnostic comparison groups to contrast with this study and from utilizing known-groups designs with both performance validity test (PVT) and symptom validity test (SVT) administration.
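For readers unfamiliar with standardized group-difference effect sizes of the kind reported above, here is a short sketch of a pooled-SD standardized mean difference (Cohen's d) between a feigning group and a control group. The data and scale values are simulated for illustration only.

```python
# Illustrative sketch: Cohen's d (pooled SD) between a feigning group and a
# control group on an over-reporting scale. Simulated data, NOT the study's data.
import numpy as np

def cohens_d(group1, group2):
    """Standardized mean difference using the pooled standard deviation."""
    n1, n2 = len(group1), len(group2)
    pooled_var = ((n1 - 1) * np.var(group1, ddof=1)
                  + (n2 - 1) * np.var(group2, ddof=1)) / (n1 + n2 - 2)
    return (np.mean(group1) - np.mean(group2)) / np.sqrt(pooled_var)

rng = np.random.default_rng(2)
feigning = rng.normal(90, 20, 78)   # hypothetical over-reporting T-scores, feigning condition
control = rng.normal(52, 12, 78)    # hypothetical over-reporting T-scores, control condition
print(f"d = {cohens_d(feigning, control):.2f}")
```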


Subject(s)
MMPI , Stress Disorders, Post-Traumatic , Humans , Malingering/diagnosis , Malingering/epidemiology , Reproducibility of Results , Neuropsychological Tests , Stress Disorders, Post-Traumatic/diagnosis , Stress Disorders, Post-Traumatic/epidemiology
5.
Mil Psychol ; 34(4): 484-493, 2022.
Article in English | MEDLINE | ID: mdl-38536284

ABSTRACT

This study evaluated the Personality Assessment Inventory's (PAI) symptom validity-based over-reporting scales against concurrently administered performance validity testing in a sample of active-duty military personnel seen within a neuropsychology clinic. We utilized two measures of performance validity to identify problematic performance validity (pass all/fail any) in 468 participants. Scale means, sensitivity, specificity, predictive values, and risk ratios were contrasted across the symptom validity-based over-reporting scales. Results indicated that the Negative Impression Management (NIM), Malingering Index (MAL), and Multiscale Feigning Index (MFI) scales were best at classifying failed performance validity testing, with medium to large effects (d = .61-.73). In general, these scales demonstrated high specificity and low sensitivity. The Rogers Discriminant Function (RDF) had negligible group differences and poor classification, and the Feigned Adult ADHD Index (FAA) performed inconsistently. This study provides support for the use of several PAI over-reporting scales for detecting probable patterns of performance-based invalid responding within a military sample. Military clinicians using the NIM, MAL, or MFI can be confident that examinees who elevate these scales at recommended cut scores are likely to fail concurrent performance validity testing. Use of the FAA and RDF scales is discouraged due to their poor or mixed performance.
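The classification statistics named above can all be read off a 2x2 table crossing scale elevation with performance validity status (pass all/fail any). The sketch below uses invented cell counts purely to show the arithmetic; it does not reproduce the study's data.

```python
# Illustrative sketch: sensitivity, specificity, predictive values, and a risk ratio
# from a 2x2 table crossing scale elevation with PVT status. Counts are hypothetical.
tp, fn = 35, 55    # PVT failures: elevated vs. not elevated on the over-reporting scale
fp, tn = 25, 300   # PVT passers:  elevated vs. not elevated on the over-reporting scale

sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
ppv = tp / (tp + fp)                 # positive predictive value
npv = tn / (tn + fn)                 # negative predictive value
risk_elevated = tp / (tp + fp)       # risk of PVT failure given an elevated scale
risk_not_elevated = fn / (fn + tn)   # risk of PVT failure given a non-elevated scale
risk_ratio = risk_elevated / risk_not_elevated

print(f"sensitivity={sensitivity:.2f} specificity={specificity:.2f} "
      f"PPV={ppv:.2f} NPV={npv:.2f} RR={risk_ratio:.2f}")
```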

6.
Suicide Life Threat Behav ; 51(1): 148-161, 2021 02.
Article in English | MEDLINE | ID: mdl-33624879

ABSTRACT

OBJECTIVE: Although causal inference is often straightforward in experimental contexts, few research questions in suicide research are amenable to experimental manipulation and randomized control. Instead, suicide prevention specialists must rely on observational data and statistical control of confounding variables to make effective causal inferences. We provide a brief summary of recent covariate practice and a tutorial on causal inference tools for covariate selection in suicide research. METHOD: We provide an introduction to modern causal inference tools, suggestions for statistical control selection, and demonstrations using simulated data. RESULTS: Statistical controls are often mistakenly selected due to their significant correlation with other study variables, their consistency with previous research, or no explicit reason at all. We clarify what it means to control for a variable and when controlling for the wrong covariates systematically distorts results. We describe directed acyclic graphs (DAGs) and tools for identifying the right choice of covariates. Finally, we provide four best practices for integrating causal inference tools in future studies. CONCLUSION: The use of causal modeling tools, such as DAGs, allows researchers to carefully and thoughtfully select statistical controls and avoid presenting distorted findings; however, limitations of this approach are discussed.
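To make the "wrong covariate" point concrete, here is a small simulation (with hypothetical variables, not the paper's own example) in which adjusting for a collider, a common effect of two truly independent variables, induces a spurious association between them.

```python
# Illustrative sketch of collider bias: adjusting for a common effect (collider)
# of two independent variables induces a spurious association between them.
# Variables and data are hypothetical, not drawn from the paper.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
n = 5000
exposure = rng.normal(size=n)                       # hypothetical risk factor
outcome = rng.normal(size=n)                        # independent of the exposure by construction
collider = exposure + outcome + rng.normal(size=n)  # common effect of both

# Unadjusted model: the exposure coefficient is near zero, as it should be.
unadjusted = sm.OLS(outcome, sm.add_constant(exposure)).fit()

# "Controlling" for the collider: the exposure now appears spuriously associated
# with the outcome, illustrating a systematically distorted result.
X_adj = sm.add_constant(np.column_stack([exposure, collider]))
adjusted = sm.OLS(outcome, X_adj).fit()

print(f"unadjusted exposure coefficient:       {unadjusted.params[1]:.3f}")
print(f"collider-adjusted exposure coefficient: {adjusted.params[1]:.3f}")
```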


Subject(s)
Models, Theoretical , Suicide Prevention , Causality , Confounding Factors, Epidemiologic , Data Interpretation, Statistical , Humans