Results 1 - 2 of 2
1.
J Appl Psychol ; 109(6): 921-948, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38270989

ABSTRACT

Interviews are one of the most widely used selection methods, but their reliability and validity can vary substantially. Further, using human evaluators to rate an interview can be expensive and time consuming. Interview scoring models have been proposed as a mechanism for reliably, accurately, and efficiently scoring video-based interviews. Yet, there is a lack of clarity and consensus around their psychometric characteristics, primarily driven by a dearth of published empirical research. The goal of this study was to examine the psychometric properties of automated video interview competency assessments (AVI-CAs), which were designed to be highly generalizable (i.e., apply across job roles and organizations). The AVI-CAs developed demonstrated high levels of convergent validity (average r value of .66), moderate discriminant relationships (average r value of .58), good test-retest reliability (average r value of .72), and minimal levels of subgroup differences (Cohen's ds ≥ -.14). Further, criterion-related validity (uncorrected sample-weighted r¯ = .24) was demonstrated by applying these AVI-CAs to five organizational samples. Strengths, weaknesses, and future directions for building interview scoring models are also discussed. (PsycInfo Database Record (c) 2024 APA, all rights reserved).


Subject(s)
Psychometrics , Humans , Psychometrics/standards , Psychometrics/instrumentation , Reproducibility of Results , Adult , Video Recording , Interviews as Topic , Personnel Selection/methods , Personnel Selection/standards , Professional Competence , Male , Female
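The psychometric summary statistics reported in the abstract above — a sample-weighted mean validity coefficient and Cohen's d for subgroup differences — can be illustrated with a short sketch. This is a generic illustration of the standard formulas, not the authors' code; the function names are ours.

```python
import numpy as np

def cohens_d(group1, group2):
    """Cohen's d: standardized mean difference between two subgroups,
    using the pooled (n-1 weighted) standard deviation."""
    n1, n2 = len(group1), len(group2)
    pooled_var = ((n1 - 1) * np.var(group1, ddof=1) +
                  (n2 - 1) * np.var(group2, ddof=1)) / (n1 + n2 - 2)
    return (np.mean(group1) - np.mean(group2)) / np.sqrt(pooled_var)

def sample_weighted_mean_r(rs, ns):
    """Uncorrected sample-weighted mean correlation across samples,
    as in the abstract's criterion-related validity estimate."""
    rs, ns = np.asarray(rs, dtype=float), np.asarray(ns, dtype=float)
    return float(np.sum(ns * rs) / np.sum(ns))
```

For example, `sample_weighted_mean_r([.2, .4], [100, 300])` weights the larger sample more heavily and returns .35 rather than the unweighted .30.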
2.
J Appl Psychol ; 108(9): 1425-1444, 2023 Sep.
Article in English | MEDLINE | ID: mdl-37036690

ABSTRACT

The diversity-validity dilemma is one of the enduring challenges in personnel selection. Technological advances and new techniques for analyzing data within the fields of machine learning and industrial organizational psychology, however, are opening up innovative ways of addressing this dilemma. Given these rapid advances, we first present a framework unifying analytical methods commonly used in these two fields to reduce group differences. We then propose and demonstrate the effectiveness of two approaches for reducing group differences while maintaining validity, which are highly applicable to numerous big data scenarios: iterative predictor removal and multipenalty optimization. Iterative predictor removal is a technique where predictors are removed from the data set if they simultaneously contribute to higher group differences and lower predictive validity. Multipenalty optimization is a new analytical technique that models the diversity-validity trade-off by adding a group difference penalty to the model optimization. Both techniques were tested on a field sample of asynchronous video interviews. Although both techniques effectively decreased group differences while maintaining predictive validity, multipenalty optimization outperformed iterative predictor removal. Strengths and weaknesses of these two analytical techniques are also discussed along with future research directions. (PsycInfo Database Record (c) 2023 APA, all rights reserved).


Subject(s)
Big Data , Personnel Selection , Humans , Personnel Selection/methods , Machine Learning
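The multipenalty optimization idea described in the abstract above — adding a group-difference penalty to the model's optimization objective — can be sketched as a penalized linear model fit by gradient descent. This is a minimal sketch under our own assumptions (linear model, a squared penalty on the mean prediction gap between two groups, hypothetical function and parameter names), not the authors' implementation.

```python
import numpy as np

def multipenalty_fit(X, y, group, lam_ridge=1.0, lam_group=1.0,
                     lr=0.01, n_iter=2000):
    """Fit linear weights w minimizing
        MSE + lam_ridge * ||w||^2 + lam_group * gap^2,
    where gap is the difference in mean predicted scores between the
    two groups indicated by the 0/1 vector `group`. Increasing
    lam_group trades predictive fit for smaller group differences."""
    n, p = X.shape
    w = np.zeros(p)
    g0, g1 = (group == 0), (group == 1)
    for _ in range(n_iter):
        pred = X @ w
        # Gradient of the mean squared error term.
        grad_mse = 2.0 / n * X.T @ (pred - y)
        # Gradient of the squared group-gap penalty: the gap is linear
        # in w, so its gradient is the difference of group feature means.
        gap = pred[g1].mean() - pred[g0].mean()
        grad_gap = 2.0 * gap * (X[g1].mean(axis=0) - X[g0].mean(axis=0))
        w -= lr * (grad_mse + 2.0 * lam_ridge * w + lam_group * grad_gap)
    return w
```

Setting `lam_group=0` recovers ordinary (ridge-penalized) regression; raising it shrinks the predicted-score gap between groups, which is the diversity-validity trade-off the abstract models. Iterative predictor removal could be layered on top by refitting after dropping any column that both widens the gap and weakens validity.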