Results 1 - 4 of 4
1.
J Appl Psychol ; 97(3): 543-9; discussion 531-6, 537-42, 2012 May.
Article in English | MEDLINE | ID: mdl-22582729

ABSTRACT

We clear up a number of misconceptions from the critiques of our meta-analysis (Van Iddekinge, Roth, Raymark, & Odle-Dusseau, 2012). We reiterate that our research question focused on the criterion-related validity of integrity tests for predicting individual work behavior and that our inclusion criteria flowed from this question. We also reviewed the primary studies we could access from Ones, Viswesvaran, and Schmidt's (1993) meta-analysis of integrity tests and found that only about 30% of the studies met our inclusion criteria. Further, analyses of some of the types of studies we had to exclude revealed potentially inflated validity estimates (e.g., corrected validities as high as .80 for polygraph studies). We also discuss our experience trying to obtain primary studies and other information from authors of Harris et al. (2012) and Ones, Viswesvaran, and Schmidt (2012). In addition, we address concerns raised about certain decisions we made and values we used, and we demonstrate how such concerns would have little or no effect on our results or conclusions. Finally, we discuss some other misconceptions about our meta-analysis, as well as some divergent views about the integrity test literature in general. Overall, we stand by our research question, methods, and results, which suggest that the validity of integrity tests for criteria such as job performance and counterproductive work behavior is weaker than the authors of the critiques appear to believe.


Subject(s)
Ethics , Meta-Analysis as Topic , Personality Assessment/standards , Personnel Selection/methods , Research Design/standards , Humans
2.
J Appl Psychol ; 97(3): 499-530, 2012 May.
Article in English | MEDLINE | ID: mdl-21319880

ABSTRACT

Integrity tests have become a prominent predictor within the selection literature over the past few decades. However, some researchers have expressed concerns about the criterion-related validity evidence for such tests because of a perceived lack of methodological rigor within this literature, as well as a heavy reliance on unpublished data from test publishers. In response to these concerns, we meta-analyzed 104 studies (representing 134 independent samples), which were authored by a similar proportion of test publishers and non-publishers, whose conduct was consistent with professional standards for test validation, and whose results were relevant to the validity of integrity-specific scales for predicting individual work behavior. Overall mean observed validity estimates and validity estimates corrected for unreliability in the criterion (respectively) were .12 and .15 for job performance, .13 and .16 for training performance, .26 and .32 for counterproductive work behavior, and .07 and .09 for turnover. Although data on restriction of range were sparse, illustrative corrections for indirect range restriction did increase validities slightly (e.g., from .15 to .18 for job performance). Several variables appeared to moderate relations between integrity tests and the criteria. For example, corrected validities for job performance criteria were larger when based on studies authored by integrity test publishers (.27) than when based on studies from non-publishers (.12). In addition, corrected validities for counterproductive work behavior criteria were larger when based on self-reports (.42) than when based on other-reports (.11) or employee records (.15).
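A brief aside on the correction reported above (an illustrative sketch, not part of the article itself): correcting an observed validity for unreliability in the criterion follows the standard attenuation formula, where $r_{xy}$ is the observed validity and $r_{yy}$ the criterion reliability:

$\hat{\rho}_{xy} = \dfrac{r_{xy}}{\sqrt{r_{yy}}}$

For example, the job performance figures quoted here (.12 observed, .15 corrected) are consistent with an assumed criterion reliability of roughly $r_{yy} \approx (.12/.15)^2 = .64$; the reliability value actually used in the meta-analysis is not stated in this abstract.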


Subject(s)
Ethics , Personality Assessment/standards , Personnel Selection/methods , Psychology, Industrial/instrumentation , Psychometrics/standards , Humans , Personnel Selection/standards , Psychology, Industrial/standards , Reproducibility of Results
3.
J Appl Psychol ; 90(3): 536-52, 2005 May.
Article in English | MEDLINE | ID: mdl-15910148

ABSTRACT

The authors evaluated the extent to which a personality-based structured interview was susceptible to response inflation. Interview questions were developed to measure facets of agreeableness, conscientiousness, and emotional stability. Interviewers administered mock interviews to participants instructed to respond honestly or like a job applicant. Interviewees completed scales of the same 3 facets from the NEO Personality Inventory, under the same honest and applicant-like instructions. Interviewers also evaluated interviewee personality with the NEO. Multitrait-multimethod analysis and confirmatory factor analysis provided some evidence for the construct-related validity of the personality interviews. As for response inflation, analyses revealed that the scores from the applicant-like condition were significantly more elevated (relative to honest condition scores) for self-report personality ratings than for interviewer personality ratings. In addition, instructions to respond like an applicant appeared to have a detrimental effect on the structure of the self-report and interview ratings, but not interviewer NEO ratings.


Subject(s)
Deception , Interview, Psychological , Job Application , Personality Assessment/statistics & numerical data , Personnel Selection , Adult , Bias , Female , Humans , Male , Personality Inventory/statistics & numerical data , Psychometrics/statistics & numerical data , Reproducibility of Results
4.
J Appl Psychol ; 90(2): 323-34, 2005 Mar.
Article in English | MEDLINE | ID: mdl-15769241

ABSTRACT

The authors modeled sources of error variance in job specification ratings collected from 3 levels of raters across 5 organizations (N=381). Variance components models were used to estimate the variance in ratings attributable to true score (variance between knowledge, skills, abilities, and other characteristics [KSAOs]) and error (KSAO-by-rater and residual variance). Subsequent models partitioned error variance into components related to the organization, position level, and demographic characteristics of the raters. Analyses revealed that the differential ordering of KSAOs by raters was not a function of these characteristics but rather was due to unexplained rating differences among the raters. The implications of these results for job specification and validity transportability are discussed.
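To make the variance decomposition concrete (a simplified sketch consistent with the abstract, not necessarily the authors' exact specification), the importance rating $y_{kr}$ of KSAO $k$ by rater $r$ can be written as:

$y_{kr} = \mu + \kappa_k + (\kappa\rho)_{kr} + e_{kr}$

where $\sigma^2_{\kappa}$ is the true-score variance between KSAOs, and $\sigma^2_{\kappa\rho} + \sigma^2_{e}$ (the KSAO-by-rater interaction plus residual) constitutes error. The subsequent models described in the abstract then partition that error variance by organization, position level, and rater demographics to test whether those characteristics account for raters' differential ordering of the KSAOs.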


Subject(s)
Task Performance and Analysis , Adult , Analysis of Variance , Female , Humans , Likelihood Functions , Male , Models, Statistical , Observer Variation , Psychometrics/methods , Reproducibility of Results , United States