Results 1 - 10 of 10
1.
J Pers Assess ; : 1-13, 2024 Mar 19.
Article in English | MEDLINE | ID: mdl-38501713

ABSTRACT

Self-report assessments are the standard for personality measurement, but motivated respondents are able to manipulate or fake their responses to typical Likert-scale self-reports. Although progress has been made in research seeking to reduce faking, most of it has focused on normative personality traits such as those measured by the five-factor model. The measurement of socially aversive personality (e.g., the Dark Triad) is less well researched. The negative aspects of socially aversive traits increase the opportunity and motivation for respondents to fake typical single-stimulus self-report assessments, underscoring the need for faking-resistant response formats. A possible way to reduce faking that has been explored in basic personality research is the forced-choice response format. This study applied that format to socially aversive traits and illustrated best practices for creating new multidimensional forced-choice and single-stimulus measures of socially aversive personality traits. Results indicated that participants were able to artificially alter their scores when asked to respond like an ideal job applicant and, counter to expectations, the forced-choice format did not decrease faking. Our results indicate that, even when best practices are followed, the forced-choice format is not a panacea for respondent faking.
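
For readers unfamiliar with the format, the sketch below shows one classical way that multidimensional forced-choice blocks are scored (rank-based, partially ipsative scoring). The trait labels, block structure, and rankings are invented for illustration and are not the instrument or scoring model used in the study.

```python
import pandas as pd

# Hypothetical forced-choice triplets: each block contains one statement per
# trait (Machiavellianism, narcissism, psychopathy), and respondents rank the
# three statements from most (3) to least (1) descriptive of themselves.
responses = pd.DataFrame(
    [
        {"block": 1, "mach": 3, "narc": 2, "psyc": 1},
        {"block": 2, "mach": 2, "narc": 3, "psyc": 1},
        {"block": 1, "mach": 1, "narc": 2, "psyc": 3},
        {"block": 2, "mach": 1, "narc": 3, "psyc": 2},
    ],
    index=["r1", "r1", "r2", "r2"],  # two hypothetical respondents
)

# Classical (partially ipsative) scoring: sum the ranks each respondent assigned
# to a trait's statements across blocks. Higher totals mean the trait's
# statements were endorsed more strongly relative to the other traits.
scores = responses.drop(columns="block").groupby(level=0).sum()
print(scores)
```

Modern applications typically replace this ipsative scoring with Thurstonian IRT models, but the block structure being scored is the same.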

2.
Annu Rev Psychol ; 74: 577-596, 2023 Jan 18.
Article in English | MEDLINE | ID: mdl-35973734

ABSTRACT

Surveys administered online have several benefits, but they are particularly prone to careless responding, which occurs when respondents fail to read item content or to give it sufficient attention, resulting in raw data that may not accurately reflect respondents' true levels of the constructs being measured. Careless responding can lead to a variety of psychometric problems, potentially affecting any area of psychology that uses self-reported surveys and assessments. This review synthesizes the careless responding literature to provide a comprehensive understanding of careless responding and of ways to prevent, identify, report, and clean careless responses from data sets. Further, we include recommendations for different levels of screening for careless responses. Finally, we highlight some of the most promising areas for future work on careless responding.


Subject(s)
Surveys and Questionnaires , Humans , Self Report , Psychometrics/methods
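
Two of the simplest screens discussed in this literature are instructed-response items ("Select 'Agree' for this question") and the longstring index, the longest run of identical consecutive answers. Below is a minimal sketch with simulated Likert data; the column names, cutoff, and requested answer are assumptions for illustration only.

```python
import numpy as np
import pandas as pd

def longstring(row: pd.Series) -> int:
    """Length of the longest run of identical consecutive responses."""
    values = row.to_numpy()
    longest = current = 1
    for prev, curr in zip(values, values[1:]):
        current = current + 1 if curr == prev else 1
        longest = max(longest, current)
    return longest

# Simulated survey data: items q1..q20 on a 1-5 scale, plus one
# instructed-response item where "4" (Agree) was the requested answer.
rng = np.random.default_rng(0)
data = pd.DataFrame(rng.integers(1, 6, size=(100, 20)),
                    columns=[f"q{i}" for i in range(1, 21)])
data["attn_check"] = rng.integers(1, 6, size=100)

flags = pd.DataFrame({
    "failed_attention_check": data["attn_check"] != 4,
    "longstring": data.drop(columns="attn_check").apply(longstring, axis=1),
})
# Flag respondents with an implausibly long run of identical answers; the
# cutoff (here, half the items) is a judgment call, not a fixed rule.
flags["long_run"] = flags["longstring"] >= 10
print(flags.head())
```
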
3.
Educ Psychol Meas ; 82(6): 1107-1129, 2022 Dec.
Article in English | MEDLINE | ID: mdl-36325125

ABSTRACT

The effects of different response option orders on survey responses have been studied extensively. The typical research design involves examining the differences in response characteristics between conditions with the same item stems and response option orders that differ in valence: either incrementally arranged (e.g., strongly disagree to strongly agree) or decrementally arranged (e.g., strongly agree to strongly disagree). The present study added two additional experimental conditions: randomly incremental or decremental, and completely randomized. All items were presented in an item-by-item format. We also extended previous studies by examining response option order effects on careless responding, correlations between focal predictors and criteria, and participant reactions, while controlling the false discovery rate and focusing on the size of effects. In a sample of 1,198 university students, we found little to no response option order effects on a recognized personality assessment with respect to measurement equivalence, scale mean differences, item-level distributions, or participant reactions. However, the completely randomized response option order condition differed on several careless responding indices, suggesting avenues for future research.
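
The false discovery rate control mentioned above usually refers to the Benjamini-Hochberg step-up procedure: sort the m p-values, find the largest rank k with p(k) <= (k/m)q, and reject the hypotheses with the k smallest p-values. A minimal sketch with invented p-values:

```python
import numpy as np

def benjamini_hochberg(p_values, q=0.05):
    """Return a boolean rejection mask controlling the false discovery rate at q."""
    p = np.asarray(p_values, dtype=float)
    m = p.size
    order = np.argsort(p)
    ranked = p[order]
    # Largest rank k with p_(k) <= (k / m) * q; reject hypotheses 1..k.
    below = ranked <= (np.arange(1, m + 1) / m) * q
    reject = np.zeros(m, dtype=bool)
    if below.any():
        k = np.max(np.nonzero(below)[0])  # index of the largest qualifying rank
        reject[order[: k + 1]] = True
    return reject

# Invented p-values from a family of order-effect comparisons.
print(benjamini_hochberg([0.001, 0.012, 0.020, 0.260, 0.740]))
```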

4.
J Psychol ; 149(7): 684-710, 2015.
Article in English | MEDLINE | ID: mdl-25356746

ABSTRACT

Although employee (subjective) perceived overqualification (POQ) has recently been explored as a meaningful organizational construct, further work is needed to fully understand it. We extend the theoretical psychological underpinnings of employee POQ and examine both its determinants and outcomes based on established and newly proposed theoretical developments. Four hundred fifteen employees completed an online questionnaire, and 208 of their supervisors completed corresponding surveys about the employees' withdrawal behaviors and job-related attitudes, in order to explore potential predictors and outcomes of subjectively experienced POQ. Among the predictors, work conditions (uniform requirements and repetitive tasks) were most strongly associated with POQ. In terms of individual differences, narcissism predicted higher POQ, whereas general mental ability did so only when other variables were held constant. Among the outcomes, higher POQ was related to lower job satisfaction and organizational commitment but was not related to withdrawal behaviors such as truancy, absenteeism, and turnover intentions.


Subject(s)
Aptitude , Employment/psychology , Job Satisfaction , Narcissism , Social Perception , Adult , Female , Humans , Male , Young Adult
5.
Cyberpsychol Behav Soc Netw ; 16(11): 800-5, 2013 Nov.
Article in English | MEDLINE | ID: mdl-23790360

ABSTRACT

Job applicants and incumbents often use social media for personal communication, allowing direct observation of their social exchanges "unfiltered" for employer consumption. As such, these data offer a glimpse of employees in settings free from the impression management pressures present during evaluations conducted for applicant screening and research purposes. This study investigated whether job applicants' (N = 175) personality characteristics are reflected in the content of their social media postings. Participant self-reported social media content related to (a) photos and text-based references to alcohol and drug use and (b) criticisms of superiors and peers (so-called "badmouthing" behavior) was compared with traditional personality assessments. Results indicated that extraverted candidates were prone to postings related to alcohol and drugs. Those low in agreeableness were particularly likely to engage in online badmouthing behaviors. Evidence concerning the relationships between conscientiousness and the outcomes of interest was mixed.


Subject(s)
Employment , Personality , Social Behavior , Social Media , Adolescent , Adult , Behavior , Female , Humans , Male , Personality Assessment , Personality Inventory
6.
J Appl Psychol ; 97(5): 1016-31, 2012 Sep.
Article in English | MEDLINE | ID: mdl-22468848

ABSTRACT

The efficacy of tests of differential item functioning (measurement invariance) has been well established. It is clear that when properly implemented, these tests can successfully identify differentially functioning (DF) items when they exist. However, an assumption of these analyses is that the metric for different groups is linked using anchor items that are invariant. In practice, however, it is impossible to be certain which items are DF and which are invariant. This problem of anchor items, or referent indicators, has long plagued invariance research, and a multitude of suggested approaches have been put forth. Unfortunately, the relative efficacy of these approaches has not been tested. This study compares 11 variations on 5 qualitatively different approaches from recent literature for selecting optimal anchor items. A large-scale simulation study indicates that for nearly all conditions, an easily implemented 2-stage procedure recently put forth by Lopez Rivas, Stark, and Chernyshenko (2009) provided optimal power while maintaining nominal Type I error. With this approach, appropriate anchor items can be easily and quickly located, resulting in more efficacious invariance tests. Recommendations for invariance testing are illustrated using a pedagogical example of employee responses to an organizational culture measure.


Subject(s)
Models, Statistical , Organizational Culture , Analysis of Variance , Focus Groups , Humans , Psychology, Applied/methods
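
The anchor-item idea described in the abstract above is easiest to see in a simplified, non-IRT form. The sketch below simulates a differentially functioning item, builds a matching score from presumed-invariant anchor items only, and applies a likelihood-ratio logistic-regression DIF screen. This is a generic illustration of anchoring, not the two-stage procedure or the IRT-based approaches compared in the study, and all data and item names are simulated.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from scipy.stats import chi2

rng = np.random.default_rng(1)
n = 500

# Simulated dichotomous responses for two groups on 5 items; item5 is built to
# favor group 1 (a differentially functioning item), items 1-4 serve as anchors.
group = rng.integers(0, 2, size=n)
theta = rng.normal(size=n)
items = {}
for j in range(1, 5):
    items[f"item{j}"] = (rng.random(n) < 1 / (1 + np.exp(-theta))).astype(int)
items["item5"] = (rng.random(n) < 1 / (1 + np.exp(-(theta + 0.8 * group)))).astype(int)
df = pd.DataFrame(items)
df["group"] = group

# Matching score built from the presumed-invariant anchors only (items 1-4).
df["anchor_score"] = df[[f"item{j}" for j in range(1, 5)]].sum(axis=1)

# DIF screen for the studied item: compare a model that adds a group term to
# one using the anchor score alone; a significant improvement flags DIF.
X0 = sm.add_constant(df[["anchor_score"]])
X1 = sm.add_constant(df[["anchor_score", "group"]])
m0 = sm.Logit(df["item5"], X0).fit(disp=0)
m1 = sm.Logit(df["item5"], X1).fit(disp=0)
lr = 2 * (m1.llf - m0.llf)  # likelihood-ratio statistic, 1 df here
print(f"LR = {lr:.2f}, p = {chi2.sf(lr, 1):.4f}")
```
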
7.
Psychol Methods ; 17(3): 437-55, 2012 Sep.
Article in English | MEDLINE | ID: mdl-22506584

ABSTRACT

When data are collected via anonymous Internet surveys, particularly under conditions of obligatory participation (such as with student samples), data quality can be a concern. However, little guidance exists in the published literature regarding techniques for detecting careless responses. Several potential approaches have previously been suggested for identifying careless respondents via indices computed from the data, yet almost no prior work has examined the relationships among these indicators or the types of data patterns identified by each. In 2 studies, we examined several methods for identifying careless responses, including (a) special items designed to detect careless response, (b) response consistency indices formed from responses to typical survey items, (c) multivariate outlier analysis, (d) response time, and (e) self-reported diligence. Results indicated that there are two distinct patterns of careless response (random and nonrandom) and that different indices are needed to identify these different response patterns. We also found that approximately 10%-12% of undergraduates completing a lengthy survey for course credit were identified as careless responders. In Study 2, we simulated data with known random response patterns to determine the efficacy of several indicators of careless response. We found that the nature of the data strongly influenced the efficacy of the indices in identifying careless responses. Recommendations include using identified rather than anonymous responses, incorporating instructed response items before data collection, and computing consistency indices and conducting multivariate outlier analysis to ensure high-quality data.


Subject(s)
Research Design , Surveys and Questionnaires , Data Collection , Humans , Internet , Logistic Models , Reaction Time , Self Report
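
Two of the indices examined in the abstract above lend themselves to a compact illustration: a within-person even-odd consistency coefficient and Mahalanobis distance as a multivariate outlier screen. This is a minimal sketch with simulated Likert data; the scale structure and sample size are assumptions, and a real application would use the survey's actual unidimensional scales.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(2)
# Simulated data: 200 respondents answering six 6-item scales on a 1-5 metric.
scales = {f"scale{k}": [f"scale{k}_item{j}" for j in range(1, 7)] for k in range(1, 7)}
cols = [c for items in scales.values() for c in items]
data = pd.DataFrame(rng.integers(1, 6, size=(200, len(cols))), columns=cols)

# Even-odd consistency: for each person, correlate subscale means computed from
# even-numbered items with those computed from odd-numbered items. Rows with no
# variability in their subscale means yield NaN and would simply be flagged.
even = np.stack([data[v[1::2]].mean(axis=1) for v in scales.values()], axis=1)
odd = np.stack([data[v[0::2]].mean(axis=1) for v in scales.values()], axis=1)
even_c = even - even.mean(axis=1, keepdims=True)
odd_c = odd - odd.mean(axis=1, keepdims=True)
with np.errstate(invalid="ignore", divide="ignore"):
    even_odd = (even_c * odd_c).sum(axis=1) / np.sqrt(
        (even_c ** 2).sum(axis=1) * (odd_c ** 2).sum(axis=1))

# Mahalanobis distance from the sample centroid as a multivariate outlier index.
X = data.to_numpy(dtype=float)
centered = X - X.mean(axis=0)
inv_cov = np.linalg.pinv(np.cov(X, rowvar=False))
mahalanobis = np.sqrt(np.einsum("ij,jk,ik->i", centered, inv_cov, centered))

print(pd.DataFrame({"even_odd": even_odd, "mahalanobis": mahalanobis}).head())
```
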
8.
Behav Res Methods ; 43(3): 800-13, 2011 Sep.
Article in English | MEDLINE | ID: mdl-21437749

ABSTRACT

Online contract labor portals (i.e., crowdsourcing) have recently emerged as attractive alternatives to university participant pools for the purposes of collecting survey data for behavioral research. However, prior research has not provided a thorough examination of crowdsourced data for organizational psychology research. We found that, as compared with a traditional university participant pool, crowdsourcing respondents were older, were more ethnically diverse, and had more work experience. Additionally, the reliability of the data from the crowdsourcing sample was as good as or better than the corresponding university sample. Moreover, measurement invariance generally held across these groups. We conclude that the use of these labor portals is an efficient and appropriate alternative to a university participant pool, despite small differences in personality and socially desirable responding across the samples. The risks and advantages of crowdsourcing are outlined, and an overview of practical and ethical guidelines is provided.


Subject(s)
Data Collection/methods , Patient Selection , Research Design , Research Subjects , Behavioral Research , Humans , Personality
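
The reliability comparison reported above amounts to computing internal consistency separately by sample. Below is a minimal sketch of coefficient alpha by group, with simulated data standing in for the crowdsourced and university samples.

```python
import numpy as np
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Coefficient alpha: k/(k-1) * (1 - sum of item variances / variance of totals)."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Simulated responses to a 10-item scale from two sources (a common factor
# plus noise, so the items hang together).
rng = np.random.default_rng(3)
common = rng.normal(size=(300, 1))
responses = pd.DataFrame(common + rng.normal(size=(300, 10)),
                         columns=[f"item{i}" for i in range(1, 11)])
source = np.where(np.arange(300) < 150, "crowdsourced", "university")

for name, grp in responses.groupby(source):
    print(name, round(cronbach_alpha(grp), 3))
```
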
9.
J Appl Psychol ; 95(4): 728-43, 2010 Jul.
Article in English | MEDLINE | ID: mdl-20604592

ABSTRACT

Much progress has been made in the past 2 decades with respect to methods of identifying measurement invariance or a lack thereof. Until now, the focus of these efforts has been to establish criteria for statistical significance in items and scales that function differently across samples. The power associated with tests of differential functioning, as with all significance tests, is affected by sample size and other considerations. Additionally, statistical significance need not imply practical importance. There is thus a strong need for meaningful effect size indicators to describe the extent to which items and scales function differently. Recently developed effect size measures show promise for providing a metric to describe the amount of differential functioning present between groups. Expanding upon these developments, this article presents a taxonomy of potential differential functioning effect sizes; several new indices of item- and scale-level differential functioning effect size are proposed and illustrated with 2 data samples. Software created for computing these indices and graphing item- and scale-level differential functioning is described.


Subject(s)
Data Interpretation, Statistical , Psychological Tests/statistics & numerical data , Cross-Cultural Comparison , Data Collection/standards , Data Collection/statistics & numerical data , Humans , Models, Statistical , Psychological Tests/standards , Psychometrics/standards , Psychometrics/statistics & numerical data , Reference Values , Sample Size , Software/standards
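
The article's proposed indices are not reproduced here, but the general idea of an item-level differential functioning effect size can be illustrated with a simplified signed difference in expected item scores, averaged over the focal group's latent distribution and standardized by a pooled item standard deviation. All parameters below are invented for illustration.

```python
import numpy as np

# Hypothetical item parameters (linear factor model) for reference and focal
# groups: the expected item score given theta is intercept + loading * theta.
ref = {"loading": 0.80, "intercept": 3.00}
foc = {"loading": 0.60, "intercept": 2.70}
pooled_item_sd = 1.10            # pooled within-group SD of the observed item
focal_mean, focal_sd = 0.0, 1.0  # focal-group latent trait distribution

# Signed difference in expected item scores, weighted by the focal group's
# latent distribution over a grid (a simplified relative of dMACS-type indices).
theta = np.linspace(-4, 4, 801)
weights = np.exp(-0.5 * ((theta - focal_mean) / focal_sd) ** 2)
weights /= weights.sum()  # normalized quadrature weights over the grid
expected_ref = ref["intercept"] + ref["loading"] * theta
expected_foc = foc["intercept"] + foc["loading"] * theta
signed_d = np.sum((expected_ref - expected_foc) * weights) / pooled_item_sd
print(f"signed item-level effect size: {signed_d:.3f}")
```
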
10.
J Appl Psychol ; 93(3): 568-92, 2008 May.
Article in English | MEDLINE | ID: mdl-18457487

ABSTRACT

Confirmatory factor analytic tests of measurement invariance (MI) based on the chi-square statistic are known to be highly sensitive to sample size. For this reason, G. W. Cheung and R. B. Rensvold (2002) recommended using alternative fit indices (AFIs) in MI investigations. In this article, the authors investigated the performance of AFIs with simulated data known not to be invariant. The results indicate that AFIs are much less sensitive to sample size and more sensitive to a lack of invariance than chi-square-based tests of MI. The authors suggest reporting differences in the comparative fit index (CFI) and R. P. McDonald's (1989) noncentrality index (NCI) to evaluate whether MI exists. Although a single change-in-CFI cutoff (.002) seemed to perform well across the analyses, condition-specific change-in-NCI cutoffs performed better than any single NCI cutoff. Tables of these values are provided, as are recommendations for best practices in MI testing.


Subject(s)
Power, Psychological , Psychological Tests , Humans , Sensitivity and Specificity
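
Both indices discussed above can be computed directly from model chi-square values: CFI = 1 - max(chi2_M - df_M, 0) / max(chi2_B - df_B, chi2_M - df_M, 0), and McDonald's NCI = exp(-(chi2_M - df_M) / (2(N - 1))). Below is a minimal sketch comparing configural and metric invariance models; the fit statistics are invented for illustration.

```python
import math

def cfi(chi2_model, df_model, chi2_baseline, df_baseline):
    """Comparative fit index from model and baseline (null) chi-square values."""
    d_model = max(chi2_model - df_model, 0.0)
    d_baseline = max(chi2_baseline - df_baseline, d_model)
    return 1.0 if d_baseline == 0 else 1.0 - d_model / d_baseline

def mcdonald_nci(chi2_model, df_model, n):
    """McDonald's (1989) noncentrality index, exp(-(chi2 - df) / (2 * (n - 1)))."""
    return math.exp(-(chi2_model - df_model) / (2.0 * (n - 1)))

# Invented fit statistics for configural and metric-invariance models (N = 500).
configural = {"chi2": 250.0, "df": 120}
metric = {"chi2": 275.0, "df": 132}
chi2_null, df_null, n = 4200.0, 153, 500

delta_cfi = (cfi(configural["chi2"], configural["df"], chi2_null, df_null)
             - cfi(metric["chi2"], metric["df"], chi2_null, df_null))
delta_nci = (mcdonald_nci(configural["chi2"], configural["df"], n)
             - mcdonald_nci(metric["chi2"], metric["df"], n))
print(f"change in CFI = {delta_cfi:.4f}, change in NCI = {delta_nci:.4f}")
```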