Results 1 - 8 of 8
1.
Sci Rep ; 13(1): 21910, 2023 Dec 11.
Article in English | MEDLINE | ID: mdl-38081832
2.
J Appl Psychol ; 108(9): 1515-1539, 2023 Sep.
Article in English | MEDLINE | ID: mdl-37023297

ABSTRACT

The situation plays an important role in leadership, yet there exists no comprehensive, well-accepted, and empirically validated framework for modeling leadership situations. This research used situation ratings and narratives from 1,159 leaders to empirically develop a taxonomy of leadership situations. Natural language processing techniques were used to generate psychological situation characteristics that were then rated by leaders. Factor analyses of leader ratings resulted in a taxonomy of psychological leadership situation characteristics with six dimensions (Positive Uniqueness, Importance, Negativity, Scope, Typicality, and Ease). Topic modeling of leader narratives provided a preliminary accompanying typology of structural leadership situation cue combinations (Market/Business Needs, Barriers to Effectiveness, Interpersonal Resources, Deviations/Changes, Team Objectives, and Logistics). To facilitate the measurement of the perceptions of situations, we developed a 27-item measure of the six dimensions of psychological leadership situation characteristics: the Leadership Situation Questionnaire (LSQ). We used the LSQ to conduct initial tests of the nomological network of psychological leadership situation characteristics by assessing their relationships with leader personality, leader behavior, outcomes of leadership situations, and structural leadership situation cue combinations. The psychological leadership situation characteristics taxonomy and the resulting measure (the LSQ) provide an organizing framework for existing leadership research, lay a foundation for future research on situation-related leadership hypotheses, and offer important practical implications in areas such as leader assessment and development. (PsycInfo Database Record (c) 2023 APA, all rights reserved).


Subject(s)
Leadership; Personality; Humans; Factor Analysis, Statistical; Commerce
3.
J Appl Psychol ; 98(1): 114-33, 2013 Jan.
Article in English | MEDLINE | ID: mdl-23244226

ABSTRACT

Though considerable research has evaluated the functioning of assessment center (AC) ratings, surprisingly little research has articulated and uniquely estimated the components of reliable and unreliable variance that underlie such ratings. The current study highlights limitations of existing research for estimating components of reliable and unreliable variance in AC ratings. It provides a comprehensive empirical decomposition of variance in AC ratings that: (a) explicitly accounts for assessee-, dimension-, exercise-, and assessor-related effects, (b) does so with 3 large sets of operational data from a multiyear AC program, and (c) avoids many analytic limitations and confounds that have plagued the AC literature to date. In doing so, results show that (a) the extant AC literature has masked the contribution of sizable, substantively meaningful sources of variance in AC ratings, (b) various forms of assessor bias largely appear trivial, and (c) there is far more systematic, nuanced variance present in AC ratings than previous research indicates. Furthermore, this study also illustrates how the composition of reliable and unreliable variance heavily depends on the level to which assessor ratings are aggregated (e.g., overall AC-level, dimension-level, exercise-level) and the generalizations one desires to make based on those ratings. The implications of this study for future AC research and practice are discussed.
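The decomposition of rating variance into assessee-, dimension-, and residual-related components can be sketched with a toy example. This is a minimal illustrative computation, not the study's operational model or data: in a balanced, fully crossed layout, main effects are simply deviations of marginal means from the grand mean, and the three components sum to the total rating variance.

```python
# Sketch: descriptive decomposition of assessment-center ratings into
# assessee, dimension, and residual effects. Toy data; the study's
# designs also include exercise and assessor facets and are not fully crossed.
from statistics import mean, pvariance

# toy ratings[assessee][dimension], fully crossed
ratings = {
    "A": {"communication": 4, "planning": 3, "influence": 5},
    "B": {"communication": 2, "planning": 2, "influence": 3},
    "C": {"communication": 5, "planning": 4, "influence": 4},
}
assessees = list(ratings)
dimensions = list(next(iter(ratings.values())))

grand = mean(ratings[a][d] for a in assessees for d in dimensions)
# main effects: deviations of marginal means from the grand mean
assessee_eff = {a: mean(ratings[a].values()) - grand for a in assessees}
dimension_eff = {d: mean(ratings[a][d] for a in assessees) - grand
                 for d in dimensions}
# whatever the additive effects do not explain is residual
residuals = [ratings[a][d] - grand - assessee_eff[a] - dimension_eff[d]
             for a in assessees for d in dimensions]

components = {
    "assessee": pvariance(assessee_eff.values()),
    "dimension": pvariance(dimension_eff.values()),
    "residual": pvariance(residuals),
}
print(components)
```

In this balanced crossed layout the components add up to the total variance of the nine ratings; it is precisely when the design is *not* fully crossed that such a naive decomposition confounds sources, which is what the study's more careful estimation addresses.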


Subject(s)
Employee Performance Appraisal/methods; Observer Variation; Professional Competence/statistics & numerical data; Reproducibility of Results; Research Design; Analysis of Variance; Career Mobility; Employee Performance Appraisal/statistics & numerical data; Female; Government Agencies; Humans; Male; Motivation
4.
J Appl Psychol ; 96(6): 1167-94, 2011 Nov.
Article in English | MEDLINE | ID: mdl-21744941

ABSTRACT

A common belief among researchers is that vocational interests have limited value for personnel selection. However, no comprehensive quantitative summaries of interests validity research have been conducted to substantiate claims for or against the use of interests. To help address this gap, we conducted a meta-analysis of relations between interests and employee performance and turnover using data from 74 studies and 141 independent samples. Overall validity estimates (corrected for measurement error in the criterion but not for range restriction) for single interest scales were .14 for job performance, .26 for training performance, -.19 for turnover intentions, and -.15 for actual turnover. Several factors appeared to moderate interest-criterion relations. For example, validity estimates were larger when interests were theoretically relevant to the work performed in the target job. The type of interest scale also moderated validity, such that corrected validities were larger for scales designed to assess interests relevant to a particular job or vocation (e.g., .23 for job performance) than for scales designed to assess a single, job-relevant realistic, investigative, artistic, social, enterprising, or conventional (i.e., RIASEC) interest (.10) or a basic interest (.11). Finally, validity estimates were largest when studies used multiple interests for prediction, either by using a single job or vocation focused scale (which tend to tap multiple interests) or by using a regression-weighted composite of several RIASEC or basic interest scales. Overall, the results suggest that vocational interests may hold more promise for predicting employee performance and turnover than researchers may have thought.
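The correction applied above, for measurement error in the criterion but not for range restriction, is the standard disattenuation formula: divide the observed correlation by the square root of the criterion's reliability. A brief sketch with illustrative numbers (not the study's data):

```python
# Sketch: disattenuating an observed validity for criterion unreliability only.
# Input values are illustrative, not taken from the meta-analysis.
import math

def correct_for_criterion_unreliability(r_xy: float, r_yy: float) -> float:
    """Corrected validity = observed r divided by sqrt of criterion reliability."""
    return r_xy / math.sqrt(r_yy)

observed_r = 0.11             # observed interest-performance correlation (illustrative)
criterion_reliability = 0.60  # e.g., reliability of job performance ratings
print(round(correct_for_criterion_unreliability(observed_r, criterion_reliability), 3))
```

Because criterion reliabilities for performance ratings are well below 1, corrected validities are noticeably larger than observed ones, which is why the corrected estimates reported above exceed typical raw correlations.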


Subject(s)
Career Choice; Job Satisfaction; Personnel Selection/methods; Personnel Selection/statistics & numerical data; Personnel Turnover/statistics & numerical data; Vocational Guidance/methods; Employee Performance Appraisal; Humans; Occupations/statistics & numerical data; Psychometrics; Reproducibility of Results
5.
J Appl Psychol ; 96(1): 13-33, 2011 Jan.
Article in English | MEDLINE | ID: mdl-20919794

ABSTRACT

Although vocational interests have a long history in vocational psychology, they have received extremely limited attention within the recent personnel selection literature. We reconsider some widely held beliefs concerning the (low) validity of interests for predicting criteria important to selection researchers, and we review theory and empirical evidence that challenge such beliefs. We then describe the development and validation of an interests-based selection measure. Results of a large validation study (N = 418) reveal that interests predicted a diverse set of criteria (including measures of job knowledge, job performance, and continuance intentions) with corrected, cross-validated Rs that ranged from .25 to .46 across the criteria (mean R = .31). Interests also provided incremental validity beyond measures of general cognitive aptitude and facets of the Big Five personality dimensions in relation to each criterion. Furthermore, with a couple of exceptions, the interest scales were associated with small to medium subgroup differences, which in most cases favored women and racial minorities. Taken as a whole, these results appear to call into question the prevailing thought that vocational interests have limited usefulness for selection.
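The incremental-validity claim can be made concrete with the standard two-predictor formula for multiple R². A hedged sketch with made-up correlations (not the study's values): the gain in R² from adding an interest scale to a model that already contains cognitive aptitude.

```python
# Sketch: incremental validity of an added predictor (e.g., an interest scale)
# over a baseline predictor (e.g., cognitive aptitude), from correlations alone.
# All correlations below are illustrative, not the study's.

def incremental_r2(r1y: float, r2y: float, r12: float) -> float:
    """Gain in R^2 from adding predictor 2 to a model containing predictor 1.

    r1y, r2y: each predictor's correlation with the criterion y.
    r12: the correlation between the two predictors.
    """
    r2_full = (r1y ** 2 + r2y ** 2 - 2 * r1y * r2y * r12) / (1 - r12 ** 2)
    return r2_full - r1y ** 2

# aptitude-performance r = .30, interest-performance r = .25,
# aptitude-interest r = .10 (hypothetical values)
print(round(incremental_r2(0.30, 0.25, 0.10), 3))
```

The gain is largest when the added predictor correlates with the criterion but only weakly with the existing predictor, which is the pattern interests tend to show relative to cognitive measures.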


Subject(s)
Career Choice; Personnel Selection/methods; Psychological Tests/standards; Adolescent; Adult; Aptitude Tests/standards; Employee Performance Appraisal; Female; Humans; Job Satisfaction; Male; Military Personnel/psychology; Personality Assessment; Personnel Turnover; Professional Competence; Reproducibility of Results; United States; Young Adult
6.
J Appl Psychol ; 93(5): 959-81, 2008 Sep.
Article in English | MEDLINE | ID: mdl-18808219

ABSTRACT

Organizational research and practice involving ratings are rife with what the authors term ill-structured measurement designs (ISMDs): designs in which raters and ratees are neither fully crossed nor nested. This article explores the implications of ISMDs for estimating interrater reliability. The authors first provide a mock example that illustrates potential problems that ISMDs create for common reliability estimators (e.g., Pearson correlations, intraclass correlations). Next, the authors propose an alternative reliability estimator, G(q,k), that resolves problems with traditional estimators and is equally appropriate for crossed, nested, and ill-structured designs. By using Monte Carlo simulation, the authors evaluate the accuracy of traditional reliability estimators compared with that of G(q,k) for ratings arising from ISMDs. Regardless of condition, G(q,k) yielded estimates as precise or more precise than those of traditional estimators. The advantage of G(q,k) over the traditional estimators became more pronounced with increases in the (a) overlap between the sets of raters that rated each ratee and (b) ratio of rater main effect variance to true score variance. Discussion focuses on implications of this work for organizational research and practice.
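The core idea behind a design-sensitive estimator of this kind can be sketched from variance components: reliability of a k-rater average depends on how much rater main-effect variance ends up in the error term, which in turn depends on rater overlap across ratees. The exact estimator and the computation of its design-dependent weight are developed in the article; the sketch below is a simplified illustration with a weight `q` supplied directly and hypothetical variance components.

```python
# Sketch of the intuition only: reliability from variance components, with q
# scaling how much rater main-effect variance counts as error. The article's
# actual G(q,k) derives q from the rater-ratee design matrix; here q is given.

def reliability_qk(var_ratee: float, var_rater: float, var_resid: float,
                   q: float, k: int) -> float:
    """Reliability of a k-rater average under a simplified q weighting.

    q = 0 -> fully crossed design (rater main effects cancel across ratees),
    q = 1 -> fully nested design (rater main effects are entirely error).
    """
    error = (q * var_rater + var_resid) / k
    return var_ratee / (var_ratee + error)

# Same hypothetical components, different designs:
print(round(reliability_qk(0.50, 0.30, 0.40, q=0.0, k=2), 3))  # crossed
print(round(reliability_qk(0.50, 0.30, 0.40, q=1.0, k=2), 3))  # nested
```

Estimators that implicitly assume one of the two extremes (e.g., a Pearson correlation between rater columns, or a one-way intraclass correlation) mis-state reliability for the in-between, ill-structured case, which is the gap the article addresses.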


Subject(s)
Organizational Culture; Psychometrics; Humans; Models, Psychological; Observer Variation; Reproducibility of Results
7.
J Appl Psychol ; 90(2): 323-34, 2005 Mar.
Article in English | MEDLINE | ID: mdl-15769241

ABSTRACT

The authors modeled sources of error variance in job specification ratings collected from 3 levels of raters across 5 organizations (N=381). Variance components models were used to estimate the variance in ratings attributable to true score (variance between knowledge, skills, abilities, and other characteristics [KSAOs]) and error (KSAO-by-rater and residual variance). Subsequent models partitioned error variance into components related to the organization, position level, and demographic characteristics of the raters. Analyses revealed that the differential ordering of KSAOs by raters was not a function of these characteristics but rather was due to unexplained rating differences among the raters. The implications of these results for job specification and validity transportability are discussed.


Subject(s)
Task Performance and Analysis; Adult; Analysis of Variance; Female; Humans; Likelihood Functions; Male; Models, Statistical; Observer Variation; Psychometrics/methods; Reproducibility of Results; United States
8.
J Appl Psychol ; 87(3): 506-16, 2002 Jun.
Article in English | MEDLINE | ID: mdl-12090608

ABSTRACT

Although hundreds of studies have found a positive relationship between self-efficacy and performance, several studies have found a negative relationship when the analysis is done across time (repeated measures) rather than across individuals. W. T. Powers (1991) predicted this negative relationship based on perceptual control theory. Here, 2 studies are presented to (a) confirm the causal role of self-efficacy and (b) substantiate the explanation. In Study 1, self-efficacy was manipulated for 43 of 87 undergraduates on an analytic game. The manipulation was negatively related to performance on the next trial. In Study 2, 104 undergraduates played the analytic game and reported self-efficacy between each game and confidence in the degree to which they had assessed previous feedback. As expected, self-efficacy led to overconfidence and hence increased the likelihood of committing logic errors during the game.


Subject(s)
Models, Statistical; Motivation; Self Efficacy; Workplace/psychology; Humans; Random Allocation