Results 1 - 18 of 18
1.
Nat Hum Behav ; 8(2): 219-227, 2024 Feb.
Article in English | MEDLINE | ID: mdl-38233604

ABSTRACT

Corporate social responsibility (CSR) research can help to address some of society's grand challenges (for example, climate change, energy sustainability and social inequality). Historically, CSR research has focused on organizational-level factors that address environmental and social issues and the firm's resulting financial performance, with much less focus on individual-level factors. In response to research calls to consider the individual level of analysis, we provide a narrative review to improve our understanding of the interconnections between CSR and individual behaviour. We organize existing research around three individual-level categories: CSR perceptions, CSR attitudes and CSR behaviours. We summarize research elucidating how perceptions and attitudes influence behaviours and how organization and higher-level CSR context and individual-level CSR readiness moderate perceptions-behaviours and attitudes-behaviours relationships. We offer a conceptual model that organizes the diverse, conflicting and multidisciplinary research on the CSR-individual behaviour link and that can be used to guide future research.


Subject(s)
Organizations; Social Responsibility; Humans; Attitude; Models, Theoretical; Socioeconomic Factors
2.
J Appl Psychol ; 109(3): 402-414, 2024 Mar.
Article in English | MEDLINE | ID: mdl-37824269

ABSTRACT

Predictive bias (i.e., differential prediction) means that regression equations predicting performance differ across groups based on protected status (e.g., ethnicity, sexual orientation, sexual identity, pregnancy, disability, and religion). Thus, making prescreening, admissions, and selection decisions when predictive bias exists violates principles of fairness based on equal treatment and opportunity. We conducted a two-part study showing that different types of predictive bias exist. First, we conducted a Monte Carlo simulation showing that out-of-sample predictions provide a more precise understanding of the nature of predictive bias, namely whether it is based on intercept and/or slope differences across groups. Second, we conducted a college admissions study based on 29,734 Black and 304,372 White students, and 35,681 Latinx and 308,818 White students, and provided evidence of both intercept- and slope-based predictive bias. Third, we discuss the nature and different types of predictive bias and offer analytical work to explain why each type exists, thereby providing insights into the causes of each type. We also map the statistical causes of predictive bias onto the existing literature on likely underlying psychological and contextual mechanisms. Overall, we hope our article will help reorient future predictive bias research from whether predictive bias exists to why its different types arise. (PsycInfo Database Record (c) 2024 APA, all rights reserved).
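A minimal illustrative sketch (not the article's exact out-of-sample procedure) of how intercept- and slope-based predictive bias is typically probed with step-up moderated regression; the data frame and its columns perf, test, and group are hypothetical.

# Hedged sketch: step-up moderated regression for detecting predictive bias.
# Assumes a pandas DataFrame df with hypothetical columns:
#   perf  - criterion (e.g., first-year GPA or job performance)
#   test  - predictor (e.g., admissions test score)
#   group - dummy-coded protected-status indicator (0/1)
import statsmodels.formula.api as smf

def predictive_bias_tests(df):
    m1 = smf.ols("perf ~ test", data=df).fit()                        # common regression line
    m2 = smf.ols("perf ~ test + group", data=df).fit()                # adds intercept difference
    m3 = smf.ols("perf ~ test + group + test:group", data=df).fit()   # adds slope difference
    return {
        "intercept_bias_p": m2.pvalues["group"],
        "slope_bias_p": m3.pvalues["test:group"],
        "delta_r2_intercept": m2.rsquared - m1.rsquared,
        "delta_r2_slope": m3.rsquared - m2.rsquared,
    }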


Subject(s)
Ethnicity; Humans; Male; Female; Ethnicity/psychology; Computer Simulation; Bias
3.
J Occup Health Psychol ; 26(6): 564-581, 2021 Dec.
Article in English | MEDLINE | ID: mdl-34292017

ABSTRACT

A challenge for leadership and health/well-being research and applications relying on web-based data collection is false identities: cases where participants are not members of the targeted population. To address this challenge, we investigated the effectiveness of a new approach that uses internet protocol (IP) address analysis to enhance the validity of web-based research involving constructs relevant in leadership and health/well-being research (e.g., leader-member exchange [LMX], physical [health] symptoms, job satisfaction, workplace stressors, and task performance). Specifically, we used study participants' IP addresses to gather information on their IP threat scores and internet service providers (ISPs). We then used IP threat scores and ISPs to distinguish between two types of respondents: (a) targeted and (b) nontargeted. Results of an empirical study involving nearly 1,000 participants showed that using information obtained from IP addresses to distinguish targeted from nontargeted participants resulted in data with fewer missed instructed-response items, higher within-person reliability, and a higher completion rate of open-ended questions. Comparing the entire sample with only the targeted participants showed different mean scores, factor structures, scale reliability estimates, and estimated sizes of substantive relationships among constructs. Differences in scale reliability and construct mean scores remained even after implementing existing procedures typically used to compare web-based and nonweb-based respondents, providing evidence that our proposed approach offers clear benefits not found in data-cleaning methodologies currently in use. Finally, we offer best-practice recommendations in the form of a decision-making tree for improving the validity of future web-based surveys and research in leadership and health/well-being and other domains. (PsycInfo Database Record (c) 2022 APA, all rights reserved).
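A minimal sketch, under stated assumptions, of how IP-derived metadata could be used to flag likely nontargeted respondents; the column names (ip_threat_score, isp_type), the ISP categories, and the cutoff of 75 are illustrative rather than the authors' operational rules.

# Hedged sketch of screening web-survey respondents with IP-derived metadata.
# Assumes a pandas DataFrame with hypothetical columns:
#   ip_threat_score - 0-100 score from an IP-reputation service (higher = riskier)
#   isp_type        - provider category, e.g. "residential", "hosting", "proxy"
import pandas as pd

def flag_nontargeted(df, threat_cutoff=75):
    suspicious_isp = df["isp_type"].isin(["hosting", "proxy", "vpn"])
    high_threat = df["ip_threat_score"] >= threat_cutoff
    out = df.copy()
    out["targeted"] = ~(suspicious_isp | high_threat)   # True = retain for analysis
    return out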


Subject(s)
Job Satisfaction; Leadership; Humans; Internet; Reproducibility of Results; Surveys and Questionnaires
4.
J Appl Psychol ; 106(3): 476-488, 2021 Mar.
Article in English | MEDLINE | ID: mdl-33871272

ABSTRACT

The meta-analysis by Van Iddekinge et al. (2018) revealed that ability and motivation have mostly an additive rather than an interactive effect on performance. One of the methods they used to assess the ability × motivation interaction was moderated multiple regression (MMR). Vancouver et al. (2021) presented conceptual arguments that ability and motivation should interact to predict performance, as well as analytical and empirical arguments against the use of MMR to assess interaction effects. We describe problems with these arguments and show conceptually and empirically that MMR (and the ΔR and ΔR² it yields) is an appropriate and effective method for assessing both the statistical significance and magnitude of interaction effects. Nevertheless, we also applied the alternative approach that Vancouver et al. recommended for testing interactions to the primary data sets (k = 69) from Van Iddekinge et al. These new results showed that the ability × motivation interaction was not significant in 90% of the analyses, which corroborated Van Iddekinge et al.'s original conclusion that the interaction rarely increments the prediction of performance beyond the additive effects of ability and motivation. In short, Van Iddekinge et al.'s conclusions remain unchanged and, given the conceptual and empirical problems we identified, we cannot endorse Vancouver et al.'s recommendation to change how researchers test interactions. We conclude by offering suggestions for how to assess and interpret interactions in future research. (PsycInfo Database Record (c) 2021 APA, all rights reserved).
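A minimal sketch of the MMR step the reply defends: compare the additive model with the model that adds the ability x motivation product term and inspect the resulting ΔR²; the variable names (ability, motivation, performance) are hypothetical, and predictors are mean-centered only to ease interpretation.

# Hedged sketch: moderated multiple regression (MMR) for an ability x motivation interaction.
import statsmodels.formula.api as smf

def interaction_increment(df):
    df = df.copy()
    df["ability_c"] = df["ability"] - df["ability"].mean()
    df["motivation_c"] = df["motivation"] - df["motivation"].mean()
    additive = smf.ols("performance ~ ability_c + motivation_c", data=df).fit()
    moderated = smf.ols("performance ~ ability_c * motivation_c", data=df).fit()
    delta_r2 = moderated.rsquared - additive.rsquared        # increment from the product term
    p_interaction = moderated.pvalues["ability_c:motivation_c"]
    return delta_r2, p_interaction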


Subject(s)
Motivation; Research Design; Humans
6.
Bus Horiz ; 64(1): 149-160, 2021.
Article in English | MEDLINE | ID: mdl-32981944

ABSTRACT

Many organizations are curtailing or even abandoning performance management because of difficulties measuring performance and disruptions in performance-based pay due to the COVID-19 crisis. Contrary to this growing and troubling trend, we argue that it is especially important during the crisis not only to continue but also to strengthen performance management to communicate a firm's strategic direction, collect valuable business data, provide critical feedback to individuals and workgroups, protect organizations from legal risks, and retain top talent. To do so, we offer a solution to overcome the challenges associated with measuring performance during a crisis. Specifically, we build on the well-established Net Promoter Score measure from marketing and introduce the Performance Promoter Score (PPS) to measure performance. We offer evidence-based recommendations for collecting PPS information for individuals, workgroups, and other collectives; computing a Net Performance Promoter Score (NPPS); using multiple sources of performance data; and using PPS for administrative and developmental purposes as well as for more frequent performance check-ins. PPS is a convenient, practical, relevant, and useful performance measure during a crisis such as the COVID-19 pandemic, but it is also an innovation that will be useful long after the pandemic is over.
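A minimal sketch of how a Net Performance Promoter Score could be computed in the spirit of the marketing Net Promoter Score; the 0-10 scale and the 9-10 promoter / 0-6 detractor cutoffs mirror NPS conventions and are assumptions, not necessarily the authors' exact operationalization.

# Hedged sketch: NPS-style computation of a Net Performance Promoter Score (NPPS).
def net_performance_promoter_score(ratings):
    """ratings: iterable of 0-10 answers to a question such as
    'How likely would you be to recommend this person's work output to others?'"""
    ratings = list(ratings)
    promoters = sum(1 for r in ratings if r >= 9)    # 9-10 = promoters
    detractors = sum(1 for r in ratings if r <= 6)   # 0-6  = detractors
    return 100.0 * (promoters - detractors) / len(ratings)

# Example: one employee's check-in ratings gathered from several raters.
print(net_performance_promoter_score([10, 9, 8, 7, 6, 9, 10, 4]))  # 25.0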

7.
Psychometrika ; 84(1): 285-309, 2019 Mar.
Article in English | MEDLINE | ID: mdl-30671788

ABSTRACT

The existence of differences in prediction systems involving test scores across demographic groups continues to be a thorny and unresolved scientific, professional, and societal concern. Our case study uses a two-stage least squares (2SLS) estimator to jointly assess measurement invariance and prediction invariance in high-stakes testing. Accordingly, we examined group differences based on latent rather than observed scores, using data from The College Board for 176 colleges and universities. Results showed that measurement invariance was rejected for the SAT mathematics (SAT-M) subtest at the 0.01 level for 74.5% and 29.9% of cohorts for Black versus White and Hispanic versus White comparisons, respectively. Also, on average, Black students with the same standing on a common factor had observed SAT-M scores that were nearly a third of a standard deviation lower than those of comparable White students. We also found evidence that group differences in SAT-M measurement intercepts may partly explain the well-known finding of observed differences in prediction intercepts. Additionally, results provided evidence that nearly a quarter of the statistically significant observed intercept differences were no longer statistically significant at the 0.05 level once predictor measurement error was accounted for using the 2SLS procedure. Our joint measurement and prediction invariance approach based on latent scores opens the door to a new high-stakes testing research agenda whose goal is not simply to assess whether observed group-based differences exist and how large they are, but rather to trace the causal chain starting with underlying theoretical mechanisms (e.g., contextual factors, differences in latent predictor scores) that affect the size and direction of any observed differences.
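A generic, simplified two-stage least squares sketch (not the authors' full joint invariance model): a fallible observed test score x is instrumented with a second indicator z of the same latent ability so that the second-stage slope is purged of measurement error. The variables x, z, and the criterion y are hypothetical, and the naive standard errors from stage 2 would still need adjustment.

# Hedged, generic 2SLS sketch with one instrument; all variables are hypothetical 1-D arrays.
import numpy as np

def two_stage_least_squares(y, x, z):
    # Stage 1: project the fallible predictor x onto the instrument z.
    Z = np.column_stack([np.ones_like(z), z])
    x_hat = Z @ np.linalg.lstsq(Z, x, rcond=None)[0]
    # Stage 2: regress the criterion y on the instrumented predictor.
    X_hat = np.column_stack([np.ones_like(x_hat), x_hat])
    intercept, slope = np.linalg.lstsq(X_hat, y, rcond=None)[0]
    return intercept, slope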


Subject(s)
Educational Measurement/methods; Least-Squares Analysis; Ethnicity; Factor Analysis, Statistical; Humans; Information Storage and Retrieval; Mathematical Concepts; Psychometrics/methods; Racial Groups; Universities
8.
J Appl Psychol ; 103(12): 1283-1306, 2018 Dec.
Article in English | MEDLINE | ID: mdl-30024197

ABSTRACT

We examined the gender productivity gap in science, technology, engineering, mathematics, and other scientific fields (i.e., applied psychology, mathematical psychology), specifically among star performers. Study 1 included 3,853 researchers who published 3,161 articles in mathematics. Study 2 included 45,007 researchers who published 7,746 articles in genetics. Study 3 included 4,081 researchers who published 2,807 articles in applied psychology and 6,337 researchers who published 3,796 articles in mathematical psychology. Results showed that (a) the power law with exponential cutoff is the best-fitting distribution of research productivity across fields and gender groups and (b) there is a considerable gender productivity gap among stars in favor of men across fields. Specifically, the underrepresentation of women is more extreme as we consider more elite ranges of performance (i.e., top 10%, 5%, and 1% of performers). Conceptually, results suggest that individuals vary in research productivity predominantly because of the generative mechanism of incremental differentiation, which is the mechanism that produces power laws with exponential cutoffs. Also, results suggest that incremental differentiation occurs to a greater degree among men and certain forms of discrimination may disproportionately constrain women's output increments. Practically, results suggest that women may have to accumulate more scientific knowledge, resources, and social capital to achieve the same level of increase in total outputs as their male counterparts. Finally, we offer recommendations on interventions aimed at reducing constraints for incremental differentiation among women that could be useful for narrowing the gender productivity gap specifically among star performers. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
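A minimal sketch of the tail comparison the abstract describes: the share of women among the top 10%, 5%, and 1% of performers ranked by research output; the DataFrame columns output and gender are hypothetical.

# Hedged sketch: representation of women in elite ranges of research output.
import numpy as np

def share_of_women_in_top(df, percents=(10, 5, 1)):
    shares = {}
    for p in percents:
        cutoff = np.percentile(df["output"], 100 - p)      # threshold for the top p%
        top = df[df["output"] >= cutoff]
        shares[f"top_{p}pct"] = float((top["gender"] == "female").mean())
    return shares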


Subject(s)
Efficiency; Engineering/statistics & numerical data; Mathematics/statistics & numerical data; Psychology/statistics & numerical data; Research/statistics & numerical data; Science/statistics & numerical data; Sexism/statistics & numerical data; Technology/statistics & numerical data; Work Performance/statistics & numerical data; Adult; Bibliometrics; Female; Humans; Male
9.
J Appl Psychol ; 102(7): 1022-1053, 2017 Jul.
Article in English | MEDLINE | ID: mdl-28333497

ABSTRACT

We offer a four-category taxonomy of individual output distributions (i.e., distributions of cumulative results): (1) pure power law; (2) lognormal; (3) exponential tail (including exponential and power law with an exponential cutoff); and (4) symmetric or potentially symmetric (including normal, Poisson, and Weibull). The four categories are uniquely associated with mutually exclusive generative mechanisms: self-organized criticality, proportionate differentiation, incremental differentiation, and homogenization. We then introduce distribution pitting, a falsification-based method for comparing distributions to assess how well each one fits a given data set. In doing so, we also introduce decision rules to determine the likely dominant shape and generative mechanism among many that may operate concurrently. Next, we implement distribution pitting using 229 samples of individual output for several occupations (e.g., movie directors, writers, musicians, athletes, bank tellers, call center employees, grocery checkers, electrical fixture assemblers, and wirers). Results suggest that for 75% of our samples, exponential tail distributions and their generative mechanism (i.e., incremental differentiation) likely constitute the dominant distribution shape and explanation of nonnormally distributed individual output. This finding challenges past conclusions indicating the pervasiveness of other types of distributions and their generative mechanisms. Our results further contribute to theory by offering premises about the link between past and future individual output. For future research, our taxonomy and methodology can be used to pit distributions of other variables (e.g., organizational citizenship behaviors). Finally, we offer practical insights on how to increase overall individual output and produce more top performers.
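A minimal sketch of the spirit of distribution pitting: fit several candidate distributions by maximum likelihood and rank them by a simple information criterion. scipy has no built-in power law with exponential cutoff, so only part of the taxonomy is represented here, and the published method uses falsification-based decision rules rather than AIC alone; data is assumed to be a 1-D array of positive output counts.

# Hedged sketch: compare candidate output distributions by ML fit and AIC.
import numpy as np
from scipy import stats

def pit_distributions(data, candidates=("lognorm", "expon", "pareto", "norm")):
    results = {}
    for name in candidates:
        dist = getattr(stats, name)
        params = dist.fit(data)                        # maximum-likelihood estimates
        loglik = np.sum(dist.logpdf(data, *params))
        results[name] = 2 * len(params) - 2 * loglik   # AIC; lower = better fit
    return dict(sorted(results.items(), key=lambda kv: kv[1]))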


Subject(s)
Occupations/statistics & numerical data; Psychology, Applied/methods; Statistical Distributions; Humans
10.
J Appl Psychol ; 102(3): 274-290, 2017 Mar.
Article in English | MEDLINE | ID: mdl-28150981

ABSTRACT

We offer a critical review and synthesis of research methods in the first century of the Journal of Applied Psychology. We divide the chronology into 6 periods. The first emphasizes the first few issues of the journal, which, in many ways, set us on a methodological course that we sail to this day, and then takes us through the mid-1920s. The second is the period through World War II, in which we see the roots of modern methodological concepts and techniques, including a transition from a discovery orientation to a hypothetico-deductive model orientation. The third takes us through roughly 1970, a period in which many of our modern-day practices were formed, such as reliance on null hypothesis significance testing. The fourth, from 1970 through 1989, sees an emphasis on the development of measures of critical constructs. The fifth takes us into the present, which is marked by greater plurality regarding data-analytic approaches. Finally, we offer a glimpse of possible and, from our perspective, desirable futures regarding research methods. Specifically, we highlight the need to conduct replications; study the exceptional and not just the average; improve the quality of the review process, particularly regarding methodological issues; emphasize design and measurement issues; and build and test more specific theories.


Subject(s)
Periodicals as Topic; Psychology, Applied/methods; History, 20th Century; History, 21st Century; Humans; Periodicals as Topic/history; Psychology, Applied/history
11.
J Appl Psychol ; 100(2): 431-49, 2015 Mar.
Article in English | MEDLINE | ID: mdl-25314367

ABSTRACT

Effect size information is essential for the scientific enterprise and plays an increasingly central role in the scientific process. We extracted 147,328 correlations and developed a hierarchical taxonomy of variables reported in Journal of Applied Psychology and Personnel Psychology from 1980 to 2010 to produce empirical effect size benchmarks at the omnibus level, for 20 common research domains, and for an even finer-grained level of generality. Results indicate that the usual interpretation and classification of effect sizes as small, medium, and large bear almost no resemblance to findings in the field, because distributions of effect sizes exhibit tertile partitions at values approximately one-half to one-third those intuited by Cohen (1988). Our results offer information that can be used for research planning and design purposes, such as producing better-informed non-nil hypotheses and estimating statistical power and planning sample size accordingly. We also offer information useful for understanding the relative importance of the effect sizes found in a particular study in relation to others and which research domains have advanced more or less, given that larger effect sizes indicate a better understanding of a phenomenon. Also, our study offers information about research domains for which the investigation of moderating effects may be more fruitful and provides information that is likely to facilitate the implementation of Bayesian analysis. Finally, our study offers information that practitioners can use to evaluate the relative effectiveness of various types of interventions.
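A minimal sketch of deriving empirical small/medium/large benchmarks as the tertiles of a field's observed correlations rather than Cohen's fixed conventions; correlations is assumed to be a 1-D array of observed r values.

# Hedged sketch: empirical effect size benchmarks from tertiles of |r|.
import numpy as np

def empirical_benchmarks(correlations):
    r = np.abs(np.asarray(correlations, dtype=float))
    lower, upper = np.percentile(r, [100 / 3, 200 / 3])   # tertile cutpoints
    return {"small_upper_bound": lower, "medium_upper_bound": upper}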


Subject(s)
Behavioral Research/statistics & numerical data; Benchmarking/statistics & numerical data; Data Interpretation, Statistical; Humans
12.
J Appl Psychol ; 97(5): 951-66, 2012 Sep.
Article in English | MEDLINE | ID: mdl-22582726

ABSTRACT

Cross-level interaction effects lie at the heart of multilevel contingency and interactionism theories. Researchers have often lamented the difficulty of finding hypothesized cross-level interactions, and to date there has been no means by which the statistical power of such tests can be evaluated. We develop such a method and report results of a large-scale simulation study, verify its accuracy, and provide evidence regarding the relative importance of factors that affect the power to detect cross-level interactions. Our results indicate that the statistical power to detect cross-level interactions is determined primarily by the magnitude of the cross-level interaction, the standard deviation of lower level slopes, and the lower and upper level sample sizes. We provide a Monte Carlo tool that enables researchers to a priori design more efficient multilevel studies and provides a means by which they can better interpret potential explanations for nonsignificant results. We conclude with recommendations for how scholars might design future multilevel studies that will lead to more accurate inferences regarding the presence of cross-level interactions.
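A compact Monte Carlo sketch in the spirit of the tool described: simulate multilevel data in which a level-2 variable w moderates the level-1 x-to-y slope, fit a random-slope model, and record how often the cross-level interaction is detected. All parameter values and sample sizes are illustrative, not the published simulation design.

# Hedged Monte Carlo sketch: power to detect a cross-level interaction.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

def simulate_once(rng, n_groups=30, n_per_group=10, gamma_xw=0.3, slope_sd=0.2):
    rows = []
    for g in range(n_groups):
        w = rng.normal()                                       # level-2 moderator
        slope = 0.5 + gamma_xw * w + rng.normal(0, slope_sd)   # group-specific slope
        x = rng.normal(size=n_per_group)
        y = rng.normal() + slope * x + rng.normal(size=n_per_group)  # group intercept + residuals
        rows.append(pd.DataFrame({"g": g, "w": w, "x": x, "y": y}))
    df = pd.concat(rows, ignore_index=True)
    fit = smf.mixedlm("y ~ x * w", df, groups="g", re_formula="~x").fit()
    return fit.pvalues["x:w"] < 0.05

def estimate_power(reps=200, **kwargs):
    rng = np.random.default_rng(1)
    return float(np.mean([simulate_once(rng, **kwargs) for _ in range(reps)]))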


Subject(s)
Multilevel Analysis; Psychology, Applied/methods; Humans; Interpersonal Relations; Models, Theoretical; Monte Carlo Method
13.
Psychol Methods ; 16(2): 166-78, 2011 Jun.
Article in English | MEDLINE | ID: mdl-21517178

ABSTRACT

Analysis of covariance (ANCOVA) is used widely in psychological research implementing nonexperimental designs. However, when covariates are fallible (i.e., measured with error), which is the norm, researchers must choose from among 3 inadequate courses of action: (a) know that the assumption that covariates are perfectly reliable is violated but use ANCOVA anyway (and, most likely, report misleading results); (b) attempt to employ 1 of several measurement error models with the understanding that no research has examined their relative performance and with the added practical difficulty that several of these models are not available in commonly used statistical software; or (c) not use ANCOVA at all. First, we discuss analytic evidence to explain why using ANCOVA with fallible covariates produces bias and a systematic inflation of Type I error rates that may lead to the incorrect conclusion that treatment effects exist. Second, to provide a solution for this problem, we conduct 2 Monte Carlo studies to compare 4 existing approaches for adjusting treatment effects in the presence of covariate measurement error: errors-in-variables (EIV; Warren, White, & Fuller, 1974), Lord's (1960) method, Raaijmakers and Pieters's (1987) method (R&P), and structural equation modeling methods proposed by Sörbom (1978) and Hayduk (1996). Results show that EIV models are superior in terms of parameter accuracy, statistical power, and keeping Type I error close to the nominal value. Finally, we offer a program written in R that performs all needed computations for implementing EIV models so that ANCOVA can be used to obtain accurate results even when covariates are measured with error.
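A simplified, didactic sketch of the errors-in-variables idea (not the authors' R program or the full EIV estimator): disattenuate the pooled within-group covariate slope by the covariate's reliability before computing the covariate-adjusted treatment effect. Inputs are hypothetical arrays for two groups.

# Hedged sketch: reliability-corrected covariate adjustment for a two-group design.
import numpy as np

def eiv_adjusted_effect(y_t, x_t, y_c, x_c, reliability):
    """y_*: outcomes; x_*: fallible covariate; t = treatment group, c = control group."""
    # Pooled within-group slope of y on the observed (fallible) covariate.
    x_dev = np.concatenate([x_t - x_t.mean(), x_c - x_c.mean()])
    y_dev = np.concatenate([y_t - y_t.mean(), y_c - y_c.mean()])
    b_obs = np.sum(x_dev * y_dev) / np.sum(x_dev ** 2)
    b_corrected = b_obs / reliability                 # disattenuate for measurement error
    # Covariate-adjusted mean difference using the corrected slope.
    return (y_t.mean() - y_c.mean()) - b_corrected * (x_t.mean() - x_c.mean())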


Subject(s)
Analysis of Variance; Psychology/statistics & numerical data; Research Design/statistics & numerical data; Bias; Humans; Models, Statistical; Monte Carlo Method; Outcome Assessment, Health Care/statistics & numerical data
14.
J Appl Psychol ; 95(4): 648-80, 2010 Jul.
Article in English | MEDLINE | ID: mdl-20604587

ABSTRACT

We developed a new analytic proof and conducted Monte Carlo simulations to assess the effects of methodological and statistical artifacts on the relative accuracy of intercept- and slope-based test bias assessment. The main simulation design included 3,185,000 unique combinations of a wide range of values for true intercept- and slope-based test bias, total sample size, proportion of minority group sample size to total sample size, predictor (i.e., preemployment test scores) and criterion (i.e., job performance) reliability, predictor range restriction, correlation between predictor scores and the dummy-coded grouping variable (e.g., ethnicity), and mean difference between predictor scores across groups. Results based on 15 billion 925 million individual samples of scores and more than 8 trillion 662 million individual scores raise questions about the established conclusion that test bias in preemployment testing is nonexistent and, if it exists, it only occurs regarding intercept-based differences that favor minority group members. Because of the prominence of test fairness in the popular media, legislation, and litigation, our results point to the need to revive test bias research in preemployment testing.


Subject(s)
Aptitude Tests/standards; Behavioral Research; Bias; Personnel Selection/standards; Cultural Characteristics; Humans; Minority Groups/psychology; Monte Carlo Method; Personnel Selection/methods; Prejudice; Reproducibility of Results; Sample Size; Socioeconomic Factors
15.
Annu Rev Psychol ; 60: 451-74, 2009.
Article in English | MEDLINE | ID: mdl-18976113

ABSTRACT

This article provides a review of the training and development literature since the year 2000. We review the literature focusing on the benefits of training and development for individuals and teams, organizations, and society. We adopt a multidisciplinary, multilevel, and global perspective to demonstrate that training and development activities in work organizations can produce important benefits for each of these stakeholders. We also review the literature on needs assessment and pretraining states, training design and delivery, training evaluation, and transfer of training to identify the conditions under which the benefits of training and development are maximized. Finally, we identify research gaps and offer directions for future research.


Subject(s)
Inservice Training; Staff Development; Health Services Needs and Demand; Humans; Leadership; Organizational Objectives; Outcome and Process Assessment, Health Care; Research; Social Values; Transfer, Psychology
16.
J Appl Psychol ; 93(5): 1062-81, 2008 Sep.
Article in English | MEDLINE | ID: mdl-18808226

ABSTRACT

The authors conducted a content analysis of all articles published in the Journal of Applied Psychology and Personnel Psychology from January 1963 to May 2007 (N = 5,780) to identify the relative attention devoted to each of 15 broad topical areas and 50 more specific subareas in the field of industrial and organizational (I-O) psychology. Results revealed that (a) some areas have become more (or less) popular over time, whereas others have not changed much, and (b) there are some lagged relationships between important societal issues that involve people and work settings (i.e., human-capital trends) and I-O psychology research that addresses them. Also, much I-O psychology research does not address human-capital trends. Extrapolating results from the past 45 years to the next decade suggests that the field of I-O psychology is not likely to become more visible or more relevant to society at large or to achieve the lofty goals it has set for itself unless researchers, practitioners, universities, and professional organizations implement significant changes. In the aggregate, the changes address the broad challenge of how to narrow the academic-practitioner divide.


Subject(s)
Psychology, Industrial/history; Psychology, Industrial/statistics & numerical data; Psychology, Social/history; Psychology, Social/statistics & numerical data; Research; History, 20th Century; History, 21st Century; Humans; Research/history; Research/statistics & numerical data; Research/trends
17.
J Appl Psychol ; 90(6): 1069-83, 2005 Nov.
Article in English | MEDLINE | ID: mdl-16316266

ABSTRACT

The authors present a model that explains how subordinates perceive the power of their supervisors and the causal mechanisms by which these perceptions translate into subordinate outcomes. Drawing on identity and resource-dependence theories, the authors propose that supervisors have power over their subordinates when they control resources needed for the subordinates' enactment and maintenance of current and desired identities. The joint effect of perceptions of supervisor power and supervisor intentions to provide such resources leads to 4 conditions ranging from highly functional to highly dysfunctional: confirmation, hope, apathy, and progressive withdrawal. Each of these conditions is associated with specific outcomes such as the quality of the supervisor-subordinate relationship, turnover, and changes in the type and centrality of various subordinate identities.


Subject(s)
Hierarchy, Social; Individuality; Models, Psychological; Organization and Administration; Power, Psychological; Social Perception; Absenteeism; Aspirations, Psychological; Humans; Job Satisfaction; Motivation; Personnel Turnover; Self Concept; Social Identification
18.
J Appl Psychol ; 90(1): 94-107, 2005 Jan.
Article in English | MEDLINE | ID: mdl-15641892

ABSTRACT

The authors conducted a 30-year review (1969-1998) of the size of moderating effects of categorical variables as assessed using multiple regression. The median observed effect size (f²) is only .002, but 72% of the moderator tests reviewed had power of .80 or greater to detect a targeted effect conventionally defined as small. Results suggest the need to minimize the influence of artifacts that produce a downward bias in the observed effect size and put into question the use of conventional definitions of moderating effect sizes. As long as an effect has a meaningful impact, the authors advise researchers to conduct a power analysis and plan future research designs on the basis of smaller and more realistic targeted effect sizes.
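A minimal sketch of the two quantities discussed: the moderating effect size f² from a hierarchical moderated regression, and the power of the moderator test for a targeted f², computed from the noncentral F distribution with Cohen's noncentrality; the inputs in the example call are illustrative.

# Hedged sketch: f-squared and power for a categorical-moderator test in moderated regression.
from scipy.stats import f as f_dist, ncf

def f_squared(r2_full, r2_reduced):
    return (r2_full - r2_reduced) / (1.0 - r2_full)

def moderator_power(f2, n, n_predictors_full, n_terms_tested=1, alpha=0.05):
    df1 = n_terms_tested                       # product terms added in the final step
    df2 = n - n_predictors_full - 1            # residual df of the full model
    nc = f2 * (df1 + df2 + 1)                  # Cohen's (1988) noncentrality parameter
    f_crit = f_dist.ppf(1 - alpha, df1, df2)
    return 1 - ncf.cdf(f_crit, df1, df2, nc)

print(round(moderator_power(f2=0.002, n=120, n_predictors_full=3), 3))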


Subject(s)
Psychology/statistics & numerical data; Regression Analysis; Humans; Reproducibility of Results; Sample Size