Results 1 - 8 of 8
1.
Am Psychol ; 78(1): 36-49, 2023 01.
Article in English | MEDLINE | ID: mdl-35157476

ABSTRACT

Researchers, governments, ethics watchdogs, and the public are increasingly voicing concerns about unfairness and bias in artificial intelligence (AI)-based decision tools. Psychology's more-than-a-century of research on the measurement of psychological traits and the prediction of human behavior can benefit such conversations, yet psychological researchers often find themselves excluded due to mismatches in terminology, values, and goals across disciplines. In the present paper, we begin to build a shared interdisciplinary understanding of AI fairness and bias by first presenting three major lenses, which vary in focus and prototypicality by discipline, from which to consider relevant issues: (a) individual attitudes, (b) legality, ethicality, and morality, and (c) embedded meanings within technical domains. Using these lenses, we next present psychological audits as a standardized approach for evaluating the fairness and bias of AI systems that make predictions about humans across disciplinary perspectives. We present 12 crucial components to audits across three categories: (a) components related to AI models in terms of their source data, design, development, features, processes, and outputs, (b) components related to how information about models and their applications is presented, discussed, and understood from the perspectives of those employing the algorithm, those affected by decisions made using its predictions, and third-party observers, and (c) meta-components that must be considered across all other auditing components, including cultural context, respect for persons, and the integrity of individual research designs used to support all model developer claims. (PsycInfo Database Record (c) 2023 APA, all rights reserved).


Subjects
Artificial Intelligence, Humans
2.
Psychol Methods ; 2022 Aug 25.
Article in English | MEDLINE | ID: mdl-36006759

ABSTRACT

Content analysis is a common and flexible technique to quantify and make sense of qualitative data in psychological research. However, the practical implementation of content analysis is extremely labor-intensive and subject to human coder errors. Applying natural language processing (NLP) techniques can help address these limitations. We explain and illustrate these techniques to psychological researchers. For this purpose, we first present a study exploring the creation of psychometrically meaningful predictions of human content codes. Using an existing database of human content codes, we build an NLP algorithm to validly predict those codes, at generally acceptable standards. We then conduct a Monte Carlo simulation to model how four dataset characteristics (i.e., sample size, unlabeled proportion of cases, classification base rate, and human coder reliability) influence content classification performance. The simulation indicated that the influence of sample size and unlabeled proportion on model classification performance tended to be curvilinear. In addition, base rate and human coder reliability had a strong effect on classification performance. Finally, using these results, we offer practical recommendations to psychologists on the dataset characteristics necessary to achieve valid prediction of content codes, guiding researchers on the use of NLP models to replace human coders in content analysis research. (PsycInfo Database Record (c) 2022 APA, all rights reserved).
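The core step this abstract describes, training a model on existing human content codes so it can predict codes for new text, can be sketched in miniature. The following is an illustrative, standard-library-only Naive Bayes text classifier; the article does not specify this algorithm, and a real study would use an established NLP library, far larger labeled data, and cross-validation against human coder reliability:

```python
# Minimal bag-of-words Naive Bayes sketch for predicting human content
# codes from text. Labels, documents, and smoothing are illustrative only.
from collections import Counter, defaultdict
import math

def train(docs):
    """docs: list of (text, label). Returns label priors and word counts."""
    priors, words = Counter(), defaultdict(Counter)
    for text, label in docs:
        priors[label] += 1
        words[label].update(text.lower().split())
    return priors, words

def predict(model, text):
    """Returns the label maximizing the (Laplace-smoothed) log-probability."""
    priors, words = model
    vocab = {w for counts in words.values() for w in counts}
    def score(label):
        total = sum(words[label].values()) + len(vocab)
        s = math.log(priors[label])
        for w in text.lower().split():
            s += math.log((words[label][w] + 1) / total)
        return s
    return max(priors, key=score)

# Hypothetical human-coded training snippets:
docs = [
    ("great job well done", "positive"),
    ("excellent work great effort", "positive"),
    ("poor effort needs work", "negative"),
    ("bad job poorly done", "negative"),
]
model = train(docs)
print(predict(model, "great effort"))  # -> positive
```

In practice, the simulation results summarized above suggest checking the classification base rate and coder reliability of the labeled data before trusting any such model's predictions.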

3.
J Appl Psychol ; 107(10): 1655-1677, 2022 Oct.
Article in English | MEDLINE | ID: mdl-34672652

ABSTRACT

Games, which can be defined as an externally structured, goal-directed type of play, are increasingly being used in high-stakes testing contexts to measure targeted constructs for use in the selection and promotion of employees. Despite this increasing popularity, little is known about how theory-driven game-based assessments (GBA), those designed to reflect a targeted construct, should be designed, or about their potential for achieving their simultaneous goals of positive reactions and high-quality psychometric measurement. In the present research, we develop a theory of GBA design by integrating game design and development theory from human-computer interaction with psychometric theory. Next, we test measurement characteristics, prediction of performance, fairness, and reactions of a GBA designed according to this theory to measure latent general intelligence (g). Using an academic sample with GPA data (N = 633), we demonstrate convergence between latent GBA performance and g (β = .97). Adding an organizational sample with supervisory ratings of job performance (N = 49), we show GBA prediction of both GPA (r = .16) and supervisory ratings (r = .29). We also show incremental prediction of GPA using unit-weighted composites of the g test battery beyond that of the g-GBA battery but not the reverse. We also show the presence of similar adverse impact for both the traditional test battery and GBA but the absence of differential prediction of criteria. Reactions were more positive across all measures for the g-GBA compared to the traditional test battery. Overall, results support GBA design theory as a promising foundation from which to build high-quality theory-driven GBAs. (PsycInfo Database Record (c) 2022 APA, all rights reserved).


Subjects
Work Performance, Cognition, Humans, Intelligence, Motivation, Psychometrics
4.
PLoS One ; 16(1): e0245460, 2021.
Article in English | MEDLINE | ID: mdl-33471835

ABSTRACT

In the social and cognitive sciences, crowdsourcing provides up to half of all research participants. Despite this popularity, researchers typically do not conceptualize participants accurately, as gig-economy worker-participants. Applying theories of employee motivation and the psychological contract between employees and employers, we hypothesized that pay and pay raises would drive worker-participant satisfaction, performance, and retention in a longitudinal study. In an experiment hiring 359 Amazon Mechanical Turk Workers, we found that initial pay, relative increase of pay over time, and overall pay did not have substantial influence on subsequent performance. However, pay significantly predicted participants' perceived choice, justice perceptions, and attrition. Given this, we conclude that worker-participants are particularly vulnerable to exploitation, having relatively low power to negotiate pay. Results of this study suggest that researchers wishing to crowdsource research participants using MTurk might not face practical dangers such as decreased performance as a result of lower pay, but they must recognize an ethical obligation to treat Workers fairly.


Subjects
Crowdsourcing/economics, Reimbursement, Incentive, Research/economics, Adult, Female, Humans, Longitudinal Studies, Male, Motivation, Personal Satisfaction, Regression Analysis
5.
Psychol Methods ; 21(4): 475-492, 2016 12.
Article in English | MEDLINE | ID: mdl-27213980

ABSTRACT

The term big data encompasses a wide range of approaches to collecting and analyzing data in ways that were not possible before the era of modern personal computing. One approach to big data of great potential to psychologists is web scraping, which involves the automated collection of information from webpages. Although web scraping can create massive big datasets with tens of thousands of variables, it can also be used to create modestly sized, more manageable datasets with tens of variables but hundreds of thousands of cases, well within the skillset of most psychologists to analyze, in a matter of hours. In this article, we demystify web scraping methods as currently used to examine research questions of interest to psychologists. First, we introduce an approach called theory-driven web scraping in which the choice to use web-based big data must follow substantive theory. Second, we introduce data source theories, a term used to describe the assumptions a researcher must make about a prospective big data source in order to meaningfully scrape data from it. Critically, researchers must derive specific hypotheses to be tested based upon their data source theory, and if these hypotheses are not empirically supported, plans to use that data source should be changed or eliminated. Third, we provide a case study and sample code in Python demonstrating how web scraping can be conducted to collect big data along with links to a web tutorial designed for psychologists. Fourth, we describe a 4-step process to be followed in web scraping projects. Fifth and finally, we discuss legal, practical, and ethical concerns faced when conducting web scraping projects. (PsycINFO Database Record)
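As a rough illustration of the basic mechanics (this is not the article's own tutorial code), the following standard-library-only sketch extracts headings from HTML. Per the theory-driven approach described above, a data source theory should justify what is scraped before any code runs, and robots.txt and terms of service should be checked first; the HTML below is a stand-in for a fetched page:

```python
# Illustrative scraping sketch: collect the text of every <h2> on a page
# using only the Python standard library.
from html.parser import HTMLParser
from urllib.request import urlopen  # for live pages; not called in this demo

class TitleParser(HTMLParser):
    """Accumulates the text content of <h2> elements."""
    def __init__(self):
        super().__init__()
        self.in_h2 = False
        self.titles = []
    def handle_starttag(self, tag, attrs):
        if tag == "h2":
            self.in_h2 = True
    def handle_endtag(self, tag):
        if tag == "h2":
            self.in_h2 = False
    def handle_data(self, data):
        if self.in_h2 and data.strip():
            self.titles.append(data.strip())

# For a live page one would use: html = urlopen(url).read().decode()
html = "<html><body><h2>First post</h2><p>text</p><h2>Second post</h2></body></html>"
parser = TitleParser()
parser.feed(html)
print(parser.titles)  # -> ['First post', 'Second post']
```

Looping such a parser over many pages is what turns tens of variables into hundreds of thousands of cases, as the abstract describes.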


Subjects
Information Storage and Retrieval, Internet, Psychology, Database Management Systems, Humans, Research, User-Computer Interface
6.
J Appl Psychol ; 96(1): 202-10, 2011 Jan.
Article in English | MEDLINE | ID: mdl-20718510

ABSTRACT

A large sample (N = 32,311) of applicants for managerial positions at a nationwide retailer completed a personality test online over the course of several years. A new type of faking was observed in their responses: the use of only extreme responses (all 1s and 5s), which is labeled blatant extreme responding (BER). An increase in BER over time was observed for internal but not for external applicants, suggesting the presence of a coaching rumor. A subsample of internal applicants chose to retake the test after initial failure. These individuals showed substantial increases in both test scores and rate of BER, with a higher prevalence of faking at retest than in the main sample. To reduce faking, an interactive warning was implemented one year after the initial administration. Differing patterns of faking were observed before and after warnings, allowing for an examination of warning effectiveness in the presence of a coaching rumor. Results suggest that faking increases over time as the coaching rumor spreads but that warnings deter this spread. Evidence suggests that faking is indeed a problem in real-world selection settings.
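The BER pattern described above, responding using only the endpoints of a 1-5 scale, is straightforward to flag computationally. A minimal sketch with invented response data (the article does not publish its scoring code, and operational flagging rules would need more nuance, e.g., minimum item counts):

```python
# Flag blatant extreme responding (BER): every response is a scale endpoint.
def is_ber(responses, endpoints=(1, 5)):
    """True if all responses on a 1-5 scale use only the endpoints."""
    return all(r in endpoints for r in responses)

# Hypothetical applicants' item responses:
applicants = {
    "A": [1, 5, 5, 1, 5, 1, 5, 5],   # endpoints only -> flagged
    "B": [2, 4, 3, 5, 4, 2, 3, 4],   # mixed responding -> not flagged
}
flagged = [who for who, resp in applicants.items() if is_ber(resp)]
print(flagged)  # -> ['A']
```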


Subjects
Deception, Personality Assessment, Personnel Selection, Humans, Personality, Personality Assessment/standards, Personnel Selection/methods, Personnel Selection/standards
7.
Behav Res Methods ; 40(3): 665-72, 2008 Aug.
Article in English | MEDLINE | ID: mdl-18697661

ABSTRACT

The Research Explicator for oNline Databases (TREND) tool was developed out of a need to quantify large research literatures rapidly and objectively on the basis of online research database output. By parsing such output with TREND, a researcher can in minutes extract the most commonly cited articles, the most frequently published authors, a distribution of publication dates, and a variety of other information from a research literature several thousand articles in size. This tool thus enables an increase in productivity both for researchers venturing into new areas of interest and for advisors and instructors putting together core reading lists. The processing of citations from articles represents a unique challenge, however, because deviations from strict APA formatting cause problems that are sometimes difficult to correct mechanically. A case study of one particularly troublesome citation (Baron & Kenny, 1986) is presented. Usage and implications are discussed.
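A minimal sketch of the citation-extraction step such a tool must perform, using a deliberately simple pattern that handles only one- and two-author parenthetical citations; as the Baron and Kenny case study above suggests, real reference parsing must cope with far messier deviations from APA format:

```python
# Illustrative pattern for parenthetical author-year citations such as
# (Baron & Kenny, 1986). Not robust to et al., page numbers, or multi-
# citation parentheses; real tools need much more defensive parsing.
import re

CITATION = re.compile(
    r"\(([A-Z][A-Za-z'-]+(?:\s*&\s*[A-Z][A-Za-z'-]+)?),\s*(\d{4})\)"
)

text = ("The moderator-mediator distinction (Baron & Kenny, 1986) is among "
        "the most cited methods papers; see also (Cohen, 1992).")
print(CITATION.findall(text))
# -> [('Baron & Kenny', '1986'), ('Cohen', '1992')]
```

Aggregating such matches across thousands of records is what yields the most-cited-article counts the tool reports.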


Subjects
Electronic Data Processing, Research/instrumentation, Software, Humans, Time Factors
8.
J Appl Psychol ; 92(2): 538-44, 2007 Mar.
Article in English | MEDLINE | ID: mdl-17371098

ABSTRACT

The purpose of this research report is to highlight a unique set of issues that arise when considering the effects of range restriction in the context of estimating predictor intercorrelations. Three approaches are used to illustrate the issue: simulation, a concrete applied example, and a reanalysis of a meta-analysis of ability-interview correlations. The general conclusion is that a predictor intercorrelation can differ dramatically from the population value when both predictors are used in a composite that is used operationally for selection. The compensatory nature of a composite means that low scorers on one predictor can only obtain high scores on the composite if they obtain very high scores on the other predictor; this phenomenon distorts the correlation between the predictors.
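The distortion described above can be reproduced in a few lines. An illustrative standard-library-only simulation (not the article's own code) assuming two predictors that are uncorrelated in the population, with top-10% selection on their unit-weighted composite; all parameters are arbitrary:

```python
# Simulate range restriction on a predictor intercorrelation when
# selection operates on the composite of the two predictors.
import random
import statistics

def pearson(x, y):
    """Pearson correlation of two equal-length sequences."""
    mx, my = statistics.fmean(x), statistics.fmean(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = (sum((a - mx) ** 2 for a in x)
           * sum((b - my) ** 2 for b in y)) ** 0.5
    return num / den

random.seed(1)
pop = [(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(20000)]
r_pop = pearson(*zip(*pop))

# Select the top 10% on the unit-weighted composite x1 + x2.
cutoff = sorted(a + b for a, b in pop)[int(0.9 * len(pop))]
sel = [(a, b) for a, b in pop if a + b >= cutoff]
r_sel = pearson(*zip(*sel))

print(round(r_pop, 2), round(r_sel, 2))
# r_pop is near zero; r_sel is clearly negative among selectees.
```

This reflects the compensatory mechanism in the abstract: among selectees, a low score on one predictor must be offset by a high score on the other, which induces a negative correlation that was absent in the population.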


Subjects
Models, Psychological, Psychology/methods, Humans, Psychology/statistics & numerical data, Psychometrics/statistics & numerical data