2.
Behav Res Methods. 2023 Dec;55(8):4048-4067.
Article in English | MEDLINE | ID: mdl-37217711

ABSTRACT

To understand human behavior, social scientists need people and data. In the last decade, Amazon's Mechanical Turk (MTurk) emerged as a flexible, affordable, and reliable source of human participants and was widely adopted by academics. Yet despite MTurk's utility, some have questioned whether researchers should continue using the platform on ethical grounds. The crux of their concern is that people on MTurk are financially insecure, subject to abuse, and earn inhumane wages. We investigated these issues with two representative probability surveys of the U.S. MTurk population (N = 4094). The surveys revealed: (1) the financial situation of people on MTurk mirrors the general population, (2) most participants do not find MTurk stressful or requesters abusive, and (3) MTurk offers flexibility and benefits that most people value above other options for work. People reported it is possible to earn more than $10 per hour and said they would not trade the flexibility of MTurk for less than $25 per hour. Altogether, our data are important for assessing whether MTurk is an ethical place for research.


Subject(s)
Crowdsourcing, Humans, Behavioral Research, Surveys and Questionnaires, Salaries and Fringe Benefits
3.
Behav Res Methods. 2023 Dec;55(8):3953-3964.
Article in English | MEDLINE | ID: mdl-36326997

ABSTRACT

Maintaining data quality on Amazon Mechanical Turk (MTurk) has always been a concern for researchers. These concerns have grown recently due to the bot crisis of 2018 and observations that past safeguards of data quality (e.g., approval ratings of 95%) no longer work. To address data quality concerns, CloudResearch, a third-party website that interfaces with MTurk, has assessed ~165,000 MTurkers and categorized them into those that provide high- (~100,000, Approved) and low- (~65,000, Blocked) quality data. Here, we examined the predictive validity of CloudResearch's vetting. In a pre-registered study, participants (N = 900) from the Approved and Blocked groups, along with a Standard MTurk sample (95% HIT acceptance ratio, 100+ completed HITs), completed an array of data-quality measures. Across several indices, Approved participants (i) identified the content of images more accurately, (ii) answered more reading comprehension questions correctly, (iii) responded to reverse-coded items more consistently, (iv) passed a greater number of attention checks, (v) self-reported less cheating and actually left the survey window less often on easily Googleable questions, (vi) replicated classic psychology experimental effects more reliably, and (vii) answered AI-stumping questions more accurately than Blocked participants, who performed at chance on multiple outcomes. Data quality of the Standard sample was generally in between the Approved and Blocked groups. We discuss how MTurk's Approval Rating system is no longer an effective data-quality control, and the advantages afforded by using the Approved group for scientific studies on MTurk.


Subject(s)
Crowdsourcing, Data Accuracy, Humans, Surveys and Questionnaires, Self Report, Attention, Crowdsourcing/methods
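
For readers who want to reproduce the screening criteria that defined the Standard sample above (a 95% HIT approval rating and 100+ completed HITs), the sketch below shows how such requirements could be attached to a HIT using the AWS boto3 MTurk client. This is a minimal illustration under stated assumptions, not code from the cited study: the title, description, reward, sandbox endpoint, and question file are placeholders, while the two qualification type IDs are MTurk's documented system qualifications for approval percentage and number of approved HITs.

```python
# Minimal sketch: create a HIT restricted to workers with >= 95% approval
# and >= 100 approved HITs, mirroring the "Standard sample" criteria above.
# Assumptions: sandbox endpoint, placeholder HIT properties, and a local
# question.xml file containing ExternalQuestion/HTMLQuestion XML.
import boto3

mturk = boto3.client(
    "mturk",
    region_name="us-east-1",
    endpoint_url="https://mturk-requester-sandbox.us-east-1.amazonaws.com",
)

# MTurk system qualification type IDs:
#   000000000000000000L0 -> PercentAssignmentsApproved
#   00000000000000000040 -> NumberHITsApproved
qualification_requirements = [
    {
        "QualificationTypeId": "000000000000000000L0",
        "Comparator": "GreaterThanOrEqualTo",
        "IntegerValues": [95],
    },
    {
        "QualificationTypeId": "00000000000000000040",
        "Comparator": "GreaterThanOrEqualTo",
        "IntegerValues": [100],
    },
]

response = mturk.create_hit(
    Title="Short research survey (placeholder)",
    Description="Answer a brief questionnaire.",
    Reward="1.00",
    MaxAssignments=9,
    LifetimeInSeconds=86400,
    AssignmentDurationInSeconds=1800,
    Question=open("question.xml").read(),
    QualificationRequirements=qualification_requirements,
)
print(response["HIT"]["HITId"])
```

Note that CloudResearch's Approved/Blocked vetting is applied through its own platform rather than through these built-in qualifications; the snippet only reproduces the approval-rating screen, which the abstract reports is no longer sufficient on its own.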