Abstract
Data quality is one of the major concerns in using crowdsourcing websites such as Amazon Mechanical Turk (MTurk) to recruit participants for online behavioral studies. We compared two methods for ensuring data quality on MTurk: attention check questions (ACQs) and restricting participation to MTurk workers with high reputation (above 95% approval ratings). In Experiment 1, we found that high-reputation workers rarely failed ACQs and provided higher-quality data than did low-reputation workers; ACQs improved data quality only for low-reputation workers, and only in some cases. Experiment 2 corroborated these findings and also showed that more productive high-reputation workers produce the highest-quality data. We conclude that sampling high-reputation workers can ensure high-quality data without resorting to ACQs, which may introduce selection bias if participants who fail them are excluded post hoc.
| Original language | English |
| --- | --- |
| Pages (from-to) | 1023-1031 |
| Number of pages | 9 |
| Journal | Behavior Research Methods |
| Volume | 46 |
| Issue number | 4 |
| DOIs | |
| State | Published - Dec 2014 |
| Externally published | Yes |
Bibliographical note
Publisher Copyright: © 2013, Psychonomic Society, Inc.
Keywords
- Amazon Mechanical Turk
- Data quality
- Online research
- Reputation