Working paper
We Need to Talk about Mechanical Turk: What 22,989 Hypothesis Tests Tell us about p-Hacking and Publication Bias in Online Experiments
Amazon's Mechanical Turk is a very widely used tool in business and economics research, but how trustworthy are results from well-published studies that use it? Analyzing the universe of hypotheses tested on the platform and published in leading journals between 2010 and 2020, we find evidence of widespread p-hacking, publication bias, and over-reliance on results from plausibly under-powered studies. Even setting aside questions arising from the characteristics and behaviors of study recruits, the conduct of the research community itself substantially erodes the credibility of these studies' conclusions. The extent of the problems varies across the business, economics, management, and marketing research fields (with marketing especially afflicted). The problems are not improving over time and are much more prevalent than in a comparison set of non-online experiments. We explore correlates of increased credibility.
- Language
-
English
- Bibliographic citation
-
Series: I4R Discussion Paper Series ; No. 8
- Classification
-
Economics
Economic Methodology
Estimation: General
Econometric and Statistical Methods: Special Topics: General
Design of Experiments: General
- Subject
-
online crowd-sourcing platforms
Amazon Mechanical Turk
p-hacking
publication bias
statistical power
research credibility
- Event
-
Intellectual creation
- (who)
-
Brodeur, Abel
Cook, Nikolai
Heyes, Anthony
- Event
-
Publication
- (who)
-
Institute for Replication (I4R)
- (where)
-
s.l.
- (when)
-
2022
- Last update
-
10.03.2025, 11:42 AM CET
Data provider
ZBW - Deutsche Zentralbibliothek für Wirtschaftswissenschaften - Leibniz-Informationszentrum Wirtschaft