Validating the demographic, political, psychological, and experimental results obtained from a new source of online survey respondents

Cited by: 632
Authors
Coppock, Alexander [1 ]
McClellan, Oliver A. [2 ]
Affiliations
[1] Yale Univ, New Haven, CT USA
[2] Columbia Univ, New York, NY USA
Keywords
Survey experiments; convenience samples; generalizability; Mechanical Turk; personality; population; attitudes
DOI
10.1177/2053168018822174
Chinese Library Classification
D0 [Political Science, Political Theory]
Subject Classification Code
030201 [Political Science Theory]
Abstract
Researchers have increasingly turned to online convenience samples as sources of survey responses that are easy and inexpensive to collect. As reliance on these sources has grown, so too have concerns about the use of convenience samples in general and Amazon's Mechanical Turk in particular. We distinguish between "external validity" and theoretical relevance, with the latter being the more important justification for any data collection strategy. We explore an alternative source of online convenience samples, the Lucid Fulcrum Exchange, and assess its suitability for online survey experimental research. Our point of departure is the 2012 study by Berinsky, Huber, and Lenz that compares Amazon's Mechanical Turk to US national probability samples in terms of respondent characteristics and treatment effect estimates. We replicate these same analyses using a large sample of survey responses on the Lucid platform. Our results indicate that demographic and experimental findings on Lucid track well with US national benchmarks, with the exception of experimental treatments that aim to dispel the "death panel" rumor regarding the Affordable Care Act. We conclude that subjects recruited from the Lucid platform constitute a sample that is suitable for evaluating many social scientific theories, and can serve as a drop-in replacement for many scholars currently conducting research on Mechanical Turk or other similar platforms.
Pages: 14
References
44 in total
[1] [Anonymous]. Working paper.
[2] [Anonymous]. J EDITORS TRANSPAREN.
[3] [Anonymous]. SSRN Electronic Journal, 2009. DOI: 10.2139/ssrn.1498843; 10.1017/CBO9780511921452.004.
[4] Behrend, Tara S.; Sharek, David J.; Meade, Adam W.; Wiebe, Eric N. The viability of crowdsourcing for survey research. Behavior Research Methods, 2011, 43(3): 800-813.
[5] Berinsky, Adam J. Rumors and Health Care Reform: Experiments in Political Misinformation. British Journal of Political Science, 2017, 47(2): 241-262.
[6] Berinsky, Adam J.; Huber, Gregory A.; Lenz, Gabriel S. Evaluating Online Labor Markets for Experimental Research: Amazon.com's Mechanical Turk. Political Analysis, 2012, 20(3): 351-368.
[7] Boas, Taylor C.; Christenson, Dino P.; Glick, David M. Recruiting large online samples in the United States and India: Facebook, Mechanical Turk, and Qualtrics. Political Science Research and Methods, 2020, 8(2): 232-250.
[8] Bullock, John G.; Gerber, Alan S.; Hill, Seth J.; Huber, Gregory A. Partisan Bias in Factual Beliefs about Politics. Quarterly Journal of Political Science, 2015, 10(4): 519-578.
[9] Chandler, Jesse; Paolacci, Gabriele; Peer, Eyal; Mueller, Pam; Ratliff, Kate A. Using Nonnaive Participants Can Reduce Effect Sizes. Psychological Science, 2015, 26(7): 1131-1139.