Fighting misinformation on social media using crowdsourced judgments of news source quality

Cited: 429
Authors
Pennycook, Gordon [1 ]
Rand, David G. [2 ,3 ]
Affiliations
[1] Univ Regina, Hill Levene Sch Business, Regina, SK S4S 0A2, Canada
[2] MIT, Sloan Sch, Cambridge, MA 02138 USA
[3] MIT, Dept Brain & Cognit Sci, Cambridge, MA 02138 USA
Keywords
news media; social media; media trust; misinformation; fake news; CONTINUED INFLUENCE;
DOI
10.1073/pnas.1806781116
Chinese Library Classification
O [Mathematical Sciences and Chemistry]; P [Astronomy and Earth Sciences]; Q [Biological Sciences]; N [General Natural Sciences];
Subject Classification
07; 0710; 09;
Abstract
Reducing the spread of misinformation, especially on social media, is a major challenge. We investigate one potential approach: having social media platform algorithms preferentially display content from news sources that users rate as trustworthy. To do so, we ask whether crowdsourced trust ratings can effectively differentiate more versus less reliable sources. We ran two preregistered experiments (n = 1,010 from Mechanical Turk and n = 970 from Lucid) in which individuals rated familiarity with, and trust in, 60 news sources from three categories: (i) mainstream media outlets, (ii) hyperpartisan websites, and (iii) websites that produce blatantly false content ("fake news"). Despite substantial partisan differences, we find that laypeople across the political spectrum rated mainstream sources as far more trustworthy than either hyperpartisan or fake news sources. Although this difference was larger for Democrats than Republicans (mostly due to Republicans' distrust of mainstream sources), every mainstream source (with one exception) was rated as more trustworthy than every hyperpartisan or fake news source across both studies when equally weighting ratings of Democrats and Republicans. Furthermore, politically balanced layperson ratings were strongly correlated (r = 0.90) with ratings provided by professional fact-checkers. We also found that, particularly among liberals, individuals higher in cognitive reflection were better able to discern between low- and high-quality sources. Finally, we found that excluding ratings from participants who were not familiar with a given news source dramatically reduced the effectiveness of the crowd. Our findings indicate that having algorithms up-rank content from trusted media outlets may be a promising approach for fighting the spread of misinformation on social media.
Pages: 2521-2526
Page count: 6