We collect (and briefly explain) major studies on fact-checking, fake news, and misinformation.
Attaching warnings to stories that have been debunked by fact-checkers on social media platforms like Facebook modestly decreases the perceived accuracy of those stories, but it also increases the perceived accuracy of fake stories that go untagged.
It’s often not enough for fact-checkers to simply correct online misinformation — they also have to create detailed counter-messages and alternative narratives if they want to change their audiences’ minds.
There's a positive correlation between analytical thinking and the capacity to distinguish fake news from real news.
On Twitter, users are more likely to accept a factual correction when it comes from someone they follow or who follows them than when it comes from a stranger.
Trust and Distrust in Online Fact-Checking Services
This study evaluated online user perceptions of Factcheck.org, Snopes.com and StopFake.org to get a picture of what commenters are saying about active fact-checkers.
This study measures how effectively algorithmic recommendations and user comments on Facebook that link to fact checks can correct users' misconceptions about health news. Researchers tested this by exposing 613 survey participants to simulated news feeds under three conditions.
Perceived social presence reduces fact-checking
Across eight experiments, this study measures how social presence affects the way people verify information online. It found that when people believe they are being observed by a large group of people online, they are less likely to fact-check claims than when they are alone.
This study attempts to determine the most effective way to correct misinformation on social media by testing both the content of corrections and how they're presented.
Drawing upon a Twitter dataset from the 2012 United States presidential election, this study examines the motives that partisan social media users have for sharing fact checks.