What that new major study about fake news means (and doesn't mean) for fact-checkers
A major study on Americans’ consumption of fake news during the 2016 election was released last week — and rightly graced headlines from dozens of media outlets.
Yet the great variety in these headlines made it seem like there were at least two or three different studies, not one. Take the following selection:
“Fighting Fake News Is Not The Solution,” was how The New Yorker covered it.
“Is Fake News Actually Not A Big Deal?” countered Vice.
“Just a small group of Americans consume fake news, according to a new study,” was how Quartz summarized the study.
“The first scientific fake news study is here to confirm your worst fears about America,” went a more worrying headline from Mashable.
So which is it? Is fake news endemic and irresistible, or an overblown concern targeting a small subset of the American population? Should we conclude that “fact checking hasn't worked at all,” as Mashable does — or that the study proves the “ineffectiveness” of the same format, as The New Yorker’s Masha Gessen writes?
I don’t want to make this too much about headlines (as important as they are). New studies are hard to boil down into a short, shareable sentence, and fact-checkers have on occasion taken issue with how I myself characterize new studies.
But given how contentious the whole conversation around misinformation and fact-twisting has become, we ought to be more balanced about what new research does and doesn’t tell us.
To start with, the working paper released by Andrew Guess, Brendan Nyhan and Jason Reifler — of Princeton, Dartmouth College and the University of Exeter, respectively — is valuable because it collected real-life browsing data. (Disclosure: I learned from the acknowledgments that Poynter has funded some of this research.)
The three academics worked off web traffic collected with consent from a national sample of 2,525 Americans between Oct. 4 and Nov. 7, 2016. They discovered that fake news websites did reach a relatively large audience, equivalent to 27.4 percent of their sample. They also found about the same percentage visited a fact-checking website.
So if we accept the assurances about the representative nature of this sample, about one-quarter of the country read fake news and one-quarter read fact checks (previous work has found that fake news articles tend to outperform fact-checking articles in raw traffic figures). Moreover, most fake news websites don’t seem to have built loyal followings, as their articles represented just 2.6 percent of all the articles Americans read. So far, then, the case for panic seems overblown.
The challenge is that these quarters of the population don’t overlap much: 13.3 percent of the sample visited fake news websites but not fact-checking websites (see figure below). In other words, roughly half of fake news consumers never encountered a fact-checking site at all.
Plus, some people consumed a lot more fake news than others. Approximately 60 percent of all visits to fake news websites came from the most conservative 10 percent of readers. (This may be explained, in part, by the choice of fake news websites monitored, which comes from this 2017 paper and may oversample the hoaxers who were targeted by fact-checkers and/or had viral hits.)
To wrap it all up, Guess, Nyhan and Reifler found that Facebook was likely “the most important mechanism facilitating the spread of fake news”: it appeared among the three websites users had visited immediately before landing on a fake news article 22 percent of the time.
The really bad news for fact-checkers is that “we almost never observe respondents reading a fact-check of a specific claim in a fake news article that they read.” So for all we know, Donald Trump-supporting fake news consumers may have simply gone to FactCheck.org when it covered Hillary Clinton.
So what does this all mean for fact-checkers?
The study provides real-life evidence of what fact-checkers have feared to be true for a while now: Audiences could be picking and choosing the fact checks they read depending on their partisanship, and the most extreme partisans may not be visiting their sites at all.
The latter should be of less concern than the former. Just over 10 percent of Americans read fake news websites but didn’t consult a fact-checking website before the 2016 election. Considering that almost a quarter of Americans aren’t certain that the Earth revolves around the Sun, these numbers shouldn’t be terribly upsetting.
What is more concerning is the mismatch between the specific falsehood and its related correction. The paper found not one instance — not one! — of a user reading a specific falsehood and then reading the related fact check.
This probably sounds worse than it is, for two reasons. First, other media outlets, not just the dedicated projects, were also debunking fake news stories or carrying fact-checkers’ work, so users may have seen a correction that way. Second, fact-checkers primarily fact-check public claims rather than debunk hoaxes (though that is changing, and Snopes is a notable exception). It would be worth understanding how many Americans read popular fact checks of Trump claims, for instance, having previously heard the misleading claims themselves.
Still, this doesn’t absolve fact-checkers from urgently determining the composition and motivation of their audience, and from exploring creative avenues to reach the consumers of specific fake news articles that have been fact-checked. (Automation should help: if a Twitter bot automatically replied with the relevant fact check any time a fake news story was tweeted, people would at least see the correction.)
The second crucial takeaway for fact-checkers is that Facebook clearly matters in the battle against misinformation. As problematic as the relationship has been over the past year, the platform appears to be too big to ignore.
So there we have it. Fake news has a relatively large audience, but it went deep with only a small portion of Americans. Fact-checkers also draw large audiences, but don’t seem to bring the corrections to those who most need to read them.
Doesn’t quite fit in a fun headline for social media, though.