In this study, two undergraduate student researchers test how different groups of people distinguish between satire and fake news. Through online surveys and focus groups, they quizzed participants on the difference between the two. They showed participants 27 screenshots of posts in a simulated Facebook News Feed, giving each 12 seconds to read a post and decide whether it was satire or fake news. They found that the youngest and oldest participants were least likely to accurately distinguish between the two categories. Women and more educated participants fared better, while political orientation had little effect on the outcome.
This high-level review surveys the major scientific literature on the relationship between social media, political polarization and disinformation. The paper is broken into six sections: online political conversations; consequences of exposure to disinformation and propaganda in online settings; producers of disinformation; strategies and tactics of spreading disinformation through online platforms; online content and political polarization; and how misinformation and polarization affect American democracy. The paper, whose works cited section is nearly a fourth of its length, is an expansive look at the literature on misinformation, but the authors identify several key areas for further research. These include: better measures of the effects of misinformation exposure; multi-platform research; misinformation in images; the generalizability of U.S. findings; the effect of ideology on responses to misinformation; laws against spreading misinformation; a better understanding of bots; and the role of political elites in spreading misinformation.
In this comprehensive law practicum, student researchers at Stanford University surveyed the ways in which Facebook, Google, Twitter and Reddit helped facilitate the spread of fake news during the 2016 U.S. election. They divided their report into separate sections for each platform, drawing on user experiments, search analyses, interviews with major news organizations and current and former government officials, and a review of steps already taken to address misinformation. For Facebook, the researchers recommended the platform continue investing in its fact-checking partnerships to cut down on fake news readership. For Google, they recommended implementing more effective algorithmic monitoring to avoid surfacing hoaxes. For Twitter, they recommended the platform pilot a crowd-sourced fact-checking and flagging system to decrease the spread of fake news links. Finally, for Reddit, the authors said the platform should work to decrease the visibility and reach of subreddits known to regularly foster conspiracy theories.
In this study, researchers analyze tweets from the run-up and aftermath of the 2016 U.S. presidential election to determine how the spread of misinformation relates to that of fact-checking. They use data collected with a tool they built called Hoaxy, which enables users to analyze large batches of tweets from Twitter’s API using different queries. The researchers found that only 5.8 percent of the two million retweets produced by several hundred thousand accounts in their dataset shared links to fact-checking content, one-seventeenth the reach of misinforming tweets. To evaluate whether something was misinformation, the authors used lists of misinformation sites compiled by fact-checkers. By examining the retweet network and leveraging the bot-detection tool Botometer, they also found that the reach of bots spreading misinformation outscaled that of fact-checking. Limitations include the fact that the data were restricted to what Twitter’s API provides and that not all of the claims analyzed were verifiably false.
In this study, researchers seek to understand the difference between tweets containing fake news and those that don’t by analyzing their metadata. Specifically, they use a sample of more than 1.5 million tweets collected on the 2016 U.S. election day that used one of three different hashtags and/or mentioned Hillary Clinton or Donald Trump. Within that sample, they isolated which tweets went viral by comparing each tweet’s retweet count to the whole: only 0.01 percent went viral, and 10 percent of those contained fake news. The authors found that accounts tweeting fake news were more likely to be unverified, have more followers, use fewer mentions, support Trump and tweet more links. The authors posit that their findings could help technology companies and other researchers develop ways to automatically block misinformation on Twitter.
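The filtering step described above can be sketched in a few lines of Python. This is a toy illustration, not the authors' code: the record fields (`retweets`, `contains_fake_news`) and the virality threshold are assumptions made for the sake of the example.

```python
# Toy sketch of isolating viral tweets and the fake-news share among them.
# Field names and the threshold are hypothetical, not from the study.

def split_viral(tweets, viral_threshold=1000):
    """Return (viral tweets, viral tweets flagged as fake news)."""
    viral = [t for t in tweets if t["retweets"] >= viral_threshold]
    fake = [t for t in viral if t["contains_fake_news"]]
    return viral, fake

# Small made-up sample for illustration.
sample = [
    {"retweets": 5, "contains_fake_news": False},
    {"retweets": 2500, "contains_fake_news": True},
    {"retweets": 1200, "contains_fake_news": False},
]

viral, fake = split_viral(sample)
viral_share = len(viral) / len(sample)  # fraction of the sample that went viral
fake_share = len(fake) / len(viral)     # fraction of viral tweets with fake news
```

On the study's real data, the analogous computation yields the figures cited above: roughly 0.01 percent of tweets viral, and about 10 percent of those containing fake news.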