In this comprehensive law practicum, student researchers at Stanford University surveyed the ways in which Facebook, Google, Twitter and Reddit helped facilitate the spread of fake news during the 2016 U.S. election. They divided their report into separate sections for each platform, drawing on user experiments, search analyses, interviews with major news organizations and with current and former government officials, and a review of steps the platforms had already taken to address misinformation. For Facebook, the researchers recommended that the platform continue investing in its fact-checking partnerships to cut down on fake news readership. For Google, they recommended implementing more effective algorithmic monitoring to avoid surfacing hoaxes. For Twitter, they recommended piloting a crowd-sourced fact-checking and flagging system to decrease the spread of fake news links. Finally, for Reddit, the authors said the platform should work to decrease the visibility and reach of subreddits known to regularly foster conspiracy theories.
In this study, researchers seek to understand how tweets containing fake news differ from those that don't by analyzing their metadata. Specifically, they use a sample of more than 1.5 million tweets collected on the 2016 U.S. election day that used one of three different hashtags and/or mentioned Hillary Clinton or Donald Trump. Within that sample, they identified viral tweets by comparing each tweet's retweet count against the whole sample — only 0.01 percent went viral, and 10 percent of those contained fake news. The authors found that accounts tweeting fake news were more likely to be unverified, have more followers, use fewer mentions, support Trump and include more links. They posit that their findings could help technology companies and other researchers develop ways to automatically block misinformation on Twitter.
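To make that kind of metadata comparison concrete, here is a minimal Python sketch of the general approach: rank tweets by retweets, keep the most viral fraction, and contrast average metadata features between the fake-news and other groups. The record fields (retweets, verified, followers, mentions, urls, fake_news) and the 0.01 percent virality cutoff are illustrative assumptions about the data layout, not the authors' actual pipeline.

```python
# Illustrative sketch only: field names and the virality cutoff are
# assumptions, not the schema or code used in the study.
from statistics import mean

def viral_tweets(tweets, top_fraction=0.0001):
    """Keep the most-retweeted fraction of the sample (0.01 percent here)."""
    ranked = sorted(tweets, key=lambda t: t["retweets"], reverse=True)
    cutoff = max(1, int(len(ranked) * top_fraction))
    return ranked[:cutoff]

def summarize(tweets):
    """Average the metadata features the study compares across groups."""
    return {
        "share_verified": mean(1 if t["verified"] else 0 for t in tweets),
        "avg_followers": mean(t["followers"] for t in tweets),
        "avg_mentions": mean(t["mentions"] for t in tweets),
        "avg_urls": mean(t["urls"] for t in tweets),
    }

def compare(tweets):
    """Contrast viral fake-news tweets against the rest of the viral set."""
    viral = viral_tweets(tweets)
    fake = [t for t in viral if t["fake_news"]]
    other = [t for t in viral if not t["fake_news"]]
    return {"fake_news": summarize(fake), "other": summarize(other)}
```

In principle, per-group summaries like these could feed a simple classifier, which is the direction the authors suggest for automatically blocking misinformation.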
This paper, presented at the International Conference on Asian Digital Libraries, examines the types of rumors and "counter-rumors" (or debunking messages) that surfaced on Twitter following the falsely reported death of former Singaporean Prime Minister Lee Kuan Yew. Researchers analyzed 4,321 tweets about Lee's death and identified six categories of rumors, four categories of counter-rumors and two categories belonging to neither. Because counter-rumors outnumbered rumors, the results suggest that Twitter users often attempt to stop the spread of false rumors online.
Researchers examined a final selection of 20 experiments, conducted between 1994 and 2015, that address fake accounts of social and political news in order to determine the most effective ways to combat beliefs based on misinformation. The headline finding is that correcting misinformation is possible, but corrections are often not as strong as the misinformation itself. The analysis offers several takeaways for fact-checkers, most notably the importance of creating counter-messages and alternative narratives if they want to change their audiences' minds, and of issuing corrections as quickly as possible.