This high-level review surveys the major scientific literature on the relationship between social media, political polarization and disinformation. The paper is organized into six sections: online political conversations; the consequences of exposure to disinformation and propaganda in online settings; producers of disinformation; strategies and tactics for spreading disinformation through online platforms; online content and political polarization; and how misinformation and polarization affect American democracy. Taken together, the paper, whose works cited section runs to nearly a quarter of its length, is an expansive look at the literature on misinformation, but the authors identify several key areas for further research. These include: better measures of the effects of misinformation exposure; multi-platform research; misinformation in images; the generalizability of U.S. findings; the effect of ideology on responses to misinformation; laws against spreading misinformation; a better understanding of bots; and the role of political elites in spreading misinformation.
In this comprehensive law practicum, student researchers at Stanford University surveyed the ways in which Facebook, Google, Twitter and Reddit helped facilitate the spread of fake news during the 2016 U.S. election. They divided their report into separate sections for each platform, drawing upon user experiments, search analyses, interviews with major news organizations and current and former government officials, and a review of steps already taken to address misinformation. For Facebook, the researchers recommended that the platform continue investing in its fact-checking partnerships to cut down on fake news readership. For Google, they recommended implementing more effective algorithmic monitoring to avoid surfacing hoaxes. For Twitter, they recommended that the platform pilot a crowd-sourced fact-checking and flagging system to decrease the spread of fake news links. Finally, for Reddit, the authors said the platform should work to decrease the visibility and reach of subreddits known to regularly foster conspiracy theories.
This study examines how U.S. President Donald Trump's false claims have affected the truthfulness of French politicians' statements, and how political news coverage has adjusted in both countries. Based on semi-structured interviews with reporters, fact-checkers and editors, the author found that journalists have largely struggled to come up with sustainable ways to address repeated falsehoods, which interviewees agreed had risen in both France and the U.S. in recent years. Pervasive lying has disrupted the traditional relationship between sources and reporters, making it harder for journalists to tell true from false. Trump's ascension and Marine Le Pen's campaign also caused much hand-wringing over which words to use when citing falsehoods in news articles, with American publications using "lie" more often than their French counterparts.
In this study, researchers seek to understand the difference between tweets containing fake news and those that don't by analyzing their metadata. Specifically, they use a sample of more than 1.5 million tweets collected on U.S. election day in 2016 that used one of three different hashtags and/or mentioned Hillary Clinton or Donald Trump. Within that sample, they isolated which tweets went viral by comparing each tweet's retweet count against the whole: only 0.01 percent went viral, and 10 percent of those contained fake news. The authors found that the accounts behind tweets containing fake news were more likely to be unverified, have more followers, use fewer mentions, support Trump and tweet more links. The authors posit that their findings could help technology companies and other researchers develop ways to automatically block misinformation on Twitter.