Social Media

Poynter Results

  • The distinction between fake news and satire varies from person to person

    In this study, two undergraduate student researchers tested how different groups of people perceive the distinction between satire and fake news. Through online surveys and focus groups, they quizzed participants on the difference between the two: each participant viewed 27 screenshots of posts in a simulated Facebook News Feed and had 12 seconds per post to decide whether it was satire or fake news. The youngest and oldest participants were least likely to distinguish accurately between the two categories. Women and more educated participants fared better, while political orientation had little effect on the outcome.

    Study Title
    Satire or Fake News: Social Media Consumers' Socio-Demographics Decide
    Study Authors
    Michele Bedard, Chianna Schoenthaler
    Peer Reviewed
    No
    Sample
    Non-representative
    Inferential approach
    Experimental
    Number of studies citing
    0
  • A comprehensive look at misinformation research

    This high-level review surveys the major scientific literature on the relationship between social media, political polarization and disinformation. The paper is organized into six sections: online political conversations; the consequences of exposure to disinformation and propaganda in online settings; the producers of disinformation; the strategies and tactics of spreading disinformation through online platforms; online content and political polarization; and how misinformation and polarization affect American democracy. Taken together, the paper, whose works-cited section is nearly a fourth of its length, is an expansive look at the literature on misinformation, but the authors identify several key areas for further research: better measures of the effects of misinformation exposure, multi-platform research, misinformation in images, the generalizability of U.S. findings, the effect of ideology on responses to misinformation, laws against spreading misinformation, a better understanding of bots, and the role of political elites in spreading misinformation.

    Study Title
    Social Media, Political Polarization, and Political Disinformation: A Review of the Scientific Literature
    Study Publication Date
    Study Authors
    Joshua A. Tucker, Andrew Guess, Pablo Barberá, Cristian Vaccari, Alexandra Siegel, Sergey Sanovich, Denis Stukal, Brendan Nyhan
    Peer Reviewed
    No
    Sample
    Representative
    Inferential approach
    Qualitative
    Number of studies citing
    0
  • How four of the biggest tech platforms spread fake news during the 2016 U.S. election

    In this comprehensive law practicum, student researchers at Stanford University surveyed the ways in which Facebook, Google, Twitter and Reddit helped facilitate the spread of fake news during the 2016 U.S. election. They divided their report into separate sections for each platform, drawing on user experiments, search analyses, interviews with major news organizations and with current and former government officials, and a review of steps the platforms had already taken to address misinformation. For Facebook, the researchers recommended that the platform continue investing in its fact-checking partnerships to cut down on fake news readership. For Google, they recommended implementing more effective algorithmic monitoring to avoid surfacing hoaxes. For Twitter, they recommended piloting a crowd-sourced fact-checking and flagging system to decrease the spread of fake news links. Finally, for Reddit, they said the platform should work to decrease the visibility and reach of subreddits known to regularly foster conspiracy theories.

    Study Title
    Fake News and Misinformation: The Roles of the Nation’s Digital Newsstands, Facebook, Google, Twitter and Reddit
    Study Authors
    Jacob Finkel, Steven Jiang, Mufan Luo, Rebecca Mears, Danaë Metaxa-Kakavouli, Camille Peeples, Brendan Sasso, Arjun Shenoy, Vincent Sheu, Nicolás Torres-Echeverry
    Journal
    Stanford Law School Fake News and Misinformation Policy Lab Practicum
    Peer Reviewed
    No
    Sample
    Representative
    Number of studies citing
    0
  • On Twitter, misinformation outscaled fact-checking leading up to 2016 U.S. election

    In this study, researchers analyzed tweets from the run-up to and aftermath of the 2016 U.S. presidential election to determine how the spread of misinformation relates to that of fact-checking. They used data collected with a tool they built called Hoaxy, which lets users analyze large batches of tweets from Twitter's API using different queries. The researchers found that only 5.8 percent of the two million retweets produced by several hundred thousand accounts in their dataset shared links to fact-checking content, roughly 1/17th of the reach of misinforming tweets. To classify a link as misinformation, the authors used lists of misinformation sites compiled by fact-checkers. By examining the retweet network, and by leveraging the tool Botometer, they also found that the reach of bots spreading misinformation outscaled that of fact-checking. Limitations include the fact that the sources were limited to Twitter's API and that not all of the claims analyzed were verifiably false.

    Study Title
    Anatomy of an online misinformation network
    Study Publication Date
    Study Authors
    Chengcheng Shao, Pik-Mai Hui, Lei Wang, Xinwen Jiang, Alessandro Flammini, Filippo Menczer, Giovanni Luca Ciampaglia

    Peer Reviewed
    No
    Sample
    Representative
    Inferential approach
    Experimental
    Number of studies citing
    0
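    The reach comparison above can be sketched as a simple retweet tally per content category. This is an illustrative sketch, not the study's actual pipeline: the domain lists, record fields and sample counts below are all invented, and real analyses would use curated fact-checker lists and Twitter API data.

    ```python
    from collections import Counter

    # Hypothetical domain lists; the study used lists compiled by fact-checkers.
    FACT_CHECKING_DOMAINS = {"snopes.com", "politifact.com"}
    MISINFO_DOMAINS = {"example-hoax.com", "fake-news.example"}

    def reach_by_category(retweets):
        """Count retweets (a crude proxy for reach) per content category."""
        counts = Counter()
        for rt in retweets:
            if rt["domain"] in FACT_CHECKING_DOMAINS:
                counts["fact_checking"] += 1
            elif rt["domain"] in MISINFO_DOMAINS:
                counts["misinformation"] += 1
        return counts

    # Toy dataset mirroring the reported ~1:17 ratio, not real data.
    retweets = (
        [{"domain": "example-hoax.com"}] * 17
        + [{"domain": "snopes.com"}] * 1
    )
    counts = reach_by_category(retweets)
    ratio = counts["fact_checking"] / counts["misinformation"]
    ```

    At scale, the same tally over millions of retweet records yields the kind of category shares (5.8 percent fact-checking content) the study reports.
    
    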
  • Tweeters that post fake news have more followers — and use more links — than those who don’t

    In this study, researchers seek to understand how tweets containing fake news differ from those that do not by analyzing their metadata. Specifically, they use a sample of more than 1.5 million tweets collected on the 2016 U.S. election day that used one of three hashtags and/or mentioned Hillary Clinton or Donald Trump. Within that sample, they isolated the tweets that went viral by comparing each tweet's retweet count to the whole: only 0.01 percent went viral, and 10 percent of those contained fake news. The authors found that, in tweets containing fake news, accounts were more likely to be unverified, to have more followers, to use fewer mentions, to support Trump and to tweet more links. They posit that their findings could help technology companies and other researchers develop ways to automatically block misinformation on Twitter.

    Study Title
    Characterizing Political Fake News in Twitter by its Meta-Data
    Study Publication Date
    Study Authors
    Julio Amador Díaz López, Axel Oehmichen, Miguel Molina-Solana
    Peer Reviewed
    No
    Sample
    Representative
    Inferential approach
    Experimental
    Number of studies citing
    0
 