Twitter

Poynter Results

  • How four of the biggest tech platforms spread fake news during the 2016 U.S. election

In this comprehensive law practicum, student researchers at Stanford University surveyed the ways in which Facebook, Google, Twitter and Reddit helped facilitate the spread of fake news during the 2016 U.S. election. They divided their report into separate sections for each platform, drawing upon user experiments, search analyses, interviews with major news organizations and with current and former government officials, and a review of steps the platforms had already taken to address misinformation. For Facebook, the researchers recommended that the platform continue investing in its fact-checking partnerships to cut down on fake news readership. For Google, they recommended implementing more effective algorithmic monitoring to avoid surfacing hoaxes. For Twitter, they recommended piloting a crowdsourced fact-checking and flagging system to decrease the spread of fake news links. Finally, for Reddit, the authors said the platform should work to decrease the visibility and reach of subreddits known to regularly foster conspiracy theories.

    Study Title
    Fake News and Misinformation: The Roles of the Nation’s Digital Newsstands, Facebook, Google, Twitter and Reddit
    Study Authors
    Jacob Finkel, Steven Jiang, Mufan Luo, Rebecca Mears, Danaë Metaxa-Kakavouli, Camille Peeples, Brendan Sasso, Arjun Shenoy, Vincent Sheu, Nicolás Torres-Echeverry
    Journal
    Stanford Law School Fake News and Misinformation Policy Lab Practicum
    Peer Reviewed
    No
    Sample
    Representative
    Number of studies citing
    0
  • Tweeters that post fake news have more followers — and use more links — than those who don’t

In this study, researchers sought to understand the difference between tweets that contain fake news and those that don't by analyzing their metadata. Specifically, they used a sample of more than 1.5 million tweets collected on the 2016 U.S. election day that used one of three hashtags and/or mentioned Hillary Clinton or Donald Trump. Within that sample, they identified which tweets went viral by comparing each tweet's retweet count to the rest of the sample: only 0.01 percent went viral, and 10 percent of those contained fake news. The authors found that accounts tweeting fake news were more likely to be unverified, have more followers, use fewer mentions, support Trump and tweet more links. They posit that their findings could help technology companies and other researchers develop ways to automatically block misinformation on Twitter.

    Study Title
    Characterizing Political Fake News in Twitter by its Meta-Data
    Study Publication Date
    Study Authors
Julio Amador Díaz López, Axel Oehmichen, Miguel Molina-Solana
    Peer Reviewed
    No
    Sample
    Representative
    Inferential approach
    Experimental
    Number of studies citing
    0
  • For debunking viral rumors, turn to Twitter users

    This paper, presented at the International Conference on Asian Digital Libraries, aims to uncover the types of rumors and "counter-rumors" (or debunks) that surfaced on Twitter following the falsely reported death of former Singaporean Prime Minister Lee Kuan Yew. Researchers analyzed 4,321 tweets about Lee's death and found six categories of rumors, four categories of counter-rumors and two categories belonging to neither. With more counter-rumors than rumors, the study's results suggest that Twitter users often attempt to stop the spread of false rumors online.

    Study Title
    An Analysis of Rumor and Counter-Rumor Messages in Social Media
    Study Publication Date
    Study Authors
    Dion Hoe-Lian Goh, Alton Y.K. Chua, Hanyu Shi, Wenju Wei, Haiyan Wang, Ee Peng Lim
    Journal
    Conference paper
    Peer Reviewed
    Yes
  • The most effective way to fact-check is to create counter-messages

Researchers examined a final selection of 20 experiments, published from 1994 to 2015, that addressed fake social and political news reports in order to determine the most effective ways to combat beliefs based on misinformation. The headline finding is that correcting misinformation is possible, but the correction is often not as strong as the misinformation itself. The analysis has several takeaways for fact-checkers, most notably the importance of creating counter-messages and alternative narratives if they want to change their audiences' minds, and of issuing corrections as quickly as possible.

    Study Title
    Debunking: A Meta-Analysis of the Psychological Efficacy of Messages Countering Misinformation
    Study Publication Date
    Study Authors
    Man-pui Sally Chan, Christopher R. Jones, Kathleen Hall Jamieson, Dolores Albarracín
    Journal
    Psychological Science
    Peer Reviewed
    Yes
 