Social Media

Poynter Results

  • Fact-Checking

    Article

    Fake Miami Herald screenshots are stoking fears of more school threats

    First, it was imposter tweets. Now, someone is making fake screenshots of a Miami Herald story about school threats following last week’s mass shooting.

    Monique O. Madan first noticed them on Monday. One parent reached out asking whether W.R. Thomas Middle School in Miami-Dade County was actually under threat, pointing to a story that bore Madan’s byline.

    Thinking it was an isolated incident, she said no and moved on.

  • Ethics

    Article

    Best practices for reporting through social media during a mass shooting

    With each new mass shooting or terrorist attack, the norms of reporting and publishing information about them seem to shift in dramatic ways.

    We know the reason: smartphones and social media. The combination means information is coming from all sides, fast and furious — and too often wrong or misleading.

  • Fake news reaches fewer people online than most assume

    In one of the first quantifications of fake news in Europe, the authors found that users in France and Italy generally spend less time on selected fake news websites than on the sites of genuine media outlets. The report, which used comScore and CrowdTangle to analyze popular fake news sites identified by fact-checking organizations, found that mainstream news organizations accrue significantly more time spent on their stories than fake news outlets do. On Facebook, though, the picture is less clear: Researchers found that the interactions generated by a small number of fake news outlets met or exceeded those generated by the most popular news brands in France and Italy. (A rough sketch of that kind of comparison follows this entry.)

    Study Title: Measuring the reach of "fake news" and online disinformation in Europe
    Study Publication Date:
    Study Authors: Richard Fletcher, Alessio Cornia, Lucas Graves, Rasmus Kleis Nielsen
    Journal: Reuters Institute for the Study of Journalism
    Peer Reviewed: No
    Sample: Representative
    Inferential approach: Experimental
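
    The comparison described above boils down to two per-site aggregates: total time spent (from a comScore-style panel export) and total Facebook interactions (from a CrowdTangle-style export). Below is a minimal sketch of that roll-up; the file name, column names and the is_fake labels are hypothetical stand-ins, not the study's actual data or tooling.

    ```python
    import pandas as pd

    # Hypothetical export: one row per (site, month) with audience metrics.
    # Assumed columns: site, is_fake (labeled via fact-checkers' lists),
    # minutes_spent (comScore-style), fb_interactions (CrowdTangle-style).
    df = pd.read_csv("site_metrics.csv")

    # Aggregate metrics per site.
    per_site = (
        df.groupby(["site", "is_fake"], as_index=False)
          .agg(total_minutes=("minutes_spent", "sum"),
               total_interactions=("fb_interactions", "sum"))
    )

    # Compare typical per-site totals for fake-news sites vs. news brands.
    summary = per_site.groupby("is_fake")[["total_minutes", "total_interactions"]].median()
    print(summary)
    ```
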
  • Fact-Checking

    Article

    This Turkish fact-checker turned 7,628 messages from readers into a report

    In 2016, the editor of a debunking organization wanted to give people a way to submit and check questionable news on the internet.

    "From the beginning of Teyit.org, we have collected all data which circulates on our networks," said Mehmet Atakan Foça, editor-in-chief of the Turkish debunking outfit, in a message to Poynter. "We aim to create baseline data to understand what should Teyit.org change."

  • For debunking viral rumors, turn to Twitter users

    This paper, presented at the International Conference on Asian Digital Libraries, aims to uncover the types of rumors and "counter-rumors" (or debunks) that surfaced on Twitter following the falsely reported death of former Singaporean Prime Minister Lee Kuan Yew. Researchers analyzed 4,321 tweets about Lee's death and found six categories of rumors, four categories of counter-rumors and two categories belonging to neither. Because counter-rumors outnumbered rumors, the study's results suggest that Twitter users often attempt to stop the spread of false rumors online.

    Study Title: An Analysis of Rumor and Counter-Rumor Messages in Social Media
    Study Publication Date:
    Study Authors: Dion Hoe-Lian Goh, Alton Y.K. Chua, Hanyu Shi, Wenju Wei, Haiyan Wang, Ee Peng Lim
    Journal: Conference paper
    Peer Reviewed: Yes
  • Labeling some fake stories on social media increases the believability of untagged fake stories

    This study examines the effects of adding "disputed" labels to fake news stories on social media platforms like Facebook, in line with the fact-checking partnership the social network launched in December 2016. While researchers found that adding warnings to fake content decreased those posts' perceived accuracy, they also found that tagging only some fake stories increased the perceived accuracy of the fake stories left untagged. This "implied truth" effect was stronger among subgroups who were more likely to believe online information (such as young adults and Trump supporters). Participants saw an equal mix of right- and left-wing headlines, both fake and real, and answered questions about their accuracy and whether they would share them.

    Study Title: The Implied Truth Effect: Attaching Warnings to a Subset of Fake News Stories Increases Perceived Accuracy of Stories Without Warnings
    Study Publication Date:
    Study Authors: Gordon Pennycook, David G. Rand
    Journal: SSRN
    Peer Reviewed: No
  • Low critical thinking may determine whether you believe fake news

    Respondents were shown "Facebook-like" posts carrying real or fake news. Across three study designs, respondents who scored higher on a Cognitive Reflection Test were less likely to rate fake news headlines as accurate. Analytic thinking was associated with more accurate identification of both fake and real news, independent of respondents' political ideology. This suggests that building critical thinking skills could be an effective tool against fake news.

    Study Title: Who Falls for Fake News? The Roles of Analytic Thinking, Motivated Reasoning, Political Ideology, and Bullshit Receptivity
    Study Publication Date:
    Study Authors: Gordon Pennycook, David G. Rand
    Journal: SSRN
    Peer Reviewed: No
  • The most effective way to fact-check is to create counter-messages

    Researchers examined a final selection of 20 experiments published from 1994 to 2015 that dealt with false social and political news reports, in order to determine the most effective ways to combat beliefs based on misinformation. The headline finding is that correcting misinformation is possible, but the correction is often not as strong as the misinformation itself. The analysis has several takeaways for fact-checkers, most notably the importance of creating counter-messages and alternative narratives if they want to change their audiences' minds, and of issuing corrections as quickly as possible. (A generic sketch of how studies are pooled in a meta-analysis follows this entry.)

    Study Title: Debunking: A Meta-Analysis of the Psychological Efficacy of Messages Countering Misinformation
    Study Publication Date:
    Study Authors: Man-pui Sally Chan, Christopher R. Jones, Kathleen Hall Jamieson, Dolores Albarracín
    Journal: Psychological Science
    Peer Reviewed: Yes
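
    A meta-analysis of this kind typically pools a standardized effect size from each experiment using inverse-variance weights. The sketch below shows that generic computation on made-up numbers; it is not the data or the exact model used in the study above, which may rely on random effects and additional moderators.

    ```python
    import numpy as np

    # Hypothetical standardized effect sizes (d) and their variances for a
    # handful of debunking experiments; the real meta-analysis pooled 20 estimates.
    d = np.array([0.45, 0.30, 0.62, 0.18, 0.51])
    var = np.array([0.04, 0.06, 0.05, 0.03, 0.07])

    # Fixed-effect (inverse-variance) pooled estimate and its standard error.
    w = 1.0 / var
    d_pooled = np.sum(w * d) / np.sum(w)
    se_pooled = np.sqrt(1.0 / np.sum(w))

    ci = (d_pooled - 1.96 * se_pooled, d_pooled + 1.96 * se_pooled)
    print(f"pooled d = {d_pooled:.2f}, 95% CI = ({ci[0]:.2f}, {ci[1]:.2f})")
    ```
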
  • Twitter users are more likely to accept correction from people they know

    The study looked at corrections made on Twitter between January 2012 and April 2014 to see how fact-checking is received by people with different social relationships. Researchers ultimately isolated 229 “triplets” in which a person sharing a falsehood responded to a correction from a second user. Corrections made by “friends” led the person sharing the falsehood to accept the fact 73 percent of the time; corrections made by strangers were accepted only 39 percent of the time. Put simply: When we’re wrong on Twitter, we’re more likely to own up to it if someone we know corrects us. (A sketch of how such acceptance rates might be computed follows this entry.)

    Study Title: Political Fact-Checking on Twitter: When Do Corrections Have an Effect?
    Study Publication Date:
    Study Authors: Drew B. Margolin, Aniko Hannak, Ingmar Weber
    Journal: Political Communication
    Peer Reviewed: Yes
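
    The headline numbers above are conditional proportions: the acceptance rate given a correction from a “friend” versus from a stranger. The sketch below shows how one might compute and test such a gap on coded triplet data. The data frame, column names and the 100/129 split between groups are invented for illustration (only the roughly 73 percent and 39 percent rates come from the study), and the authors' own statistical approach may differ.

    ```python
    import pandas as pd
    from scipy.stats import chi2_contingency

    # Hypothetical coding of 229 correction "triplets": who issued the
    # correction, and whether the corrected user accepted the fact.
    triplets = pd.DataFrame({
        "relationship": ["friend"] * 100 + ["stranger"] * 129,
        "accepted": [True] * 73 + [False] * 27 + [True] * 50 + [False] * 79,
    })

    # Acceptance rate by relationship type (~73% for friends, ~39% for strangers).
    rates = triplets.groupby("relationship")["accepted"].mean()
    print(rates)

    # Chi-square test of independence on the 2x2 relationship-by-acceptance table.
    table = pd.crosstab(triplets["relationship"], triplets["accepted"])
    chi2, p, dof, _ = chi2_contingency(table)
    print(f"chi2={chi2:.2f}, p={p:.4f}")
    ```
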
 