In one of the first quantifications of fake news in Europe, the authors found that, in France and Italy, users generally spend less time on selected fake news websites than on those of genuine media outlets. The report, which used comScore and CrowdTangle to analyze popular fake news sites identified by fact-checking organizations, found that mainstream news organizations accrue significantly more time spent on their stories than fake news outlets do. On Facebook, however, the picture is less clear: researchers found that the interactions generated by a small number of fake news stories met or exceeded those generated by the most popular news brands in France and Italy.
This paper, presented at the International Conference on Asian Digital Libraries, aims to uncover the types of rumors and "counter-rumors" (or debunks) that surfaced on Twitter following the falsely reported death of former Singaporean Prime Minister Lee Kuan Yew. Researchers analyzed 4,321 tweets about Lee's death and found six categories of rumors, four categories of counter-rumors and two categories belonging to neither. With more counter-rumors than rumors, the study's results suggest that Twitter users often attempt to stop the spread of false rumors online.
This study examines the effects of adding disputed labels to fake news stories on social media platforms like Facebook, in line with the real partnership the social network launched in December 2016. While researchers found that adding warnings to fake content decreased those posts' perceived accuracy, they also found that the mere presence of fake news tags increased the perceived accuracy of untagged fake stories. This "implied truth" effect was stronger among subgroups who were more likely to believe online information (such as young adults and Trump supporters). Participants saw an equal mix of right- and left-wing headlines, both fake and real, and answered questions about their validity and shareability.
Respondents were shown "Facebook-like" posts carrying real or fake news. Across three different study designs, respondents with higher scores on a Cognitive Reflection Test were less likely to rate fake news headlines as accurate. Analytic thinking was associated with more accurate identification of both fake and real news, independent of respondents' political ideology. This suggests that building critical thinking skills could be an effective tool against fake news.
Researchers examined a final selection of 20 experiments from 1994 to 2015 that address fake social and political news in order to determine the most effective ways to combat beliefs based on misinformation. The headline finding is that correcting misinformation is possible, but the correction is often not as strong as the misinformation itself. The analysis has several takeaways for fact-checkers, most notably the importance of creating counter-messages and alternative narratives if they want to change their audiences' minds, and of delivering the correction as quickly as possible.
The study looked at corrections made on Twitter between January 2012 and April 2014 to see how fact-checking is received by people with different social relationships. Researchers ultimately isolated 229 "triplets": an original tweet sharing a falsehood, a corrective reply from a second tweeter, and the original poster's response. Corrections made by "friends" resulted in the person sharing the falsehood accepting the fact 73 percent of the time; corrections made by strangers were accepted only 39 percent of the time. Put simply: When we're wrong on Twitter, we're more likely to own up to it if someone we know corrects us.