The study reviewed web traffic collected with consent from a national sample of 2,525 Americans between Oct. 4 and Nov. 7, 2016. Fake news websites were found to reach a relatively large audience, equivalent to 27.4 percent of the sample, with fact-checking websites close behind at 25.3 percent. These two groups overlap only in part: 13.3 percent of the sample visited fake news websites but not fact-checking websites. Moreover, none of the users who saw a specific fake news story was then reached by its related fact check. The study also found that Facebook was a key channel for misinformation to spread, likely accounting for about one fifth of traffic to fake news websites.
This paper, presented at the International Conference on Asian Digital Libraries, aims to uncover the types of rumors and "counter-rumors" (or debunks) that surfaced on Twitter following the falsely reported death of former Singaporean Prime Minister Lee Kuan Yew. Researchers analyzed 4,321 tweets about Lee's death and found six categories of rumors, four categories of counter-rumors and two categories belonging to neither. With more counter-rumors than rumors, the study's results suggest that Twitter users often attempt to stop the spread of false rumors online.
The article’s findings are based on survey data collected from two insurgency-affected areas: southern Thailand and Mindanao, Philippines. The more respondents felt in danger, were repeatedly exposed to a rumor and/or saw one that coincided with their preconceived beliefs, the more likely they were to believe it. That runs counter to the widely held notion that individual psychology is the be-all and end-all of whether someone believes unverified information. The study contains a useful cautionary note for those working on dispelling rumors and misinformation around the world.
This study measures the extent to which algorithms and comments on Facebook that link to fact checks can effectively correct users' misconceptions about health news. Researchers tested this by exposing 613 survey participants to simulated news feeds with three conditions. Participants were shown misinformation about the Zika virus and different corrective news stories either surfaced by an algorithm or posted by another Facebook user. The experimental results found that algorithmic and social distribution of fact checks were equally effective in limiting participants' misperceptions — even for people who are more inclined to believe conspiracy theories. Researchers conclude that this is likely because breaking health news events often deal with new phenomena, which allows for greater receptivity to comments and the possibility of opinion change among news consumers early on.
This study of eight experiments aims to measure how social presence affects the way that people verify information online. It found that, when people think they're being watched by a large group of people online, they're less likely to fact-check claims than when they're alone. Inducing vigilance correlated with an increase in fact-checking among respondents, which could imply that social media users tend to let their guard down when they're in a group. That finding held across a variety of conditions, including politically charged and neutral statements, simulated forums and social media, and small vs. large group sizes.
This study looks at the effect of partisanship on the likelihood of accepting a factual correction. In two separate studies, four true and four false claims by Donald Trump were presented to a sample of Democrats, non-Trump-supporting Republicans and Trump-supporting Republicans. The researchers found that (a) attributing a claim to Trump made his supporters believe it more, (b) correcting a Trump falsehood made *all* respondents believe it less, regardless of their political preferences, and (c) the corrections had no effect on voting preferences.
This study experiments with a feature that lists related stories underneath existing posts on Facebook in order to determine whether social media helps reinforce or correct users' misperceptions. In a web-based survey with 524 people recruited from a university, participants viewed simulated Facebook news feeds in which they were presented with posts whose related articles all confirmed the misperception, all refuted the misperception or were a mix of the two. Stories focused on attitudes toward GMOs and illness, as well as attitudes toward vaccination and autism. The experimental results suggest that attitudes based on misperceptions about GMOs can be changed via exposure to corrective information on social media. Interestingly, researchers also found that, while people rated related news stories less favorably when those stories contradicted pre-existing beliefs, the stories still changed some attitudes among those who believed the misperception.