August 22, 2019

Factually is a newsletter about fact-checking and accountability journalism, from Poynter’s International Fact-Checking Network & the American Press Institute’s Accountability Project. Sign up here.

Can warnings from fact-checkers reduce sharing?

The misinformation expert Claire Wardle, writing in the current issue of Scientific American, poses (then expertly answers) a key question for people concerned about the current state of the online information ecosystem: Why do people share misinformation, conspiracies and other kinds of misleading content on social media?

(The article is part of a larger package dedicated to “Truth, Lies and Uncertainty.”)

Wardle, who is the U.S. director of First Draft, a nonprofit focused on ways to address misinformation, cites several reasons people create this stuff, many of which will be familiar to readers of this newsletter. Some seek political influence, some just want to cause trouble, and some do it for money.

As for sharing, one of Wardle’s points is that people’s willingness to “share without thinking” is precisely what the creators of disinformation want. “The goal is that users will use their own social capital to reinforce and give credibility to that original message,” she wrote.

So when it comes to sharing, what can be done to give people pause?

One answer came recently from Paul Mena, a professor of journalism and news writing at the University of California, Santa Barbara. He released new research that provides some affirming news for fact-checkers: Mena concluded that people were less likely to share Facebook content that included a fact-checking warning label than stories that were not flagged.

In the experimental design, some posts were labeled as “disputed,” similar to the way Facebook used to tag posts rated as false by fact-checkers in its fact-checking partnership. The platform tweaked its flags in late 2017; fact checks now appear as related articles, and a warning pops up when a user is about to share flagged content. Facebook also limits the content’s reach in the News Feed. (Disclosure: Being a signatory of the IFCN’s code of principles is a necessary condition for joining the project.)

Mena’s study was based on a sample of 501 participants from across the political spectrum who were asked whether they would share certain kinds of content on Facebook.

“The study showed that respondents who saw a fabricated Facebook post with a warning label had lower intentions to share that content than those who did not see the flag,” his report said. Notably, the effect on sharing intentions remained the same even after Mena controlled for the participants’ political leaning.

Asked in a phone interview about practical applications for his research beyond Facebook, Mena said his conclusions could be tested on other platforms.

In fact, such flags are about to get a real-world test on a new platform. Instagram announced last week that it, like its owner Facebook, would be using the third-party fact-checking program to check posts on the photo and video-sharing platform, which is brimming with misleading memes and other false information.

The Instagram effort could provide researchers with important data on the effectiveness of flagging memes, which Mena and other researchers say is needed given that memes spread differently than text articles. As Wardle noted in her Scientific American piece, “memes have not been acknowledged by much of the research and policy community as influential vehicles for disinformation, conspiracy or hate” but their shareability is what helps them spread — and contributes to their effectiveness.

Many of the details of how Instagram will work with fact-checkers are still being worked out, as Cristina wrote last week when the news came out.

One big question is how the project will scale. As Ben Nimmo, a senior fellow at the Atlantic Council’s Digital Forensic Research Lab, told Wired’s Sara Harrison, the change will add 100 million new users to the fact-checking effort, and “fact-checkers have to sleep.”

. . . technology

  • Twitter and Facebook suspended hundreds of accounts the companies say were part of a Chinese effort to undermine pro-democracy protests in Hong Kong. The accounts amplified content that portrayed the protesters as violent in an effort to sow political discord. Twitter said it would no longer allow state-supported media outlets to promote tweets.

  • In a tweet, U.S. President Donald Trump aired a popular conservative talking point: that Google is biased in favor of liberals. Both PolitiFact and The New York Times debunked the tweet, which claimed a report found that the tech giant manipulated millions of votes in favor of former Secretary of State Hillary Clinton in 2016.

  • Have you ever seen those pages on Instagram that claim to post interesting facts? Writing for Poynter.org, MediaWise reporters Alex Mahadevan and Madelyn Knight reported that “most of these pages are full of suspect, out-of-context or downright false claims and have millions of combined followers.”

. . . politics

  • After publishing a story about a pro-Russia politician, fact-checkers in Ukraine were attacked by several websites and TV stations. Vox Ukraine wrote that such attacks attempted to discredit the outlet’s reporting, which found several factual errors in a speech by Viktor Medvedchuk.

  • A social scientist in Germany has warned that conspiracy theorists have hijacked some climate-related terms to spread misinformation on YouTube, Science News reported. Joachim Allgaier of RWTH Aachen University found that common search terms like “climate change” and “global warming” typically led to accurate videos. But newer terms like “geoengineering” and “climate modification” led to conspiratorial videos.

  • The U.S. Food and Drug Administration has issued an advisory that warns people against drinking bleach to treat autism or cancer. That warning comes amid a flurry of online misinformation that falsely markets bleach as a medical cure. Meanwhile, in the United Kingdom, the government is calling for social media companies to more seriously combat anti-vaccine misinformation.

. . . the future of news

  • A new study from the Global Disinformation Index, reported by CNN ahead of its September release, concludes that at least $235 million in revenue is generated annually from ads running on extremist and disinformation websites. Big brands could unwittingly have their ads appear next to sites that propagate hate or false information, the report said.

  • There are certain linguistic characteristics of fake news that machines could use to detect misinformation, linguist and software engineer Fatemeh Torabi Asr wrote in Nieman Lab this week. On average, she said, “fake news articles use more expressions that are common in hate speech, as well as words related to sex, death, and anxiety.” (A rough sketch of the idea follows this list.)

  • Snopes has been embroiled in a very public feud with the Babylon Bee for debunking the Christian satire site’s articles. But despite the criticism, researchers writing for Nieman Lab have found that a lot of people still don’t know how to identify satire — and that fact-checking could provide some much-needed clarity.
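Asr’s piece describes the approach at a high level rather than in code. As an illustration only, here is a minimal sketch of how lexical cues like the ones she mentions might feed a text classifier, written in Python with scikit-learn. The example texts, labels and feature choices are invented for this sketch and are not drawn from Asr’s work.

```python
# Minimal sketch of linguistic-feature-based misinformation detection.
# The tiny corpus below is invented for illustration; a real system
# would train on a large labeled dataset like the one Asr describes.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = [
    "SHOCKING: the deadly secret they don't want you to know",     # fake-style
    "Officials confirmed the budget figures at a press briefing",  # reliable-style
    "This terrifying cure is what doctors are hiding from you",    # fake-style
    "The committee published its quarterly report on Tuesday",     # reliable-style
]
labels = [1, 0, 1, 0]  # 1 = fake, 0 = reliable

# TF-IDF over words and word pairs picks up emotive, anxiety-laden
# vocabulary; the logistic regression then weighs those cues.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(texts, labels)

print(model.predict(["You won't believe this terrifying secret"]))
```

A production detector would of course use far more data and richer features (syntax, source metadata, fact-check history), but the core idea, lexical signals as classifier input, is the one Asr’s research explores.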

When Norway decided to follow Germany’s decision to suspend its donations to the Brazilian government’s Amazon Fund, the Oslo government became a clear target for President Jair Bolsonaro’s attacks.

In a recent interview about the $33 million Norwegian cut, which came amid a dispute over deforestation, the right-wing politician asked: “Isn’t Norway that country that kills whales up there in the North Pole?” Hours later, Bolsonaro posted on his Twitter account a video showing a whale hunt, with captions saying it was recorded in Norway.

The fact-checking outlet Agência Lupa, however, debunked that. The video Bolsonaro shared had gone viral before, and fact-checkers had already covered it. It was actually shot in the Faroe Islands, a Danish territory, during the traditional whale hunt known as the Grindadráp. The images have no connection to Norway at all.

But Lupa went even deeper. It reached out to the International Whaling Commission and obtained the most recent data on whale hunting: in 2017, Norway killed 432 whales, the lowest number since 1996.

What we liked: This fact check was picked up by all the major media outlets in Brazil, such as Folha de S.Paulo, and also reached foreign media, including Deutsche Welle. It not only debunked the video but presented data on a controversial subject (whale hunting). Twitter users went back to Bolsonaro’s account to demand a correction, which he has not issued.

  1. Africa Check is now fact-checking Facebook posts in 11 African languages.

  2. Pagella Politica, in Italy, and Newtral, in Spain, have been debunking false images and videos about Open Arms, a vessel with more than 100 migrants stranded off the coast of Italy. Even the U.S. actor Richard Gere got involved.

  3. A new study found that summary fact-checking (think speaker files) had more of an effect on how politicians are viewed than individual fact checks.

  4. PolitiFact looked into the longstanding theory that psychiatric drugs influence mass shooters. It found that there is no scientific evidence to suggest that’s true — and the theory has connections to the Church of Scientology.

  5. In India, the Economic Times reported on how a propaganda arm of the Pakistani government used fake Twitter accounts to spread disinformation about the situation in Kashmir.

  6. Facebook is hiring journalists to curate its new News Tab, a section that will surface relevant news content for users.

  7. Twitter has invested in a social media network that’s been accused of facilitating the spread of misinformation in India.

  8. A new privacy hoax is going around Instagram, as Atlantic (soon to be New York Times) writer Taylor Lorenz flagged on Twitter.

  9. The ownership of The Epoch Times is closely associated with the Chinese spiritual community Falun Gong, NBC revealed this week. The publication is a big supporter of Donald Trump and also “a powerful conduit for the internet’s fringier conspiracy theories,” wrote Ben Collins and Brandy Zadrozny.

  10. IFCN’s fellows will be announced this Friday! The IFCN received 12 applications and interviewed five finalists. Two of them will spend some time embedded in another fact-checking organization this semester. Follow @factchecknet.

That’s it for this week! Feel free to send feedback and suggestions to factually@poynter.org.

Daniel, Susan and Cristina


Comments


  • “A new study found that summary fact-checking (think speaker files) had more of an effect on how politicians are viewed than individual fact checks.”

    Do I need to point out the obvious?

That study should only heighten concerns raised by Will Moy (Full Fact) and others before him (including the author of this email) that the rating scales used by fact-checkers ultimately serve to undermine their credibility.

    Think about it. The study suggests the aspect of fact-checking most likely to influence elections is the part described by its practitioners as “a gimmick” (Kessler) and “entirely subjective” (Adair).

    These fact checkers have constructed a system that extracts their opinions from the fact-check material and amplifies those opinions in a way that influences voters.

    That’s great if the point of fact-checking is to surreptitiously allow “objective” journalists to use their opinions to influence voters.

But that’s not the point of fact-checking, is it?