July 16, 2020

Factually is a newsletter about fact-checking and accountability journalism, from Poynter’s International Fact-Checking Network & the American Press Institute’s Accountability Project. Sign up here

The power of the pause

A couple of weeks ago, the United Nations announced a new initiative called “Pause,” aimed at getting people to stop and think about what they’re sharing about COVID-19 on social media. The campaign is accompanied by the hashtag #takecarebeforeyoushare.

It’s hard to know how effective such campaigns will be in stemming the spread of the worldwide “infodemic” of fake cures and other falsehoods about the virus. But a new study from researchers at the University of Regina in Canada and the Massachusetts Institute of Technology at least backs up the initiative’s premise — the notion that it’s important to get people to take a breath and assess the accuracy of what they’re about to share.


The study, “Fighting COVID-19 misinformation on social media: Experimental evidence for a scalable accuracy nudge intervention,” published in Psychological Science June 30, concluded that people share false claims about COVID-19 “partly because they simply fail to think sufficiently about whether or not the content is accurate when deciding what to share.”

The study had two parts. In the first, 856 people were divided into two groups, and all were shown a mix of true and false headlines about COVID-19. One group was asked to judge the accuracy of the headlines; the other was asked how likely they were to share them. The idea was to find out how discerning people are when judging a post’s accuracy as opposed to when deciding whether to share it. Participants’ levels of discernment (the difference between their responses to true headlines and to false ones) were higher when judging accuracy than when deciding what to share.

In the second part, another 853 people were divided into two groups. Both groups were asked whether they would share the headlines, but one group was first asked to rate the veracity of a headline that had nothing to do with COVID-19. That group’s discernment level was three times higher.
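To make the metric concrete, here is a minimal sketch, in Python, of how a discernment score of this kind could be computed. The function, the 0-to-1 rating scale and the toy numbers are hypothetical illustrations, not the researchers’ actual code or data.

```python
# Hypothetical sketch of a "discernment" score: the mean response to
# true headlines minus the mean response to false ones. Toy numbers
# only; these are not data from the study.
from statistics import mean

def discernment(true_ratings, false_ratings):
    """Higher values mean responses separate true from false more sharply."""
    return mean(true_ratings) - mean(false_ratings)

# A group judging accuracy: rates true headlines high, false ones low.
accuracy_group = discernment([0.8, 0.7, 0.9], [0.2, 0.3, 0.1])

# A group deciding what to share: barely distinguishes true from false.
sharing_group = discernment([0.6, 0.5, 0.7], [0.5, 0.4, 0.6])

print(round(accuracy_group, 2))  # 0.6
print(round(sharing_group, 2))   # 0.1
```

On toy numbers like these, the gap between the two scores reflects the pattern the study reports: the same people separate true from false far better when asked about accuracy than when asked about sharing.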

The results, along with similar previous studies, show that “nudging” people to think about truthfulness might improve their judgment about what to share on social media, said Gordon Pennycook, an assistant professor of behavioral science at the University of Regina, who co-authored the paper.

The study says its findings “point to a suite of interventions based on accuracy nudges that social media platforms could directly implement.” Pennycook, reached by phone this week, was hesitant to speculate on what kinds of accuracy nudges would work best but said he was confident that social media companies “could easily implement and optimize interventions that share similar principles.”

One factor, he said, is the context of social media itself. The fact that inattention or distraction is a primary reason people share misinformation on social media may reflect the mindset people have when using these platforms. People go to social media for relaxation and validation from friends. On platforms where news is mixed with baby pictures and animal videos, people are distracted from accuracy, he said, even though other research shows people do care about being accurate.

People don’t tend to be very reflective on social media, he said. “They go to Facebook to take a break from thinking.”

– Susan Benkelman, API

. . . technology

  • Research by the Reuters Institute for the Study of Journalism found that 21% of conversations on Twitter about COVID-19 or the World Health Organization were toxic.
    • The study defined toxicity as “a rude, disrespectful, or unreasonable comment that is likely to make people leave a discussion.”
    • The study’s authors said more research was needed to determine the causes of toxicity and its impact on public opinion.
  • CNN’s Marshall Cohen looked at why Twitter in late May labeled President Donald Trump’s tweets about mail-in voting as potentially misleading but has not put labels on more recent tweets that appear to be similarly misleading.
    • “The distinction Twitter is drawing is that there’s a difference between questioning the integrity of mail-in voting as a broad concept, versus suggesting that voting procedures in a particular state are fraudulent,” Cohen wrote. He said the platform’s approach creates a “strange dynamic.”

. . . politics

  • The Washington Post reported that its catalog of falsehoods from President Trump reached the 20,000 mark as of July 9. The newest numbers, the Post reported, show an average of 23 false claims a day over the past 14 months.
    • The coronavirus pandemic has “spawned a whole new genre of Trump’s falsehoods,” the Post’s fact-checkers wrote.
  • Doctors at Brazil’s public hospitals say they’re being pressured to hand out hydroxychloroquine, Bloomberg News reported, amid debates about the drug’s efficacy against COVID-19.

. . . science and health

  • A doctor in Michigan wrote for HuffPost that she sees the need for a new diagnostic code: Misinformation. Asha Shajahan, a primary care physician at Beaumont Health, said that she does myth-busting during nearly 90% of her video visits.
  • Kaiser Health News’ Julie Appleby has a good explainer on how COVID-19 contact tracing, the subject of many conspiracy theories, actually works.
    • “Misinformation abounds, from tales that people who talk to contact tracers will be sent to nonexistent ‘FEMA camps’ — a rumor so prevalent that health officials in Washington state had to put out a statement in May debunking it — to elaborate theories that the efforts are somehow part of a plot by global elites, such as the Clinton Foundation, Bill Gates or George Soros,” she wrote.


This week, LeadStories debunked multiple false claims about COVID-19 that popped up in a video made by Colorado physician Dr. Kelly Victory. The fact-check broke the video down chronologically, quoting Victory and linking to the timecode of each false claim.

The claims included false assertions that COVID-19 can’t survive warm temperatures, that young people aren’t affected by the virus, and that social distancing is not an established public health practice.

For the warm temperature claim, LeadStories linked to a World Health Organization mythbuster page showing that even countries with warm weather are seeing outbreaks of the virus. For the claim about young people, LeadStories spoke to the director of Hawaii’s COVID-19 response, who noted the high number of 30-year-olds contracting the virus. For the claim about social distancing, LeadStories linked to a Centers for Disease Control and Prevention article about using social distancing to protect former Presidents George W. Bush and Barack Obama.

What we liked: This fact-check not only breaks down Victory’s claims one by one, but also lets readers evaluate each claim for themselves via timecode links to the original video. It’s also a lesson in why readers need to weigh a source’s expertise; much of Victory’s advice is contrasted with guidance from recognized medical experts and peer-reviewed articles.

– Harrison Mantas, IFCN

Quick Hits

  1. The New York Times reported on criticisms of Facebook’s approach to climate change disinformation.
  2. In a press release Monday, the BBC announced that, along with its fellow members of the Trusted News Initiative, it would begin adding a digital watermark to its content to guard against disinformation.
  3. Reuters reported that a British university student who wrote opinion columns critical of activists opposed to Israeli surveillance companies was a fictional persona, and that his photo was the product of deepfake technology.
  4. Indian newswire IANS reported that Google will cooperate with South Korean authorities to monitor and combat the spread of misinformation on YouTube. 
  5. The Union of Concerned Scientists put out a guide to spotting and stopping COVID-19 disinformation.

A clarification from last week: An item in Quick Hits said that a flaw in SmartNews’ algorithm may have contributed to misinformation on Facebook about alleged antifa raids in southern Oregon. SmartNews points out that it was a flaw in the algorithm it uses to acquire new users, not the algorithm that determines how it distributes news.

That’s it for this week! Feel free to send feedback and suggestions to factually@poynter.org. And if this newsletter was forwarded to you, or if you’re reading it on the web, you can subscribe here. Thanks for reading.

Susan and Harrison
