Is expert crowdsourcing the solution to health misinformation?

Factually is a newsletter about fact-checking and accountability journalism, from Poynter’s International Fact-Checking Network & the American Press Institute’s Accountability Project. Sign up here.

Two new health fact-checkers

When HealthNewsReview.org announced it was shutting down in December, it went mostly unnoticed.

Almost nobody tweeted about the closure of the outlet, which had been debunking bogus health claims for 13 years. An email to the IFCN listserv highlighting HealthNewsReview’s demise got no responses. And the move went uncovered by most major media outlets.

But HealthNewsReview was one of the few fact-checking projects whose stated goal was to debunk false claims about health — and it was doing so at a time when bogus articles, memes and videos promoting anti-vaccine conspiracies and the like were rampant on platforms like Facebook and YouTube. That's to say nothing of the broader shortage of public service health journalism.

“There is a Grand Canyon-sized gap between the kind of information that patients and consumers need and what they’re actually getting in most health care news stories and PR news releases,” wrote Gary Schwitzer, founder and publisher of HealthNewsReview, in his goodbye column.

Now, two new fact-checking projects aimed specifically at debunking false claims about health have emerged — and they’re relying on crowdsourced expertise.

The first is HealthFeedback.org, a fact-checking site that asks experts to review factual claims about health. Launched in the fall, HealthFeedback is an outgrowth of ClimateFeedback.org, a fact-checking project now overseen by the nonprofit organization Science Feedback, which debunks false claims about the climate and annotates news articles about science to highlight where they deviate from the facts.

Each of the fact-checkers at HealthFeedback holds a doctorate and has recently published in a peer-reviewed journal, according to its site. Scientists can apply to review claims for the site, which publishes its work in the form of a ratings-free, in-depth fact check.

That process, in which verified scientists review scientific claims online, is essentially elevated crowdsourcing. And HealthFeedback isn’t the only platform to take that approach.

Last April, three scientists built a prototype for what would become Metafact. The idea was to get verified scientists to answer readers’ questions about health claims. Then, the platform would display the degree to which those experts reached a consensus about the question.

Since then, the platform has verified more than 11,000 experts from 555 institutions around the world. As with HealthFeedback, experts must be scientists, medical doctors, engineers or researchers who have recently published in a peer-reviewed journal, according to Metafact’s site.

Metafact has answered questions ranging from “Is gluten unhealthy?” (94 percent of experts said no) to “Is biological aging inevitable?” (72 percent said yes). Now it’s taken to Kickstarter and Patreon to fund the beta version of the platform, which will hire part-time science editors and create a membership program.

As a fact-checking method, crowdsourcing doesn’t have a long history of success. Past Wikipedia-style efforts have typically struggled to incentivize their users and build a community. But crowdsourcing fact checks from experts who are certified in a specific subject matter could yield more promising results — particularly if that work is amplified by partnering with tech platforms.

Correction: This piece has been updated to correct the home of ClimateFeedback.org. We regret the error.

…technology

  • YouTube announced that it’s testing a new feature in India that displays fact checks alongside search results for sensitive topics, BuzzFeed News reported. The feature works by pulling relevant articles from the Schema.org ClaimReview markup, which is essentially a few lines of code that fact-checkers embed in their stories to get them picked up by Google. YouTube told Daniel that it plans to roll the feature out to new countries in 2019.

  • The Verge reported that Facebook has a plan to remove groups and pages that spread anti-vaccine misinformation from its recommendations — which have been proven to shape users’ beliefs. Facebook will not remove such groups and pages outright, as it does with false accounts. The move comes after weeks of pressure from both the public and American lawmakers for the company to do something about antivaxxer content, which is popular worldwide.

  • Also jumping on the anti-anti-vaccine bandwagon, Amazon this week announced that it had removed books promoting autism cures and antivaxxer propaganda. NBC News reported that the move came after a report from Wired that found the platform had hosted medically dubious books offering bogus cures for a variety of diseases.
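The ClaimReview markup mentioned in the YouTube item above is a small block of Schema.org structured data that fact-checkers embed in a story's HTML (typically as JSON-LD inside a `<script>` tag) so platforms can surface the fact check. A minimal sketch of what that looks like, built in Python — the URLs, organization names and claim text here are hypothetical placeholders, not from any real fact check:

```python
import json

# A minimal sketch of a Schema.org ClaimReview object. All names and
# URLs below are hypothetical placeholders for illustration only.
claim_review = {
    "@context": "https://schema.org",
    "@type": "ClaimReview",
    "url": "https://example-factchecker.org/fact-check/example-claim",
    "claimReviewed": "Vitamin C cures the common cold",  # the claim being checked
    "itemReviewed": {
        "@type": "Claim",
        "author": {"@type": "Organization", "name": "Example Health Blog"},
    },
    "author": {"@type": "Organization", "name": "Example Fact-Checker"},
    "reviewRating": {
        "@type": "Rating",
        "ratingValue": 1,
        "bestRating": 5,
        "alternateName": "False",  # the verdict a platform can display
    },
}

# Wrap the object in the script tag a publisher would add to the story's HTML.
snippet = '<script type="application/ld+json">\n%s\n</script>' % json.dumps(
    claim_review, indent=2
)
print(snippet)
```

Crawlers that parse this markup can then pair the claim, the verdict and the fact check's URL without scraping the article text, which is what lets Google and YouTube display fact checks alongside search results.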

…politics

  • PEN America has released a report on how misinformation tactics are being normalized as campaign strategies as the United States gears up for another presidential election in 2020. PEN found that among the biggest threats are micro-targeting and fake accounts. The report also recommends that tech platforms voluntarily take up some combination of human and automated moderation to weed out bogus content.

  • Experts see a shift in how Russian internet trolls are seeking to disrupt the 2020 elections. To evade the detection systems social media companies have built to find fake content, the trolls are using fake accounts to amplify existing content, Bloomberg reports. Also in Russia, the government has banned the sharing of “false information of public interest, shared under the guise of fake news,” the BBC reported.

  • Inspired by The Washington Post Fact Checker’s ongoing guide to all of President Donald Trump’s false or misleading statements, Aos Fatos has launched a running tally of similar falsehoods from Brazilian President Jair Bolsonaro. Meanwhile, on CNN, Victor Blackwell is using gumballs to visualize how many false claims Trump has made.

…the future of news

  • If an artificial intelligence language model can be used to write stories, could it also be used to detect machine-written ones? Maybe, according to MIT’s Technology Review, though one rigorous test gave reason for doubt. Meanwhile, a joint initiative of MIT and Harvard gave $275,000 to projects that are using AI to combat misinformation.

  • Much ado has been made about Facebook’s partnership with fact-checking outlets. And while fact checks struggle to scale on the platform, the project has at least changed the way that misinformation is produced. It has also changed the way that fact-checkers approach satire versus misinformation.

  • Remember last week when we said that Facebook’s pivot to privacy and encryption could spell bad news for those wishing to counter misinformation? Politico put those concerns into words, writing that encrypting everyone’s messages “will undermine efforts worldwide to tackle misinformation.”

Each week, we analyze five of the top-performing fact checks on Facebook to see how their reach compared to the hoaxes they debunked. Here are this week’s numbers.

  1. Agência Lupa: “It is false that this reporter has said that he intends to ‘ruin Flávio Bolsonaro and the government’” (Fact: 23.6K engagements // Fake: 78.6K engagements)

  2. Les Décodeurs: “Female crew, filmed turbulence, fake photos: three hoaxes about the Ethiopian Airlines crash” (Fact: 4.4K engagements // Fake: 1.8K engagements)

  3. Factcheck.org: “Meme Fabricates Ocasio-Cortez Firing” (Fact: 3.3K engagements // Fake: 372 engagements)

  4. Faktisk: “No, MDG will not replace disabled private cars with buses” (Fact: 1K engagements // Fake: 28.2K engagements)

  5. Boom Live: “Viral Posts Falsely Claim Men Who Attacked Kashmiris Are Congress, SP Members” (Fact: 363 engagements // Fake: 545 engagements)

Not long after the crash of Ethiopian Airlines Flight 302, a video appeared on Facebook that purportedly showed passengers and crew members inside the plane’s cabin before its demise. It’s believable enough — people are wearing oxygen masks and babies are crying. But, thanks to Africa Check, we know it wasn’t a video of the Boeing 737 Max 8 moments before it crashed.

What we liked: Verification experts can often use geolocation tools to indicate whether a video was shot at a specific location, but that was obviously not an option in this case. Africa Check debunked the video in two ways — by consulting experts and through crowdsourcing.

First, it pointed out that oxygen masks would not have deployed until the plane climbed to a certain altitude, which according to records, Flight 302 never hit.

Second, it used crowdsourcing on Twitter to further validate its conclusion. Respondents, including a defense analyst, pointed out that the plane in the video was a double-aisled aircraft, whereas the Max 8 has a single aisle.

The fact-checker also noted a tweet pointing out that the plane in the video appears to be flying at night, whereas Flight 302 was traveling in the morning.
  1. The IFCN is hiring its first-ever intern this summer! The program runs 10 weeks and pays $10,000. Apply by March 30. Speaking of jobs, Poynter is hiring for several other positions — including two multimedia reporters who will fact-check misinformation on Instagram for MediaWise.

  2. Big news: Former IFCN director Alexios Mantzarlis has a new job at TED! He has accepted a fellowship to spend about a year developing a way for the public to weigh in on information quality on digital platforms.

  3. Full Fact is also hiring! The British fact-checking outlet is looking for a journalist to join its growing team. Apply by March 18.

  4. WhatsApp is *finally* testing a feature that lets users conduct a reverse Google image search within the app.

  5. In Indonesia, so-called “buzzer” teams are using fake social media profiles to drum up hype and promote propaganda supporting both presidential candidates.

  6. Joy Behar, host of ABC’s “The View,” seemed to entertain a conspiracy theory that Melania Trump had a body double during the president and first lady’s recent trip to a tornado-stricken region in Alabama. The White House quickly answered, and the president blamed the mainstream media, calling them “fake news.”

  7. Time’s interview with internet pioneer Tim Berners-Lee touches on fake news and misinformation, as well as the web’s evolution in the 30 years since his original vision.

  8. It’s not news that Russian (Soviet, in this case) disinformation existed before the Internet, but a 1982 TV Guide article dug up by CNN shows that pre-web strategies were not too different from today’s.

  9. Snopes has exempted itself from the Internet Archive’s Wayback Machine. 🤔

  10. Christiaan Triebert, formerly a digital investigator and trainer at Bellingcat, has joined The New York Times’ visual investigations unit.
That’s it for this week. Feel free to send feedback and suggestions to factchecknet@poynter.org.

Daniel and Susan