April 4, 2019

In India, WhatsApp is a major driver of misinformation.

On Monday, The Atlantic published an in-depth article about how bogus claims, rumors and propaganda have spread like wildfire on the platform, which has more than 200 million monthly active users in the country. The New York Times reported that platforms like Facebook are struggling to cope with the scale of misinformation ahead of next week’s general election. Following February’s terrorist attack in the Kashmir region, fake images inundated social media users — and several mainstream news outlets reported them as if they were true.

On Tuesday, WhatsApp said it would partner with the software startup Proto to verify some of those bogus claims ahead of the election. And several news organizations pounced on the announcement.

“WhatsApp’s new tip line is a testing ground for fighting fake news on encrypted messaging,” The Washington Post reported.

“WhatsApp launches India tip line to curb fake news during polls,” Reuters wrote.

The use of tip lines has been a key strategy for combating misinformation on the encrypted messaging platform. WhatsApp users send potentially false information to fact-checking outlets’ institutional accounts; fact-checkers then verify or debunk the claim and send the resulting article back to the user. In countries like Brazil, that process has worked in the past.

There’s just one problem: WhatsApp’s new partnership with Proto, called “Checkpoint,” isn’t aimed at fact-checking false claims at all — it’s a research project.

BuzzFeed News first reported that on Wednesday, citing an FAQ posted on Checkpoint’s website, which states that the tip line is primarily a means “to gather data for research, and not a helpline that will be able to provide a response to every user.”

“When we provide a response to a user, we are providing a verification about the reliability of the original claim,” the FAQ reads. “This is not as intensive as the fact-checking that a journalism organization can provide.”

In a press release sent to Poynter, the move was billed as a way for users to send potential rumors to a tip line on WhatsApp. Then, Checkpoint’s “verification center” would respond to let users know if the information is true, false, misleading, disputed or out of scope.


Fact-checkers have long lamented WhatsApp’s opacity; because of its encryption, it’s impossible to track what’s being shared. Not even the tech company’s own staff knows how prevalent misinformation is.

According to WhatsApp’s press release, Checkpoint will use the tip line submissions to build a database of rumors about the Indian election. That database will then be used to conduct research commissioned by the company. The entire process will be assisted by the media consulting firm Dig Deeper and by Meedan, a software company whose content management platform uses the WhatsApp Business API to pull requests directly into the system.

“The goal of this project is to study the misinformation phenomenon at scale — natively in WhatsApp,” said Proto founders Ritvvij Parrikh and Nasr ul Hadi in the press release. “As more data flows in, we will be able to identify the most susceptible or affected issues, locations, languages, regions and more.”

In India, that problem takes on a more insidious context.

Last year, dozens of civilians were killed by lynch mobs after rumors spread about them on WhatsApp. In response, the government has called on the platform to do more to counter misinformation. Following the hiring of a “grievance officer” in August and a head of India operations in November, Tuesday’s announcement was billed as one more step in that direction.

“Getting this solution right in India will help us overcome some of the bandwidth challenges we’ve encountered in Brazil and we’re hoping that learnings here can aid fact-checking organizations in other countries,” said a WhatsApp spokesperson in an email to Poynter.

Misinformation is inciting violence around the world. And tech platforms don’t seem to have a plan to stop it.

But Checkpoint isn’t a fact-checking organization. When asked how the project will go about fact-checking potential misinformation, WhatsApp’s spokesperson sent Poynter Checkpoint’s FAQ.

When Poynter asked what methodology Checkpoint would use to verify claims, Fergus Bell, founder of Dig Deeper, sent a checklist that the project will use. That process includes steps like a reverse image search, looking for “evidence of manipulation” and the question “is the claim or statistic correct?”

Obviously, that’s much broader than the methodologies fact-checkers typically use to evaluate potentially false content. But that’s by design.

“This is not a fact-checking service, the primary focus is on research,” Bell said in an email.

When Poynter told Jency Jacob, managing editor of the Indian fact-checking site Boom Live, that Checkpoint would be doing the bulk of the verification work and not fact-checkers, he had a short response.

“Nonsense,” he said in a WhatsApp message.

Last month, Facebook held a meeting with its fact-checking partners in India. Jacob said that, during the meeting, Proto gave a presentation of the pipeline that it had developed to solicit and respond to rumor submissions on WhatsApp.

Jacob was under the impression that Checkpoint would send those queries to fact-checkers for verification and then share the resulting article. But the company’s FAQ says otherwise — and the project is already under fire for not scaling to the amount of misinformation on WhatsApp.

In its FAQ, Proto said that it may take up to 24 hours for the company to get back to users who submit a potentially false or misleading piece of content. But Wall Street Journal reporter Newley Purnell has waited more than 30 hours for a response after submitting several messages to the tip line.

“So journalists in India are trolling them,” Jacob said. “None of the queries have got a reply.”

Aside from the tip line, WhatsApp has taken other piecemeal approaches to limit the spread of misinformation ahead of the Indian election, which is scheduled to begin on April 11 and end on May 23.

On Wednesday, TechCrunch reported that the company added a new privacy feature that lets users control who has permission to add them to groups. Last month, WhatsApp announced that it was testing a feature that lets users do a reverse image search within the app. And over the summer, the platform launched a slew of changes to message forwarding aimed at slowing the virality of shared content.

Those moves have been met with generally positive reactions from fact-checkers. But this week’s announcement is the exception.

“What about new pieces of info which will take time to write?” Jacob said. “They really messed this up.”

Daniel Funke is a staff writer covering online misinformation for PolitiFact. He previously reported for Poynter as a fact-checking reporter and a Google News Lab…
Daniel Funke
