Diego Maradona isn’t dead, and he’s willing to pay $10,000 to figure out who said he was.
The famous retired Argentine soccer star was the subject of an internet death hoax in late June. The rumor claimed that Maradona had suffered a cardiac arrest and died after Argentina’s World Cup match against Nigeria.
Now the 57-year-old is offering a reward to anyone who can identify the source of the rumor.
By the way, Diego Maradona is not dead. He's just fine. pic.twitter.com/gvFNMfRWrh
— Oh My Goal (@OhMyGoal_US) June 28, 2018
Death hoaxes are a classic form of online fakery. But this one is different: The hoax was distributed as an audio message on WhatsApp.
Over the past year, fake audio messages have been slowly making the rounds on WhatsApp, a Facebook-owned messaging platform with more than 1 billion users in more than 180 countries. Gisela Pérez de Acha first noticed them in the aftermath of an earthquake in Mexico City in September, when she helped run the collaborative verification project Verificado 19S.
“I think it was a colleague or friend that forwarded me a WhatsApp audio message regarding … a kindergarten school that collapsed during the earthquake,” the lawyer and activist told Poynter. “It was this person saying, ‘I’m outside of the collapsed kindergarten school, there’s this and this happening and we need this.’ It was really paranoid and a very alarmist tone.”
Aside from the tone, the message stood out to Pérez de Acha because it was well-produced — there was no background noise, even though the messenger claimed to be standing outside a collapsed building. Then she received another audio message claiming a city-wide earthquake alarm was set to go off again, and that the information was verified by Verificado 19S.
“I got that from a colleague telling me this could be a PR mess, please fix it. I was trying to call media,” Pérez de Acha said. “It turned out to be a fake, but what was striking to me was that all the other rumors didn’t seem to have malicious intent behind them, but this did.”
For WhatsApp users, audio messages are a popular way to update their family and friends without having to type anything. Similar features exist for platforms like Messenger and iMessage, but the medium has a special significance on WhatsApp, whose predominantly non-U.S. user base uses it to communicate with almost everyone in their lives.
“That’s where fake news thrives — through WhatsApp,” Alba Mora, executive producer of AJ+, told Poynter. “Especially in Latin America, and Mexico in particular, we use WhatsApp for everything. I have people sending me PDFs in WhatsApp, you have video calls with your mom in WhatsApp, you have work groups in WhatsApp — everything goes through WhatsApp.”
That intimacy, along with the platform’s end-to-end encryption, is part of what makes it hard to fact-check viral claims on WhatsApp. The company says the average group size is six and that about 90 percent of messages are sent between two users, so when people receive a message, they might be more likely to believe it than if they saw the same content on a platform like Facebook or Google.
“Hoaxes travel in all sorts of formats and platforms,” said Tania Montalvo, editor of Animal Político. “And WhatsApp works differently as a platform — so it’s not like the algorithm prioritizes viral content. People share these things.”
Audio messages just pile onto that already fraught misinformation ecosystem. During the recent Mexican election campaign, Mora led a team for the collaborative fact-checking project Verificado 2018 — inspired by Verificado 19S — that exclusively fact-checked claims on WhatsApp. The team solicited rumors from readers through an institutional account, then published its debunks to its status, a feature of WhatsApp Business accounts.
During that process, she said she saw plenty of fake audio messages go viral.
In one case, Verificado 2018 received a four-minute audio message about AMLOFest, the event hosted by then-presidential candidate Andrés Manuel López Obrador (commonly known as AMLO) to mark the close of his campaign. The message claimed that a throng of attendees had gathered at a store to buy TVs with prepaid cards. It was probably intended to criticize AMLO’s followers and to falsely accuse his party, Morena, of buying votes, a common practice in Mexico.
Mora’s team fact-checked the claim and found that the event was actually a sale for beneficiaries of a social program in Coyoacán. So they distributed their verdict in kind.
Meanwhile, on the other side of the world, a similar trend was underway during elections in India.
During the run-up to Karnataka’s state elections in May, fake audio messages designed to exacerbate divisions between Hindus and Muslims began circulating on WhatsApp, The New York Times reported. After the spread of a video claiming to show a Muslim mob attacking a Hindu woman (it didn’t), anonymous audio messages urged both religious groups to vote for opposing parties because of the footage.
Despite the reports, Govindraj Ethiraj said fake audio messages aren’t causing any problems for Indian fact-checkers — yet. The vast majority of what they’ve seen are videos taken out of context or doctored images.
“I don’t think we’ve seen too many fake audio messages. Fake videos for sure — usually doctored images, videos that are sliced and diced in a manner that are a completely different story or impression,” said the founder of Boom Live, an Indian fact-checking organization. “Maybe it’s because we’re not so much of an audio country.”
Still, the format presents a challenge for fact-checkers who rely on search engines like Google to find related claims, images and videos online. Montalvo said audio messages are totally different because they aren’t easily searchable, and they don’t have any visual clues that give away when and where they could have been created.
WhatsApp announced on Tuesday a new feature that labels forwarded messages on the platform — a move that, when paired with the company’s efforts to work with fact-checkers and academics, is aimed at limiting the amount of misinformation that people share. WhatsApp told Poynter in an email that the label will apply to all types of messages.
While it’s still too early to tell whether that label will have a measurable effect on the amount of fake media people share (WhatsApp told Poynter the feature will give it greater insight into how often messages are forwarded), fact-checkers can still do what they’ve always done to check audio messages.
"It just comes down to traditional fact-checking and verification: What claims made in the audio message can be confirmed, and which not?" said Christiaan Triebert, a digital investigator and trainer at Bellingcat, in an email to Poynter.