By Tai Nalon
December 17, 2018

For every technological solution we create to fight misinformation, a new problem seems to arise.

This is the main lesson from 2018 for Aos Fatos, the Brazilian fact-checking organization that I lead. We covered this year’s general election with a particular focus on developing bots and artificial intelligence in order to tackle fake news in a hyperconnected society.

The Brazilian election season was marked by an unprecedented industrial use of disinformation as a campaign strategy and by the lack of prompt, structured institutional responses to it.

Since the beginning of 2018, Brazilian fact-checkers, with the help of international counterparts like the IFCN, tried to schedule meetings with platforms, election authorities and Supreme Court justices in order to alert them to the risks of having an election tarnished by conspiracy theories, hate speech and fraudulent information weaponized mostly through WhatsApp.

While Facebook, Google and Twitter built more or less transparent strategies to combat fake news, WhatsApp remained silent. Its overall apathy toward the issue damaged Brazil’s informational environment, even though we repeatedly tried to open a channel of communication to build tech solutions together.

In Brazil, WhatsApp’s most relevant partner was Comprova, a coalition of 24 media outlets led by the Brazilian Association of Investigative Journalism and First Draft that was created to fact-check viral pieces of misinformation during the presidential campaign. Comprova used Zendesk, a customer service tool, to access WhatsApp’s API.

One researcher close to the coalition told me, however, that the tool is better suited to one-on-one conversations than to journalistic use.

“We went through the same steps and channels as an airline company that uses the API for customer service,” the researcher said, highlighting how hard it is to disseminate trustworthy, verified content at scale.

Aos Fatos opened a WhatsApp Business account in August to receive misinformation reports and send users verified content. Even though the intentions are good, the technology is poor: the app crashes, our journalists have to add thousands of users manually, and it syncs with only one device at a time. Even so, to get a sense of what kinds of misinformation were zinging around WhatsApp during Brazil’s election season, Aos Fatos crowdsourced more than 700 false or misleading posts shared on the app from its over 6,000 WhatsApp subscribers.

Since WhatsApp uses end-to-end encryption, we don’t know the frequency or reach of misinformation on the platform. What we do know is that those rumors distorted at least four main themes: statements made by politicians and celebrities, theories about the security of our electronic voting system, images from demonstrations for and against Bolsonaro, and results of opinion polls.

On Facebook, where reach can be measured, the misinforming posts we traced from WhatsApp were shared at least 3.5 million times from August through October.

For all the novelty of this industrial use of misinformation, the great storm we witnessed in 2018 traces its origins at least as far back as 2012, when researchers started to find traces of artificial actions, such as bots, used to spread false information on social media. There have been many visible episodes in Brazil since then, such as the influence of bots on 2016’s impeachment campaign, or the August 2017 campaign to pressure organizers of an art exhibition on gender and sexual diversity in the southern city of Porto Alegre to cancel the show.

These coordinated actions spread across the most popular social media platforms in Brazil in a structured way. The purpose of using bots and fake accounts across platforms is to create a perception that a piece of information is universal. If you read something you believe is true on WhatsApp, then go to Facebook to better understand what’s going on and see something similar in your timeline, why not believe it? It’s everywhere.

After Facebook partnered with fact-checkers in Brazil to verify potentially fake posts on its platform, the broader army of political misinformation migrated to WhatsApp, an application that the Superior Electoral Court’s regulations ignored even as they set rules for political propaganda on Facebook and Twitter.

In closed spaces such as WhatsApp, however, the strategies to disrupt the informational environment are different. For misinformation to take hold, it can’t rely on a handful of viral posts (as on Facebook); it needs a huge number of similar messages on the same subject, each with minor changes.

Automation could help address this challenge.
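To illustrate the kind of automation that could help, here is a minimal sketch, in Python, of one common technique for spotting that flood of near-identical messages: grouping texts whose word pairs overlap heavily. The sample messages and the 0.5 similarity threshold are illustrative assumptions, not anything WhatsApp or Aos Fatos actually runs.

```python
# A minimal sketch: flag near-duplicate messages by word-pair overlap.
# Sample texts and the 0.5 threshold are illustrative assumptions.
import re

def shingles(text, n=2):
    """Return the set of n-word sequences in a message, ignoring punctuation."""
    words = re.sub(r"[^\w\s]", "", text.lower()).split()
    return {" ".join(words[i:i + n]) for i in range(len(words) - n + 1)}

def jaccard(a, b):
    """Share of shingles two messages have in common."""
    return len(a & b) / len(a | b) if a and b else 0.0

def near_duplicate_pairs(messages, threshold=0.5):
    """Pair up messages that look like variations of the same text."""
    sets = [shingles(m) for m in messages]
    return [
        (i, j)
        for i in range(len(sets))
        for j in range(i + 1, len(sets))
        if jaccard(sets[i], sets[j]) >= threshold
    ]

reports = [
    "URGENT: the voting machines were rigged, share before they delete this",
    "urgent!! voting machines were rigged, share it before they delete this",
    "New poll shows the candidate leading by 20 points",
]
print(near_duplicate_pairs(reports))  # [(0, 1)]
```

A cluster of hundreds of such variations on the same claim would be a strong signal worth routing to fact-checkers.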

In a Columbia Journalism Review article published in August, Himanshu Gupta and Harsh Taneja proposed some solutions for how WhatsApp could tackle the misinformation problem on its platform without completely breaking encryption. One of these strategies converges with what Aos Fatos’ @fatimabot does on Twitter: detecting patterns of massively shared content, such as similar URLs.
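To make that idea concrete, here is a minimal sketch of URL-pattern detection, assuming incoming messages arrive as plain strings; it is not @fatimabot’s actual implementation, and the normalization rules and threshold are illustrative assumptions.

```python
# A minimal sketch: surface links that keep recurring across messages.
# Normalization rules and the threshold of 3 are illustrative assumptions.
import re
from collections import Counter
from urllib.parse import urlparse

URL_RE = re.compile(r"https?://\S+")

def normalize(url):
    """Collapse variants of the same link: lowercase, drop tracking params."""
    parsed = urlparse(url.lower().rstrip(".,;!?)"))
    return parsed.netloc + parsed.path.rstrip("/")

def recurring_urls(messages, threshold=3):
    """Return normalized URLs seen at least `threshold` times."""
    counts = Counter(
        normalize(u) for msg in messages for u in URL_RE.findall(msg)
    )
    return {url: n for url, n in counts.items() if n >= threshold}
```

Links that cross the threshold could then be queued for human verification.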

WhatsApp should consider doing the same at least with images and videos, two of the most widely shared types of content on its platform, as Truthbuzz Fellow Sérgio Spagnuolo wrote in October.

WhatsApp’s encryption security paper states that the company identifies each attachment with a cryptographic code. Whenever a downloaded attachment is forwarded through the app, WhatsApp checks whether a file with the same cryptographic code already exists on its server. This basically means that the content is stored on the company’s servers.

The purpose is to make content sharing more efficient, without requiring a user to upload and download the same file every time. There is also some evidence that WhatsApp might be able to track text too, in light of actions its engineers have taken to combat spam. Through this same approach, WhatsApp could not only track massive misuse of its technology but actually learn what’s going viral, without needing to break the entirety of its encryption.
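If those per-attachment identifiers exist server-side, as the security paper suggests, counting forwards of a file would require only its hash, never its content. Here is a minimal sketch of that idea; the names and the viral threshold are my own assumptions, not anything WhatsApp has described.

```python
# A minimal sketch: count forwards of an attachment by hash alone.
# The server never inspects the bytes; names and threshold are assumptions.
import hashlib
from collections import Counter

forward_counts = Counter()

def record_forward(encrypted_blob: bytes, viral_threshold: int = 10_000) -> bool:
    """Register one forward of a file; report True once it looks viral."""
    digest = hashlib.sha256(encrypted_blob).hexdigest()
    forward_counts[digest] += 1
    return forward_counts[digest] >= viral_threshold
```

A file that crosses the threshold could then be surfaced to fact-checkers or rate-limited, without anyone decrypting a single private chat.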

Also, if something is going viral, does it have to be encrypted?

By analyzing distribution patterns, combined with user participation and the content verification that fact-checkers provide, WhatsApp would be able not only to detect false information, hate speech and other sorts of deception spread through its app, but also to alert only the users who shared a given piece of misinformation, without itself acting like a spammer, just as @fatimabot does on Twitter.

WhatsApp has also failed to explain measures that seem to contradict its encryption claims. The company says it doesn’t track, at any level, content shared inside its platform because of its sophisticated encryption technology. But then how could WhatsApp ban, as it reportedly did last October, 100,000 accounts in Brazil for spreading fake news and spam? The company didn’t explain at the time, and as a fact-checker I find it hard to square its denials with its actions.

There are solutions at scale to fix WhatsApp’s informational environment, but none of them have been taken seriously by the platform yet. We hope that 2019 will be a year of change.

