August 21, 2020

From false claims that drinking warm water with lemon protects against the coronavirus to false reports of high contamination rates among NATO troops based in Latvia, the pandemic has proved fertile ground for all kinds of hoaxes and disinformation campaigns.

Between January and March, the Reuters Institute for the Study of Journalism found that the number of fact-checks rose by 900%, a figure that likely understates the growth of misinformation itself, since many false claims slip through the net.

Although media literacy is essential to turning the tide, automation and algorithms could help conduct fact-checking at scale. In his 2018 report, Lucas Graves identified two broad types of automated fact-checking: checks that verify claims by validating them against an authoritative source or against a story that has already been verified, and checks that rely on “secondary signals” such as stance detection, a computing technique that determines whether a piece of text agrees or disagrees with a given claim.

Here is an overview of journalistic uses and research projects looking at both aspects.

Verifying against a source or a previous fact-check

Squash: Duke University’s Reporters’ Lab has been experimenting with Squash, a computer program that transforms TV captions into strings of text, then matches them against a database of previous fact-checks. Squash’s goal is to verify politicians’ statements almost instantaneously, although its research team acknowledges that the program still requires human help to decide whether its findings should be broadcast.
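To illustrate the general idea behind this kind of matching (not Squash’s actual code), here is a minimal sketch that compares a transcribed claim with a small, invented catalog of previous fact-checks using word-overlap (TF-IDF) similarity. The database entries, threshold and function name are hypothetical.

```python
# Illustrative sketch only: matching a new claim against a catalog of
# previously fact-checked claims. The entries and threshold are invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical catalog of already-verified claims (claim text -> verdict)
fact_check_db = {
    "Drinking warm water with lemon protects against the coronavirus": "False",
    "The murder rate in the United Kingdom spiked in 2003": "Needs context",
}

def match_claim(new_claim, db, threshold=0.3):
    """Return the closest previously checked claim, if any, above a similarity threshold."""
    claims = list(db.keys())
    vectors = TfidfVectorizer().fit_transform(claims + [new_claim])
    similarities = cosine_similarity(vectors[-1], vectors[:-1]).flatten()
    best = similarities.argmax()
    if similarities[best] >= threshold:
        return claims[best], db[claims[best]], float(similarities[best])
    return None  # no sufficiently similar fact-check found

print(match_claim("Lemon water protects you from coronavirus", fact_check_db))
```

In practice, systems like Squash work with far larger archives and far richer language models, which is part of why a human still reviews the matches before anything reaches the screen.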

Full Fact: The London-based fact-checking organization Full Fact can also spot dubious claims in TV subtitles by matching them against its own catalog of verified fact-checks, and it uses reliable data, such as government statistics, to verify unchecked statements.
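The second part of that workflow, comparing an unverified figure with official data, can be sketched just as simply. The function, tolerance and example numbers below are illustrative assumptions, not Full Fact’s implementation.

```python
# Illustrative sketch only: comparing a claimed figure with an official
# statistic. The tolerance and example numbers are invented for demonstration.
def check_numeric_claim(claimed_value, official_value, tolerance=0.05):
    """Flag a claimed figure as consistent or not with an official statistic."""
    relative_error = abs(claimed_value - official_value) / max(abs(official_value), 1e-9)
    return "consistent" if relative_error <= tolerance else "inconsistent"

# e.g. a speaker cites an unemployment rate of 7% when official data says 6.8%
print(check_numeric_claim(7.0, 6.8))   # within the 5% relative tolerance
print(check_numeric_claim(12.0, 6.8))  # well outside the tolerance
```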

But even reliable data needs to be handled with care. In Graves’ report, Full Fact’s founding head of automation stressed that official figures can easily be taken out of context: the murder rate in the United Kingdom appeared to spike in 2003, for instance, only because murders committed by a notorious serial killer in previous years were added to the official statistics that year.

Chequeabot: Like Squash and Full Fact, Chequeabot, an initiative of the Argentinian fact-checking organization Chequeado, automatically scans national media for controversial statements. It then matches them against an existing database and creates text files that fact-checkers can share on social media. Chequeabot is hampered, however, by the scarcity of raw data in Argentina, prompting Chequeado to pursue partnerships with the government as well as with universities, think tanks and unions.

IFCN’s Chatbot: In the midst of the pandemic, the International Fact-Checking Network assembled a database of fact-checks that now contains more than 7,000 entries in more than 40 languages. In May, the fact-checking alliance launched a WhatsApp chatbot that digs through that database to respond to a user’s keyword request. First available in English, the chatbot now also works in Spanish, Hindi and Portuguese.
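A keyword lookup of that kind can be sketched in a few lines. The records and field names below are invented for illustration; the IFCN database and chatbot are of course far more elaborate.

```python
# Illustrative sketch only: a keyword search over a small, invented set of
# fact-check records, loosely mimicking how a chatbot might answer a query.
fact_checks = [
    {"claim": "Warm lemon water cures COVID-19", "verdict": "False", "language": "en"},
    {"claim": "5G networks spread the coronavirus", "verdict": "False", "language": "en"},
    {"claim": "El agua con limón cura el COVID-19", "verdict": "Falso", "language": "es"},
]

def search_fact_checks(keyword, records, language="en"):
    """Return fact-checks in the requested language whose claim mentions the keyword."""
    keyword = keyword.lower()
    return [r for r in records
            if r["language"] == language and keyword in r["claim"].lower()]

print(search_fact_checks("lemon", fact_checks))
```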

Stance detection approaches

The University of Waterloo: A research team at the University of Waterloo, in Canada, is using stance detection to build a tool that flags potential fake news by comparing claims with similar posts and stories. The researchers trained algorithms on the semantics found in labeled data, and the system correctly assessed whether a story supported a claim about nine times out of 10. They envision it as an assistant that filters out likely fake content, helping journalists focus on claims worth investigating.
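As a rough illustration of what a stance classifier does (not the Waterloo team’s model), the sketch below trains a simple text classifier on a handful of invented claim/post pairs labeled “agree” or “disagree.” Real systems rely on far larger datasets and much richer representations of meaning.

```python
# Illustrative sketch only: a toy stance classifier. The training pairs and
# labels are invented; real stance-detection systems use far more data.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Each example pairs a claim with a related post, labeled with whether the
# post agrees or disagrees with the claim.
training_texts = [
    "Claim: lemon water cures COVID-19. Post: doctors confirm lemon water kills the virus.",
    "Claim: lemon water cures COVID-19. Post: health authorities say lemon water has no effect.",
    "Claim: masks reduce transmission. Post: new studies show masks lower the spread of the virus.",
    "Claim: masks reduce transmission. Post: this video claims masks make no difference at all.",
]
labels = ["agree", "disagree", "agree", "disagree"]

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(training_texts, labels)

# Predict the stance of a new claim/post pair (output quality depends entirely
# on the training data, which here is only a toy example).
print(model.predict([
    "Claim: lemon water cures COVID-19. Post: there is no evidence lemon water helps."
]))
```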

MIT: One problem with stance detection, though, is that it tends to reproduce our own biases about language. Negative statements, for instance, are viewed as more likely to convey inaccurate content, while affirmative ones are generally associated with truth. An MIT research team documented this while testing algorithmic models on existing datasets, which prompted it to develop new models. The team also called attention to the problem of claims that are true at one moment in time but no longer accurate later on.

In his report, Graves also pointed to other cues that could help debunk false information at scale, ranging from “stylistic features, like the kind of language used in a social media post or a supposed news report” to “the network position of a source” or “the way a particular claim or link propagates across the internet.”

But as advanced as automated solutions are, they still run up against the many reasons we are drawn to believe fake news in the first place, whether biased reasoning, distracted attention or repeated exposure. There is also the risk of triggering the “backfire effect,” the notion that when a claim strongly aligns with someone’s beliefs, exposure to a correction can actually reinforce those views.

In the end, automated fact-checking will only be successful if closely intertwined with media literacy.

Samuel Danzon-Chambaud is a Ph.D. researcher on the JOLT project, which has received funding from the European Union’s Horizon 2020 research and innovation program under the Marie Skłodowska-Curie grant agreement No 765140.
