March 12, 2019

A joint initiative of Harvard University and the Massachusetts Institute of Technology has awarded $750,000 to projects that aim to study how artificial intelligence can be used to improve journalism. Fact-checking was front and center.

Three of the seven projects that won the Ethics and Governance of AI Initiative’s challenge have to do with how AI can be used to combat misinformation. The projects were selected from more than 500 submissions and announced Tuesday in a press release and blog post. The anti-misinformation projects total $275,000 in winnings, more than a third of the total prize money awarded.

“These winners showcase both the opportunities and challenges posed by artificial intelligence,” said Tim Hwang, director of the Ethics and Governance of AI Initiative, in the release. “On one hand, the technology offers a tremendous opportunity to improve the way we work — including helping journalists find key information buried in mountains of public records. Yet we are also seeing a range of negative consequences as AI becomes intertwined with the spread of misinformation and disinformation online.”

The three winning anti-misinformation projects are aimed at the latter.

Fact-checking on WhatsApp

First is Tattle Civic Technologies, a startup that’s trying to connect WhatsApp users to fact checks in real time. The company is using $100,000 from the Ethics and Governance of AI Initiative to support existing fact-checking efforts by creating channels where users can submit potential misinformation. Then, the goal is for machine learning models to automatically categorize those tips and distribute relevant fact checks.
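
Tattle hasn’t published the details of its models, but the matching step it describes can be sketched in a few lines: compare an incoming tip against a corpus of published fact checks by text similarity, and surface the closest match if it clears a threshold. The sample data, the TF-IDF approach and the 0.2 cutoff below are all illustrative assumptions, not Tattle’s actual pipeline.

```python
# Illustrative sketch only: match a user-submitted tip to published fact checks.
# Tattle has not published its models; the data and threshold here are invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

fact_checks = [
    "No, drinking hot water does not cure dengue fever.",
    "Viral photo of flooded highway is from 2015, not this week's storm.",
]

tip = "Forwarded message says hot water cures dengue -- is this true?"

# Vectorize the published fact checks and the incoming tip together.
vectorizer = TfidfVectorizer(stop_words="english")
matrix = vectorizer.fit_transform(fact_checks + [tip])

# Compare the tip (last row) against every fact check.
scores = cosine_similarity(matrix[-1], matrix[:-1]).ravel()
best = scores.argmax()

# Only surface a match above a confidence threshold.
if scores[best] > 0.2:
    print(f"Possible match ({scores[best]:.2f}): {fact_checks[best]}")
else:
    print("No existing fact check found; route tip to human reviewers.")
```

A production system would also have to handle images, audio and the many languages tips arrive in, which is where the harder machine learning work presumably lies.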

Misinformation on the private messaging platform is notoriously difficult to debunk, as WhatsApp is end-to-end encrypted, meaning not even its own employees can see what’s being shared. Independent fact-checkers worldwide have gotten around that by asking their readers to message them potential examples of misinformation, which they fact-check and then redistribute to readers, asking them to share among their groups.


WhatsApp has taken some steps to make that process easier in recent months, including training on its WhatsApp Business app and limiting the number of message forwards to five worldwide. But, particularly in India, rumors on WhatsApp are still stoking tensions.

Detecting deepfakes

Second is the Rochester Institute of Technology, which is using $100,000 to develop methods for automatically detecting “deepfake” videos. Specifically, the university will combine audio, visual and language signals to make its detection system much harder to fool. The goal is then to generate an “integrity score” for each video that indicates to users how much it may have been altered.
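
RIT hasn’t said how the score will be computed. As a rough illustration of the fusion idea, the sketch below combines consistency signals from the three modalities into a single 0-to-100 number; the weights and inputs are invented placeholders, not the university’s method.

```python
# Illustrative sketch of an "integrity score" fusing per-modality signals.
# RIT has not published its approach; detectors and weights below are
# invented placeholders standing in for real audio, visual and language models.

def integrity_score(audio_consistency: float,
                    visual_consistency: float,
                    language_consistency: float) -> float:
    """Combine modality scores (each 0.0-1.0, higher = more consistent)
    into a single 0-100 integrity score shown to users."""
    # Weighting the visual channel most heavily is an assumption, since
    # most deepfake artifacts to date are visual.
    weights = {"audio": 0.3, "visual": 0.4, "language": 0.3}
    fused = (weights["audio"] * audio_consistency
             + weights["visual"] * visual_consistency
             + weights["language"] * language_consistency)
    return round(100 * fused, 1)

# Example: lip movement that doesn't match the audio track drags down the
# audio and visual scores, and the fused score falls with them.
print(integrity_score(audio_consistency=0.35,
                      visual_consistency=0.40,
                      language_consistency=0.80))  # -> 50.5
```

The design point is that a forger would have to fool several detectors at once: a convincing face swap with mismatched lip movement or implausible phrasing would still pull the fused score down.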

Deepfake videos, which have received a lot of press attention recently — including a dedicated session at South by Southwest — are altered videos that leverage AI to make it look like someone did or said something that never actually occurred. Motherboard first wrote about the phenomenon in December 2017 in the context of fake porn videos.

Despite the alarm, the immediate threat of deepfakes is fairly low. They are still hard to produce, and other detection projects, most notably from Matthias Nießner at the Technical University of Munich, are getting better at spotting patterns in manipulated videos.

Ethics and AI

The third project that won the Ethics and Governance of AI Initiative’s challenge is Argentina-based fact-checking outlet Chequeado. It’s using $75,000 to partner with journalists around Latin America to produce a series of investigative articles about the ethical considerations of AI in the region. Chequeado will also train journalists and create a guide for how to report on such technology.

While not explicitly an anti-misinformation project, the idea stems from Chequeado’s past experimentation with AI to bolster its fact-checking process. Last January, the outlet rolled out Chequeabot, a system that automatically identifies fact-checkable claims in the media and sends them to Chequeado staffers just in time for their weekly meeting.
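
Chequeado hasn’t detailed how Chequeabot spots claims, so the sketch below stands in with simple pattern rules: flag sentences containing percentages, years, large quantities or superlatives as candidates for fact-checkers. A real system would presumably use a trained classifier; everything here is an illustrative assumption.

```python
# Toy illustration of claim spotting, the kind of task Chequeabot performs.
# Chequeado has not published Chequeabot's internals; this rule-based filter
# is a stand-in for its model, flagging sentences that look checkable.
import re

CHECKABLE = re.compile(
    r"\d+(\.\d+)?\s*%"                          # percentages
    r"|\b\d{4}\b"                               # years
    r"|\b(million|billion)\b"                   # large quantities
    r"|\b(highest|lowest|doubled|tripled)\b",   # comparatives
    re.IGNORECASE,
)

sentences = [
    "Unemployment fell to 7.2% last quarter, the lowest since 2011.",
    "The minister greeted supporters outside the congress building.",
]

# Keep only sentences containing a quantifiable, checkable claim.
for s in sentences:
    if CHECKABLE.search(s):
        print("Flag for fact-checkers:", s)
```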

“We like to think that Chequeabot is part of the meeting because he or she can tell us what we should fact-check that week, usually very accurately,” said Editorial Innovation Director Pablo Martín Fernández at the time.

Chequeado has stepped up its investigative work in the past few months. It recently published an in-depth project about police shootings. Director Laura Zommer told Poynter in a message that it plans to do more in the future.
