September 7, 2016

It is easy to get excited about the future of automated fact-checking, with all the promising projects being led by journalists, tech companies and academics. But these are early days for the field, and many of these projects are unlikely to yield anything practically or commercially viable.

Which is precisely why such pilot projects are instructive. We looked back at two of the most hyped automated fact-checking tools of recent years to ask what they got right, where they came up short and what this tells us about future development.

Specifically, we focused on Truth Goggles, a master's thesis project designed to highlight questionable claims and display relevant findings from fact-checkers, and The Washington Post's Truth Teller, which compared video against fact-check databases. Below are five lessons about automated fact-checking gleaned from a look at both products.

1: Know when the technology’s not ready.

Dan Schultz, now senior creative technologist at the Internet Archive, built Truth Goggles from 2010 to 2012 as part of his master’s thesis at MIT Media Lab. The idea: As a user read a story in a control panel, the app’s “matching engine” would search the text for paraphrases of previously fact-checked claims. The app could then bring up the relevant fact checks.

But at the time, natural language processing wasn't sophisticated enough to deliver on this promise. A pilot study of his control panel found that the interface was successful in changing beliefs. But he had to run the test using claim-fact pairs that he chose by hand.

“The matching engine was lightly explored and partially implemented, but ultimately did not yield strong results,” Schultz noted in his thesis.

Today, Schultz said, the situation is different.

“Algorithms around (natural language processing) and the tools out there to help form associations between random text and the corpus of text are more robust,” he said.

In particular, he cites the startup Luminoso, whose earlier products he used for Truth Goggles.

“What they do is kind of the exact problem that needs to be solved — which is, I have a corpus of small pieces of information, and I have to identify when those small pieces of information are relevant or can be found in a large narrative piece of information.”

What such products don’t yet understand is the nuance and context that can change the meaning of a statement.
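To make the matching problem concrete, here is a minimal sketch, in Python, of the kind of shallow word-overlap matcher a naive system might use. The claims, verdicts and threshold below are invented for illustration; they are not Truth Goggles' or Luminoso's actual approach, and this sort of surface matching is exactly what misses the nuance and context described above.

```python
# Toy illustration of the matching problem Schultz describes: given a small
# corpus of already fact-checked claims, find sentences in a long article that
# look like restatements of them. Claims, verdicts and threshold are invented;
# real matchers need paraphrase-aware NLP rather than raw word overlap.
import re

# Hypothetical database mapping claim text to a verdict and source.
FACT_CHECKED_CLAIMS = {
    "the stimulus bill created zero jobs": "False (example fact-checker)",
    "crime has doubled in the last decade": "Mostly False (example fact-checker)",
}

def tokens(text):
    return set(re.findall(r"[a-z']+", text.lower()))

def jaccard(a, b):
    # Word-overlap similarity between two token sets (0.0 to 1.0).
    return len(a & b) / len(a | b) if a | b else 0.0

def flag_sentences(article, threshold=0.5):
    """Yield (sentence, claim, verdict) for sentences resembling known claims."""
    for sentence in re.split(r"(?<=[.!?])\s+", article):
        sentence_tokens = tokens(sentence)
        for claim, verdict in FACT_CHECKED_CLAIMS.items():
            if jaccard(sentence_tokens, tokens(claim)) >= threshold:
                yield sentence, claim, verdict

article = "The senator again insisted that the stimulus bill created zero jobs."
for sentence, claim, verdict in flag_sentences(article, threshold=0.4):
    print(f"Flagged: {sentence!r} resembles {claim!r} -> {verdict}")
```

A matcher like this would flag the example sentence, but it would just as happily flag a sentence that quotes, negates or debunks the claim, which is the nuance problem in miniature.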

And there's another big pinch point: converting speech to text so that audio and video can be fact-checked.

“Automated fact-checking has been made possible by truly brilliant research in (natural language processing), but that’s one thing not cracked to that level,” said Will Moy, director of U.K. fact-checking organization Full Fact, which recently published a report on the state of the art in automated fact-checking.

2: Don’t bite off more than you can chew.

The Washington Post was perhaps particularly ambitious four years ago when it unveiled Truth Teller, an app it hoped would eventually fact-check live TV. The paper asked the Knight Foundation for $700,000 but received $50,000. So, Poynter reported in 2012, the dream of end-to-end automated fact-checking — convert speech to text, compare to a fact-check database, report back to the user — was no longer tenable.

But in January 2013 the Post debuted a prototype that aimed to do all that, built in three months, with input from Schultz. The fact-checking was even live, in a sense: “Each time the video is played the fact checking starts anew,” then-executive producer for digital news Cory Haik wrote in a blog post. Haik said more technical work was needed, but she had high hopes for the project: “Can this be applied to streaming video in the future? Yes. Can this work if someone is holding up a phone to record a politician in the middle of a parking lot in Iowa? Yes, we believe it can.”
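For a sense of what "all that" means in code, here is a heavily simplified sketch of the three-stage flow Truth Teller aimed for: transcribe speech, match the transcript against a fact-check database, and report back to the viewer. Every function, claim and URL below is a hypothetical stand-in, not the Post's actual implementation.

```python
# Hypothetical sketch of an end-to-end pipeline: speech-to-text, claim
# matching, reporting. All names and data are stand-ins for illustration;
# in practice the transcription stage was the weakest link and humans had
# to fill the gaps.
from dataclasses import dataclass
from typing import Optional

@dataclass
class FactCheck:
    claim: str
    verdict: str
    source_url: str

# Stand-in for a database of previously published fact checks.
DATABASE = [
    FactCheck("unemployment is at a record high", "False",
              "https://example.org/check/1"),
]

def transcribe(audio_chunk: bytes) -> str:
    """Stand-in for a speech-to-text service (the error-prone stage)."""
    return "The governor said unemployment is at a record high."

def match(transcript: str) -> Optional[FactCheck]:
    """Naive substring match; real systems must handle paraphrase and context."""
    for fact_check in DATABASE:
        if fact_check.claim in transcript.lower():
            return fact_check
    return None

def run_pipeline(audio_chunk: bytes) -> None:
    transcript = transcribe(audio_chunk)
    hit = match(transcript)
    if hit:
        print(f'"{hit.claim}" -> {hit.verdict} ({hit.source_url})')
    else:
        print("No previously checked claim detected in this segment.")

run_pipeline(b"")  # placeholder audio bytes
```

Even this toy version shows why each stage compounds the others' errors: a garbled transcript never reaches the matcher, and a sloppy matcher turns a clean transcript into a wrong verdict.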

The Post beta-launched Truth Teller in September 2013, and for a while it seemed like a going concern. Visitors to PostTV could see the app call up fact checks on debates, and the Post partnered with the Texas Tribune to deliver coverage on Texas politicians, such as Ted Cruz. Posties even ran Truth Teller on a few movie trailers.

But the Post readily admitted that Truth Teller wasn't completely running under its own steam. At beta launch, the paper wrote that it had to keep teaching its algorithm the various ways a false claim could be phrased. And it said the speech recognition could cause some “head-scratching transcription errors,” meaning the technology had to be supplemented with human reporters and editors.

Soon after that, things went a little quiet. A search on the Post’s video page shows no Truth Teller videos after 2014, though reporters did post a few Truth Tellers on movie previews as late as this February.

The Washington Post never officially announced any hold or end to the project. And it declined repeated requests to share its lessons learned for this article.

Molly Gannon Conway, communications manager for The Washington Post, told Poynter that “Truth Teller launched in beta and was intended to be an experiment, and is something we haven’t put resources toward in a couple of years.”

Instead, she said, the Post’s focus is on the Fact Checker column written by Glenn Kessler and Michelle Ye Hee Lee.

None of which is surprising to the fact-checking technology experts we spoke to.

“Truth Teller was trying to go from zero to 60 in one jump. It’s much easier to do that in a demo project than take that and make a live piece of software that’s in routine use,” Moy said.

In contrast, the Full Fact report argues, useful tools will come about more quickly if each development team focuses right now on addressing just one or two of the four fact-checking steps: monitoring, spotting claims, checking claims, and creation/publication.

Both Schultz and Knight Foundation director of media innovation Chris Barr identified the University of Texas at Arlington’s claim-spotting tool ClaimBuster as a project that’s taken on a useful and automatable but limited part of the fact-checking process, and seems to be doing it well.

3: Psychology is key.

Schultz said underdeveloped natural language processing capabilities weren’t the only hurdle he faced in developing Truth Goggles. One that he really focused on as he developed the app interface was “the human challenge”: As a growing body of social science research shows, people are unlikely to accept a fact check that challenges their identity or worldview. Many political fact checks fall in that category.

Schultz’s response to that problem was to take a more oblique approach. His technology wasn’t able to declare whether a given statement was true or false — nor, he said, would he want it to. Instead, the app asked test users, “Are you sure this is accurate?” Schultz argues that simply prompting critical thinking about controversial subjects could be a more promising approach than telling people, “this is true, because PolitiFact said it’s true.”

Plus, an automated finding of “True” could have a serious drawback. “If it’s wrong, it’s really bad. It’s lying to a user and users that trust it are now less informed,” Schultz said.

4: Think about your business model. Hard.

As important as it is to discover what is technologically possible, there also comes a point when continued work on a prototype is fruitless if no one’s going to pay for the product.

Fact-checking tools can follow a number of business models, each with its own problems. One approach is to create a consumer-facing app. This raises the question of whether consumers will actually spend money on the tool or would even download a free browser extension. After all, most people think it’s everyone else who’s misinformed. Schultz argues that such a product may do much better outside the deeply personal sphere of political beliefs — perhaps in health, science or finance.

Another option is creating tools to help fact-checkers and other journalists — also a tough sell for cash-strapped newsrooms. Moy said the tools Full Fact is working on should help its team produce many more fact checks at the same staff size, ultimately increasing the nonprofit’s effectiveness.

5: Don’t be afraid to trash it.

Part of the reason we’re able to dissect these early attempts at fact-checking technology is, of course, that their developers took a chance.

“With the prototype fund we are focused not on necessarily building production-ready finished products,” said Barr. “We’re really looking at people who are asking audacious questions…to hopefully get us to that point.”

Barr said Truth Teller was Knight’s first experiment with automated fact-checking, and as such helped the funder think about what’s possible now and what may be in the future.

And Moy notes that a willingness to kill your darlings is crucial for technological innovation. “It is an iron law of software development, coined by a guy called Fred Brooks who worked at IBM, that you will always make one prototype that you have to throw away — so you might as well do it deliberately.” He added that some of Full Fact’s own tools are on their third iteration, with earlier code thrown away.

But he said that the young science of automated fact-checking has already learned from some of its early experiments — and the time has come for more focused, coordinated efforts.

Tamar Wilner is a freelance journalist, researcher and consultant who writes about the media, misinformation and fact-checking. You can find her at www.tamarwilner.com and @tamarwilner.