June 13, 2017

Comments have seen better days.

In the last few years, several news organizations — NPR, Reuters and Recode, among them — have eliminated their comments sections because the internet can be a dark and terrible place.

One by one, financially challenged news organizations have thrown up their hands in exasperation and declared that they have better things to do with their precious time and money than wade through a swamp of toxicity to find a few reader-produced gems.

Today, The New York Times is going in the opposite direction. Using machine learning technology from Jigsaw, a technology incubator that belongs to Google parent company Alphabet, the Gray Lady is beginning a major expansion of the number of stories it opens to comments.

In doing so, it’s pointing toward a future where algorithms, not humans, do the majority of toxic swamp-wading.

“By the time we pull this off, we’re going to be somewhere around having a quarter of all Times articles available for comments,” said Bassey Etim, the community desk editor at The New York Times. “And then, over the next bunch of months, we’re going to be pushing that up as high as we can do wisely.”

Here’s how it’ll work, according to Etim: Right now, about 10 percent of all New York Times stories are open for comments. Beginning today, The New York Times is using Perspective, an algorithmically driven application from Jigsaw, to separate toxic comments from healthy ones. That will free up The Times’ team of community editors to sort through comments on 25 percent of all stories. Eventually, the team hopes to push that number to 80 percent.

The remaining 20 percent are stories that likely wouldn’t be open to commenters anyway — breaking news articles with limited information that would invite speculation as to the identity of the perpetrator, say, or obituaries of notorious figures likely to be mocked, Etim said.

Previously, New York Times community editors reviewed each comment in the order it came in, Etim said. Now, they will evaluate comments based on toxicity scores assigned automatically by Perspective.

“We’re instead prioritizing comments based on the analytic likelihood that they will be approved,” Etim said. “Basically, what Moderator does is plot out all moderated comments as dots on a histogram chart.”
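The workflow Etim describes — score each comment, plot the scores as a histogram, and surface the comments most likely to be approved first — can be sketched in a few lines of code. This is an illustration only: the field names, bucket width and scores below are hypothetical, not the Times’ actual system.

```python
from collections import Counter

def histogram_buckets(scored_comments, bucket_width=10):
    """Group comments into histogram buckets by approval likelihood (0-100)."""
    buckets = Counter()
    for _, likelihood in scored_comments:
        # Round down to the nearest bucket edge, clamping 100 into the top bucket.
        bucket = min(int(likelihood) // bucket_width * bucket_width,
                     100 - bucket_width)
        buckets[bucket] += 1
    return dict(buckets)

def prioritized_queue(scored_comments):
    """Order the moderation queue so likely-approvable comments come first."""
    return sorted(scored_comments, key=lambda c: c[1], reverse=True)

# Hypothetical comments with model-assigned approval likelihoods.
comments = [("Great reporting.", 92.0),
            ("You idiot.", 8.5),
            ("I disagree, because...", 71.0)]
print(prioritized_queue(comments)[0][0])  # "Great reporting."
```

A human moderator can then work down the queue from the top, approving in bulk from the high-likelihood buckets and spending scarce attention on the ambiguous middle.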

Here’s an example that shows how Perspective scores comments. The New York Times uses a modified version called Moderator, which was developed collaboratively with Jigsaw.

An example of Perspective, Jigsaw’s automated toxicity screener

Perspective helps publishers separate the good from the bad at a time when people are leaving online conversations in droves, Jared Cohen, CEO of Jigsaw, said in a statement.

“People are either leaving the conversation entirely or comments sections are being shut down,” he said. “The power of machine learning offers us an opportunity to tip the scales and reverse this trend. This is why we built Perspective, technology that puts the power of machine learning into the hands of publishers and platforms to host better discussions online.”

Although Perspective is currently open to a select group of publishers, Jigsaw plans to widen access to the API to a larger circle of publishers, platforms and developers in the coming months, with the eventual goal of open-sourcing the technology, said Dan Keyserling, the head of communications at Jigsaw.
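For developers who do get access, the public Perspective API works over a simple JSON request and response: a client POSTs a comment to the comments:analyze endpoint and reads back a toxicity probability between 0 and 1. The sketch below shows those payload shapes; the endpoint and field names follow Jigsaw’s public documentation, but the score value is made up and no real request is sent.

```python
# Request/response shapes follow Jigsaw's public Perspective API
# (commentanalyzer.googleapis.com); treat the exact details as illustrative.
API_URL = "https://commentanalyzer.googleapis.com/v1alpha1/comments:analyze"

def build_request(text):
    """Build the JSON body asking Perspective to score one comment for toxicity."""
    return {
        "comment": {"text": text},
        "requestedAttributes": {"TOXICITY": {}},
    }

def toxicity_score(response):
    """Pull the 0.0-1.0 toxicity probability out of an analyze response."""
    return response["attributeScores"]["TOXICITY"]["summaryScore"]["value"]

# A response of the documented shape (the value here is invented):
sample_response = {
    "attributeScores": {
        "TOXICITY": {"summaryScore": {"value": 0.92, "type": "PROBABILITY"}}
    }
}
print(toxicity_score(sample_response))  # 0.92
```

A publisher’s moderation tool would POST `build_request(...)` to the endpoint with an API key, then route each comment based on the returned score — which is essentially what the Times’ Moderator does at scale.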

“Toxicity online is a broad and global problem,” he said. “Any user of the internet understands that toxicity online diminishes the ability to host civilized discussions online.”

Benjamin Mullin is the managing editor of Poynter.org. He previously reported for Poynter as a staff writer, Google Journalism Fellow and Naughton Fellow, covering journalism…