Editor's note: This article has been updated with a response from Twitter.
Over the past couple of years, Twitter has done the bare minimum to fight fake news while avoiding the kind of negative press that has plagued Facebook. And for a while, that strategy worked — until now.
This week, pretty much every major technology platform took action against Alex Jones, a notorious conspiracy theorist and host of InfoWars. It came after nearly a month of coverage from media and tech reporters about InfoWars’ continued existence on the platforms, in spite of being repeatedly debunked by fact-checkers.
On Sunday, Apple took down the bulk of the site’s podcasts from iTunes and the Podcasts app, citing a violation of the company’s hate speech policies — specifically in the way Jones had talked about immigrants, Muslims and transgender people. (Apple has still not removed the InfoWars app.) Facebook and YouTube, which had offered varying reasons for keeping Jones on their platforms in the past, quickly followed suit, as did Spotify and even MailChimp.
Notably absent from that group is Twitter, where InfoWars and Jones have a combined following of nearly 1.3 million.
In a tweet, CEO Jack Dorsey defended the decision to keep Jones and InfoWars on Twitter, saying that neither had violated the platform’s rules and that the onus should be on journalists to fact-check them.
Accounts like Jones' can often sensationalize issues and spread unsubstantiated rumors, so it’s critical journalists document, validate, and refute such information directly so people can form their own opinions. This is what serves the public conversation best.
— jack (@jack) August 8, 2018
And journalists have taken the tech company to task over it.
this is where you lost me
— Caroline Moss (@socarolinesays) August 8, 2018
What is it that you think journalists do? Spend all day combing Twitter to fact check Alex Jones? That's good for Twitter but not for democracy. There's a bigger world out there, @Jack
good night https://t.co/1zBOwykrgu
— CeciliaKang (@ceciliakang) August 8, 2018
Attention @jack, Twitter terms of service expressly forbid posters to "in any way use Twitter to send altered, deceptive, or false source-identifying information." Isn't this "false source-identifying information"? https://t.co/ZYX7FmlrPk
— (((JonathanWeisman))) (@jonathanweisman) August 8, 2018
Others pointed out that Twitter’s policies are very similar to those of other platforms, notably its rule against “abuse and hateful conduct,” and that the company didn’t seem to be applying them uniformly. Even Twitter’s own former head of global policy communications disputed Dorsey’s reasoning, saying in a tweet that keeping Jones and InfoWars on the platform doesn’t make sense.
.@jack, please don’t blame the current state of play on communications. These decisions aren’t easy, but they aren’t comms calls and it’s unhelpful to denigrate your colleagues whose credibility will help explain them 1/4 https://t.co/IKo5UiWiWH
— Emily Horne (@emilyjhorne) August 8, 2018
The fracas over Jones illustrates a lot, including how good reporting and peer pressure can actually force the platforms to act. And while the reasons that Facebook, Apple and others banned Jones and InfoWars have to do with hate speech, Twitter’s inaction also confirms what fact-checkers have long thought about the company’s approach to fighting misinformation.
“They’re not doing anything, and I’m frustrated that they don’t enforce their own policies,” said Angie Holan, editor of (Poynter-owned) PolitiFact. “And their attitude seems to be that they’re just doing nothing compared to what Facebook and Google are doing to combat fabricated news and hoaxes.”
Twitter has taken small steps to combat misinformation on its platform. In February, the tech company cracked down on bots by banning the publication of similar posts by different accounts. In May and June, Twitter removed more than 70 million accounts, slowing its user growth, The Washington Post reported.
But Holan said those actions should almost be a given for any tech platform, especially one where misinformation regularly goes viral after breaking news.
“It just seems like the minimum standard to keep phony accounts off social media platforms that are supposed to be about dialogue between real individuals,” she said. “I think they’ve been trying to keep their heads down in the hopes that they won’t be noticed.”
And compared to Facebook, Google and YouTube, Twitter really hasn’t done much to address the ongoing challenge of misinformation — in spite of pledges to fix the "health" of conversation.
Facebook partners with more than 25 fact-checking projects around the world to debunk and flag fake stories and images on the platform, which decreases their future reach by up to 80 percent. (Disclosure: Being a signatory of the IFCN’s code of principles is a necessary condition for participation in the project.) Google surfaces and highlights fact checks high up in search results by using the Schema.org ClaimReview markup, and even YouTube recently announced that it will surface “authoritative” sources high up in search results during breaking news.
RELATED ARTICLE: In Rome, Facebook announces new strategies to combat misinformation
While there’s ample reason to doubt that Facebook and Google’s efforts are working, Twitter doesn’t even have any comparable programs, aside from aiding a collaborative fact-checking project during the recent Mexican elections. And it’s not like the company isn’t aware of efforts at other companies — fact-checkers have repeatedly asked Twitter for similar partnerships.
“(Agência) Lupa has its Twitter account as the most active social media and has reached (out to) Twitter many times for partnership,” said Cristina Tardáguila, director of the Brazilian fact-checking project, in a WhatsApp message. “Unfortunately, we haven't managed to establish a partnership. We (have worked) with Google and Facebook for over a year, but not with Twitter.”
“I really think Twitter should try to partner with IFCN verified members to make Twitter a better place to get information from. We have tried many times.”
Twitter was also invited to the Global Fact-Checking Summit (hosted by the International Fact-Checking Network) in June, but the company did not attend. Both Facebook and Google were represented at the conference.
When Poynter asked Twitter in May about the potential for a partnership with fact-checking projects like the one that Facebook has, a spokesperson told Poynter that the company wasn't considering it because Twitter's approach to misinformation is different. When asked again this week, a spokesperson just sent a June 2017 blog post from Colin Crowell, vice president of public policy, government and philanthropy.
"Twitter’s open and real-time nature is a powerful antidote to the spreading of all types of false information," he wrote in the post. "This is important because we cannot distinguish whether every single tweet from every person is truthful or not. We, as a company, should not be the arbiter of truth."
Tardáguila said Twitter has invited Brazilian fact-checkers to a meeting on Friday, at which she hopes they’ll initiate some kind of partnership with fact-checkers ahead of this fall’s election. But even without partnering with fact-checkers, Adam Sharp, former head of news, government and elections at Twitter, told Poynter that fact checks are more likely to get organic reach on Twitter than on Facebook anyway.
“If I go into search results, when I look at the top-engaged tweets on that story — while the algorithm might still have the first tweet be the hoax — usually the contrasting views are going to be part of that first batch of tweets surfaced by the algorithm,” said Sharp, who’s the interim CEO of The Emmys. “That’s not always the case, but that’s certainly more so than on Facebook.”
It’s true that, generally, posts tend to get less engagement on Twitter than Facebook. And fact-checking projects like Aos Fatos in Brazil have already developed tools that automatically post debunks in replies to users who publish fake news links.
Still, fact-checkers say that Twitter’s inaction makes it look like the company is giving license to would-be hoaxers and imposters.
“I’m concerned that, by the amount of fake news on Twitter, it just seems to be allowing them to run with it — and impersonation seems to happen with regularity,” Holan said.
Beyond its lack of collaboration with fact-checkers, impersonation is another area that Twitter has been notoriously bad at policing. After a shooting at Marjory Stoneman Douglas High School in February, Miami Herald reporter Alex Harris was targeted by several imposter tweets that made it look like she was asking eyewitnesses for images of dead bodies.
“I decided to report it to Twitter and Twitter responded saying it wasn’t targeted harassment or violate any rules,” she told Poynter at the time. “That was kind of not great. I felt extremely overwhelmed.”
That wasn’t the first time Twitter had a non-response to a report of impersonation. In a CJR piece published in late July, University of Georgia professor Tim Samples laid out a situation in which an imposter Twitter account used his real name and headshot. The account, which pretended to be a conservative essayist, racked up more than 50,000 followers — and Twitter didn’t do anything about it.
“I contacted Twitter immediately, by filing an impersonation claim,” Samples wrote. “Four days later, having supplied a photo ID and information verifying my identity, I received an automated response: ‘We were unable to take action on the account given that we could not determine a clear violation of the Twitter rules.’”
According to Twitter’s rules, a user “may not impersonate individuals, groups, or organizations in a manner that is intended to or does mislead, confuse, or deceive others.” When asked in May about what happened to Harris, Twitter told Poynter that the company didn’t have a specific policy against fabricated tweets — just against fabricated accounts.
To Holan, that lack of clear policy-making about misinformation is the source of Twitter’s problem.
“I think they could do something like Facebook, where they would downgrade accounts. I think it could suspend accounts for purveying fake news,” she said. “They should know the potential solutions better than me. I just refuse to believe that there are no solutions — there are plenty of solutions.”
Correction: A previous version of this article incorrectly stated that Adam Sharp was interim CEO of The Grammys. In fact, he's interim CEO of The Emmys.