Twitter rolled out a test version of its long-awaited edit button for some users last week. The button lets users edit tweets after sending them, a feature Twitter leadership deliberately avoided for much of the platform's 16-year lifespan despite persistent pleas from users who mistyped.
The feature will be available for the first 30 minutes after a tweet is published, and readers will be able to tap an icon on the tweet to see its edit history.
“Think of it as a short period of time to do things like fix typos, add missed tags, and more,” Twitter wrote in a blog post announcing the move.
Fact-checkers reacted positively to the prospect of corrections, though also highlighted potential pitfalls of the change.
“One concern I could see is if bad actors share misinformation in tweets and then later quietly modify the tweet after it goes viral to avoid being held accountable,” said Emmanuel Vincent, the founder of Science Feedback, a French science-focused, fact-checking and research organization.
“It may be one of the best practices for the platform to ensure that the tweets that go viral may not be misused or that any tweets won’t be subsequently edited to bring harm,” said Deepanjalie Abeywardana, head of media research at Verité Research, a Sri Lankan research group. “The hesitation on the feature surfaces a larger concern with regard to the information disorder in the digital space.”
Gemma Mendoza, head of digital strategy at Rappler, a leading news organization in the Philippines, suggested platforms go further than a text-edit feature and allow users to update multimedia files.
“Platforms should also make it possible for people to update an asset like an image or video if there is a correction needed,” Mendoza said. “The edit feature should also take care of contextualizing the update. Right now, you can’t do that in most platforms.”
Mendoza also highlighted that the ability to edit the original post may be better than simply issuing a correction to a false post because it captures more of the same audience.
“The original link matters because that is what has gone viral. You want those people who have seen the original link to see the correction,” Mendoza said. “Because the audience affected by a post with incorrect or false information may not be the same audience reached by the correction.”
Ana María Saavedra, the director of Colombiacheck, a fact-checking organization based in Colombia, wants to know more before coming to a conclusion. Twitter Blue, to which the feature will initially be confined, is still not available in Colombia.
“I think that misinformers still end up adapting to the attempts of the platforms to combat them,” Saavedra said.
Facebook, by contrast, has offered an edit feature for years without any time limit.
“So far we have not encountered an attempt to spread disinformation by editing posts,” said Marcel Kiełtyka, a spokesperson for Demagog, a fact-checking organization based in Poland. “But that does not mean that such a situation will not take place on Twitter, which is characterized by high dynamics and a completely different way of consuming content.”
For example, a user could change the meaning of a tweet entirely after it receives substantial engagement. A parallel already exists on Twitter: users have changed their display names after being quote-retweeted by prominent accounts in order to push unwanted messages to those accounts' audiences.
Twitter has said that the 30-minute time limit and the visible edit history both play an important role in “protecting the integrity of the conversation,” though given a large enough audience, 30 minutes is plenty of time for a tweet to go viral.