When Russia invaded Ukraine last week, misinformation immediately went viral.
While signatories of the International Fact-Checking Network verified images and videos depicting the war in a global database, citizen fact-checkers on Twitter’s Birdwatch were doing their part in flagging — and attempting to debunk — claims of their own.
At the time, only the roughly 10,000 users on the Birdwatch platform could see the fewer than 125 notes attached to flagged tweets. But Twitter today announced that some regular users would start seeing highly rated fact-checking notes on tweets in their feed. (For a detailed explanation of how the platform works, check out previous Poynter reporting here.)
“I’m really interested in seeing how it plays out, and how much of an impact I can make as an individual contributor,” said Ryan Tartaglia, a college student from Massachusetts and one of Birdwatch’s most active users. “It’s been a long time coming and I’m glad they’ve decided to do this.”
The announcement came after a Washington Post report suggested delays in Birdwatch’s public rollout, and in the middle of a seemingly constant misinformation crisis — between COVID-19, the 2020 election lie and now the war in Ukraine. Twitter has a partnership with professional fact-checkers to help surface reliable information, and it tags some viral tweets with misinformation labels, but the Birdwatch system uses crowdsourcing to surface and verify tweets.
“As online misinformation continues to spread, addressing false or misleading narratives at scale warrants a multi-pronged approach,” said University of Washington professor Amy Zhang in Twitter’s announcement. “As a member of the Birdwatch advisory board, I’ve weighed in on its unique approach to explore a community-based tool for adding context to Tweets, including its ranking system to surface the most helpful pieces of context, and I look forward to seeing how this community grows.”
However, as the Post story noted, the volume of Birdwatch notes (about 26,000 as of Feb. 24) is negligible compared with the thousands of tweets sent out every second. Professional fact-checkers — such as PolitiFact editor-in-chief Angie Drobnic Holan — and Birdwatch users alike remain skeptical about impact.
“I think Birdwatch is moving so slowly and (it’s) so small that I have serious doubts that Twitter seriously intends it to be anything more than a PR stunt,” said Aaron Segal, a Bronx, New York-based software engineer and another power user. With so few notes reaching the threshold the platform surfaces as helpful, Segal estimated normal users may see one or two notes a day.
“Anyway it will probably upset the Birdwatch team members I’ve been talking to, who have been nothing but nice to me,” Segal said, “but they have not given me the impression that Twitter is giving this the resources or the reach it needs to have a chance.”
Along with the test flight, Birdwatch also debuted a revamped algorithm that takes into account users’ ideology to ensure a diversity of viewpoints and adds deeper complexity to how it ranks notes. When launched in early 2021, the algorithm was driven by fewer than 20 lines of code — now there are hundreds across multiple files.
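The article doesn’t reproduce Twitter’s actual ranking code, but the core idea it describes — surfacing a note only when raters across differing viewpoints find it helpful — can be sketched roughly as follows. The function name, viewpoint labels, and thresholds here are illustrative assumptions, not Birdwatch’s real implementation:

```python
from collections import defaultdict

# Illustrative sketch only: surface a note as "helpful" when raters from
# multiple (inferred) viewpoint groups independently agree. Group labels,
# thresholds, and the function name are assumptions for demonstration.

def note_status(ratings, min_ratings=5, min_helpful_ratio=0.8):
    """ratings: list of (viewpoint_group, is_helpful) tuples for one note."""
    if len(ratings) < min_ratings:
        return "needs more ratings"
    by_group = defaultdict(list)
    for group, is_helpful in ratings:
        by_group[group].append(is_helpful)
    # Require raters from at least two viewpoint groups, and agreement
    # within each group, so a note endorsed by only one side isn't surfaced.
    if len(by_group) < 2:
        return "needs more ratings"
    for votes in by_group.values():
        if sum(votes) / len(votes) < min_helpful_ratio:
            return "not surfaced"
    return "currently rated helpful"

ratings = [("left", True), ("left", True),
           ("right", True), ("right", True), ("right", True)]
print(note_status(ratings))  # → currently rated helpful
```

A single-viewpoint consensus, however lopsided, never clears the bar in this sketch — which is the diversity property the revamped algorithm is described as targeting.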
As Twitter has tweaked the system over the last year, Birdwatch notes have improved, with a greater percentage including trusted sources and less partisan rhetoric, according to multiple Poynter analyses of public data. Twitter, in today’s post, said that a survey found users were 20 to 40% less likely to agree with a misleading tweet after reading an attached note.
“With each expansion and update like this one, we’re going to find out more and more about whether Twitter is actually serious about tackling misinformation,” said Celeste Labedz, another one of Birdwatch’s most prolific users. “Or whether this is just a strategy to look good and placate concerns while still reaping the benefits in engagement that misinformation can provide to platforms.”