Watchers on your wall
Ryan, a college student with no journalism background, accomplished something many a fact-checker, myself included, dreams of doing. On June 15, he fact-checked Twitter CEO Jack Dorsey on his own platform — and Dorsey followed up with a clarification.
“If he sees this, sorry <3,” Ryan messaged me in a recent Twitter DM. He’s flagged more than 270 tweets with “notes” on Birdwatch, the platform’s crowdsourced fact-checking feature, making Ryan its second-most prolific user. (We are not using Ryan’s last name because he’s not a professional fact-checker and fears being doxxed or harassed.)
Birdwatch is Twitter’s latest and most public effort to address misinformation on its platform. The program, still in its testing phase, allows users (or “Birdwatchers”) to attach notes that provide additional context to a contested tweet. For example, a Birdwatcher might append a link from NASA or a fact-checking organization like Science Feedback to a tweet claiming the sky is green.
I’ve watched Birdwatch blossom from a program that used just a few lines of code to rank its notes into a complex system that incorporates the helpfulness of a user’s previous notes and, soon, their ideological perspective.
Of the top 10 rated notes on the platform, seven include at least one link to a reputable source like the U.S. Centers for Disease Control and Prevention or PolitiFact. In an analysis I ran earlier this year, fewer than half of the notes in my dataset included a source (and most of those were just other tweets).
And although you will occasionally come across a juvenile refuse-themed note like this one, “This is 💩,” idiotic and partisan bickering is far less prevalent among the helpful notes Birdwatch is surfacing.
I was curious to know who the heck these ordinary people logging so many hours correcting misleading tweets are, and why they do it. I followed and reached out to the 10 most active Birdwatchers to learn more.
Besides Ryan, I also got in touch with Celeste Labedz, a California Institute of Technology Ph.D. candidate who spends most of her Birdwatching correcting phony earthquake predictions, and Aaron Segal, a 33-year-old software engineer from the Bronx, New York. (Their comments have been lightly edited for length and clarity.)
Why do you do it?
Celeste Labedz: I’m a seismologist and I like to try to combat misinformation about earthquakes, so I keep tabs on a few highly-followed earthquake “prediction” charlatans and put the same Birdwatch note onto every one of their pseudoscientific tweets.
Ryan: For a long time, at least a few years, I’ve gotten very annoyed whenever I see something with thousands of engagements that I know is false, whether it be serious or even just something little about a video game. I always tried to tell people in the comments, but it was no use. The damage (small or not) had already been done.
How much time do you spend on it?
Aaron Segal: These days I spend an hour or two a week on Birdwatch. I used to go on most days but I’ve stopped bothering so much recently because I don’t actually think it’s effective.
Ryan: Most of my Birdwatch notes are spur-of-the-moment. I write them when I see misinformation that needs to be corrected during my typical Twitter browsing. But sometimes I do go down long rabbit holes, searching for keywords and monitoring repeat offenders, and that could last upwards of an hour. (Ryan keeps a massive text document with keywords to search.)
What does Twitter need to do to improve Birdwatch?
Labedz: For my particular usage (debunking obvious misinformation in my field), it would be cool if experts in fields could somehow be verified as such so their notes could be prioritized. But, of course, I recognize that that’s a very complicated and potentially gameable system that could give unfair advantage and so probably shouldn’t be implemented!
Segal: They should add more moderation. They should also take people who keep writing unhelpful notes off of Birdwatch, and people who keep writing misinformation off Twitter.
Ryan: I’ve suggested adding the ability for normal users to report tweets for misinformation, and having those reports reviewed by Birdwatchers.
In my last piece, PolitiFact editor-in-chief Angie Holan suggested incentives for Birdwatchers. Should Twitter be paying people like you?
Ryan: I suggested they should do either some form of payment or, more realistically, a Birdwatch badge for your profile. The dopamine from helpful votes already works for me, though.
Segal: If they want a community-based system like the one they’re testing, what they should really do to incentivize people is make the system more effective. If writing and rating Birdwatch comments could have the effect of removing misinformation or hatred from Twitter, and people could see the effect they were having, that would probably be enough incentive for people.
Labedz: The best way to get more nerds like me would just be to advertise Birdwatch more. A non-money option could be things like spotlight features by Twitter on experts debunking misinformation in their fields. Like, “This month’s Birdwatch star is Dr. Jane Doe of the University of Wherever, who’s busting myths about wildfires. Here’s the top 10 things she wants you to know about safety, ecology, and more!” People can learn things, it’s good publicity on both sides, it’s simple and cute.
How do you think misinformation is affecting humanity?
Ryan: I think misinformation is a huge problem in our world, and governments are using it to their advantage. Everyone’s probably heard how it can be a “threat to our democracy,” and I agree with that. Most pieces of misinformation can be debunked easily with a Google search, but people seem to share it more than the truth because it reinforces their beliefs.
Labedz: For my opinion on misinformation in society as a whole, I think it’s a really big deal. It’s already doing major damage and has the potential to get a whole lot worse. I think more platforms need to crack down on it, but I recognize that that’ll be difficult to make them do, since misinformation is great for engagement and therefore quite profitable for platforms. I think media literacy needs to be emphasized for kids in schools and for the general public of all ages to help out on the individual level in addition to regulation at the platform level.
Segal: I think misinformation is affecting humanity like a drug. People want to believe it. Adding notes and comments won’t really help because people will just believe whatever they hear if it confirms their prior beliefs. And it does nothing about hateful content, which is as big or bigger a problem.
- Colombiacheck: “Argentine study did not demonstrate efficacy of ivermectin in COVID-19” (in Spanish)
- An Argentine study showing some positive preliminary results on the use of the antiparasitic drug ivermectin to treat COVID-19 was blown way out of proportion on social media. Colombiacheck, along with their partners at the Argentine fact-checking organization Chequeado, put the study in context and said more research needs to be done to definitively prove the drug’s safety and efficacy in treating COVID-19.
- Australian Associated Press: “Video backfires with claimed German electric car fire” (in English)
- A video claiming to show an electric car bursting into flames in a German car park was actually shot in China. AAP confronted the falsehood by tying it to previously reported news articles, and by pointing out the Chinese characters on the car. AAP also noted that this claim builds on the grain of truth that two German parking garages did ban electric vehicles, but found no evidence of exploding cars and no evidence the German government had banned the vehicles.
From/for the community:
- “IFCN launches working group to address harassment against fact-checkers,” from Poynter. The IFCN announced two groups to help address the increasingly frequent cases of harassment against fact-checkers around the world. One group will focus on responses while the other will audit the IFCN’s efforts and write quarterly reports on harassment incidents.
- “Fakes? No thanks,” from Demagog.pl. Polish fact-checking organization Demagog.pl has developed an English-language version of its educational media literacy game, “Fakes? No thanks.” The game gives users techniques to spot false information, then quizzes them with real-life examples to hone their skills.
- “Dubawa officially launches in Sierra Leone,” from Premium Times Nigeria. Dubawa is opening up shop in a fourth country, Sierra Leone, where it will fact-check false and misleading content and conduct media literacy trainings, helping spread the fact-checking movement.
From the news:
- “Jordan’s government used secretly recorded Clubhouse audio to spread disinformation,” from Rest of the World. Citing a report by the Stanford Internet Observatory, the article details how accounts linked to the Jordanian military took selectively edited audio from Clubhouse and used it to disseminate pro-monarchy and pro-military propaganda on TikTok and Facebook shortly after the arrest of Prince Hamzah, the half-brother of Jordan’s King Abdullah II.
- “Inside Facebook’s Data Wars,” from The New York Times. Tech columnist Kevin Roose delves into the internal debates at Facebook over how to address the slew of negative stories about the company emanating from its own social media search and analytics tool, CrowdTangle.
- “The foreigners in China’s disinformation drive,” from the BBC. Foreign vloggers have shared videos, in a seemingly coordinated way, that push back on negative coverage of China on topics ranging from well-documented reports of human rights abuses in Xinjiang to China’s handling of the COVID-19 pandemic. Some have even been featured on Chinese state television.
If you are a fact-checker and you’d like your work/projects/achievements highlighted in the next edition, send us an email at email@example.com by next Tuesday.
Thanks for reading Factually, and a special thank you to Alex for his contributions this week!