September 10, 2020

Factually is a newsletter about fact-checking and accountability journalism, from Poynter’s International Fact-Checking Network & the American Press Institute’s Accountability Project. Sign up here

A recurring theme in the debate over content moderation

This week brought two new visions for how to reform Section 230 of the Communications Decency Act. The law, which shields large tech companies from legal liability for content posted by third parties on their platforms, has drawn fire from politicians in both parties.

Senate Republicans put forward a bill that would curtail a platform’s liability protections if it restricted or removed a piece of content without explicitly stating which company policy the content violated.

A report from New York University’s Stern Center for Business and Human Rights offers a more tailored approach. Report author Paul Barrett advocates a three-pronged strategy to push social media companies to crack down on harmful mis- and disinformation: keep the current law in place, create conditions companies must meet to qualify for its liability shield, and create an independent regulatory agency to enforce those new regulations.

While these two approaches are substantially different, both of them hit on a theme that has run through the debate over content moderation decisions by the platforms: the need for more transparency from social media companies about how they make these decisions. Lawmakers, fact-checkers and researchers have all advocated for more insight into the process.

In April, a group of researchers and nonprofits wrote an open letter to tech companies urging them to preserve, and make available to researchers, content removed during the COVID-19 infodemic. The letter argued that preserving this material is crucial to informing future policymaking.

After Twitter attached a “Get the Facts” label to one of President Donald Trump’s tweets in May, fact-checkers were left with a series of questions about the company’s decision-making process. They urged Twitter to be more transparent about how it chooses to apply labels and who makes those decisions.

Senators Brian Schatz (D-Hawaii) and John Thune (R-S.D.) introduced a bill in June to force companies to be more transparent about these decisions.

Without this kind of transparency, social media companies may be opening themselves up to accusations of arbitrary enforcement or even bias. Republicans, including Trump, have already accused the companies of having an anti-conservative bias.

Both Facebook and Twitter do have transparency websites that offer additional information about content moderation, so some progress is being made. However, a June Knight Foundation poll found that 80% of respondents don’t trust social media companies to self-regulate. It also found that 81% favored the creation of an independent agency to regulate Big Tech platforms, echoing the final recommendation of Barrett’s report.

– Harrison Mantas, IFCN

. . . technology

  • Conspiracy theory expert Joe Uscinski of the University of Miami wrote in Toronto’s Globe and Mail that criticism of social media platforms and their role in spreading conspiracy theories like QAnon “has in some cases become disproportionate to the crisis.”
  • Facebook CEO Mark Zuckerberg talked with Axios in an interview on HBO about how the platform plans to approach misinformation surrounding a future COVID-19 vaccine, saying it will continue to work with health authorities to identify and take down misinformation that could pose a risk of harm.
    • “All the challenging questions about anti-vaxx and a lot of misinformation come from cases where there has been some instance of harm but that people are kind of blowing out of proportion or saying it’s more prevalent or common than it actually is,” he told Axios’ Mike Allen.

. . . politics

  • Facebook last week said it would stop accepting new political ads in the week before the U.S. election in November in an effort to stem election misinformation. It’s one of several steps that Zuckerberg announced in a blog post.
    • “I generally believe the best antidote to bad speech is more speech, but in the final days of an election there may not be enough time to contest new claims,” he wrote.
    • A New York Times story questioned the move’s likely effectiveness in reducing voter confusion.
  • From the not-all-misinformation-is-online department: CNN reported that a political group tied to Kanye West sent residents of Pennsylvania and Minnesota absentee ballot applications with inserts featuring fake headlines about Democrats.
    • The insert in the Pennsylvania mailer contained an image of Joe Biden holding an edition of Politico with a headline saying Biden supports sanctuary cities, CNN said. A Politico editor tweeted that the headline was fake and that the paper didn’t even publish on the date shown.

. . . science and health

  • Wall Street research reports represent a new front in the misinformation wars, CNN reported this week. In one of these reports, the network said, an analyst at a research firm cited a doctor who has voiced unrealistically optimistic projections about COVID-19 and herd immunity.
    • “While coronavirus misinformation is ubiquitous on social media, there’s an expectation that Wall Street research is more reliable than the kind of information that winds up in most people’s Facebook feeds,” CNN’s Cristina Alesci and Casey Tolan wrote.
  • During a COVID-19 briefing Tuesday, Ohio Gov. Mike DeWine rebutted a “crazy, ridiculous internet rumor” that he had authorized secret FEMA concentration camps where children would be separated from their parents, WCPO reported.
    • QAnon researcher Travis View tweeted in response: “Seems like public officials are being forced to spend a lot of time rebutting dumb conspiracy theories because they spread so far and fast.”


Candidates for office, particularly challengers, frequently use the votes of incumbents or former lawmakers as ammunition against them. These usually show up in ads.

Sometimes these claims are true. Sometimes they’re true but devoid of context. And sometimes they’re just wrong, usually taking a legislative action that did happen and contorting it into something it wasn’t. The fact-checker then has to untangle the process.

Clara Hendrickson of the Detroit Free Press and (Poynter-owned) PolitiFact did just that in a fact-check of a claim by the Republican challenging Rep. Elissa Slotkin (D-Mich.) that she had voted against a measure condemning Chinese cyberattacks.

Slotkin’s challenger, Paul Junge, based his claim on a June 30 procedural vote to move forward with debate on an infrastructure bill, which Slotkin supported. How did he turn that into a vote against condemning China? Another lawmaker had said that if the House didn’t move forward with the infrastructure bill, he would offer a bill condemning Chinese cyberattacks. But the House did vote to move forward, so the bill condemning China was never the question before it.

What we liked: Hendrickson explained the parliamentary process to show why Junge’s claim was wrong. She also provided context by noting that Slotkin actually voted to condemn China in an amendment to another measure, the defense authorization bill.

– Susan Benkelman, API


  1. Russia’s 2020 election manipulation is shaping up to look a lot like 2016, Axios’ Sara Fischer reported.
  2. MediaWise has launched its “Prep for the Polls” text message voter information course. Users receive 10 days of text messages with information to help them get ready to vote this fall.
  3. The Lawfare Podcast looked at the prevalence of deceptively edited videos, or “cheap fakes,” and discussed their possible impact on the 2020 election.
  4. The Arizona State University News Co/Lab has launched a free online course in digital media literacy called “Mediactive: How to participate in our digital world.”
  5. The BBC reported that misinformation about COVID-19 is fueling opposition to testing in the northern Indian state of Punjab.

Thanks for reading! Feel free to send feedback and suggestions to factually@poynter.org. If this newsletter was forwarded to you, or if you’re reading it on the web, you can subscribe here.

Until next week,

Harrison and Susan
