March 3, 2021

A law credited with birthing the internet — and with spurring misinformation — has drawn bipartisan ire from lawmakers who are vowing to change it.

Section 230 of the Communications Decency Act shields internet platforms from liability for much of what their users post.

Democrats and Republicans alike point to Section 230 as a law that gives too much protection to companies like Facebook, YouTube, Twitter, Amazon and Google, though for different reasons.

Former President Donald Trump wanted changes to Section 230 and vetoed a military spending bill in December because it didn’t include them. President Joe Biden has said that he’d be in favor of revoking the provision altogether. Biden’s pick for commerce secretary said she will pursue changes to Section 230 if confirmed.

Several bills in Congress would repeal Section 230 or narrow its scope to limit the power of the platforms. In response, even tech companies have called for revising a law they say is outdated.

“In the offline world, it’s not just the person who pulls the trigger, or makes the threat or causes the damage — we hold a lot of people accountable,” said Mary Anne Franks, a law professor at the University of Miami. “Section 230 and the way it’s been interpreted essentially says none of those rules apply here.”

How did Section 230 come to be, and how could potential reforms affect the internet? We consulted the law and legal experts to find out. (Have a question we didn’t answer here? Send it to truthometer@politifact.com.)

What is Section 230?

Donna Rice Hughes, of the anti-pornography organization Enough is Enough, meets reporters outside the Supreme Court in Washington Wednesday, March 19, 1997, after the court heard arguments challenging the 1996 Communications Decency Act. The court, in its first look at free speech on the Internet, was asked to uphold a law that made it a crime to put indecent words or pictures online where children can find them. The justices struck it down. (AP Photo/Susan Walsh)

Congress passed the Communications Decency Act as Title V of the Telecommunications Act of 1996, when an increasing number of Americans started to use the internet. Its original purpose was to prohibit making “indecent” or “patently offensive” material available to children.

In 1997, the Supreme Court struck down the Communications Decency Act’s indecency provisions as an unconstitutional violation of free speech. But one provision survived and, ironically, laid the groundwork for protecting online speech.

Section 230 says: “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.”

That provision, grounded in the language of First Amendment law, grants broad legal protections to websites that host user-generated content. It essentially means they can’t be sued for libel or defamation for user posts. Section 230 is especially important to social media platforms, but it also protects news sites that allow reader comments or auction sites that let users sell products or services.


“Section 230 is understood primarily as a reaction to state court cases threatening to hold online service providers liable for (possible) libels committed by their users,” said Tejas Narechania, an assistant law professor at the University of California-Berkeley.

Section 230 changed that. For example, if a Facebook user publishes something defamatory, Facebook itself can’t be sued for defamation, but the post’s original author can be. That’s different from publishers like the New York Times, which can be held liable for content they publish — even if they didn’t originate the offending claim.

There are some exceptions in Section 230, including for copyright infringement and violations of federal criminal law. But in general, the provision grants social media platforms far more leeway than other industries in the U.S.

Why does it matter?

Sen. Ron Wyden (D-Ore.), one of the authors of Section 230, in 2021. (Demetrius Freeman/The Washington Post via AP, Pool)

Section 230 is the reason that you can post photos on Instagram, find search results on Google and list items on eBay. The Electronic Frontier Foundation, a nonprofit digital rights group, calls it “the most important law protecting internet speech.”

Section 230 is generally considered to be speech-protective, meaning that it allows for more content rather than less on internet platforms. That objective was baked into the law.

In crafting Section 230, Sen. Ron Wyden, D-Ore., and Rep. Chris Cox, R-Calif., “both recognized that the internet had the potential to create a new industry,” wrote Jeff Kosseff in “The Twenty-Six Words That Created the Internet.”

“Section 230, they hoped, would allow technology companies to freely innovate and create open platforms for user content,” Kosseff wrote. “Shielding internet companies from regulation and lawsuits would encourage investment and growth, they thought.”

Wyden and Cox were right — today, American tech platforms like Facebook and Google have billions of users and are among the wealthiest companies in the world. But they’ve also become vehicles for disinformation and hate speech, in part because Section 230 left it up to the platforms themselves to decide how to moderate content.

Until relatively recently, most companies took a light touch to moderating content that is legal but still problematic. (PolitiFact, for example, participates in programs run by Facebook and TikTok to fight misinformation.)

“You don’t have to devote any resources to make your products and services safe or less harmful — you can solely go towards profit-making,” said Franks, the law professor. “Section 230 has gone way past the idea of gentle nudges toward moderation, towards essentially it doesn’t matter if you moderate or not.”

Without Section 230, tech companies would be forced to think about their legal liability in an entirely different way.

“Without Section 230, companies could be sued for their users’ blog posts, social media ramblings or homemade online videos,” Kosseff wrote. “The mere prospect of such lawsuits would force websites and online service providers to reduce or entirely prohibit user-generated content.”

Has the law changed?

The law has changed a little bit since 1996.

Section 230’s first major test came in 1997 with Zeran v. America Online, a case in which AOL was sued for failing to remove libelous ads that falsely connected a man’s phone number to the Oklahoma City bombing. The U.S. Court of Appeals for the Fourth Circuit ruled in favor of AOL, citing Section 230.

“That’s the case that basically set out very expansive protection,” said Olivier Sylvain, a law professor at Fordham University. “It held that even when an intermediary, AOL in this case, knows about unlawful content … it still is not obliged under law to take that stuff down.”

That’s different from how the law treats offline distributors, such as booksellers, which can be held liable once they know the material they carry is unlawful. But Section 230’s legal protections aren’t limitless.

In 2008, the Ninth Circuit appeals court ruled that Roommates.com could not claim immunity from anti-discrimination laws, because the site required users to choose the preferred traits of potential roommates. Section 230 was further narrowed in 2018, when Trump signed FOSTA-SESTA, a package of bills aimed at limiting online sex trafficking.

The package created an exception that allows websites to be held liable for hosting ads for prostitution. As a result, Craigslist shut down its personals section and certain Reddit groups were banned.

What reforms are being considered?

Sen. Joshua Hawley (R-Mo.) is one of several senators who have introduced bills to modify or repeal Section 230. (Graeme Jennings/Pool via AP)

In 2020, following a Trump executive order on “preventing online censorship,” the Justice Department published a review of Section 230. In it, the department recommended that Congress revise the law to include carve-outs for “egregious content” related to child abuse, terrorism and cyber-stalking. The review also proposed revoking Section 230 immunity in cases where a platform had “actual knowledge or notice” that a piece of content was unlawful.

The Justice Department review came out the same day that Sen. Josh Hawley, R-Mo., introduced a bill that would require companies to revise their terms of service to include a “duty of good faith” and more transparency about their moderation policies. A flurry of other Republican-led efforts came in January after Twitter banned Trump from its platform. Some proposals would make Section 230 protections conditional, while others would repeal the provision altogether.

Democrats have instead focused on reforming Section 230 to hold platforms accountable for harmful content like hate speech, targeted harassment and drug dealing. One proposal would require platforms to explain their moderation practices and to produce quarterly reports on content takedowns. The Senate Democrats’ SAFE Tech Act would revoke legal protections for platforms where payments are involved.

That last proposal is aimed at reining in online advertising abuses, but critics say even small changes to Section 230 could have unintended consequences for free speech on the internet. Still, experts say it’s time for change.

“Section 230 is a statute — it is not a constitutional norm, it’s not free speech — and it was written at a time when people were worried about electronic bulletin boards and newsgroups. They were not thinking about amplification, recommendations and targeted advertising,” Sylvain said. “Most people agree that the world in 1996 is not the world in 2021.”

This article was originally published by PolitiFact, which is part of the Poynter Institute. It is republished here with permission. See the sources for these fact-checks here and more of PolitiFact’s fact-checks here.


Daniel Funke is a staff writer covering online misinformation for PolitiFact.