State lawmakers have filed hundreds of bills to address artificial intelligence’s potential harms for child safety, learning, elections and more.
None of those laws could be enforced for the next 10 years under the massive policy bill moving through Congress.
The Trump-backed tax and spending bill reached the Senate with a 450-word section prohibiting states from enforcing any law or regulation limiting or restricting AI for the next 10 years.
Lawmakers who support the provision said that a patchwork of state laws stifles innovation and gets in the way of U.S. competition with China. But the AI section drew bipartisan outcry after the House’s May 22 party-line vote, with hundreds of state leaders and even representatives who voted for the bill criticizing it as problematic.
“I would have voted NO if I had known this was in there,” Rep. Marjorie Taylor Greene, R-Ga., wrote on X June 3.
Examples of AI’s potential harms are not hard to find. In February 2024, a 14-year-old Florida boy died by suicide after prolonged interaction with a generative AI bot. Researchers report rising popularity of AI-driven apps that virtually undress people, enabling the sharing of nonconsensual nude images, including of minors. And educators are struggling with how to sensibly use the technology without compromising learning.
Amid backlash from states, senators deliberating over the Republican-titled “One Big Beautiful Bill” have since proposed revising the language, tying it to federal broadband funding.
The debate makes for a confusing swirl of questions about the provision’s aim, its potential effects and its survival outlook. Here are four things to know:
What is the bill’s AI provision and what does it have to do with broadband access?
The House-approved reconciliation bill said states can’t enforce state laws “limiting, restricting, or otherwise regulating artificial intelligence models, artificial intelligence systems, or automated decision systems entered into interstate commerce” for 10 years. It exempted laws that make it easier to procure, deploy and operate AI systems.
The Senate Committee on Commerce, Science and Transportation proposed revising the provision June 5 to tie it to Broadband, Equity, Access and Deployment program, or BEAD, funding.
Established under the 2021 Infrastructure Investment and Jobs Act, the $42 billion program represents the largest federal broadband investment to date and provides 56 states and territories with grants to help people access broadband in “communities of color, lower-income areas and rural areas.”
States that decline to pause their AI regulations would not get those dollars. Amba Kak, co-executive director of AI Now Institute, an independent research institute, said the change could leave states in an uncomfortable dilemma, choosing between broadband dollars and the power to protect their constituents from AI harms.
“I can imagine that for lawmakers, Republican or Democrat, whose districts rely on BEAD funding for broadband access to their rural communities, it’s really a strange bargain,” Kak said.
What does China have to do with this?
In 2025, state lawmakers filed more than 1,000 bills related to AI, and 28 states and the Virgin Islands adopted or enacted at least 75 new measures, according to the National Conference of State Legislatures.
American tech makers are racing with China to develop AI tools, and federal leaders have taken steps to try to spur greater innovation. So when a Chinese startup released the AI model DeepSeek R1 in January 2025, the stock market reacted and tech watchers said U.S.-based industry leaders were caught off guard.
Sam Altman, CEO of U.S.-based OpenAI, lauded the competition in January. In May, he said it is difficult for his company to figure out “how to comply with 50 different sets of regulations.”
Sen. Ted Cruz, R-Texas, chair of the Senate Committee on Commerce, Science and Transportation, backs the reconciliation bill’s AI provision on grounds that state laws could slow development that will keep the U.S. competitive.
“If we have a 50-state patchwork, you know what that’ll do? That’ll drive AI development out of America to other countries and it will cause America to lose the AI race to China,” Cruz said June 5 on CNBC.
Rep. Jay Obernolte, R-Calif., who co-chairs the House’s Bipartisan Artificial Intelligence Task Force, similarly called an extensive collection of state laws “the fastest way to secure Chinese dominance of AI.”
Pete Furlong, Center for Humane Technology lead policy researcher, disagreed, saying that if AI is designed with safety in mind, it would lead to models with fewer harms that promote trust and long-term adoption.
“Safety and innovation are not opposing forces,” he said.
What are state lawmakers saying?
In a June 3 letter, 260 state lawmakers from both parties in all 50 states voiced “strong opposition” to the AI provision. They said a moratorium on state regulation would “wipe out” laws on consumer transparency, government acquisition of new technology, patient protection in healthcare systems, and artist and creator protection.
There appears to be popular support for AI regulation as well. In a May poll of 1,022 voters by Common Sense Media and Echelon Insights, 73% said they want states and the federal government to regulate AI, and 59% opposed a 10-year moratorium on state AI regulation.
The provision has also drawn opposition from the National Association of Attorneys General; the North American Securities Administrators Association, which focuses on investor protection; and a group of around 140 organizations including tech workers, teachers, artists and civil society groups.
Kak said the idea that state laws impede innovation is “based on a caricature of state regulation as being burdensome and chaotic and states going rogue.”
What AI legislation could be affected?
A PolitiFact analysis of the National Conference of State Legislatures’ data on AI-related bills found that most of the laws enacted concern health use, government use, criminal use, effect on labor/employment, elections and judicial use.
Many bills that are still pending aim to criminalize AI-generated child sexual abuse material.
Some states have also introduced legislation that would help voters identify AI-generated content during elections. Wisconsin, for example, enacted a law requiring disclosures in political advertisements that contain AI-generated content.
But the measure’s implications extend beyond legislation that seeks to address existing harms.
“The ten year scope of the moratorium would prevent states from addressing emergent harms that we’re not even aware of yet,” Furlong said.
Gaia Bernstein, a law professor at Seton Hall University who studies technology, privacy and policy, said that if the moratorium is enacted, its effect on laws regulating AI algorithms, privacy and education could have irreversible consequences. Ten years is a long time in the life of a child.
“When you’re thinking about kids, that’s a whole generation,” she said. “You can’t undo this experiment.”
This fact check was originally published by PolitiFact, which is part of the Poynter Institute. See the sources for this fact check here.
