Congress Considers AI Moratorium That Could Shape Future Regulations

By Lisa Wong

Congress is considering an unprecedented proposal that would reshape the artificial intelligence regulatory landscape in the United States. The “Big Beautiful Bill” would enact an AI moratorium, a provision that would prevent states from enacting AI regulations of their own for at least a decade. The move could deprive developers and consumers of hard-won benefits, and it has launched a contentious fight between legislators, business interests, and consumer advocates.

Sam Altman, co-founder and CEO of OpenAI, asserts that the utility of artificial intelligence will accelerate markedly over the next two years. He cautions that this fast-paced development demands a nuanced approach, especially when it comes to regulatory frameworks. The AI moratorium proposal first appeared in May as a provision of a budget reconciliation bill. Though the bill was mainly concerned with filling gaps in broadband funding, the provision’s impact goes well beyond broadband access.

Yet the proposal has drawn unusual attention not for its merit in expanding broadband, but for its attempt to preempt state-level regulations. Senator Ted Cruz recently updated the moratorium language, highlighting a new $500 million broadband provision tied specifically to Broadband Equity, Access, and Deployment (BEAD) funding. Under this provision, states must comply with the AI moratorium to receive a share of the $42 billion BEAD program.

Critics of the proposal, including many Democrats and some Republicans, worry that a federal moratorium would hinder states’ ability to protect their citizens from potential harms associated with AI technologies. Senator Marsha Blackburn of Tennessee has made her opposition clear: she argues that states should retain the authority to keep their residents and creative industries safe from AI’s potential harms.

Senator Josh Hawley of Missouri has voiced similar concerns, focusing specifically on states’ rights. He is now joining forces with Senate Democrats to strip the AI moratorium from the legislation. He is a vocal proponent of local governance, especially in a rapidly evolving and disruptive technological landscape.

Senator Maria Cantwell of Washington has likewise assailed the moratorium language Cruz has drafted, warning that it carries “grave consequences” for the country’s competitiveness against China in the global race to dominate AI.

Dario Amodei, CEO of Anthropic, has been one of the loudest critics of the moratorium, arguing that a ten-year freeze is “way too blunt an instrument.” He emphasizes the importance of developing more nuanced approaches that can keep pace with the rapid developments in AI technology.

A recent survey suggests the public largely agrees that AI is developing too fast without adequate government regulation. Approximately 60% of U.S. adults are concerned the government will go too easy on the industries it regulates, and 56% of AI experts share that concern. Most existing state laws are narrowly scoped, protecting consumers against specific identified harms such as deepfakes, privacy violations, and discrimination. States such as Alabama, California, and Texas have already passed laws targeting misleading AI-generated content designed to interfere with elections.

Chris Lehane, chief global affairs officer at OpenAI, argues against the current patchwork approach to AI regulation, stating that it “isn’t working and will continue to worsen if we stay on this path.” He stresses that a unified regulatory framework is urgent — especially because AI technologies are advancing at a breakneck pace.

Altman shares similar concerns about regulatory fragmentation, noting that “a patchwork across the states would probably be a real mess and very difficult to offer services under.” His comments highlight the practical problems companies face when confronted with conflicting standards that change at every state border.

Senate Republicans such as Cruz and Senate Majority Leader John Thune have been advocating a “light touch” approach to AI governance. They have resisted demands for a more balanced approach, claiming that increased federal control would halt innovation and development in the field.

Emily Peterson-Cassin, an organizer at the Direct Action and Research Center and a consumer rights advocate, pushes back on the preemption argument, noting that companies regularly operate successfully across states with different regulations. “The patchwork argument is something that we have heard since the beginning of consumer advocacy time,” she says, suggesting that businesses can effectively adapt to diverse regulatory environments.

This fight over an AI moratorium comes against a backdrop of rapid technological acceleration. Altman cautions that “AI is advancing too head-spinningly fast,” implying that any regulatory framework must be adaptable enough to keep pace with innovation. He worries that months of legislative negotiation over the specifics of new regulations could leave them obsolete before they ever take effect.

Nathan Calvin, a technology policy expert, calls on Congress to take decisive action on AI safety legislation, while cautioning that measures that go too far could be harmful as well. “If the federal government wants to pass strong AI safety legislation, and then preempt the states’ ability to do that, I’d be the first to be very excited about that,” he states.

As discussion of the proposed moratorium continues, stakeholders remain unsure how to strike the most effective balance between innovation and consumer protection. The outcome of this legislative battle may set critical precedents for how artificial intelligence is governed across the United States for years to come.