Balancing Regulation and Innovation in AI: Insights from Adam Billen


By Lisa Wong


Adam Billen, vice president of public policy at Encode AI, recently addressed the intricacies of artificial intelligence regulation during his appearance on TechCrunch’s podcast, Equity. His discussion highlighted significant progress in state and federal AI legislation, including California’s recently passed SB 53, which will hold large AI labs publicly accountable for their safety and security protocols. The legislation aims to address existential dangers associated with AI technologies, focusing in particular on preventing cyberattacks on critical infrastructure and the creation of bio-weapons.

In his discussion, Billen expressed optimism about a forthcoming federal AI standard. Its stated goal, he says, is to prevent a patchwork of AI rules from taking hold across the states, creating a more even playing field. But he warned that such a federal standard could preempt state laws already on the books, raising concerns about the balance between local and national regulatory authority.

“Are bills like SB 53 the thing that will stop us from beating China? No,” Billen stated during the podcast. He emphasized that the ongoing race with China in AI technology is significant, and American policymakers must enact regulations that support progress while maintaining competitive advantages.

SB 53 is also a first-of-its-kind bill in the nation to directly mandate safety protocols in the development of AI technologies. It targets the most powerful AI laboratories in particular, requiring them to disclose their AI model risk mitigation practices. Billen noted, “Companies are already doing the stuff that we ask them to do in this bill,” indicating that many firms are already undertaking these safety measures.

The importance of the bill cannot be overstated in light of recent geopolitical developments. In April 2025, the US administration tightened export restrictions on advanced AI chips to China. At the same time, chipmakers like Nvidia and AMD later received special exemptions to sell certain chips there. This delicate regulatory juggle underscores the need for clear policy in an era of intense international competition.

Billen pointed out that some AI companies may lower their safety thresholds in response to competitive pressures. “They do safety testing on their models. They release model cards. … Are they beginning to cut back on things at some firms? Yes. And that’s why bills like this one … are so important,” he added. This leaves regulators with a key question: how to ensure that innovation does not come at the expense of public safety.

Billen expressed disappointment with federal efforts to preempt state-level bills, which take important steps to address areas like deepfakes, algorithmic discrimination, and child safety. On the latter point, he argued that local rules matter a great deal, because they can address local issues that federal action tends to ignore.

On the federal side, Senator Ted Cruz of Texas recently introduced the SANDBOX Act, which would create a process for AI companies to apply for waivers granting temporary relief from specific federal regulations. Billen warned against relying too heavily on such waivers, calling instead for strong regulations that hold companies accountable to their stated safety pledges.

“The reality is that policymakers themselves know that we have to do something,” Billen said. “They know from working on a million other issues that there is a way to pass legislation that genuinely does protect innovation — which I do care about — while making sure that these products are safe.” This recognition that smart regulation is necessary reflects a developing agreement among industry executives and legislators across the aisle.

Even with the hurdles around AI regulation, Billen is hopeful about continued collaboration between industry and policymakers. In his eyes, SB 53 is an example of democracy at work: its process promoted early engagement with stakeholders to develop rules that advance both safety and innovation.

Billen cautioned, however, against treating SB 53 as a stand-in for every other state bill addressing AI and its potential harms. He explained, “This bill is designed for a particular subset of things.” His remarks underscore the importance of flexible regulatory frameworks rather than a one-size-fits-all approach.

As discussions continue regarding the future of AI regulation, Billen’s insights shed light on the importance of balancing innovation with safety measures. The evolving landscape of AI presents challenges and opportunities, and coordinated efforts will be essential to navigate this complex terrain effectively.