California’s New AI Safety Bill Awaits Governor’s Approval

By Lisa Wong

California’s ambitious efforts to regulate artificial intelligence (AI) are reaching a critical juncture as Senator Scott Wiener’s proposed legislation, Senate Bill 53 (SB 53), sits on Governor Gavin Newsom’s desk. We applaud the introduction of this bill to address the increasingly urgent calls for AI safety, efficacy, and transparency in a quickly shifting technological landscape. SB 53’s goal is to bring much-needed accountability to this dangerous industry. It addresses these problems by enacting safeguards for workers at AI research facilities and creating a publicly run cloud computing cluster.

SB 53 includes key provisions explicitly aimed at increasing safety oversight in the development of AI. One of its most promising features is the creation of new protected channels, which give employees a direct line to surface potential safety concerns to the relevant government officials. The initiative seeks to foster an atmosphere where staff members can raise concerns without fear of retaliation.

Beyond setting new reporting requirements in motion, SB 53 would provide for the creation of CalCompute, a state-operated cloud computing cluster offering resources and capabilities for AI research. Support for this ambitious initiative would help ensure that California remains a global leader in responsible and equitable AI innovation while keeping safety at the center of that work.

The bill is less strict than its predecessor, SB 1047, which Governor Newsom vetoed over concerns that it would have a chilling effect on innovation in the space. Where SB 1047 focused primarily on liability, SB 53 emphasizes transparency and seeks to establish evidence-based safety protocols.

Wiener’s bill highlights California’s important role in taking the lead on AI safety. He makes the case that the state cannot leave oversight to the companies themselves: “This is an industry that we should not trust to regulate itself or make voluntary commitments.” His view reflects a growing sentiment among lawmakers that proactive measures are necessary to safeguard public interests amid rapid AI advancements.

Though the bill would be a significant step toward limiting big tech’s harmful behavior, reactions from the industry have been mixed. Anthropic has publicly endorsed SB 53. For his part, Jim Cullinan, a Meta spokesperson, reasserted the company’s support for AI regulations that strike the right balance between essential safeguards and the innovation that fuels the economy, calling SB 53 a good first step.

OpenAI, by contrast, is staunchly against regulation at the state level, favoring a model in which labs need only comply with broad federal standards. This divide illustrates an important tension within the technology industry over how much oversight AI development actually requires.

While SB 53 awaits Newsom’s decision, it’s important to acknowledge that it is targeted at the tech industry’s bigger players. At the same time, startups have played a diminished role in policy conversations this year. Most startups should be able to flourish in today’s environment: because they contend with fewer regulatory burdens, they are able to prioritize growth and innovation.

The politics of AI regulation also speaks to a larger trend at the federal level. The Trump administration has swung away from the Biden administration’s focus on the existential risks of AI, doubling down instead on a pro-growth agenda. This shift has drawn concern from state legislators, including Wiener, and civil rights advocates express skepticism about the federal government’s commitment to developing impactful AI safety guardrails.

Wiener has been vocal about the challenges presented by the federal landscape. His message to states is to lead the way in addressing the dangers that AI presents. He believes SB 53 tackles some key AI safety concerns, though, as he acknowledges, it won’t address all the potential risks associated with the technology.