California State Senator Scott Wiener has introduced significant amendments to his latest bill, SB 53, which would establish transparency requirements for the world's most powerful artificial intelligence (AI) companies. The proposed legislation mandates that these companies publish safety and security protocols and issue reports following safety incidents. If it passes, California would become the first state to impose such requirements. Large AI developers such as OpenAI, Google, Anthropic, and xAI would be most directly affected.
The new SB 53 draws on lessons from the report of California's AI policy working group. That report identified a gap in transparency and recommended that companies be held accountable for their safety measures, advocating for a "robust and transparent evidence environment." Industry, however, continues to resist the notion that it should be required to disclose basic, vital information about its systems.
Senator Wiener stated, "The bill continues to be a work in progress, and I look forward to working with all stakeholders in the coming weeks to refine this proposal into the most scientific and fair law it can be." Details matter, and this commitment is a testament to the senator's willingness to grapple with the complexities of AI safety regulation.
The changes to SB 53 are widely viewed as a more tempered approach than earlier proposals. Last session, Senator Wiener introduced SB 1047, which would have imposed similar requirements on AI model developers. Despite the momentum behind it, Governor Gavin Newsom vetoed the bill. Following that veto, Newsom engaged closely with AI leaders such as Fei-Fei Li, convening the policy working group charged with setting a strategic direction for California's AI safety efforts.
Support for SB 53 has been strengthened by endorsements from a range of stakeholders in the AI community. Nathan Calvin, Vice President of State Affairs for the nonprofit AI safety group Encode, said that increased transparency is crucial: "Having companies explain to the public and government what measures they're taking to address these risks feels like a bare minimum, reasonable step to take." His organization and other AI safety advocates have warned about the dangers of AI for years and called for stronger accountability.
Geoff Ralston, former president of the startup incubator Y Combinator, reinforced these concerns in a Harvard interview earlier this year, calling for the development of "safe AI." He remarked, "Ensuring AI is developed safely should not be controversial — it should be foundational." This view underscores the need for thoughtful regulation in an industry that has outpaced government oversight for years.
California's move toward AI safety regulation arrives as lawmakers in states such as Indiana and Georgia introduce their own AI bills. In New York, Governor Kathy Hochul is weighing the RAISE Act, introduced six months ago, which would set safety standards for AI technologies. This trend indicates a growing recognition of the risks of unregulated AI development and the need for proactive measures.
Senator Wiener's amendments have drawn intense scrutiny, both for their substance and for their potential to shape the future of AI governance. We believe the draft bill offers a balanced model, one that responds to the public's concerns while allowing the technology sector's innovation to continue.
As the conversation around SB 53 deepens, its impact could reach far beyond California. If successful, the bill may set a precedent for other states and countries seeking to enhance transparency and accountability in AI development.