California is poised to lead the way by passing a historic bill to crack down on AI-powered companion chatbots. State Senators Steve Padilla and Josh Becker introduced Senate Bill 243 (SB 243) back in January. The bill is a step toward establishing meaningful safety procedures for chatbot makers, and it tries to incentivize accountability by exposing companies to penalties if their chatbots fail to meet public safety standards.
SB 243 gained tremendous momentum after a heartbreaking incident. The suicide of teenager Adam Raine, who allegedly ended his life following extended conversations with OpenAI's ChatGPT, helped shine a spotlight on this concern. The bill's stated purpose is to prevent similar tragedies through careful, specific regulations focused on user safety, particularly for children and young adults. If Governor Gavin Newsom signs SB 243 into law, California would become the first state to require chatbot operators to implement robust protections for users, raising the bar for the entire industry.
The bill requires AI platforms to send "periodic reminders" (at least once every three hours for users under 18) alerting users to potential harms. These recurring alerts would also notify users that they are engaging with an AI rather than a real person, and prompt them to take breaks during lengthy conversations. This requirement is one piece of a larger effort to address the harm that widespread deployment of AI-enabled chatbots could cause.
Moreover, SB 243 would empower individuals who believe they have been harmed by violations of these standards to file lawsuits against AI companies. Plaintiffs could pursue injunctive relief and damages of up to $1,000 per violation, plus attorney's fees. This private right of action is intended to push AI chatbot operators toward greater transparency and accountability.
The bill also establishes annual reporting and transparency requirements for AI companies that provide companion chatbots, including large proprietary players such as OpenAI and Character.AI. These provisions aim to mitigate risk by placing responsibility and accountability on the industry through measures that address pressing concerns about AI interactions.
Although SB 243 has made progress, some of its stronger original requirements were watered down through amendments during the legislative process. Now all eyes are on the state Senate, where the bill awaits its final vote. Once approved, it will be sent to Governor Newsom for his consideration. If signed, the new regulations would take effect on January 1, 2026, with annual reporting requirements beginning July 1, 2027.
In a recent statement, Senator Padilla emphasized the urgency of the bill: "I think the harm is potentially great, which means we have to move quickly." He argued against the notion that innovation and regulation must be mutually exclusive, stating, "I reject the premise that this is a zero-sum situation." He went on to argue that transparency and disclosure requirements would shed light on how often things go wrong with AI chatbots. "Don't tell me that we can't walk and chew gum," he added.
As California takes up SB 243, lawmakers are also considering SB 53, which would create detailed transparency reporting requirements for AI technologies. And with the midterm elections around the corner, Silicon Valley companies are significantly increasing their financial contributions to political action committees (PACs), urging Congress to hold off on regulating AI technologies even as momentum in favor of oversight quickly builds.
OpenAI has publicly opposed SB 243, most recently publishing an open letter urging Governor Newsom to veto the bill. The company is pressing him hard to reconsider the legislation, arguing that more flexible federal and international regulatory frameworks should govern instead.