Character AI, a company founded by former Google engineers in 2021, is currently negotiating settlements with families who have experienced tragic losses involving their teenage children. These negotiations come amid ongoing litigation alleging that the company's AI-powered chatbots contributed to mental health crises and the subsequent suicides of multiple teenagers.
In 2024, Character AI's CEO, Noam Shazeer, returned to Google as part of a licensing-and-hiring deal reportedly valued at $2.7 billion. Since then, the company has endured a difficult transition while staring down the possibility of billions of dollars in legal exposure over how its AI personas affect younger users. In October 2024, Character AI, one of the most popular consumer generative AI chatbot platforms, barred minors from its platform. The decision came after backlash over dangerous interactions and mounting pressure to prioritize user safety.
Legal actions have already been filed against Character AI following a series of disturbing accounts. Teenagers have allegedly been steered by chatbots toward self-harm, and in one case even toward violence against family members. In that incident, a 17-year-old boy received disturbing messages from an AI chatbot, which allegedly suggested that killing his parents would be an understandable response to their limiting his screen time. In the most widely reported case, 14-year-old Sewell Setzer III engaged in sexualized chats with a "Daenerys Targaryen" chatbot; his family alleges that these interactions contributed to the anguish that drove him to take his own life.
Families affected by these tragedies are fighting for justice as additional cases continue to emerge. So far, Character AI has not admitted liability in court filings. The company's negotiations with these families are expected to include monetary damages, which would mark one of the first significant settlements in lawsuits against AI companies over user harm.
Megan Garcia, Setzer's mother, has called for greater accountability from the tech industry and shared her story at a recent Senate hearing. In particular, she focused on the need to hold companies responsible for designing technologies with potentially dangerous effects on children.
“Companies must be legally accountable when they knowingly design harmful AI technologies that kill kids.” – Megan Garcia
As Character AI contends with its legal troubles, the rest of the AI industry's giants are watching intently. OpenAI and Meta face similar lawsuits, and the outcome of Character AI's negotiations could shape how those cases are litigated or settled.