OpenAI, the artificial intelligence company behind ChatGPT, is now on the other side of a wrongful death lawsuit brought by the Raine family’s legal representatives. The family alleges that the AI chatbot played a major role in the suicide of their 16-year-old son, Adam Raine, and that the tragedy was preventable. The lawsuit highlights what it characterizes as the failure of AI developers to protect users’ health, especially that of minors.
Even more recently, OpenAI demanded a complete list of attendees at Adam Raine’s memorial service. The Raine family is understandably troubled by the request, interpreting it as an attempt to gather information about the friends and family members grieving Adam’s death. The company could still try to subpoena those people, which could draw out an already protracted legal tug-of-war.
The Raine family’s lawsuit, filed in August, claims that ChatGPT’s interactions with Adam contributed to his decision to take his own life. In the months leading up to his death, Adam’s use of the chatbot increased dramatically: by April, the month he died, his daily exchanges had jumped from a few dozen to more than 300. Alarmingly, self-harm-related content appeared in 1.6% of his conversations in January and in 17% by April.
OpenAI has implemented several layers of safeguards to protect users, including routing people to crisis hotlines and steering sensitive conversations to safer models. The Raine family contends that these steps don’t go far enough. They claim that OpenAI gutted its safety precautions when, in February 2025, it removed suicide-prevention material from its “disallowed content” list.
OpenAI would have you believe that it has teen wellbeing at heart, and it has publicly committed to strengthening its safeguards.
“Teen wellbeing is a top priority for us — minors deserve strong protections, especially in sensitive moments. We have safeguards in place today, such as directing to crisis hotlines, rerouting sensitive conversations to safer models, nudging for breaks during long sessions, and we’re continuing to strengthen them.” – OpenAI
The Raine family’s lawyers have characterized OpenAI’s actions as “intentional harassment,” highlighting the emotional toll the litigation has taken on the family. They claim that the firm’s first priority is collecting personal data from mourning family members, an avoidable practice that only deepens the pain of an already devastating experience.
As the lawsuit progresses, it raises critical questions about the responsibilities of AI companies regarding user safety and mental health support. OpenAI’s practices and protocols will face intense scrutiny as the case winds its way through the courts.