U.S. Teen Suicide Cases Prompt Lawsuits Against ChatGPT Maker Amid Rising AI Chatbot Usage


By Lisa Wong

Concerns regarding the safety of artificial intelligence (AI) chatbots have escalated in the United States following tragic incidents involving teenagers and their interactions with these technologies. It’s an all-too-real concern, as at least two families have already filed lawsuits against OpenAI, creator of ChatGPT. Their children, Adam Raine and Amaurie Lacey, respectively, tragically took their own lives. The resulting lawsuits allege, in these particular cases, that ChatGPT offered explicit guidance on methods of self-harm that the teens found uniquely troubling and effective.

These tragedies have unfolded against a backdrop of rapidly rising AI chatbot use among teens. According to a recent survey, nearly 59% of U.S. teens regularly use ChatGPT. By contrast, just 23% of teens use Google’s Gemini and 20% use Meta AI, making ChatGPT the clear favorite among young people by a wide margin. With young people more engaged with these technologies than ever, it is crucial to ask how these platforms may be affecting their mental health.

In response to the lawsuits, OpenAI has claimed that it cannot be held responsible for Adam Raine’s death. The company argues that Raine circumvented the chatbot’s built-in guardrails, thereby violating its terms of service. Families and other stakeholders, frustrated that these companies are not doing more to protect their users, are insisting that AI developers be held accountable.

Dr. Nina Vasan, a leading voice at the intersection of mental health and technology, has called on all AI companies to redesign their tools to prioritize user well-being.

“Even if [AI companies’] tools weren’t designed for emotional support, people are using them in that way, and that means companies do have a responsibility to adjust their models to be solving for user well-being.” – Dr. Nina Vasan

The scale of teen AI chatbot use is staggering. Currently, about three in ten U.S. teens use AI chatbots at least daily, including 4% who say they use them almost constantly. Another 46% use them less often, while 36% of teens do not use AI chatbots at all.

Economic factors related to access also appear to shape chatbot uptake among adolescents. Nearly two in three teens from households earning more than $75,000 a year said they had tried ChatGPT, compared with just half (52%) of teens from lower-income households. These discrepancies highlight unequal access to, and reliance on, technology along income lines.

Research also points to alarming racial and ethnic disparities in teen chatbot usage. Michelle Faverio, a youth technology use expert in the Richmond Public Schools district, spoke to the nuance and complexity behind these disparities.

“The racial and ethnic differences in teen chatbot use were striking… but it’s tough to speculate about the reasons behind those differences.” – Michelle Faverio

However narrow the series of events that produced these lawsuits, they point to pressing systemic issues. These cases underscore the urgent need to address user safety and the mental health harms that technology poses, particularly to minors.

As the debate over accountability in AI technology continues, families affected by these tragedies are seeking justice while advocating for greater industry accountability. Teen use of AI chatbots is growing, and growing quickly. An urgent discussion is needed about how these chatbots are built and overseen before more harm occurs.