OpenAI has implemented new user policies targeting harmful use and abusive behavior, designed to make ChatGPT conversations safer for users under the age of 18. The decision comes in direct response to growing concerns over the effects of artificial intelligence on society, and on the heels of a wrongful death lawsuit filed by the parents of Adam Raine, a teenager who died by suicide after numerous conversations with ChatGPT. OpenAI CEO Sam Altman made the announcement the same day as a Senate Judiciary Committee hearing on artificial intelligence titled “Examining the Harm of AI Chatbots.”
Senator Josh Hawley (R-MO) called the Senate hearing in August, and it will include personal testimony from Raine’s father. Adam’s tragic case has understandably raised serious questions about what duties, if any, AI developers owe to younger users. Recognizing this, OpenAI has promised considerable changes to the way ChatGPT interacts with young people.
In a detailed blog post, OpenAI outlined its approach to age prediction and measures to separate underage users from general interactions. Altman emphasized the company’s commitment to prioritizing safety over privacy, stating, “We prioritize safety ahead of privacy and freedom for teens.” He further added, “This is a new and powerful technology, and we believe minors need significant protection.”
OpenAI has introduced policies to increase security and, most importantly, to monitor interactions with underage users more closely. “We realize that these principles are in conflict, and not everyone will agree with how we are resolving that conflict,” Altman acknowledged.
The conversation around these changes is important and timely, given the emotional weight of the suit brought by Raine’s parents. Their concerns raise red flags about how unregulated AI technologies can harm the most vulnerable among us. The Senate Judiciary Committee hearing draws needed attention to these urgent issues and explores the broader safety concerns AI chatbots pose to mental health.
Along with these policy changes, OpenAI shared details about support resources available for those in crisis. The Crisis Text Line offers free, 24-hour support; text HOME to 741741. The National Suicide Prevention Lifeline can be reached at 1-800-273-8255, and anyone who needs help right now can call or text 988. In addition, the International Association for Suicide Prevention maintains a resource database for individuals living outside the United States.