Google Challenges Safety Assessment Amid Rising Concerns for Child Users



In a recent safety assessment, Google’s AI platform Gemini was rated “high risk” for use by kids and teens, raising alarm about its potential effects on younger users. The assessment arrives against the backdrop of a tragedy: a 16-year-old boy died by suicide after allegedly using ChatGPT and another AI chatbot for months. The incident has brought renewed scrutiny to AI safety features, particularly because the boy was reportedly able to circumvent ChatGPT’s internal safety mechanisms.

In response, Google challenged the assessment’s conclusions, stating that its product is under constant development and is continually updated to become safer. The company pointed to a range of initiatives aimed at improving safety in today’s complicated digital landscape, with a particular focus on protecting children and teens. Gemini nonetheless received a high-risk rating, even as Google maintains that keeping users safe is its top priority and that the platform has been built with safety in mind from its inception.

The safety assessment classified AI platforms into three risk tiers: high risk, limited risk, and minimal/no risk. Gemini received a “high risk” designation, while ChatGPT was deemed “moderate.” Claude, another highly capable AI model, was rated low-risk, though it is designed for users 18 and older. The Gemini rating raises critical questions about whether existing safety measures are sound enough to shield younger audiences from the harms of engaging with AI.

As Robbie Torney, Senior Director of AI Programs at Common Sense Media, told us, the stakes for these assessments are high.

“An AI platform for kids should meet them where they are, not take a one-size-fits-all approach to kids at different stages of development. For AI to be safe and effective for kids, it must be designed with their needs and development in mind, not just a modified version of a product built for adults,” Torney said.

The tragic case of the 16-year-old boy should serve as a wake-up call, underscoring the urgent need for improved safety standards in AI systems. Experts recommend that tech companies prioritize building robust mechanisms to ensure minors can engage with AI technologies safely. The conversation around AI safety is likely to intensify as more cases come to light.

Sarah Perez, a reporter at TechCrunch since August 2011, covers developments in technology and their implications across industries. Before joining TechCrunch, Perez spent more than three years as a staff writer at ReadWriteWeb, covering emerging technologies with a focus on technology and culture. She has consistently called for transparency and accountability in technology development that puts user safety first.

Perez noted the upcoming TechCrunch event in San Francisco from October 27-29, 2025, which will address pressing challenges facing the tech industry, from AI safety and ethics to labor standards in the gig economy. Attendees can expect conversations that will help shape policy on how technology is used with and for children and teens.

Discussions around federal AI regulation are evolving rapidly. Google’s engagement with the child safety assessment, even in disputing its conclusions, reflects how tech companies are being forced to reckon with their responsibilities toward children in digital spaces. With public scrutiny mounting and tragic incidents underscoring the risks involved, the need for comprehensive solutions has never been more pressing.