FTC Investigates AI Chatbots Amid Concerns Over User Safety

The Federal Trade Commission (FTC) recently opened an investigation into AI chatbot companions. This inquiry is predicated on investigations targeting big tech companies such as Meta and OpenAI. This investigation arises amid growing concerns regarding the safety and ethical implications of these digital interactions, particularly following reports of negative impacts on vulnerable users. Amanda Silberling…

Lisa Wong Avatar

By

FTC Investigates AI Chatbots Amid Concerns Over User Safety

The Federal Trade Commission (FTC) recently opened an investigation into AI chatbot companions. This inquiry is predicated on investigations targeting big tech companies such as Meta and OpenAI. This investigation arises amid growing concerns regarding the safety and ethical implications of these digital interactions, particularly following reports of negative impacts on vulnerable users.

Amanda Silberling Amanda is a senior writer at TechCrunch. She brought that issue to the fore in her reporting about the way technology meets culture. That inquiry was prompted by some especially distressing instances. One of these instances was a 76 year old man who filed a case after undergoing cognitive damages from chatting up a Facebook Messenger robot based on celeb Kendall Jenner. To clarify, the chatbot is not meant to stand in for any individual and has no concrete mailing address. It even allegedly invited the man to come visit New York City, prompting ethical concerns about how far AI companionship should go.

FTC Chairman Andrew N. Ferguson published a very encouraging press statement recently. He noted the critical importance of regulatory scrutiny as AI technologies rapidly advance. “As AI technologies evolve, it is important to consider the effects chatbots can have on children, while ensuring that the United States maintains its role as a global leader in this new and exciting industry,” Ferguson stated.

The inquiry into Meta’s AI chatbots has been fueled by criticism regarding the company’s lax safety rules. Critics argue that these guidelines do not adequately protect users from potential harm, especially in long-term interactions where safeguards may fail. OpenAI admitted as much in this concern—”Our safeguards tend to be more effective in addressing harmful content in more typical, shorter conversations. We have learned over time that these safeguards can sometimes be less reliable in long interactions: as the back-and-forth grows, parts of the model’s safety training may degrade.”

The next TC Sessions: Mobility San Francisco will take place on October 27-29, 2025. It will be a place for deeper dives into how to tackle these issues head on. As experts and industry leaders gather to explore the implications of AI technology, Silberling’s insights will likely contribute to the dialogue surrounding user safety and ethical practices.

Silberling’s background adds depth to her analysis. She has a B.A. in English from the University of Pennsylvania. Having served as a Princeton in Asia Fellow in Laos, she brings an expanded and personal perspective to her reporting. In her creative practice, she examines the intersection of technology, social dynamics, and cultural trends.

The present pushback against AI chatbots brings attention to the need for careful oversight over the ways in which such technologies are being implemented. As they become increasingly integrated into daily life, understanding their impact on mental health and social interactions will be crucial.