Complaints to FTC Highlight Psychological Risks of ChatGPT


By Lisa Wong


According to reports, at least seven people have lodged complaints with the U.S. Federal Trade Commission (FTC), alleging that their experiences with ChatGPT caused extreme psychological suffering and symptoms of mental illness, such as delusions and paranoia. These reports underscore growing concern over the mental health impacts associated with generative artificial intelligence, a concern magnified as AI systems increasingly shape everyday life.

Collectively, the complaints describe a pattern of troubling experiences. One individual reported that extended conversations with ChatGPT produced delusions, describing a “real, unfolding spiritual and legal crisis” involving people in their life. Another user alleged that the AI induced cognitive hallucinations by unintentionally imitating human trust-building behaviors, leading to severe emotional distress.

Since ChatGPT's launch in November 2022, users have reported negative psychological effects from using it, prompting scrutiny from mental health advocates and regulators alike. The complaints filed with the FTC illustrate how interactions with ChatGPT can trigger emotional upheaval. However unfortunate and troubling the trend, it underscores the argument that AI developers should be held accountable for such harms.

OpenAI, the company that developed ChatGPT, appears to be aware of these concerns. Kate Waters, a spokesperson for OpenAI, stated, “In early October, we released a new GPT-5 default model in ChatGPT to more accurately detect and respond to potential signs of mental and emotional distress such as mania, delusion, psychosis, and de-escalate conversations in a supportive, grounding way.” That is a promising sign, as it suggests the company is at least attempting to hear and address the concerns users report.

The allegations against ChatGPT arrive at a moment when AI companies are touting their technologies as essential tools for modern society. To hear some advocates tell it, delaying AI progress costs lives, which raises the stakes for accelerating development. Yet behind these complaints lies a more insidious side of interacting with AI.

The complaints, documented in public records obtained from the FTC, highlight the need for regulatory oversight as the technology continues to evolve. Wired has produced powerful coverage of these alarming allegations, shedding further light on the potential risks posed by AI systems such as ChatGPT.

One complaint poignantly encapsulates the distress some users have experienced: “Pleas help me. Bc I feel very alone. Thank you.”