Privacy Group Files Complaint Against ChatGPT for Defamatory Falsehoods

By Lisa Wong

In a significant development, a privacy complaint has been filed against OpenAI by Noyb, a privacy rights advocacy group, over false and defamatory information generated by ChatGPT. The controversy arose after the AI tool produced incorrect information about Norwegian user Arve Hjalmar Holmen, claiming he had been convicted of child murder and sentenced to 21 years in prison. This misinformation was generated in response to a query about Holmen, alongside accurate details, such as the fact that he has three children. The complaint, submitted to the Norwegian data protection authority, argues that ChatGPT's output breaches the EU's General Data Protection Regulation (GDPR), which requires that personal data be accurate.

Noyb's complaint underlines concerns about ChatGPT's propensity to generate false information, commonly referred to as "hallucinations." These concerns are not isolated to Holmen's case. Other instances cited include false claims that an Australian mayor was involved in corruption and that a German journalist had abused children. Despite ChatGPT's disclaimer acknowledging potential inaccuracies, Noyb argues that OpenAI has not fulfilled its obligation to ensure data accuracy under the GDPR.

“The GDPR is clear. Personal data has to be accurate,” stated Joakim Söderberg, a data protection lawyer at Noyb. “If it’s not, users have the right to have it changed to reflect the truth. Showing ChatGPT users a tiny disclaimer that the chatbot can make mistakes clearly isn’t enough. You can’t just spread false information and in the end add a small disclaimer saying that everything you said may just not be true.”

Noyb has emphasized that OpenAI provides no mechanism for individuals to correct false information that ChatGPT generates about them. This lack of recourse is particularly concerning given the GDPR's right of rectification, which allows individuals to request that inaccurate personal data be corrected. Without such a system, individuals remain vulnerable to reputational harm from these inaccuracies.

“If hallucinations are not stopped, people can easily suffer reputational damage,” Kleanthi Sardeli from Noyb remarked.

The Norwegian data protection authority is currently assessing whether it is competent to investigate Noyb's complaint. Should it decide to proceed, an investigation could lead to penalties for OpenAI. Alternatively, because OpenAI's European operations are based in Ireland, the complaint might be referred to Ireland's Data Protection Commission (DPC), which oversees product decisions affecting European users.

OpenAI has faced previous GDPR-related complaints regarding ChatGPT's generation of incorrect personal data, including inaccuracies related to birth dates and biographical details. These ongoing issues have amplified calls for regulatory scrutiny and compliance with data protection laws.

Noyb's complaint highlights broader implications for AI companies operating within the EU. It underscores the necessity for these entities to adhere to GDPR mandates and address the challenges posed by AI-generated falsehoods.

“AI companies should stop acting as if the GDPR does not apply to them, when it clearly does,” Kleanthi Sardeli urged.

The debate over AI-generated content and data accuracy continues to evolve as regulatory frameworks strive to keep pace with technological advances. The outcome of this complaint could carry significant ramifications for AI developers and their responsibility for keeping personal data accurate.