Anthropic Faces Legal Scrutiny Over Erroneous Citation by AI Chatbot


By Lisa Wong

In a recent legal battle with music publishers, Anthropic's lawyer acknowledged using an erroneous citation generated by the company's Claude AI chatbot. Following testimony in federal court in Northern California, presiding Judge Susan van Keulen ordered Anthropic to respond to claims related to the faulty citation.

The erroneous citation appeared in a court filing related to a lawsuit that Universal Music Group and other major music publishers brought against Anthropic. The case is one of many battles currently being fought between copyright owners and technology companies over how those companies use copyrighted content in their generative AI tools. Anthropic is under immense pressure to defend its expert witness, Olivia Chen, on this challenged point: the publishers accuse her of using Claude to cite inauthentic articles in her testimony.

According to Anthropic's filing, Claude produced a citation that included "an inaccurate title and inaccurate authors." The company further clarified that its "manual citation check" failed to catch the error caused by the AI's hallucination. This is no isolated incident. Earlier this year, a similar problem arose when an Australian attorney's legal filings included fake citations generated by an AI chatbot.

In the Australian incident, the attorney used ChatGPT to craft legal filings, and the chatbot's erroneous citations made it into the submitted work. Such incidents are highly problematic, especially given the growing use of AI-generated content in legal contexts.

With Judge van Keulen's order to respond now in place, the clock is ticking for Anthropic. The company is among several defendants being sued by copyright owners who argue that tech companies are illegally using their work to build AI tools.

Bloomberg was first to report Anthropic's admission of the AI-generated miscitation in its expert report. The company characterized the incident as "an honest citation mistake and not a fabrication of authority," attempting to clarify its position amid growing scrutiny.

Anthropic’s legal challenges reflect broader trends within the tech industry as it navigates the complexities of intellectual property rights and the ethical implications of generative AI. As the case proceeds, we’ll be following it here. The court will soon have to address the risks associated with relying on AI-generated materials in litigation.