Employees Rally Behind Anthropic in Lawsuit Against U.S. Defense Department


By Lisa Wong

Anthropic, one of the most prominent artificial intelligence companies, is in hot water over a new lawsuit involving the U.S. Defense Department (DOD). The Pentagon has designated the company a supply chain national security risk, a label usually reserved for foreign adversaries, and the move has alarmed many in the tech industry. The tech giant has repeatedly and publicly rejected the DOD's solicitations to deploy its technology for mass surveillance programs targeting American citizens, and it prohibits the autonomous firing of weapons. The company treats both positions as firm, non-negotiable red lines.

More than 30 employees at OpenAI and Google DeepMind have come together in solidarity with Anthropic, filing an amicus brief to join the company's legal defense against the designation. Anthropic's lawyers claim that the DOD's classification of the company as a supply chain risk represents an arbitrary use of power. Beyond this specific action, they argue, the precedent would chill candid conversations about research into the potential dangers and rewards of advanced AI technology. Employees have also signed open letters urging the DOD to withdraw the designation, and to further amplify the movement, workers at Amazon, Google, and Microsoft have called on company leadership to divest from Anthropic.

The court filing captures the employees' collective position. It makes a few critical points, starting with the claim that Anthropic's warnings about the harmful potential of its own technology are credible and necessitate robust safety precautions. The signatories argue that if the DOD's action is allowed to stand, it risks damaging U.S. industries, and in particular the country's competitiveness in the generative AI space.

“The government’s designation of Anthropic as a supply chain risk was an improper and arbitrary use of power that has serious ramifications for our industry.” – Jeff Dean, chief scientist at Google DeepMind

The controversy surrounding Anthropic carries more than legal implications; it also puts broader AI innovation at risk. The employees' statements indicate that a unilateral approach to deploying AI technology, particularly in defense contexts, could undermine innovation and collaboration across the sector.


“If allowed to proceed, this effort to punish one of the leading U.S. AI companies will undoubtedly have consequences for the United States’ industrial and scientific competitiveness in the field of artificial intelligence and beyond.” – Court filing by OpenAI and Google DeepMind employees

The saga is still developing, and stakeholders are keeping a close eye on the outcome of the legal challenge. Anthropic is pushing back against what it believes is an unfair classification. At the same time, industry leaders must do their part to ensure that ethical considerations remain at the forefront of AI development.