Employees Rally Behind Anthropic in Legal Battle Against Defense Department

By Lisa Wong

Anthropic, an AI company founded by former OpenAI researchers, is currently engaged in its own courtroom battle with the U.S. Department of Defense (DOD). The dispute is particularly alarming because the DOD has labeled Anthropic a supply chain risk, a designation usually reserved for foreign adversaries. To say the least, industry professionals are troubled by this classification. They fear what it means for the company’s internal processes and for the broader AI ecosystem.

The lawsuit stems from Anthropic’s adamant refusal to let the DOD use its technology for mass surveillance of American citizens or for operating weapons autonomously. In an unusually strongly worded response, Anthropic made clear that such applications violate the company’s stated ethical principles. The firm has spelled out its “red lines” and calls for effective guardrails that make it impossible to misuse its technology.

More than 30 OpenAI and Google DeepMind employees have signed a letter of solidarity, filed as an amicus brief in support of Anthropic’s case against the DOD. The filing highlights a growing exasperation among Silicon Valley’s elites with the federal government’s aggressive actions.

The employees challenged the DOD’s designation, arguing that it constitutes an arbitrary and capricious abuse of discretion. They fear that pursuing it further would deeply jeopardize the United States’ industrial and scientific edge in AI, a concern that extends well beyond the AI field itself.

“The government’s designation of Anthropic as a supply chain risk was an improper and arbitrary use of power that has serious ramifications for our industry.” – the employees’ amicus brief, signed by Google DeepMind chief scientist Jeff Dean and others.

Taken together, the statements amount to a powerful public message from the employees. Specifically, they are calling on their own companies’ leadership to abandon the wasteful exercise of competing directly with Anthropic on this front, and to mount concerted opposition to the unilateral deployment of AI systems in armed conflict. The letter reflects their insistence that ethical standards lead the way in responsible technological progress.

The debate around Anthropic’s lawsuit against the government underscores a pivotal moment in the relationship between the government and tech companies. As AI continues to develop, the ethics of its use sit at the center of the industry’s conversations.