Employees Rally Behind Anthropic in Defense Department Lawsuit


By Lisa Wong


Workers from other big AI companies, including OpenAI and Google DeepMind, have come together to support Anthropic. This darling of the artificial intelligence industry is now in a lawsuit brought by the U.S. Defense Department (DOD) itself. The DOD in January reportedly designated Anthropic as a supply chain risk. This broad designation is traditionally directed against foreign adversaries, which has ignited the controversy.

Anthropic’s refusal to permit the DOD to use its technology for mass surveillance of American citizens or for autonomously firing weapons has drawn significant attention. The company maintains that such deployments of AI in warfare raise troubling ethical questions. In response to the designation, Anthropic is now suing the government.

In a show of solidarity, more than 30 employees from OpenAI and Google DeepMind have signed an open statement supporting Anthropic’s lawsuit. These staffers have been very vocal in pressuring the DOD to rescind its designation. They further urged their companies to reject any one-sided deployment of their AI systems in warfare.

“The government’s designation of Anthropic as a supply chain risk was an improper and arbitrary use of power that has serious ramifications for our industry,” stated Jeff Dean, chief scientist at Google DeepMind, alongside other supporters. Industry professionals have voiced growing alarm, warning that such government actions could have a chilling effect on AI innovation and on U.S. global competitiveness in the field.

The Pentagon’s designation has alarmed much of the tech community. Critics argue that if the DOD continues down this path, it could hinder the United States’ industrial and scientific competitiveness in artificial intelligence. A group of employees emphasized that “if allowed to proceed, this effort to punish one of the leading U.S. AI companies will undoubtedly have consequences for the United States’ industrial and scientific competitiveness in the field of artificial intelligence and beyond.”

Anthropic has established specific red lines for the responsible use of its technology, and the public overwhelmingly views these boundaries as justifiable concerns that should be backed by strong regulatory safeguards. The company’s commitment to developing AI responsibly stands in stark opposition to the military applications the DOD seeks to develop.

The dispute has escalated to the point that employees of rival companies have publicly rebuked the DOD’s behavior. They argue that the government has other options: if it is serious about penalizing Anthropic, it could terminate its existing contracts with the company and seek alternative AI providers.

Wired was the first to report on this developing story, calling attention to how the DOD’s maneuvers might affect both Anthropic specifically and the tech ecosystem broadly.