Anthropic, a for-profit artificial intelligence startup, is challenging the Department of Defense’s recent decision to designate the company as a supply chain risk. The decision, made official six days ago, has sparked controversy because it could affect Anthropic’s ongoing support of U.S. operations, particularly in Iran. The company has shown it is willing to wade into the legal thicket surrounding the DOD’s procurement choices, even though such determinations give the Pentagon substantial leeway in decisions affecting national security.
Dario Amodei, CEO of Anthropic, has gone on record denouncing the DOD’s assessment as “legally unsound.” He argues that the memo designating Anthropic as a supply chain risk is outdated and misrepresents the company’s current status and capabilities.

Policy with teeth

Anthropic remains firmly committed to providing its AI models to the Defense Department, and it plans to continue doing so at a “nominal cost” indefinitely.
The DOD made its decision even as Anthropic is reportedly knee-deep in projects supporting U.S. military operations in Iran. That deepening engagement complicates the potential impact of the supply chain risk designation. Amodei has stated, “With respect to our customers, it plainly applies only to the use of Claude by customers as a direct part of contracts with the Department of War, not all use of Claude by customers who have such contracts.”
In a curious parallel, OpenAI has just agreed to a contract with the Defense Department to take over portions of Anthropic’s work. OpenAI employees are reportedly pushing back against the partnership, apprehensive about the reputational implications of working with the military. Amodei, for his part, described OpenAI’s engagement with the DOD as “safety theater” and hinted that such pacts could prove inadequate for meeting new national security priorities.
The legal landscape surrounding Anthropic’s challenge to the designation is murky at best. Federal contracting law further restricts the ways companies can challenge these decisions. Dean Ball, a legal expert, noted, “Courts are pretty reluctant to second-guess the government on what is and is not a national security issue…There’s a very high bar that one needs to clear in order to do that. But it’s not impossible.”
Despite these hurdles, Anthropic intends to file its legal challenge in federal court, probably in Washington. The move underscores the company’s commitment to ensuring that American soldiers and national security experts continue to have access to essential tools and technologies.
Most recently, a leaked Anthropic memo forced Amodei to apologize for the confusion it caused. He noted that the company neither released the memo intentionally nor instructed any third-party organizations to release it.
Like all AI companies, Anthropic is navigating a stormy moment. Its leadership is continually looking for ways to better serve its clients while working around the barriers that government designations and restrictions often create. The company maintains that its greatest concern is ensuring that American personnel retain access to important resources in the AI space.


