Anthropic, one of the most prominent artificial intelligence companies, has announced its intent to sue the Department of Defense (DOD) after the DOD designated the company a supply chain risk. The designation follows months of international debate over the future control of AI-powered systems, and critics argue it poses serious risks to national security and to the operational effectiveness of U.S. forces. Anthropic’s CEO, Dario Amodei, has called the designation “legally unsound” and announced plans to fight it in federal court.
Anthropic’s designation came through the DOD’s third “Rapid Assessment of Critical Emerging Technology Supply Chains” letter. Amodei also took aim at the letter’s limited scope, arguing that it applies only to certain contracts with the Department of War. He emphasized that the designation does not prevent the use of Anthropic’s AI models, known as Claude, in broader contexts.
“With respect to our customers, it plainly applies only to the use of Claude by customers as a direct part of contracts with the Department of War, not all use of Claude by customers who have such contracts.” – Dario Amodei
Anthropic will continue to provide its AI models to the DOD at a “nominal cost,” and says it intends to do so for “as long as needed.” The company’s involvement in current U.S. operations in Iran underscores its role in national defense initiatives. Amodei asserted that protecting American troops remains Anthropic’s highest priority, and he also stressed the importance of equipping national security experts with the tools they require.
The DOD recently identified Anthropic as a supply chain risk. The move coincides with the Pentagon’s deal to collaborate with OpenAI, a rival AI company. That partnership has quickly ignited backlash among OpenAI employees, deepening internal discord over the company’s relationship with the DOD.
In response to these developments, Amodei pushed back with a four-page memo. Pointing to OpenAI’s engagement with the Defense Department, he criticized such arrangements as “safety theater.” The memo followed closely on two notable events: a Truth Social post from the President himself and the announcement of the Pentagon’s new agreement with OpenAI.
“It exists to protect the government rather than to punish a supplier; in fact, the law requires the Secretary of War to use the least restrictive means necessary to accomplish the goal of protecting the supply chain.” – Dario Amodei
The federal legal framework behind this designation limits the ways businesses can appeal government procurement decisions, and as a practical matter the Pentagon has enormous leeway on national security issues. This convoluted legal landscape creates an uphill battle for Anthropic as it presses its case.
While Amodei is clearly wary of further inflaming tensions over the issue, he remains committed to defending his company’s position. Contractors who work for the Department of War, for example, may continue to use Claude and do business with Anthropic, provided those uses are not directly connected to specific Department of War contracts.
“Even for Department of War contractors, the supply chain risk designation doesn’t (and can’t) limit uses of Claude or business relationships with Anthropic if those are unrelated to their specific Department of War contracts.” – Dario Amodei
Anthropic’s legal challenge highlights the continuing friction between AI companies and government agencies. The outcome will be hugely consequential for Anthropic, and it will help determine how AI systems are integrated into broader national security architectures.