Early 2026 marked a turning point in the AI revolution's collision with defense. OpenAI signed a controversial contract with the Department of Defense, a move that triggered huge public blowback, most famously a 295% increase in uninstalls of ChatGPT. Meanwhile, Anthropic's AI product, Claude, quickly rose up the App Store charts, capturing attention amid the unfolding situation.
OpenAI's contract with the Pentagon shocked many in the tech community and among users of OpenAI's tools. Thousands voiced their indignation at OpenAI's collaboration with the defense department and the accompanying changes to its terms of service, viewing it as a gross potential misuse of AI technology. Countless users uninstalled ChatGPT in the wake of the revelation, a clear sign of growing concern over AI's expanding role in defense applications.
Anthropic's Claude quickly emerged as a top competitor, rising to #2 on the App Store as users rushed to find viable alternatives to ChatGPT. The company even publicly lobbied against the Pentagon's potential use of Claude, but those negotiations broke down at the last moment. The failure left Anthropic in a deep bind: it had to wrestle with the broader fallout of the Trump administration designating the company a supply-chain threat.
Anthropic has now chosen to fight the designation in court, signaling its commitment to transparency and the ethical use of AI. Throughout this tumultuous period, both OpenAI and Anthropic publicly advocated for restrictions on how their AI technologies can be used.
The situation grew more complicated when Caitlin Kalinowski, an OpenAI executive, resigned over concerns that the deal was rushed and lacked adequate safeguards. Her exit exposed internal opposition to the firm's strategic direction and its deepening ties with the defense industry.
Kirsten, a prominent figure in the tech industry, commented on what these events mean for startups considering similar paths. As she noted, if this is what is required to launch a new venture, any sane founder would think twice. She worries that the criticism of OpenAI might dissuade other companies from pursuing partnerships in the defense space.
Top tech innovators met at the TechCrunch Disrupt conference in San Francisco from October 13-15, 2026, where they sought to raise awareness of the ethical implications of AI in warfare. The conference served as a platform for industry experts to deliberate on the future of artificial intelligence and its integration into sensitive sectors such as national defense.
Several speakers underscored the need for strong guidelines and frameworks governing ethical AI applications, emphasizing that any collaboration with defense agencies should come with robust safeguards against harmful use.
“You cannot change the terms in this way.” – Sean
It is time for companies to be held accountable when they create technologies capable of drastically reshaping society. As startups navigate these complex waters, they must weigh potential opportunities against ethical considerations and public sentiment.

