OpenAI just made headlines with a major partnership enabling its artificial intelligence technology to be used in classified settings. CEO Sam Altman announced the landmark decision to incorporate OpenAI’s AI into the country’s national security applications, with the integration occurring only under tight technical safeguards.
AI apps are surging in usage: Claude has claimed the overall free-app top spot in the U.S. App Store, with ChatGPT right behind in the number two slot. The partnership with the Pentagon marks a pivotal move for OpenAI, reflecting a commitment to deploying AI responsibly within critical sectors.
The timing and nature of this agreement have raised red flags among some OpenAI staff. Most strikingly, Caitlin Kalinowski—who formerly spearheaded the creation of AR glasses at Meta—moved to OpenAI in November 2024. Kalinowski was quoted as saying the announcement seemed hasty, lacking the governance frameworks needed to protect against misuse.
“To be clear, my issue is that the announcement was rushed without the guardrails defined. It’s a governance concern first and foremost. These are too important for deals or announcements to be rushed.” – Caitlin Kalinowski
Kalinowski’s resignation comes on the heels of her warnings about what the Pentagon deal could mean. In a follow-up post on the social media platform X, she elaborated on the lack of boundaries surrounding OpenAI’s technology. When technology operates near sensitive areas, she said, clear boundaries become all the more essential. Her exit draws further attention to the internal strife that has seemingly accompanied OpenAI’s expanding role in national security.
In its blog post announcing the agreement, OpenAI made strong claims about its commitment to using AI responsibly. The organization stated, “We believe our agreement with the Pentagon creates a workable path for responsible national security uses of AI while making clear our red lines: no domestic surveillance and no autonomous weapons.”
The agreement marks a noteworthy strategic shift for OpenAI as it steps into increasingly high-stakes territory, pursuing innovation while grappling with the ethical implications. Conversations continue both inside the company and outside it, and industry experts and the general public alike will be watching intently to see what this agreement comes to mean.