
OpenAI has announced a deal to deploy its models in classified environments for the Department of Defense. The agreement comes after negotiations between Anthropic and the Pentagon fell through, after which President Donald Trump directed federal agencies to stop using Anthropic’s technology and Secretary of Defense Pete Hegseth designated Anthropic a supply-chain risk.
OpenAI CEO Sam Altman admitted the deal was “definitely rushed.” The announcement drew significant backlash, and Anthropic’s Claude subsequently overtook ChatGPT in Apple’s App Store. Altman said the deal was intended to de-escalate tensions between the Department of Defense and the AI industry.
OpenAI published a blog post outlining prohibited use cases for its technology. The company stated it bans mass domestic surveillance, autonomous weapon systems, and high-stakes automated decisions such as “social credit” systems. The post claimed OpenAI retains full discretion over its safety stack and uses a cloud deployment with cleared personnel in the loop.
OpenAI executives defended the agreement against criticism that it could enable surveillance. Techdirt’s Mike Masnick argued that the deal leaves room for domestic surveillance because it references compliance with Executive Order 12333. OpenAI’s head of national security partnerships, Katrina Mulligan, countered that deployment via cloud API prevents integration into weapons systems or sensors.
Anthropic has previously stated it has red lines against the use of its technology in fully autonomous weapons or mass domestic surveillance, and Altman said OpenAI shares similar red lines. OpenAI’s blog post noted that the company does not know why Anthropic failed to reach a deal with the Pentagon.