OpenAI strikes deal with Pentagon following Claude blacklisting — Anthropic to challenge supply chain risk designation in court
It’s understood that the DoD has agreed to OpenAI’s “red lines” on mass surveillance and autonomous weapons.
OpenAI CEO Sam Altman announced late Friday night that the company had reached an agreement with the U.S. Department of Defense (“rebranded” as the Department of War under the current administration) to deploy its AI models on the Pentagon's classified network. The deal includes the same two safety conditions Anthropic was effectively blacklisted for insisting on: no domestic mass surveillance, and human oversight of decisions involving lethal force and autonomous weapons.
"Two of our most important safety principles are prohibitions on domestic mass surveillance and human responsibility for the use of force, including for autonomous weapon systems," Altman wrote in a post on X. "The DoW agrees with these principles, reflects them in law and policy, and we put them into our agreement."
Altman’s announcement came not long after President Trump “ordered” every federal agency to immediately stop using Anthropic's technology, following weeks of tense negotiations between Anthropic and Pentagon officials that ultimately collapsed. The DoD had labeled Anthropic a supply chain risk and demanded that it drop restrictions on its Claude model, requiring the model to be available for "all lawful purposes." Anthropic refused. Hours later, the Pentagon accepted functionally identical conditions from OpenAI.
"Tonight, we reached an agreement with the Department of War to deploy our models in their classified network. In all of our interactions, the DoW displayed a deep respect for safety and a desire to partner to achieve the best possible outcome. AI safety and wide distribution of…" — February 28, 2026
It’s understood that no formal contract between OpenAI and the Pentagon has been signed yet, and that the agreement also limits OpenAI's deployment to cloud environments, not edge systems such as aircraft or drones.
Anthropic argued that the law hasn't kept pace with what AI can do, particularly in aggregating publicly available data for surveillance purposes. Altman appeared to agree, stating in an internal memo to OpenAI staff that the company shares Anthropic's "red lines" and wanted to help "de-escalate" the situation.
By Friday afternoon, however, Altman was telling employees at a company all-hands meeting that the deal was taking shape. Separately, around 70 OpenAI employees have signed an open letter titled "We Will Not Be Divided" expressing solidarity with Anthropic.
Anthropic was the first AI lab to deploy its models on the Pentagon's classified networks, through a partnership with Palantir. OpenAI had previously held a $200 million DoD contract for non-classified use cases. Anthropic said Friday it will challenge the supply chain risk designation in court, stating that "no amount of intimidation or punishment from the Department of War will change our position."
