Anthropic sues Pentagon over 'supply chain risk' designation, citing free speech and due process violations — company refused to allow its AI to be used for autonomous attacks, mass surveillance
The maker of Claude says the Department of War's supply chain risk designation was retaliation for its AI safety policies.
Anthropic has filed two federal lawsuits against the Pentagon and other U.S. federal agencies, seeking to overturn the Department of War's decision to designate the AI company a "supply chain risk," a label that blocks Pentagon suppliers and contractors from using its Claude models, and that national security experts say has historically been reserved for foreign adversaries.
The lawsuits, the first filed in the U.S. District Court for the Northern District of California and the second in the U.S. Court of Appeals for the D.C. Circuit, allege the Trump administration violated Anthropic's First Amendment and due process rights, according to Reuters. Anthropic is asking the courts to vacate the designation, block its enforcement, and require federal agencies to withdraw directives to drop the company's tools. The company said the actions could jeopardize "hundreds of millions of dollars" in revenue in the near term.
The dispute traces back to a contract renegotiation between Anthropic and the Department of War that collapsed in late February. The Pentagon wanted unrestricted access to Claude for "any lawful use," while Anthropic refused to remove two guardrails: a prohibition on using its models for fully autonomous weapons without human oversight, and a prohibition on mass domestic surveillance of U.S. citizens.
Defense Secretary Pete Hegseth formally issued the supply chain risk designation on February 27; Anthropic was officially notified on March 3. President Trump separately directed all federal agencies to stop using Anthropic's technology via a Truth Social post that same day, with a six-month phase-out period.
The Pentagon argued that private companies cannot dictate how the government uses technology in national security scenarios, and that Anthropic's restrictions could endanger American lives. Anthropic countered that current AI models are not reliable enough for fully autonomous weapons deployment, and that domestic surveillance at scale would violate fundamental rights.
The fallout has had immediate competitive consequences, with OpenAI's Sam Altman controversially striking a new Pentagon deal within hours of Anthropic's designation. Altman publicly stated that the Department of War shares OpenAI's principles on human oversight of weapons and opposition to mass surveillance. xAI, Elon Musk's AI company, is also understood to have since been cleared for use on classified Pentagon systems.
Anthropic was previously the first AI lab permitted to operate on the Department of War's classified networks, and signed a contract worth up to $200 million with the department in July 2025. The Wall Street Journal has reported that Claude had been used in military operations, including intelligence assessments and target identification in the U.S.'s ongoing conflict with Iran, even after the Pentagon moved to drop the model.
"The Constitution does not allow the government to wield its enormous power to punish a company for its protected speech," Anthropic said in its filing with the U.S. District Court.
