Claude won't be allowed to engage in mass surveillance or power fully autonomous weapons — Anthropic refuses to lower AI guardrails for the Pentagon
CEO Dario Amodei takes a strong stance against the Pentagon.
Anthropic has issued a statement refusing to lower guardrails for Claude at the request of the Department of Defense. The Pentagon gave Anthropic until Friday to comply or have its $200 million contract cancelled, along with more serious repercussions such as being designated a supply-chain risk. The company let the deadline pass, and its CEO, Dario Amodei, has now said that it "cannot in good conscience" accept the DoD's demands.
There are two main points of contention — mass domestic surveillance and fully autonomous weapons — on which Anthropic has taken a concrete stance. It argues that monitoring American citizens at large is inherently undemocratic and undermines individual liberty. The company adds that AI-led surveillance is dangerous and only possible because legal precedent has not yet caught up with the technology.
The CEO gave an example of how the government can purchase data on any citizen without a warrant: emails, browsing history, movements, and more. This information is scattered and often superficial on its own, but AI can stitch it together into a cohesive profile. The practice is already controversial before any artificial intelligence is involved, so Anthropic won't let Claude serve as a tool in that process.
The company goes on to say that frontier AI is not ready to power fully autonomous weapons, since it is incapable of human-like judgment. "We will not knowingly provide a product that puts America’s warfighters and civilians at risk," said Amodei. Partially unmanned weapons are "vital to the defense of the democracy," according to Anthropic, but AI cannot yet be trusted to select and kill targets on its own.
Anthropic offered to carry out R&D to improve the reliability of these systems — to determine when AI can be trusted to engage targets autonomously — but was turned down by the DoD. "[They] need to be deployed with proper guardrails, which don’t exist today," claimed Anthropic, referring to the fact that no AI model can emulate the judgment of an experienced soldier.
Both of these points are labeled as "exceptions" by Anthropic in its otherwise vocal support for working with the Pentagon. Throughout the statement, the outfit reiterates its desire to "continue to serve the Department and our warfighters—with our two requested safeguards in place."
The consequences of refusing to lower guardrails are not limited to the cancellation of the existing contract, however. Defense Secretary Pete Hegseth has threatened to label the company a supply-chain risk, a designation reserved for adversarial outfits that has never before been applied to an American company. Such a label would also raise serious financial concerns for Anthropic.
The Pentagon has also threatened to invoke the Defense Production Act against Anthropic, a threat that Amodei says contradicts the department's own "supply-chain risk" labeling. Under this act, any private company in the U.S. can be forced to prioritize serving the government because its products are deemed too important to national security.
Anthropic seems unfazed by either threat and urges the DoD to reconsider its position. In the likely case the federal contract is pulled, Anthropic will still "work to enable a smooth transition to another provider" to avoid disruptions in military operations. This is the first time an AI outfit has taken such a concrete stance against the current administration, and the reaction has been largely positive in online circles.
Notton: "Defense Secretary Pete Hegseth has threatened to label the company a supply-chain risk, a designation reserved for adversarial outfits that has never been put on an American company before." I believe that's called blackmailing.

bigdragon: Haven't creators written dozens of successful novels, filmed many big-budget movies, and produced countless pieces of short-form content describing where AI with tracking and monitoring capabilities leads? Eagle Eye, Captain America: The Winter Soldier, I, Robot, The Tower (1993), 2001: A Space Odyssey, and more. Sure, they're fictional, but that doesn't stop someone from finding out whether those fictional stories can become non-fiction reality if they're not careful.