Claude won't be allowed to engage in mass surveillance or power fully autonomous weapons — Anthropic refuses to lower AI guardrails for the Pentagon

Andrew Ross Sorkin and Dario Amodei (L:R) speak onstage during The New York Times DealBook Summit 2025 at Jazz at Lincoln Center on December 03, 2025 in New York City.
(Image credit: Getty Images)

Anthropic has issued a statement refusing to lower guardrails for Claude at the request of the Department of Defense. The Pentagon gave Anthropic until Friday to comply or lose its $200 million contract, along with more serious repercussions such as being designated a supply-chain risk. The company let the deadline pass, and its CEO, Dario Amodei, has now said that it "cannot in good conscience" accept the DoD's demands.

There are two main points of contention — mass domestic surveillance and fully autonomous weapons — on which Anthropic has taken a concrete stance. It argues that monitoring American citizens at large is inherently undemocratic and undermines individual liberty. The company adds that AI-led surveillance is dangerous and is only possible because legal precedent has not yet caught up.

The company goes on to say that frontier AI is not ready to power fully autonomous weapons, since it is incapable of human-like judgment. "We will not knowingly provide a product that puts America's warfighters and civilians at risk," said Amodei. Partially unmanned weapons are "vital to the defense of the democracy," according to Anthropic, but AI cannot yet be trusted to select and kill targets on its own.

Anthropic offered to carry out R&D to improve the reliability of these systems — working toward a point where AI could be trusted to engage targets autonomously — but was turned down by the DoD. "[They] need to be deployed with proper guardrails, which don't exist today," the company said, noting that no current AI model can match the judgment of an experienced soldier.

Both of these points are labeled "exceptions" by Anthropic amid its otherwise vocal support for working with the Pentagon. Throughout the statement, the company reiterates its desire to "continue to serve the Department and our warfighters—with our two requested safeguards in place."


Hassam Nasir
Contributing Writer
  • Notton
    "Defense Secretary Pete Hegseth has threatened to label the company a supply-chain risk, a designation reserved for adversarial outfits that has never been put on an American company before."

    I believe that's called blackmailing.
    Reply
  • bigdragon
Haven't creators written dozens of successful novels, filmed many big-budget movies, and produced countless pieces of short-form content describing where AI with tracking and monitoring capabilities leads? Eagle Eye, Captain America: The Winter Soldier, I, Robot, The Tower (1993), 2001: A Space Odyssey, and more. Sure, they're fictional, but that doesn't stop someone from finding out whether those fictional stories can become non-fiction reality if they're not careful.
    Reply
  • Blastomonas
    Good on Anthropic.

    An unusually principled stance for an AI company.
    Reply
  • USAFRet
    For now....
    Reply
  • bit_user
    I'm so glad they didn't cave! They just immediately became my favorite AI company.
    Reply