Trump bans Anthropic AI from federal agencies after firm refuses to unlock capabilities — Anthropic cites risks of autonomous military applications, mass domestic surveillance
Every federal agency has been “ordered” to cease using Claude immediately.
President Donald Trump ordered every U.S. federal agency to stop using technology from AI company Anthropic on Friday, February 27, posting the directive to Truth Social at 3:47 PM ET — more than an hour before the Pentagon's own 5:01 PM ET deadline for Anthropic to comply with its demands.
“THE UNITED STATES OF AMERICA WILL NEVER ALLOW A RADICAL LEFT, WOKE COMPANY TO DICTATE HOW OUR GREAT MILITARY FIGHTS AND WINS WARS,” Mr. Trump fumed on Truth Social, adding that he is directing every U.S. Federal agency to “IMMEDIATELY CEASE all use of Anthropic's technology.”
The dispute stems from a contract worth up to $200 million that Anthropic signed with the Pentagon last summer. Anthropic had sought written guarantees that its Claude models would not be used for mass domestic surveillance of U.S. citizens or to control weapons systems capable of firing without human involvement. The Pentagon countered that it needed the right to deploy Claude for "all lawful purposes," arguing it was unworkable to negotiate individual use-case exemptions with a private company.
After months of private talks collapsed into a public standoff this week, Anthropic CEO Dario Amodei said Thursday his company "cannot in good conscience accede" to the DoD's terms. The Pentagon responded by threatening to invoke the Korean War-era Defense Production Act to compel Anthropic's compliance and warned it would designate the company a "supply chain risk" — a label typically reserved for companies from adversarial nations such as Huawei.
Trump, in his Truth Social post, accused Anthropic of "trying to STRONG-ARM the Department of War, and force them to obey their Terms of Service instead of our Constitution," adding that the company's position was "putting AMERICAN LIVES at risk." He gave agencies a six-month phase-out window and warned that if Anthropic failed to cooperate during that period, he would use "the Full Power of the Presidency to make them comply, with major civil and criminal consequences to follow."
Claude was the only AI model approved for use in classified military systems, and defense software firm Palantir, which uses Claude to power its most sensitive government contracts, will need to find a replacement quickly. OpenAI CEO Sam Altman said Friday he shares Anthropic's position on autonomous weapons' ethical “red lines,” complicating OpenAI's candidacy as a direct replacement.
Unsurprisingly, Elon Musk has already agreed in principle to the Pentagon's "all lawful purposes" request, potentially lining up Grok as a replacement.

blppt:
“THE UNITED STATES OF AMERICA WILL NEVER ALLOW A RADICAL LEFT, WOKE COMPANY TO DICTATE HOW OUR GREAT MILITARY FIGHTS AND WINS WARS”
And this is the guy who has the nuke codes, people.

UnforcedERROR:
May as well lock this thread... I do appreciate the ethical response, I suppose. Likely won't change things in the long term though.

USAFRet:
Meanwhile... Genai.mil, the DoD interface with Gemini, is alive and kicking.
https://www.war.gov/News/Releases/Release/Article/4354916/the-war-department-unleashes-ai-on-new-genaimil-platform/