The Pentagon has given Anthropic until 5:01 p.m. Friday to surrender its AI safety guardrails or face consequences normally reserved for adversarial foreign governments. Defense Secretary Pete Hegseth delivered the ultimatum to Anthropic CEO Dario Amodei in a Tuesday meeting stacked with the Pentagon’s most senior officials, and the message was blunt: let the military use Claude for whatever it wants, or we will make you a pariah.

This is not a negotiation anymore. It is a standoff between a $380 billion AI company that insists some things should not be automated and a Pentagon that refuses to let a private company tell it what it can and cannot do with the tools it has already paid for.
What The Pentagon Actually Wants
The dispute boils down to two words: “all lawful.” The Pentagon holds a $200 million contract with Anthropic and wants unrestricted access to Claude for any purpose the military deems legal. Anthropic has drawn two hard lines: no AI-controlled weapons that can kill without human oversight, and no mass domestic surveillance of American citizens.
For the Pentagon, that is unacceptable. Officials argue that letting a vendor dictate operational boundaries creates a dangerous precedent, especially in time-sensitive scenarios like responding to an intercontinental ballistic missile launch. “Any company-imposed restrictions could create a dynamic where we start using them and get used to how those models work, and when it comes that we need to use it in an urgent situation, we’re prevented from using it,” Emil Michael, the undersecretary of defense for research, said at an event in February.
For Anthropic, the concern is not hypothetical. Amodei has written publicly about his fear that a small number of people could operate drone armies without needing human cooperation to carry out their orders. The company also argues that Claude is simply not reliable enough to make lethal targeting decisions without human judgment, citing the persistent problem of AI hallucinations that could cause unintended escalation or mission failure.
The Nuclear Options On The Table
Hegseth laid out two potential punishments, and neither is subtle. The first is invoking the Defense Production Act to compel Anthropic to provide its technology to the military without any safeguards. The DPA gives the president authority to force private companies to prioritize specific contracts in the name of national defense. Using it against a domestic tech company in this way would be unprecedented.
The second option is designating Anthropic a “supply chain risk.” This is the designation typically used against companies from adversarial nations, most notably Chinese tech giant Huawei. Applying it to one of America’s most valuable AI companies would force every defense contractor in the country to certify that Claude is not part of their military workflows. The Pentagon has already started this process, reaching out to Boeing and Lockheed Martin on Wednesday to assess their exposure to Anthropic’s technology.
Legal experts are already pointing out the obvious contradiction. Katie Sweeten, a former liaison between the Justice Department and Pentagon who is now a partner at the law firm Scale, put it plainly: “I would assume we don’t want to utilize the technology that is the supply chain risk, right? So I don’t know how you square that. What it sounds like is that the supply chain risk may not be a legitimate claim, but more punitive because they’re not acquiescing.”
The Maduro Raid That Lit The Fuse
The current crisis traces back to January, when the U.S. military used Claude during the operation to capture former Venezuelan President Nicolás Maduro. The AI was deployed through Anthropic’s partnership with Palantir, which supplies the platform that integrates Claude into classified systems. After the operation, an Anthropic employee raised concerns with Palantir about how the technology was used. Palantir then flagged the issue to the Pentagon, suggesting Anthropic might object to similar future missions.
Hegseth was reportedly furious. Days later, he issued a memo directing AI companies to remove restrictions on their technology. Amodei has denied that Anthropic raised formal concerns about the Maduro operation, calling the interactions standard operational conversations. But the damage was done.
Claude Is The Only Game In Town, For Now
Here is the Pentagon’s problem: Claude is currently the only AI model authorized for the military’s most classified systems. No other model from OpenAI, Google, or xAI has cleared that bar yet. A senior defense official acknowledged this reality to Axios with striking candor: “The only reason we’re still talking to these people is we need them and we need them now. The problem for these guys is they are that good.”
But that exclusivity is fading. Elon Musk’s xAI has reportedly reached a deal to deploy Grok in classified settings, and other companies are close behind. OpenAI and Google have apparently not imposed restrictions on how the military uses their models, making them far more cooperative partners from the Pentagon’s perspective. The longer Anthropic holds out, the less leverage it has.
Anthropic Blinks On Safety, Just Not Where It Matters To Hegseth
In a piece of timing that strains the definition of “coincidence,” Anthropic overhauled its Responsible Scaling Policy on the same day Hegseth delivered his ultimatum. The company dropped its flagship safety pledge, a commitment since 2023 that it would never train AI models whose capabilities outstripped its ability to control them. The new policy replaces hard tripwires with flexible, nonbinding goals and public transparency reports.
Anthropic insists the RSP change is unrelated to the Pentagon dispute, and that internal discussions on the revision lasted nearly a year. Chief Science Officer Jared Kaplan told TIME that unilateral safety commitments no longer made sense while competitors raced ahead without similar constraints. “We felt that it wouldn’t actually help anyone for us to stop training AI models,” Kaplan said.
The company is willing to bend on how it develops AI. It is not, so far, willing to bend on letting the military deploy it for mass surveillance of American citizens or in autonomous weapons that kill without human oversight.
The Bigger Question Nobody Is Answering
Strip away the political theater and the “woke AI” rhetoric that Hegseth and AI czar David Sacks have weaponized against Anthropic, and the fundamental question here is straightforward: who gets to decide how AI is used in warfare and surveillance?
There are no federal AI laws governing military applications. There is no regulatory framework for AI in mass surveillance. There are no binding international treaties on autonomous weapons. Anthropic is essentially arguing that in the absence of any legal guardrails, somebody should be drawing lines. The Pentagon is arguing that “somebody” should never be a private company.
Both sides have a point. And neither has a good answer for what happens when the most powerful AI systems ever built get handed to institutions with no legal framework governing their use.
The Friday deadline is tomorrow. Anthropic has said it will not budge. The Pentagon has said it will act. One of them is bluffing. If neither is, the consequences will reshape the relationship between Silicon Valley and the national security state for a generation.
