May 3, 2026

The Pentagon Chose Its AI Partners. Anthropic Said No.


What happened

The Department of Defense signed classified AI deployment agreements with eight major technology companies on May 1, 2026: SpaceX, OpenAI, Google, Nvidia, Microsoft, Amazon Web Services, Oracle, and the startup Reflection AI. The deals allow these companies' frontier AI models to run on the military's highest-classification networks, IL6 and IL7, for what the Pentagon described as 'lawful operational use' in warfare. Anthropic was excluded after a months-long dispute in which the company demanded explicit safety guardrails preventing the use of its AI for autonomous weapons and mass surveillance. The Pentagon rejected those terms, then blacklisted Anthropic while proceeding with the eight signatories. The agreements follow internal employee protests at Google and other firms against military AI contracts.

The Pentagon just drew a line through the industry: companies that insist on safety constraints for their military AI get cut out; companies that don't get the contracts.

Prediction Markets

Prices as of 2026-05-03 — the analysis was written against these odds

The Hidden Bet

1

The 'lawful operational use' language in these deals places a meaningful constraint on how the military uses AI.

The Pentagon defines 'lawful' internally, and there is no independent oversight mechanism specified in any of the public-facing documents. The phrase has no binding teeth outside the DoD's own interpretation of the laws of armed conflict.

2

Anthropic's refusal is a principled stand that will cost it commercially.

Polymarket has Anthropic closing a Pentagon deal by June 30 at 44%. The White House quietly reopened talks after Anthropic announced recent breakthroughs. This may be a negotiating posture, not a red line — Anthropic needs the revenue and the government needs Anthropic's safety credibility as cover.

3

The other companies signing these deals face significant internal or reputational blowback.

Google's employee protests over Project Maven in 2018 did not prevent the company from signing military contracts in subsequent years. The industry has normalized this. The protest this time was smaller and the company signed anyway.

The Real Disagreement

The crux is whether AI safety guardrails in military contracts are a genuine ethical constraint or a market differentiation play. Anthropic's position implies that some AI uses in warfare are categorically off-limits and that a company can enforce that through contract terms. The Pentagon's position implies that the government, not vendors, decides what constitutes lawful force. Both views are internally coherent. The real fork: should AI companies be able to veto government use of their technology in warfare? If yes, the logic extends to refusing to sell to any military — a position none of these companies hold. If no, then Anthropic's stand is contractually unenforceable and the refusal is performance. I'd lean toward the Pentagon's position being legally correct, but that just means the safety guardrails have to come through legislation, not vendor contracts — and there's currently no legislation.

What No One Is Saying

Every company that signed this deal is now on record as having agreed that the Pentagon's definition of 'lawful' is sufficient. If those systems are later used in ways that kill civilians or enable mass surveillance, those companies cannot claim they were unaware of the risk. The contracts are not just commercial agreements. They are liability assignments.

Who Pays

Civilian populations in conflict zones where these AI systems are deployed

Near-term, likely within the current Iran and broader Middle East operational context

AI-assisted targeting and 'decision superiority' tools have historically concentrated errors — when AI gets targeting wrong at scale, the errors are systematic, not random. The IL7 network deployment means these tools will be used in active conflict environments.

Anthropic's commercial prospects

Medium-term, over the next 12-18 months of procurement cycles

Being publicly blacklisted by the DoD signals to other federal agencies that the company is difficult to work with on security matters. Government enterprise contracts — a substantial revenue source — become harder to win.

AI safety researchers and advocates inside these companies

Immediate, as precedent

The deals establish a precedent that safety guardrails are optional add-ons that can be negotiated away, not non-negotiable constraints. The negotiating position of internal safety teams is weakened every time a competitor signs without those terms.

Scenarios

Anthropic Caves

Anthropic reaches a deal with the Pentagon by June 30, accepting modified language that it can frame publicly as a partial win. The 'safety' brand is preserved in press releases while the functional constraints are minimal.

Signal: Watch for an Anthropic press release framing new safety 'commitments' from the DoD without specifying enforcement mechanisms. Polymarket has this at 44% by June 30.

The Line Holds

Anthropic remains excluded. The company doubles down on safety positioning, attracting enterprise clients who want to avoid military reputational risk. A two-tier AI industry emerges: defense-integrated and defense-neutral.

Signal: Watch for Anthropic announcing major non-defense enterprise contracts worth over $500M, and for no new DoD–Anthropic agreements published through the summer.

Congressional Intervention

A bipartisan Senate bill mandates minimum safety standards for AI deployed on classified military networks, effectively forcing the DoD to engage with the terms it rejected from Anthropic.

Signal: Watch for the Armed Services Committee to attach AI oversight language to the National Defense Authorization Act markup in June.

What Would Change This

If internal documents showed that the 'lawful use' terms in these deals actually include specific constraints on autonomous targeting — i.e., the Pentagon agreed to things it isn't advertising publicly — then the Anthropic position becomes less about principle and more about pricing. Also: if a future military AI incident causes civilian casualties and is traced to one of these eight companies' models, the political and liability calculus changes entirely.

Sources

DefenseScoop — Technical detail on the deal structure: IL6/IL7 classified networks, 'lawful operational use' language, and the scope of what each company is providing
CNN Business — Focuses on the Anthropic blacklisting: safety guardrails for AI in warfare were the sticking point, and the White House has quietly reopened talks with Anthropic after recent tech breakthroughs
TechCrunch — Frames the deal as accelerating the 'AI-first fighting force' doctrine; notes the Anthropic dispute was over usage terms allowing AI in warfare and national surveillance
The Verge — Covers the employee protest angle: Google workers had previously objected to military AI work; now the company signed anyway, as did xAI and Reflection AI
