May 4, 2026
tech power

The Pentagon Replaced Anthropic. The Replacement Clause Is the Story.

CNN

What Happened

The Department of Defense announced agreements with eight AI companies (OpenAI, Google, Microsoft, Amazon Web Services, SpaceX, Nvidia, Reflection, and Oracle) to deploy their AI systems on the Pentagon's highest-classification networks. The agreements authorize 'lawful operational use,' a phrase that notably replaces the safety guardrails Anthropic had demanded for military deployments. Anthropic remains blacklisted as a supply chain risk following a dispute in which it refused to let its technology be used without ethical constraints on warfare applications. The Pentagon CTO separately flagged Anthropic's Mythos model as a distinct national security concern requiring network hardening across the entire government.

The Pentagon just told every AI company on earth that safety limits are a disqualification, not a selling point, and eight major firms agreed.

Prediction Markets

Prices as of 2026-05-04; the analysis was written against these odds. Polymarket: Anthropic-Pentagon deal by May 31 at 22.5%, by June 30 at 44%.

The Hidden Bet

1. The other eight companies are genuinely comfortable with unlimited military use

OpenAI, Google, and Microsoft all have internal safety teams and public commitments to AI ethics. Their agreement to 'lawful operational use' without Anthropic-style constraints does not mean those companies have resolved the internal tension. It means they chose commercial access over ethical consistency. That tension does not disappear; it just becomes invisible.

2. Anthropic's exclusion is primarily a business dispute that will resolve

Polymarket puts the odds of an Anthropic-Pentagon deal at only 22.5% by May 31 and 44% by June 30. The Mythos situation introduces a second track, in which the government views Anthropic not just as a difficult vendor but as a potential adversary. Those are different problems requiring different solutions, and the 'supply chain risk' label is easier to apply than to remove.

3. Military AI under 'lawful operational use' means human oversight remains in the loop

The phrase 'lawful operational use' is defined by the military, not by any external ethical body. What is lawful in classified operations is itself classified. The agreements give the DoD legal cover to deploy AI, with no public accountability mechanism, in ways that would have triggered Anthropic's restrictions.

The Real Disagreement

The real fork is whether AI companies can set limits on how their products are used when the buyer is a government with legal authority to classify those uses. Anthropic bet it could; eight competitors bet it could not. Both positions have coherent logic. Anthropic's view is that an AI company that builds systems capable of mass surveillance or autonomous targeting bears moral responsibility for those uses, and can refuse. The other view is that governments have always set the rules of engagement for the weapons they procure, and software is no different. The lean here is that Anthropic's position was always unstable: a company cannot simultaneously seek government contracts at scale and retain veto power over government operations. The real question it faced was not ethics but strategy.

What No One Is Saying

The Mythos situation may be the more important story. The Pentagon CTO is treating Anthropic's advanced cyber-capable model as a threat to government networks, which implies that advanced AI systems are now themselves a category of national security vulnerability. If that framing takes hold, it reshapes who has power over AI deployment in ways that have nothing to do with safety or ethics.

Who Pays

Anthropic employees and investors

Already priced in; deepens over the next two quarters

Being cut from the largest single buyer of AI services in the world at a moment when compute costs are outpacing revenue is existentially significant; Anthropic's $500M bet on custom chips assumes government-scale contracts

Civilians in conflict zones where US military AI is deployed

Ongoing; effects will not be publicly visible

AI-assisted targeting or surveillance without independent ethical guardrails increases the risk of misidentification and civilian harm in classified operations

AI safety researchers and policymakers

Sets a precedent with long-term effects on AI governance globally

The Pentagon has established a precedent that national security trumps safety requirements; this will be cited against every future attempt to impose usage restrictions on frontier AI

Scenarios

Anthropic Folds

Anthropic drops its safety restrictions for classified government deployments, accepts the 'lawful operational use' framing, and rejoins the Pentagon supply chain. Its safety commitments become marketing rather than policy.

Signal: An Anthropic press release announces a new government partnership agreement without mentioning usage restrictions.

Mythos Standoff Escalates

The government's concern about Anthropic's Mythos model produces formal restrictions on Anthropic operating in the US, treating it like a foreign-influenced entity. Anthropic's entire business model collapses.

Signal: CFIUS or DoJ opens an investigation into Anthropic's supply chain or investor relationships.

Safety Becomes a Differentiator Again

A classified AI deployment by one of the eight partner companies produces a documented error with visible casualties or civil liberties violations. Congressional pressure forces new usage restrictions, retroactively validating Anthropic's position.

Signal: A whistleblower or inspector general report on AI-assisted targeting outcomes in an active conflict.

What Would Change This

If Anthropic's Mythos model produced documented national security benefits that outweighed the threat, the Pentagon would likely seek a deal rather than a permanent exclusion. Polymarket's 44% probability of a deal by June 30 suggests markets see this as genuinely open. What would close it is evidence that the safety dispute is fundamentally irresolvable, not just a negotiating position.

Sources

CNN — The White House recently reopened Anthropic talks after its Mythos model breakthroughs; frames the exclusion as a temporary dispute
Breaking Defense — Technical detail on IL6 and IL5 classification levels; Oracle added to the list after initial announcement; defense procurement context
CNBC — Pentagon CTO Emil Michael says Anthropic's Mythos is a 'national security moment' requiring government-wide network hardening, distinct from the safety dispute
The Next Web — The sharpest framing: 'lawful operational use' deliberately replaces Anthropic's safety restrictions; the message is that any AI company with ethical limits will be replaced
DefenseScoop — Inside the DOD: the dispute came to a head over ethical constraints on AI use in warfare and national surveillance; companies listed by name and network tier
