Banned by the Government, Worth $800 Billion
What happened
In February 2026, Defense Secretary Pete Hegseth designated Anthropic a 'supply chain risk to national security,' a label previously reserved for foreign adversary companies like Huawei, after Anthropic refused to allow its Claude models to be used for autonomous weapons or domestic surveillance. Trump simultaneously banned all federal agencies from using Claude. Anthropic sued. A San Francisco judge blocked the government-wide ban in March; a DC appeals court upheld the DoD-specific blacklist in April. While the litigation continues, Anthropic's annualized revenue has grown from $9 billion at the end of 2025 to more than $30 billion, investors are circling at an $800 billion valuation (more than double its February round), and co-founder Jack Clark confirmed that Anthropic briefed the Trump administration on its new Claude Mythos model, which can find software vulnerabilities that human researchers cannot.
The Pentagon's attempt to punish an AI company for having ethics produced the best marketing campaign in AI history, and now the company being sued by the government is briefing that same government on a model too dangerous to release publicly.
The Hidden Bet
The blacklist is a punishment that is working
Anthropic's revenue has tripled since the designation. Enterprise customers who avoided Claude for reputational reasons now have a clear differentiation signal: this is the AI company that said no to autonomous weapons. The blacklist is functioning as a quality certification, not a sanction. Federal agencies are reportedly sidestepping it anyway, because Claude does things they need.
Anthropic and the government are adversaries
While Anthropic sues the Pentagon, its co-founder is briefing the Trump administration on a classified-capability AI model. The company is simultaneously litigating over civil liberties and cooperating on national security. This is not contradiction; it is a negotiation conducted through parallel channels. The question is which channel wins.
An $800 billion valuation reflects the company's real trajectory
Anthropic's revenue trajectory is extraordinary, but it is happening during a period of uniquely high demand for AI models when the enterprise market has not yet consolidated. The companies spending $1M+ annually on Claude are early adopters making bets on a technology cycle, not steady-state customers. If the cycle consolidates around fewer providers, the revenue curve that justifies $800B may not be sustainable.
The Real Disagreement
The real fork is whether AI safety labs should negotiate with governments that want to use their models for purposes the labs consider harmful, or maintain a hard refusal and accept the regulatory consequences. Anthropic's position has been to refuse, sue, and simultaneously cooperate. OpenAI's implicit position has been to comply early and broadly. Anthropic's revenue surge suggests the market rewards the refusal posture. But Anthropic is now briefing the administration on a model it has described as too dangerous for public release. If that briefing leads to government use of Mythos for the purposes Anthropic refused to allow with Claude, the refusal will have been a negotiating tactic, not a principle. I think the distinction between Claude and Mythos is real, and the briefing is a sign of Anthropic trying to maintain influence over how the capability is used rather than ceding that influence to less safety-conscious actors. But it is a very fine line, and the government knows that Anthropic needs this relationship more than it admits.
What No One Is Saying
Anthropic is suing the government while briefing it on the most powerful model the company has built. Every dollar of the $800 billion valuation is priced on the assumption that this contradiction resolves in Anthropic's favor. If the litigation ends with a settlement that includes government access to Mythos under the same terms Anthropic rejected for Claude, the company will have spent a year in court to end up exactly where Hegseth wanted.
Who Pays
Federal agencies using Claude in non-DoD contexts
Immediate; procurement officers are already managing this split
The appeals court upheld the DoD blacklist while the broader federal ban was blocked. Agencies navigating this split face legal ambiguity in their procurement decisions, which creates a compliance burden on staff and slows AI adoption in areas where the government could benefit.
AI safety research community
Medium-term; the incentive effect accumulates across the next generation of AI company formation
If the precedent becomes that AI companies refusing military uses get blacklisted while compliant companies grow faster, the incentive for future AI labs is clear. Safety-focused development becomes a competitive disadvantage unless there is a premium market for it, as there appears to be now but may not always be.
Scenarios
Litigation victory
Courts ultimately strike down both the DoD designation and the broader ban. Anthropic regains full federal market access. Revenue grows past $50B. IPO at a valuation that makes the $800B offer look conservative.
Signal: DC Circuit reverses the appeals court panel ruling on the DoD designation, granting Anthropic's challenge
Settlement and access
Anthropic and the administration negotiate a settlement that grants DoD access to Mythos under conditions Anthropic controls: no autonomous weapons, human oversight required. Anthropic drops the suit. The principle is preserved nominally, the access is granted practically.
Signal: Clark or Amodei participates in a White House AI summit and the litigation is quietly stayed
Regulatory entrenchment
Congress passes legislation giving DoD broader authority to restrict AI company operations, making the executive designation more durable. Anthropic's legal path closes. It exits the US federal market entirely and pivots toward the EU, which is already citing the episode in its own AI governance debates.
Signal: A bipartisan AI national security bill passes committee with a supply chain designation provision
What Would Change This
If Anthropic's revenue growth slows or reverses in Q2 2026, the market-rewards-refusal thesis collapses and the blacklist is working as a sanction after all. If the Mythos briefing leads to a disclosed agreement giving DoD access, the litigation was a negotiating posture all along.