April 12, 2026

The Model You Cannot Use

What happened

Anthropic announced that its newest model, Claude Mythos, will not receive a public release. Instead, roughly 40 organizations, including Google, Microsoft, Amazon, JPMorgan, Apple, Cisco, and Nvidia, are being given access to a restricted version called Claude Mythos Preview through a program called Project Glasswing. The stated reason: Mythos can automate vulnerability discovery at a pace and scale that would make it a powerful offensive weapon in the wrong hands. Internal testing showed the model reproducing and exploiting software vulnerabilities in over 80% of cases, finding flaws in every major operating system and web browser. UK financial regulators are now holding emergency sessions with the Bank of England and major banks to assess the risk; the US Treasury already briefed Wall Street last week.

The 40 companies inside Project Glasswing did not get access to a safety tool: they got a private monopoly on the most capable offensive hacking engine ever built, licensed as defense.

The Hidden Bet

1. Anthropic withheld Mythos primarily to prevent public harm from its hacking capabilities.

Every company inside Project Glasswing is also a direct Anthropic commercial partner or cloud customer. A model that can find zero-days across every major OS is worth tens of billions in defensive contracts. Keeping it controlled keeps the price high and the competition locked out.

2. The 80% exploit-reproduction rate is independently verified and accurately represents Mythos's real-world offensive capability.

All performance claims come from Anthropic's own 244-page system card with no third-party benchmark. Yann LeCun called the framing 'BS from self-delusion,' and TechCrunch noted the figures cannot be independently checked. A model that is only 60% as capable as claimed would still justify the restricted release but would not justify the emergency regulatory meetings.

3. Restricting Mythos from public release effectively contains the offensive capability it represents.

Hugging Face data shows Chinese labs now account for 41% of open-source model downloads. If a competitive model with similar cyber capabilities emerges from a lab that does not practice Anthropic-style containment, the safety case for withholding Mythos collapses retroactively while Anthropic's 40 partners still hold the advantage.

The Real Disagreement

The real fork is this: either AI labs should gate capabilities whose offensive potential exceeds any plausible defensive use case, or gating capability is always and only a competitive move dressed in safety language, and the correct response is open publication so defenses can be built by everyone. Both arguments are coherent. The honest tension is that Anthropic cannot prove its motives are pure, and critics cannot prove they are corrupt. The market is pricing Anthropic as the top AI lab at 89.5% confidence through April, which suggests the world is treating the safety framing as credible. I'd lean toward skepticism: a 244-page system card describing an AI's 'healthy neurotic organization' and its 'primary emotional states of curiosity and anxiety' is an unusual preamble for a boring infrastructure security announcement. It reads like a company controlling its own mythology.

What No One Is Saying

The Glasswing coalition is the exact set of companies that would need to pay most to patch vulnerabilities Mythos could discover independently. Anthropic has structured an arrangement where its largest cloud and enterprise customers are simultaneously protected from the threat Mythos represents and dependent on Anthropic to manage that threat. That is not a safety program. It is the most effective enterprise lock-in mechanism in the history of software.

Who Pays

Security researchers outside the Glasswing coalition (immediate and ongoing)

The best vulnerability-finding tool ever built is inaccessible to independent researchers who cannot afford commercial relationships with Anthropic, creating a two-tier security ecosystem where defenses improve faster for large institutions than for everyone else.

Smaller software companies and open-source projects (over the next 12-18 months, as Glasswing deployments scale)

Mythos will identify critical vulnerabilities in their products, and Glasswing partners will receive the fixes first. Non-partners will not know what was found or when, and may remain exposed after patches are deployed elsewhere.

OpenAI, xAI, Google (competitive damage compounds over 2026)

Anthropic has used safety framing to occupy the most defensible position in the AI power hierarchy: the only lab mature enough to build something dangerous and responsible enough not to release it publicly. That story, if it holds, is worth more than any benchmark.

Scenarios

Safety story holds

No major breach or offensive exploit traced to Glasswing access. Regulators in the US and UK formalize a framework that treats Mythos Preview as the model for responsible deployment of dangerous AI. Anthropic's commercial position strengthens.

Signal: Six months with no reported offensive uses of Mythos-class capabilities, plus active EU/UK regulatory endorsement of Project Glasswing.

Competitor catches up

A Chinese or open-source lab releases a model with comparable vulnerability-discovery capability within six months. Anthropic's safety premium evaporates. The 40 Glasswing partners now have a defensive advantage over everyone else, but Anthropic's moral authority argument collapses.

Signal: A Hugging Face release from DeepSeek or Qwen scoring above 70% on the same exploit-reproduction benchmarks.

Glasswing access leaks

One of the 40 Glasswing partners suffers a breach or an internal employee misuses Mythos Preview access. The safety-through-exclusivity argument is falsified. Anthropic faces regulatory pressure and reputational damage proportional to whatever harm occurs.

Signal: A zero-day exploit appearing in the wild that traces back to vulnerability data only discoverable at Mythos's claimed capability level.

What Would Change This

Evidence that Anthropic's safety team independently blocked commercial use of Mythos in cases where Glasswing partners requested it, documented in verifiable third-party audits. That would shift the balance of evidence toward genuine safety motivation. Short of that, the commercial alignment between Anthropic and every Glasswing partner is too clean to dismiss.

Sources

Reuters — UK financial regulators calling emergency meetings with Bank of England and major banks to assess cyber risks from Mythos; Treasury Secretary Bessent already briefed Wall Street.
Anthropic — Official Project Glasswing announcement: coalition of Amazon, Apple, Cisco, Google, JPMorgan, Microsoft, Nvidia and others given controlled access to Mythos Preview for defensive cybersecurity only.
Technosapien (Substack) — Skeptical breakdown: raises whether limited release is about internet protection or preventing competitors from distilling Mythos into cheaper models. Notes Yann LeCun and David Sacks questioning the safety framing.
NPR — Contextualizes Mythos within broader trend of AI models getting faster at finding security bugs; explains the technical achievement of 80% exploit reproduction rate.
Don't Worry About the Vase (Substack) — Takes safety concerns seriously; argues the controlled Glasswing approach is the right call given the model's demonstrated capability to automate vulnerability discovery at scale.
