April 23, 2026

Anthropic Built a Cyberweapon and Decided Not to Sell It


What happened

Anthropic's most powerful model, Claude Mythos Preview, was revealed via a data leak in late March 2026 after the company tried to keep it secret. Mythos can autonomously discover and chain software vulnerabilities at a scale and speed no human team can match, effectively automating the most dangerous parts of offensive hacking. Anthropic has restricted access to a small set of vetted organizations and has signaled it will not release Mythos publicly. Despite the Pentagon classifying Anthropic as a supply chain risk and replacing it with OpenAI on its primary AI contract, the NSA is separately using Mythos to stress-test sensitive government systems. Microsoft has simultaneously announced it will integrate Mythos into its security development program.

Anthropic decided the model was too dangerous to release, then handed it to the NSA, precisely the actor most likely to use it offensively. The containment strategy and the deployment strategy are the same strategy.


The Hidden Bet

1. Withholding Mythos from the public keeps it out of the wrong hands.

The model was leaked before it was officially announced. Restricted access to 40 tech firms and select government agencies means dozens of organizations have it, and each of those organizations is a potential breach vector. There is no scenario where a model this capable stays contained indefinitely.

2. The NSA using Mythos defensively, to stress-test systems, is meaningfully different from using it offensively.

Stress-testing a system means running the same attack sequence an adversary would run. There is no technical distinction between a defensive vulnerability scan and an offensive one. The same tool, the same operation, different label.

3. The Pentagon supply chain risk designation is a security judgment about Anthropic.

The Pentagon switched to OpenAI for its primary contract, and OpenAI is Microsoft-backed. Microsoft is now integrating Mythos. The designation may reflect procurement politics rather than a genuine security assessment of relative risk.

The Real Disagreement

The actual fork is between two propositions that both seem defensible: either powerful AI cybertools should be developed under strict institutional control so that only trusted actors hold them, or developing them at all is the problem because no actor is trustworthy enough and the leak risk is irreducible. The first position says Anthropic did the responsible thing by restricting access. The second says the responsible thing was to not build it. Anthropic chose the first, but the NSA deployment makes it look like the second was correct. If you had to lean: the second position is harder to act on commercially but harder to argue against technically. What you give up is the genuine defensive value of finding vulnerabilities before adversaries do.

What No One Is Saying

Anthropic's internal assessment concluded the offensive side is iterating faster than defenders in the current phase of AI development. If that is true, then restricted access to Mythos does not slow down attackers who build their own equivalent. It only slows down defenders who might otherwise use Mythos to harden their systems. The access policy is backward.

Who Pays

Critical infrastructure operators without Mythos access

Ongoing, accelerates as Mythos-equivalent tools proliferate

Adversaries with access to Mythos-equivalent capabilities can find and chain vulnerabilities in power grids, financial systems, and hospital networks faster than those organizations can patch. The gap between attacker capability and defender capability widens precisely for entities that did not make Anthropic's approved list.

Anthropic

Immediate and compounding

The company is now simultaneously designated a supply chain risk by the Pentagon and trusted by the NSA to supply its most sensitive offensive AI tool. That contradiction makes it a target for both political pressure from the defense establishment and reputational attacks from the AI safety community.

Scenarios

Controlled proliferation

Mythos-equivalent models spread to a manageable set of vetted actors over 18 months. Defenders use them to dramatically improve vulnerability detection. A new regulatory framework emerges governing AI-assisted offense and defense.

Signal: Congressional hearings produce actual legislation with enforcement mechanisms, not just testimony.

Leak and escalation

A Mythos-equivalent capability reaches a state-level adversary or criminal organization through a breach of one of the 40 approved firms. A major infrastructure attack is attributed to AI-assisted exploit chaining. Anthropic faces liability questions and further government scrutiny.

Signal: A significant cyberattack is publicly attributed to automated vulnerability chaining with characteristics consistent with Mythos capabilities.

Race to release

Competitors build Mythos-equivalent models and release them openly or semi-openly, reasoning that the cat is already out of the bag. Anthropic's restricted-access position collapses because its strategic advantage disappears when competitors publish.

Signal: A major AI lab releases a cybersecurity-focused model with comparable capability claims and no access restrictions.

What Would Change This

If it emerged that Mythos's defensive applications have demonstrably hardened more systems than adversaries have compromised using equivalent capabilities, the access restriction policy would look correct. That evidence does not currently exist and may never be public.

Sources

Malwarebytes — Technical breakdown of Mythos capabilities: autonomous vulnerability chaining, why it exceeds existing tools, and Anthropic's internal risk assessment that the offensive side is winning the AI adoption race.
Ars Technica — Broader industry reaction: fears of lowering the skill floor for attackers, what makes Mythos qualitatively different from prior security AI tools.
TechCrunch — The government contradiction: NSA is running Mythos to stress-test systems while the Pentagon has labeled Anthropic a supply chain risk and handed that contract to OpenAI.
Reuters — Microsoft is integrating Mythos into its security development program, which sits awkwardly alongside the Pentagon supply chain designation.
