OpenAI Gives the Government Access to Its Cyber Weapon. Anthropic's Already Leaked.
What happened
OpenAI held a classified demo in Washington this week for approximately 50 federal cyber practitioners from national security agencies, briefing them on GPT-5.4-Cyber under its Trusted Access for Cyber program. The model, released last week under a tiered access scheme, is being offered in two variants: a more constrained public version and a more permissive version for vetted defenders. OpenAI is briefing Five Eyes allies through the same vetting process. In parallel, Anthropic's rival Mythos model, restricted to a consortium of 40 elite companies due to its offensive capabilities, was accessed by unauthorized users through a third-party vendor environment on the day Anthropic first disclosed the program publicly.
Two labs are racing to embed offensive-grade AI into government security infrastructure before anyone has agreed on who should control it, under what rules, or what happens when it leaks. One of them already leaked.
Prediction Markets
Prices as of 2026-04-22 — the analysis was written against these odds
The Hidden Bet
The 'dual-track' model keeps offensive capabilities contained
Anthropic's Mythos breach shows that the attack surface is vendor chains, not the core model. Every additional vetted partner is a new attack vector. OpenAI's 'Trusted Access' program expands that surface deliberately.
Governments are the right bodies to receive these capabilities first
Local water utilities and regional infrastructure operators are the most vulnerable targets, but also the least able to properly secure advanced AI tools. Giving them access first, as OpenAI's Chris Lehane suggested, trades coverage for control.
Defensive use cases and offensive capabilities can be cleanly separated in a single model
A model that can autonomously chain zero-days for defense can do the same for offense. The distinction lives in the instructions given to the model, not in the model itself. Any actor who gains access gains both.
The Real Disagreement
OpenAI chose broad access with safeguards. Anthropic chose narrow access, with safeguards that proved insufficient. The fork is real: maximum coverage of defenders means maximum exposure if the chain breaks, while narrow access means faster attackers beat defenders to the tool. OpenAI's bet is that the safeguards hold; Anthropic's experience this week suggests they don't. On balance, Anthropic's restricted approach is right in principle but failed in execution, which suggests the actual choice is between two versions of inadequate control.
What No One Is Saying
Both companies are positioning government access as a public safety measure. But briefing the NSA and Five Eyes on your most powerful offensive tool is also the fastest path to avoiding future regulation. If the government depends on your product for national security, it will not regulate you out of the market.
Who Pays
Small and mid-size infrastructure operators
Within 12 months of wider rollout
They will be last in line for vetted access but first in line as attack targets for adversaries who obtained the same model through a vendor breach.
Third-party vendors and contractors in the cybersecurity supply chain
Immediate and ongoing
They become the most exploitable link. The breach of Anthropic's Mythos came through a vendor environment, not the lab itself. Every expansion of vetted access multiplies the vendor surface.
Smaller AI security startups
Over the next two years
Once OpenAI and Anthropic are embedded in federal cyber infrastructure, procurement naturally flows to those relationships. The government contracting advantage compounds.
Scenarios
Race to embed
Both OpenAI and Anthropic sign multi-year federal cyber contracts. Congress does not pass AI security legislation before the contracts are in place. The government becomes structurally dependent on frontier lab access before any oversight framework exists.
Signal: First formal federal procurement contract with either lab, expected by Q3 2026.
Breach forces restriction
A second Mythos-style incident, or a GPT-5.4-Cyber model exfiltration through a vetted partner, triggers a public congressional hearing. Access programs are frozen pending an executive review.
Signal: Any credible claim of unauthorized use by a non-vetted actor who obtained access through a third party.
Allies diverge
European Five Eyes partners, particularly the UK and Canada, impose domestic restrictions on using US frontier AI in sovereign cyber programs after the Mythos incident. The Five Eyes cyber-AI sharing arrangement fractures.
Signal: Any formal statement from GCHQ, CSE, or ASD declining to participate in OpenAI's Trusted Access program.
What Would Change This
If OpenAI or Anthropic published the full technical specification of their access controls, the independent audits, and the breach response protocols in detail, it would be possible to assess whether the safeguards are real or theatrical. The current opacity makes that judgment impossible.