The AI That Can Break Into Banks Exists. The Rules to Stop It Don't.
What happened
Anthropic's Claude Mythos model, which is not publicly available, scored 73% on expert-level cyberattack tasks and became the first AI to complete a simulated 32-step enterprise network breach. Financial regulators at the Federal Reserve, the Bank of England, and national cybersecurity agencies convened emergency meetings to assess the implications. Mythos also identified thousands of previously unknown high-severity vulnerabilities in major operating systems. Treasury Secretary Scott Bessent publicly championed the model as proof of America's AI lead over China, a lead he estimated at 3-6 months.
A privately controlled AI that can break into bank networks at expert level now exists. The government that owns no part of it is using it as a geopolitical talking point. The regulators who need to contain the risk have no authority to do so.
The Hidden Bet
Anthropic controls who can access Mythos, which contains the risk
Selective deployment is not a governance mechanism; it is a promise. Anthropic faces no legal obligation to maintain that selectivity. The model can be licensed, leaked, stolen, or replicated by adversaries who have now seen what is publicly benchmarked. The 3-6 month lead Bessent cited is the exact window in which adversaries will try to close the gap.
Financial regulators have the tools to respond to this kind of AI threat
Bank regulators are designed to oversee capital ratios, liquidity, and fraud. They have no authority over AI development practices at private companies. Emergency meetings produce threat assessments, not binding rules. The Bank of England can tell banks to patch their vulnerabilities; it cannot tell Anthropic what to build.
Bessent's endorsement represents a coherent US government position on Mythos
Anthropic has been vocal about resisting military uses of its technology. A Treasury Secretary publicly branding the company's most dangerous model as a national security asset is not coordination; it is a claim staked without the company's consent. Anthropic can neither fully rebut it nor fully endorse it, leaving its own policy position incoherent.
The Real Disagreement
The core fork is between speed and safety: the US government's instinct is to treat Mythos as a competitive advantage to preserve at all costs, including accepting governance gaps. The opposing position is that an AI this capable, controlled by a single private company with no external oversight, poses a systemic risk to critical infrastructure that outweighs any short-term geopolitical edge. The difficulty is that both positions are defensible. If the US slows down development while China accelerates, the capability gap closes and the threat is no smaller. If the US races ahead with no governance, the capability exists domestically without guardrails. The lean is toward governance first: Bessent's 3-6 month lead estimate is exactly the kind of number that motivates adversaries, not deters them. Publishing it may have already compromised more than it signaled.
What No One Is Saying
The emergency meetings at the Fed and the Bank of England are largely performative. Regulators convening is what regulators do when they are scared and have no tools. What would actually change the risk profile is Anthropic agreeing to external audits, third-party access controls, or some form of international verification. None of those conversations have been reported as happening.
Who Pays
Banks and financial institutions with legacy IT infrastructure
The risk is immediate for organizations that have not yet patched the disclosed vulnerabilities; the broader risk horizon is 12-24 months, as adversarial AI capability catches up.
Mythos identified vulnerabilities in existing systems that are expensive and slow to patch. A state actor or criminal organization that replicates or approximates Mythos capabilities could exploit those vulnerabilities before institutions complete remediation.
Countries without frontier AI capability
The exposure is present now and compounds as the model is refined or replicated.
If Mythos represents a genuine 3-6 month US lead over China, it also represents an unbridgeable gap over every other nation. Countries that are targets of US intelligence or strategic competition face an AI-enhanced threat they have no symmetric capability to counter.
Scenarios
Governance vacuum persists
No new binding rules emerge from the regulatory meetings. Anthropic continues selective deployment. A near-equivalent model is demonstrated by a state actor within 18 months, and the window for setting norms before proliferation closes.
Signal: No legislative or regulatory proposal referencing Mythos-class capabilities within 90 days of today.
International AI security framework accelerates
The combination of the Mythos revelations and financial regulator alarm creates political will for an international agreement on AI capability disclosure and access controls, similar in structure to nuclear safeguards, with progress at the next G7 AI safety summit.
Signal: A joint statement from US and UK financial regulators calling for binding AI capability disclosure standards.
Adversarial replication
A Chinese or Russian state program uses Mythos's publicly benchmarked capabilities as a target specification and achieves comparable performance within the 3-6 month window. The geopolitical lead evaporates and the risk is no longer asymmetric.
Signal: Any foreign government publication of cyberattack benchmark scores approaching Mythos levels, or a significant financial infrastructure breach attributed to AI-assisted methods.
What Would Change This
If Anthropic agreed to an external audit of Mythos's access controls and deployment decisions, or if financial regulators produced a binding rule requiring AI capability disclosure before deployment in critical infrastructure contexts, the governance gap would begin to close. Until then, the story is a private company holding a capability that central banks say is systemically dangerous, with no mechanism for oversight.