Eight AI Companies Are Now Inside the Pentagon's Classified Networks
What Happened
The Department of Defense signed formal agreements on May 1 with eight AI companies: SpaceX, OpenAI, Google, Nvidia, Reflection, Microsoft, Amazon Web Services, and Oracle. The agreements authorize these firms to deploy their AI models on DoD's classified IL6 and IL7 network environments, which handle secret and top-secret information. Separately, OpenAI disclosed that it provided early access to its GPT-5.5 model to the US government for national security evaluation. Anthropic is not among the eight; it was excluded after a dispute over military-use terms and ethical constraints on AI in warfare. The same day the classified agreements were announced, the Center for AI Standards and Innovation expanded pre-deployment evaluation agreements with Google DeepMind, Microsoft, and xAI.
The Pentagon just built an AI vendor ecosystem that explicitly excludes the one company most publicly committed to AI safety constraints and rewards the companies willing to operate without those limits.
The Hidden Bet
The Anthropic exclusion is primarily about safety standards.
Anthropic's refusal to sign was framed in terms of ethical constraints on AI in warfare. But the practical outcome is that Anthropic has no access to DoD's most sensitive operational systems, while every major competitor does. If the next generation of AI capability is shaped partly by what defense contractors are willing to train on and test against, Anthropic is now structurally excluded from that feedback loop. The exclusion may matter more commercially than ethically.
IL6/IL7 authorization means these systems will be used only for back-office and logistics work.
IL6 covers secret information and IL7 covers top-secret information. Giving commercial AI models access to these environments is not an administrative efficiency play. The DoD's own press release frames it as 'decision superiority across all domains of warfare.' These systems are being positioned for operational use in active conflicts, not procurement planning.
Including eight companies means the DoD is maintaining competitive diversity.
The list includes companies with deeply interlocking relationships: Microsoft owns a major stake in OpenAI; Google and Amazon are competing cloud providers who also train competing models; and Elon Musk controls both SpaceX, one of the eight, and xAI, which signed a parallel evaluation agreement the same day. The appearance of diversity may mask the formation of a tight cartel with shared access to the world's most classified AI training environment.
The Real Disagreement
The genuine tension is between two positions that cannot both be right: advanced AI deployed in classified military operations needs the fastest, most capable models available without ethical constraints that adversaries do not observe; and AI systems operating in lethal decision chains require external accountability that commercial companies with government contracts will not voluntarily impose. The first position is what the DoD just acted on. The second is what Democratic lawmakers and Anthropic represent. The hard part is that the first position is probably correct about capability competition and probably wrong about long-run risk. Integrating AI into classified warfare infrastructure under commercial agreements with no congressional oversight and no public accountability framework is exactly how you get to an incident that forces a much more extreme response later.
What No One Is Saying
OpenAI providing GPT-5.5 to the US government for national security evaluation before releasing it publicly is a permanent change in the relationship between frontier AI labs and government. It is not a one-time event. It means the government now has a de facto pre-release access right to the most capable AI systems, which changes the incentive structure for every lab: the path to classified contracts runs through early government disclosure. Anthropic's exclusion makes this point sharper, not softer.
Who Pays
Google employees who previously objected to Project Maven
Already happening: internal dissent at Google was reported but did not stop the agreement.
Google's agreement to deploy AI on classified military networks is a direct reversal of the employee revolt that forced Google out of Project Maven in 2018. The employees who forced that withdrawal now work at a company that has signed an IL7 agreement. There will be no comparable revolt this time: the framing is national security, the Iran war is ongoing, and the political environment has shifted.
Countries and individuals targeted by AI-assisted military decisions
Ongoing, in active operational environments.
Commercial AI models operating in top-secret environments will inform or assist targeting, logistics, and intelligence decisions with no public accountability framework. The errors, biases, and failure modes of these models in combat contexts are not being disclosed and may not be visible until they produce an incident.
Scenarios
Permanent integration
These agreements become the foundation for permanent commercial AI vendor relationships inside classified DoD infrastructure, accelerating capability development on both sides and cementing the eight companies as the dominant players in national security AI.
Signal: Watch for contract values and multi-year extensions being announced; any company on the list winning a classified AI contract over $1 billion.
An incident forces oversight
A classified AI system deployed in an active conflict produces a significant error, targeting incident, or operational failure. Congressional pressure forces a review, and the informal agreements give way to a formal regulatory framework with mandatory auditing.
Signal: Watch for Democratic committee chairs requesting classified briefings on AI use in the Iran theater or any other active military operation.
Anthropic rejoins under different terms
Commercial pressure from being locked out of the dominant AI infrastructure market forces Anthropic to renegotiate. It joins the classified agreement ecosystem under terms that let it keep some ethical-use language while giving up the practical operational restrictions it previously insisted on.
Signal: Watch for Anthropic leadership language shifting from 'we refused' to 'we are in discussions.'
What Would Change This
If evidence emerged that these systems had been used to assist targeting decisions in the Iran conflict or elsewhere, and that those decisions led to civilian casualties, the political calculus around military AI oversight would shift fast. The current arrangement is sustainable only if nothing goes publicly wrong.