May 4, 2026
tech power

Six Hundred Google Employees Signed a Letter Against the Pentagon AI Deal. Google Signed the Deal Anyway.


What happened

Google signed a classified agreement with the US Department of Defense allowing the Pentagon to deploy its Gemini AI models for any lawful government purpose, including sensitive military applications. More than 600 Google employees, including over 20 directors and vice presidents, signed an open letter urging CEO Sundar Pichai to reject the deal. Pichai signed the deal anyway. The agreement includes a provision allowing the government to modify Google's AI safety filters at its request, and it does not give Google veto rights over lawful operational decisions. Simultaneously, Google withdrew from a separate $100 million DOD competition to build voice-controlled autonomous drone swarm technology. In 2018, a similar employee revolt over Project Maven forced Google to exit a Pentagon drone AI contract. This time, the revolt produced no change.

Google has decided that the political risk of refusing a Pentagon contract is higher than the reputational risk of employee backlash, and that judgment is almost certainly correct in the current environment.

The Hidden Bet

1. Google's withdrawal from the drone swarm competition is evidence that employee concerns about weaponized AI still carry some weight.

Google withdrew from the drone swarm competition not because of employee pressure but because it judged it was unlikely to win, and the reputational exposure of participating was not worth the limited upside. Exiting a competition you would probably lose is not a concession to internal dissent.

2. The 'any lawful government purpose' clause is standard boilerplate that does not meaningfully expand military AI use.

Anthropic exited its Pentagon deal specifically over this phrase. Anthropic's interpretation was that 'lawful government purpose' in a classified context means the government alone decides what is lawful, with no external oversight. If Anthropic's reading is correct, Google just agreed to AI deployment conditions that its own safety frameworks cannot audit or constrain.

3. Employee leverage in tech has declined temporarily due to labor market conditions.

The 2018 Project Maven revolt worked because Google's business at the time was overwhelmingly dependent on consumer trust and advertiser sentiment, both of which employees could threaten. Google's 2026 business is increasingly dependent on cloud contracts, enterprise AI licensing, and government deals. Employee dissent in this model has almost no leverage over the revenue streams that matter most. This is a structural shift, not a cyclical one.

The Real Disagreement

The fork is between two legitimate positions on the relationship between AI companies and democratic governments. Either AI capabilities should be available to governments for national security purposes under the same access conditions as any other strategic technology, or AI companies bear a special responsibility to refuse deployments they cannot audit, because AI that can be weaponized in classified settings by definition operates outside any accountability framework the company can observe. Google chose the first position. Anthropic chose the second and was replaced. The market outcome is clear: refusal cost Anthropic the contract. The ethical outcome is less clear: someone will provide AI to the Pentagon's classified networks regardless of whether Google does, and Google argues that its participation gives it more influence over how the technology is deployed than its absence would.

What No One Is Saying

The safety clause allowing the government to modify Google's AI filters at its request is the critical provision, and it has received almost no coverage relative to the headline contract. If the government can modify the safety filters on a classified AI deployment, then Google's internal safety work is not governing how its models behave in the most consequential applications. Google's AI safety team built those filters. The clause means those filters can be turned off.

Who Pays

Google employees who signed the letter, particularly senior signatories

Immediate and ongoing, invisible in official communications

Fortune reports that Google has been conducting a quiet internal campaign against activist employees since the 2018 Maven fallout. Directors and VPs who signed face career consequences that will not be publicly announced but will be visible in promotion and assignment decisions.

Anthropic

Ongoing, with definitive resolution only visible after specific deployment decisions

Anthropic drew a line over 'lawful government purpose' and lost the Pentagon contract. Google, OpenAI, and xAI drew no such line and won the contracts. If Anthropic's safety concerns prove correct, Anthropic reaps a reputational benefit in the long run. If Google's and OpenAI's models are deployed without incident, Anthropic's position looks like competitive posturing rather than principled refusal.

People affected by AI-assisted military operations that Google cannot audit

Unspecified, in proportion to how heavily the DOD relies on AI for operational decisions

The classified nature of the deployment means there is no public record of what Gemini is used for, what errors it makes, or what decisions it influences. If the model contributes to errors in targeting, logistics, or intelligence analysis, those errors will not be attributable to any specific company decision.

Scenarios

Business as usual

The deal proceeds, Gemini is deployed in classified settings, no public incident occurs. Anthropic eventually re-engages the Pentagon on different contract terms. The activist employees who signed letters either leave Google or absorb the outcome and continue. Defense AI becomes normalized across the industry.

Signal: No public disclosure of an AI-related incident from the Pentagon's classified networks within 24 months. Anthropic signing a modified deal with the DOD.

Public incident

An AI-assisted military decision produces a documented error with civilian casualties or a significant intelligence failure, and internal documents eventually surface showing which AI system was involved. The incident transforms the debate from ethics to liability and triggers congressional hearings.

Signal: A congressional investigation requesting documents from AI companies about classified DOD deployments. A whistleblower disclosure.

Employee exodus creates competitive pressure

Senior AI researchers resign over the classified deal in sufficient numbers to meaningfully affect Google DeepMind's capabilities on specific research tracks. Competitors who have not signed classified deals use it as a recruiting argument for safety-focused researchers.

Signal: A cluster of prominent AI researcher departures from Google in the next six months, with stated reasons referencing the Pentagon deal.

What Would Change This

A public disclosure of the specific terms of the safety filter modification clause, showing either that the government's ability to modify filters is constrained by specific conditions or that it is truly unconditional, would settle the central ethical question. The classified nature of the deal ensures no such disclosure will occur voluntarily.

Sources

Fortune — Direct comparison with 2018 Project Maven revolt. Argues employee leverage has structurally declined: the tech labor market is tighter, Google has already reduced its reliance on vocal activist employees, and leadership has a clear appetite for defense contracts.
SiliconAngle — The letter itself and its signatories. Notably, more than 20 directors, senior directors, and VPs signed. The letter specifically cited opacity concerns: classified workloads by definition cannot be reviewed for harms.
Tom's Hardware — Notes the simultaneous contradiction: Google signed the classified Gemini deal but withdrew from the $100 million drone swarm competition. The deal includes a clause allowing the Pentagon to modify Google's AI safety filters at the government's request.
CNET — Describes the deal as allowing the DOD to use Gemini for 'any lawful government purpose,' the same phrase that caused Anthropic to exit its Pentagon deal. Notes Google joins OpenAI and xAI in having signed similar classified agreements.
