April 28, 2026

Google Signs Away Its Ethics Policies

What happened

Google and the Department of Defense have agreed on a new AI services contract that permits military use of Google AI for 'any lawful purpose.' The deal marks a significant reversal from Google's 2018 public commitment not to develop AI for weapons or surveillance systems that violate international norms, a pledge made after thousands of employees protested Project Maven. The contract covers cloud, AI, and data analytics capabilities. Google has not publicly named a price or scope. The announcement came over the objections of current Google employees who say the company did not consult staff before signing.

Google just sold out the principle that AI ethics can constrain a business deal. Every future ethics policy the company publishes is now optional by definition.

The Hidden Bet

1. The phrase 'any lawful purpose' limits the military to activities that are legal under US and international law

The US government's definition of what is lawful in military contexts is determined by the same entity buying the service. Drone targeting, surveillance, and autonomous weapons systems are all currently 'lawful' under US military doctrine. The constraint is circular.

2. This deal will stay limited to non-weapons applications like logistics, personnel, and data analysis

The contract language does not exclude weapons systems. Google's previous Maven contract, which triggered the 2018 protest, was framed as narrow video analysis that turned out to power targeting decisions. Scope creep is not hypothetical: it is the documented history of the relationship.

3. Google's employee protest capability is a meaningful check on company decisions

The 2018 protest worked because it went public, attracted press, and threatened Google's ability to hire. The labor movement since then has been suppressed: Google fired key organizers in 2019-2020, restricted internal discussion, and tightened contractor classification. The structural conditions for another successful walkout no longer exist.

The Real Disagreement

The fork is about whether large AI companies have special obligations because of the power asymmetry between AI capabilities and democratic oversight of military use. The 'yes' side says Google is powerful enough to set conditions on how its technology is used and has an obligation to do so, especially since the defense procurement process has no meaningful AI ethics review. The 'no' side says Google is a vendor, not a regulatory body, and refusing DoD contracts does not reduce US military AI capability: it just shifts the work to Microsoft, Amazon, or Palantir. The honest version of the 'no' argument is that Google's participation might actually produce better-governed AI than handing the field to pure defense contractors. That is a real point. But it depends on Google actually enforcing ethical conditions it has now voluntarily removed from its contract language.

What No One Is Saying

Google's 2018 ethics principles were partly a hiring tool. Competing with Meta and OpenAI for top AI researchers required a credible public commitment that Google was not a weapons company. Now that the talent market has cooled, the cost of abandoning that signal has dropped, and the contract followed.

Who Pays

Google AI researchers hired under the 'don't be evil' employment brand

Immediately, as the contract activates

Their labor is now directly funding military applications they joined the company to avoid. They cannot renegotiate their compensation against a policy that no longer exists.

International populations in countries where US military operations use Google AI for targeting or surveillance

Ongoing, as capability is deployed

Expanded AI capability deployed under 'any lawful purpose' doctrine without independent oversight of what 'lawful' means in specific theater contexts

Smaller AI companies and research labs that maintained ethics commitments

Over the next 12-18 months as other companies update their own policies

The competitive pressure to drop ethics policies increases significantly when the largest player signals that ethics constraints are negotiable at sufficient scale

Scenarios

Ethics Laundering

Google publishes a formal internal AI ethics review process for government contracts that produces approvals for everything the DoD requests. The process exists to absorb criticism rather than constrain decisions.

Signal: Google announces a government AI ethics board with no veto power and no public reporting obligation within six months of the contract signing

New Maven Moment

A specific application of the contract becomes publicly visible, triggering a new employee protest. Unlike 2018, the company does not cancel the contract but does offer internal transfer options for objecting employees.

Signal: An investigative report ties specific Google infrastructure to a named military operation with civilian casualties

The DoD Buys the Stack

The contract becomes a template. Microsoft, Amazon, and Oracle sign similar 'any lawful purpose' agreements within 18 months, and the effective ethics constraint on military AI drops to zero across the major cloud providers.

Signal: AWS or Azure announces a materially similar contract without an ethics carve-out within the next year

What Would Change This

Evidence that the contract includes binding restrictions on specific use cases, with independent third-party verification and genuine enforcement mechanisms, would substantially change the bottom line. A contract that actually excluded targeting systems, autonomous weapons, and surveillance of civilian populations would be a meaningful constraint. The current reporting suggests none of those exclusions exist.

Sources

The Verge — Reported the deal terms, noted employee opposition, and contextualized Google's reversal from its 2018 Project Maven walkout
Hacker News — Community discussion surfaced Google staff concerns and the contrast with the 2018 employee protest that killed the original Maven contract
