OpenAI Wants Immunity. Anthropic Wants Accountability. One AI Bill Will Decide Which Vision Wins.
What Happened
Illinois SB 3444 would grant AI companies that develop qualifying frontier models near-complete immunity from lawsuits when their systems cause harm, up to and including mass-casualty events. OpenAI has publicly backed the bill. Anthropic has publicly opposed it, calling it an extreme liability shield. The divide is not between AI insiders and regulators but between the two most prominent AI labs in the world, which nominally share a commitment to safety. The bill is moving through the Illinois Senate.
The two companies that have most loudly claimed that AI safety is their core mission have taken directly opposite positions on whether AI developers should face legal consequences when their systems cause mass harm. One of them is lying about what safety actually means to it.
The Hidden Bet
"OpenAI and Anthropic share a common safety philosophy and differ only on tactics."
Their positions on SB 3444 are not a tactical disagreement. One company wants legal accountability for catastrophic AI failures built into the system; the other wants a statutory exemption. That is a values split, not a strategy difference.
"Liability shields prevent reckless AI deployment by putting accountability on deployers rather than developers."
If developers face no legal exposure for harms their models cause downstream, they have weaker incentives to invest in safety before deployment. Shifting liability to deployers only disciplines deployers. It does nothing to discipline the labs building the most capable and dangerous systems.
"State-level AI regulation is a sideshow compared to federal action."
Illinois is the fifth-largest state economy in the US and home to major financial and technology infrastructure. A precedent set in Illinois travels: other states copy the framework, lobbyists fight the same battle in parallel, and the cumulative effect shapes the de facto national standard before Congress acts.
The Real Disagreement
The genuine fork is between two visions of how AI accountability works. One view holds that legal liability creates market incentives for safety: if a model kills people and the developer faces ruinous litigation, labs will invest more in testing before release. The opposing view holds that liability exposure does not make AI safer; it just makes development slower and concentrates it in the hands of incumbents large enough to absorb lawsuits, leaving the field to OpenAI and a handful of peers. The first view requires accepting that some AI harms will be litigated, which is messy. The second requires trusting that labs will self-regulate without legal consequence, which is precisely what the last decade of social media taught us not to trust. The lean here is toward accountability: the history of product liability in pharmaceuticals, aviation, and automobiles suggests that litigation-driven safety incentives are imperfect but real. OpenAI's position makes more sense if you expect to be sued successfully and want protection in advance. That is not a safety argument; it is a business argument.
What No One Is Saying
Anthropic's opposition to the bill is publicly framed as a principled safety stance. But Anthropic is also the company whose Mythos model just triggered emergency meetings at the Fed and the Bank of England over its cyberattack capabilities. Opposing a liability shield while building the most dangerous AI in the world is not a contradiction, but it is a tension nobody at Anthropic is addressing in public.
Who Pays
Victims of AI-caused mass harm events
Immediate if passed; effects visible at the first mass-harm event attributable to a qualifying model.
If SB 3444 passes and spreads to other states, people injured by AI-caused harms cannot sue the developer, only the deployer. Deployers are often smaller and less capitalized, and they may dissolve or go bankrupt after a catastrophic event, leaving victims without meaningful recourse.
Smaller AI companies and startups
From the day the bill passes; litigation risk shapes investment and development decisions immediately.
The immunity provisions in SB 3444 apply to companies with qualifying frontier models, defined in ways that favor Anthropic and OpenAI. Startups building smaller or narrowly scoped models face full liability while incumbents are shielded, cementing the current hierarchy.
Scenarios
Illinois passes, other states copy
SB 3444 becomes law, OpenAI's framework wins, and liability shields spread to five or more states within 18 months, creating a de facto national standard before Congress can impose a stricter one.
Signal: Watch for similar bills introduced in Texas, New York, or California within 90 days of Illinois passage.
Anthropic wins the argument, bill stalls
SB 3444 fails or is significantly amended to strip the immunity provisions, and Anthropic's accountability framework becomes the template for state and, eventually, federal AI legislation.
Signal: A committee vote failure or a major amendment stripping the liability shield in the next 60 days.
Federal preemption ends the debate
Congress passes a federal AI framework within 12 months that supersedes state liability law entirely, rendering SB 3444 moot either way.
Signal: A Senate AI committee vote advancing a federal preemption bill. This is the scenario both labs are quietly lobbying toward, each on its own terms.
What Would Change This
If evidence emerged that Anthropic privately supported a liability ceiling for its own models in federal lobbying while opposing SB 3444 publicly, the safety framing collapses and this becomes a pure competitive strategy story. Similarly, if the bill's definition of qualifying models actually excludes Anthropic's most capable systems, the company's opposition becomes straightforwardly self-interested rather than principled.