Zero Federal AI Laws, 1,561 State Bills, and a White House Framework Nobody Has to Follow
What happened
As of April 2026, the United States has zero binding federal AI laws. The White House released a voluntary AI framework with no enforcement mechanism. Meanwhile, the EU AI Act is actively being enforced across 27 countries, and 45 US states have introduced a combined 1,561 AI-related bills. Congress has not passed a single AI-specific statute.
The White House released an AI framework with no legal force, Congress has passed zero AI laws, the EU is actively enforcing its Act, and 45 states are writing their own rules. The US is not winning the AI governance race by moving fast; it is losing it by failing to move at all.
The Hidden Bet
Federal preemption of state AI laws would be good for innovation because it creates a uniform national standard that companies can build against.
A uniform federal standard is only better than a state patchwork if the federal standard is actually effective. A weak or unenforced federal framework that preempts stronger state laws produces the worst of both worlds: states cannot protect their residents, and the federal standard does nothing. The 2025 attempt to ban state AI laws for 10 years failed 99-1 in the Senate partly because legislators understood this.
Congressional inaction on AI is a failure of process: if Congress could just get organized, it would pass something reasonable.
Congressional inaction on AI may be a rational response to the industry's lobbying position, which is split. Large incumbents (Microsoft, Google, OpenAI) favor federal preemption because they can manage one federal standard; startups and open-source developers fear that any federal law will entrench the incumbents. Congress has no coalition to pass anything because the industry has no unified ask.
The EU's AI Act is the benchmark the US should be measured against: active enforcement across 27 countries represents effective governance.
EU enforcement of the AI Act is so far almost entirely procedural: companies are registering systems, setting up governance structures, and filing documentation. There have been few actual enforcement actions against deployed AI harms. The EU may have traded speed of legislation for depth of impact, and it is too early to know whether the Act's requirements will change AI behavior in practice.
The Real Disagreement
The genuine fork is whether the right model for AI governance is top-down federal uniformity (one standard, one regulator, preempt the states) or bottom-up regulatory experimentation, where states try different approaches and the federal government eventually consolidates what works. Both positions have real merit. Uniformity reduces compliance burden and prevents regulatory arbitrage, where companies relocate to the most permissive state. Experimentation produces learning: California's AI transparency requirements and Colorado's algorithmic accountability rules are generating real-world data about what works. The honest answer is that the US is doing neither. It has regulatory experimentation without any consolidation mechanism, which means the experimentation produces knowledge that nobody is synthesizing. I lean toward the view that a weak federal floor plus preserved state authority to go further is better than either extreme, but Congress would have to actually pass something, which so far it has shown no ability to do.
What No One Is Saying
David Sacks, who co-authored the White House's AI framework, had his advisory term expire a few days after the framework was released. The framework that is now being used to justify federal preemption of state AI laws was written by someone who no longer officially works for the government, has not been confirmed by the Senate, and whose financial interests in AI companies were never fully disclosed. The document's authority rests entirely on the president's willingness to act on it, and Congress is under no obligation to do so.
Who Pays
Small AI developers and startups
Immediate and accelerating through 2026
1,561 state bills means 1,561 potential compliance requirements across different jurisdictions. Each state that passes its own definition of 'high-risk AI' creates a legal obligation that large companies can manage with compliance teams and small companies cannot. The patchwork is regressive: it costs more per dollar of revenue for small players.
Residents of states with no AI law
Slow-burn; visible only after specific harms occur and victims find they have no legal path
If federal preemption succeeds and the federal framework remains nonbinding or weak, residents of states that have not passed their own AI protections (and that the federal standard would now bar from doing so) have no recourse for AI-driven harms in hiring, lending, healthcare, and criminal justice.
US AI companies competing internationally
Medium-term; 12-36 months
EU companies building to the AI Act have compliance certainty. US companies building for the US market face a moving target: a patchwork of state rules today, possible federal preemption next year, potentially replaced by something else the year after. Investment in compliance infrastructure is risky, so some companies delay it, which widens the gap with EU competitors who have clarity.
Scenarios
Federal Floor Passes, States Keep Upside
Congress passes a narrow federal AI bill, covering only high-risk sectors like hiring, credit, and healthcare, that sets minimum requirements but explicitly preserves states' ability to enact stronger protections. The patchwork narrows but does not disappear.
Signal: A bipartisan Senate bill with co-sponsors from states that have already passed AI laws is introduced before the end of Q2 2026
Preemption Wins, Framework Stays Weak
The White House framework is codified into legislation that preempts state AI laws but contains no enforcement mechanism and no new regulatory body. Large incumbents win; state-level consumer and worker protections are invalidated; nothing materially changes in how AI systems are deployed.
Signal: A federal AI bill is introduced that explicitly preempts state laws while delegating oversight to existing agencies without new rulemaking authority
States Dig In, Federal Effort Collapses
Congress fails to pass anything; the White House framework generates legal challenges from blue-state AGs arguing that executive preemption of state law requires congressional authorization; the regulatory patchwork deepens into genuine fragmentation that begins visibly hurting the development of the US AI market.
Signal: California or Colorado's AG sues to block executive AI preemption orders within 60 days of framework implementation
What Would Change This
If Congress passes a binding federal AI law with actual enforcement authority and teeth (not a framework, not recommendations, not a voluntary standard), then the bottom line changes. A credible federal law with real enforcement would be a genuine governance development. So far there is no evidence Congress is capable of producing one.