April 11, 2026

The Company That Beat OpenAI Is Now Afraid of Nvidia

Reuters

What happened

Anthropic's annualized revenue run rate for Claude has surpassed $30 billion, overtaking OpenAI's $25 billion and giving Anthropic the higher private market valuation of the two. On April 9, Reuters reported that Anthropic is in early-stage exploration of designing its own custom AI chips, responding to the same shortage that has pushed OpenAI, Meta, and Google to pursue in-house silicon. Anthropic's current compute runs on a mix of Google TPUs, Amazon custom chips, and Nvidia GPUs, and it has signed a deal with Broadcom for 3.5 GW of TPU capacity beginning in 2027, conditional on continued commercial growth.

Anthropic won the revenue race by being the most reliable enterprise AI provider. Now it faces the same vulnerability that almost killed the enterprise software industry in the 1980s: its most critical input is controlled by suppliers who are also its competitors.

The Hidden Bet

1. Custom chips would give Anthropic independence

Custom chip design takes 3-5 years and billions of dollars before first silicon. During that window, Anthropic would still depend on Nvidia, Google, and Amazon for inference and training. Apple's A-series chips succeeded because Apple had $60 billion in free cash flow to sustain the program. Anthropic is still venture-backed.

2. Beating OpenAI in revenue means winning the AI race

OpenAI has the consumer brand, the API ecosystem, and the Microsoft distribution deal. Anthropic's lead is almost entirely in enterprise B2B contracts. If OpenAI fixes its reliability issues, its customer acquisition pipeline could close the gap faster than Anthropic's revenue lead suggests.

3. The Broadcom TPU deal secures Anthropic's compute future

The 3.5 GW deal starts in 2027 and is contingent on commercial success. A regulatory ruling, a competitor breakthrough, or a market downturn between now and 2027 could trigger the contingency. Anthropic's compute future depends on commercial performance that cannot be guaranteed.

The Real Disagreement

The real fork is whether AI is a software business or a hardware business. If it is software, Anthropic's engineering talent and model quality determine the winner, and custom chips are a distraction. If it is hardware, the company that controls compute capacity controls the market, and every dollar spent on model research is wasted if you cannot get the training runs. OpenAI bet on hardware by partnering with Microsoft's Azure and investing in its own infrastructure. Anthropic bet on model quality. The revenue numbers suggest Anthropic's bet is paying off for now. But a chip shortage that lasts another two years could reverse the advantage. The honest answer is that Anthropic has to be both a software company and a hardware company, and no one has yet demonstrated you can be both.

What No One Is Saying

Anthropic surpassing OpenAI in revenue while staying dependent on Google and Amazon compute means Amazon and Google can, at any time, adjust pricing or allocation terms and immediately compress Anthropic's margins. Anthropic's $380 billion valuation assumes it can sustain margins that require goodwill from the two companies that are simultaneously its infrastructure providers and its direct competitors.

Who Pays

Enterprise customers locked into Claude

Medium-term, particularly if the Broadcom deal terms slip or the GPU shortage worsens

If Anthropic's compute constraints force capacity rationing or price increases, enterprise customers who built workflows on Claude have no fast alternative. Migration costs are high, and switching takes months.

OpenAI employees and investors

Immediate, ongoing

Anthropic's revenue lead undercuts OpenAI's claim that its reliability issues do not matter in the long run. Internal pressure to cut costs or restructure increases when a competitor has already proven the enterprise model.

Scenarios

Software wins

Anthropic's model quality and enterprise reliability sustain the revenue lead. Chip supply stabilizes. Anthropic reaches $50B ARR before needing custom silicon and raises enough capital to fund a chip program from profit.

Signal: Anthropic's revenue growth continues at the current pace through Q4 2026, with no public statements about compute constraints.

Hardware squeeze

Global AI chip demand continues to exceed supply. Anthropic's TPU allocation from Google gets reprioritized for Google's own models. Anthropic cannot fulfill enterprise SLAs. Revenue growth stalls and the valuation premium collapses.

Signal: Anthropic begins hiring hardware engineers and announces a chip design program, a sign the threat is real enough to spend on.

Consolidation

Amazon or Google moves to acquire Anthropic before an IPO, converting its compute dependency into full ownership. Antitrust review delays but does not block the deal.

Signal: A 13D filing from Amazon or Google showing a position above 25% in Anthropic, or a formal acquisition announcement.

What Would Change This

If Anthropic publicly announced a dedicated chip design team with a committed timeline, the hardware-threat thesis would become the primary read. If OpenAI closes the revenue gap to within 10% by Q3 2026, the enterprise lead is narrower than the current numbers suggest.

Sources

Reuters — Broke the story that Anthropic is in early-stage exploration of designing custom chips. Three sources. No team formed, no commitment made. Anthropic currently uses Google TPUs, Amazon chips, and Nvidia GPUs.
BigGo Finance — Reports Anthropic's ARR has crossed $30 billion, exceeding OpenAI's $25 billion, and that Anthropic's private valuation has also overtaken OpenAI's. Frames this as a capital sentiment shift.
CNBC — Contextualizes the chip exploration against the broader shortage. Notes Meta and OpenAI are doing the same. Frames chip dependency as the sector's shared existential risk.
News Factory — Adds the Broadcom SEC filing detail: Anthropic's 3.5 GW TPU deal starting in 2027 is contingent on continued commercial success, meaning the compute contract is performance-conditional.
