AI's revenue explosion is colliding with a physics problem: compute.
On the Dwarkesh Podcast, Dylan Patel of SemiAnalysis outlined the strategic gap. Big Tech's $600 billion capex is a long-term infrastructure bet, funding power turbines for 2028 and data centers for 2027. AI labs like Anthropic need chips today: Anthropic's growth now demands roughly $40 billion in annual compute spend, requiring it to chase spare capacity in a tight market at premium prices.
The divergence is tactical. Patel explained that OpenAI signed massive cloud deals early, locking in favorable terms even when doing so seemed financially reckless. Anthropic prioritized fiscal prudence, avoiding bankruptcy risk. That caution backfired, forcing it into a costly scramble for a depreciating asset.
The compute bottleneck is part of a broader fight for control over the entire technical stack. On This Week in Startups, the founders of Hippius argued centralization creates systemic fragility, pitching their decentralized storage subnet as a cheaper, resilient alternative to Amazon S3. The Presidio Bitcoin Jam noted similar tensions in open-source AI, where training data, compute, and distribution remain bottlenecked by a few entities.
The underlying race is the same: who controls the infrastructure that powers the next cycle of innovation. It's a battle of capital, contracts, and architecture.
Dylan Patel, Dwarkesh Podcast:
- In some sense, a lot of the financial freakouts in the second half of last year were because, 'OpenAI signed all these deals but they didn't have the money to pay for them…'
- Anthropic was a lot more conservative. They were like, 'We'll sign contracts, but we'll be principled.'