AI's biggest bottleneck isn't intelligence. It's electricity.
Dylan Patel of SemiAnalysis told the Dwarkesh Podcast that Big Tech's $600 billion capital expenditure forecast is a multi-year bet on physical infrastructure. Companies are placing deposits now for power turbines that won't be delivered until 2028. This is a long-lead game of securing megawatts and square footage years before they're needed.
While the hyperscalers plan years ahead, the labs are squeezed by today's shortage. Anthropic's revenue growth now demands roughly $40 billion in annual compute spend, requiring about four gigawatts of new inference capacity this year alone. According to Patel, OpenAI's early, aggressive deals with cloud providers gave it a decisive advantage: it locked in capacity at favorable terms while critics questioned its ability to pay.
Anthropic prioritized financial responsibility. That left it hunting for spare chips in a market where the price for an H100 hour has jumped to $2.40, well above the $1.40 build cost. It must now rely on lower-quality or newer providers it previously avoided.
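The spread Patel cites implies a substantial profit for anyone holding spare capacity. A quick back-of-envelope calculation (the $2.40 rental price and $1.40 build cost come from the discussion above; the derived percentages are just arithmetic, not figures from the podcast):

```python
# Illustrative arithmetic on the H100 rental spread cited above.
# Inputs from the text: ~$2.40 market price per H100-hour vs. ~$1.40
# all-in build cost per hour. Derived margins are back-of-envelope.

price_per_hour = 2.40   # market rate for one H100-hour, USD
cost_per_hour = 1.40    # estimated all-in build cost per H100-hour, USD

spread = price_per_hour - cost_per_hour    # absolute spread per hour
gross_margin = spread / price_per_hour     # margin as share of revenue
markup = spread / cost_per_hour            # markup over build cost

print(f"spread:       ${spread:.2f}/hr")   # $1.00/hr
print(f"gross margin: {gross_margin:.0%}") # ~42%
print(f"markup:       {markup:.0%}")       # ~71%
```

A roughly 42% gross margin on rented compute is why late buyers like Anthropic end up paying a steep premium to providers that built capacity early.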
The resource war is global. On the All-In podcast, David Sacks argued that the energy demands of AI are hitting a grid already strained by the war in Ukraine and the shift to electric vehicles. The result is a physical shortage that money alone can't fix.
This is forcing extreme ideas. Sacks mentioned proposals to build data centers in space, where solar power is constant and cooling is free. It sounds like science fiction, but the logic is simple: the earthbound grid can't scale fast enough.
Dylan Patel, Dwarkesh Podcast:
- A huge portion of that $600 billion is for long-lead items like turbine deposits for power capacity in 2028 and data center construction for 2027.
- It's a multi-year bet on scaling.


