
The Frontier

Your signal. Your price.

AI & TECH

AI’s Physical War: Compute Is the Bottleneck

Sunday, March 15, 2026 · from 2 podcasts
  • Big Tech is spending $600 billion on infrastructure now to lock down AI compute capacity through 2029.
  • Labs that locked in deals early, like OpenAI, have a decisive advantage, while others now chase last-minute capacity at huge premiums.
  • The strategic gap between early movers and fiscally conservative labs is hardening into a moat that may prove insurmountable.

The AI race is no longer just about code. It’s a brutal fight for power, chips, and real estate.

On the Dwarkesh Podcast, Dylan Patel detailed the physical infrastructure war. Big Tech’s colossal $600 billion capital expenditure is funding turbine deposits for 2028 and data center construction for 2027. This isn’t just spending for this quarter. It’s a multi-year bet on securing the physical capacity to run AI at scale.

The AI model companies are caught in the middle of this pre-committed war. Patel explained that Anthropic’s explosive revenue growth now demands roughly $40 billion in annual compute spend. To meet that, it needs about four gigawatts of new inference capacity this year alone. But the conservative financial strategy that once protected Anthropic has become its biggest liability.
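Those two figures can be sanity-checked against each other. The sketch below is a back-of-envelope calculation, not something from the podcast: it assumes the $40 billion annual spend maps one-to-one onto the four gigawatts of new capacity, which is a simplification.

```python
# Back-of-envelope: implied cost of compute per gigawatt-year,
# assuming Anthropic's ~$40B annual spend maps onto ~4 GW of capacity.
annual_spend_usd = 40e9   # reported ~$40B/year in compute spend
new_capacity_gw = 4       # reported ~4 GW of new inference capacity
hours_per_year = 8760

usd_per_gw_year = annual_spend_usd / new_capacity_gw

# Per kilowatt of IT load per hour (1 GW = 1,000,000 kW):
usd_per_kw_hour = usd_per_gw_year / hours_per_year / 1_000_000

print(f"${usd_per_gw_year / 1e9:.0f}B per GW-year")       # $10B per GW-year
print(f"${usd_per_kw_hour:.2f} per kW-hour of IT load")   # $1.14
```

At roughly $10 billion per gigawatt-year, the implied all-in rate of about $1.14 per kilowatt-hour of IT load sits near the build-cost end of the H100 pricing discussed below, which is at least directionally consistent with the reported numbers.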

OpenAI took the opposite path. It signed massive, aggressive deals with cloud providers early, even amid criticism that it couldn’t pay. That move locked in capacity at better terms and lower prices. Patel noted that labs scrambling now are paying premiums, like $2.40 per H100 hour, far above the $1.40 build cost. They are forced to turn to lower-quality providers they once avoided.
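The size of that premium is easy to quantify from the two per-hour figures Patel cites; the annualization below is my own illustration, assuming a GPU rented around the clock.

```python
# Premium paid by late movers renting H100s versus the estimated build cost.
spot_rate = 2.40     # $/H100-hour paid in the current scramble
build_cost = 1.40    # $/H100-hour estimated all-in build cost
hours_per_year = 8760

markup = (spot_rate - build_cost) / build_cost   # ~0.714, i.e. ~71% premium
annual_spot = spot_rate * hours_per_year         # ~$21,024 per GPU-year
annual_build = build_cost * hours_per_year       # ~$12,264 per GPU-year

print(f"markup: {markup:.0%}")                                        # markup: 71%
print(f"extra cost per GPU-year: ${annual_spot - annual_build:,.0f}")
```

The extra dollar per hour compounds to roughly $8,760 per GPU per year, which is what "paying a premium" means at the scale of hundreds of thousands of chips.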

This compute bottleneck is reshaping the competitive landscape. First-mover advantage in securing physical resources is proving more decisive than model quality or revenue growth. The companies that bet big on hardware years ago are pulling away from those who prioritized financial caution.

Meanwhile, the public narrative is shifting to obscure the stakes. On Podcasting 2.0, Adam Curry and Dave Jones dissected Sam Altman’s recent interview, where he said the term ‘Artificial General Intelligence’ has ‘ceased to have much meaning.’ The discussion pivoted to a vague metric about data center cognitive capacity. The business model, as Curry recounted Altman describing it, is simpler: get users hooked, then raise prices.

The boardroom mystique about AGI masks a grimmer reality. The real intelligence on display is in securing the physical means of production before your competitors even know they need it.

Dylan Patel, Dwarkesh Podcast:

- In some sense, a lot of the financial freakouts in the second half of last year were because, "OpenAI signed all these deals but they didn't have the money to pay for them…"

- Anthropic was a lot more conservative. They were like, "We'll sign contracts, but we'll be principled."

Entities Mentioned

Anthropic
OpenAI

Source Intelligence

What each podcast actually said

Episode 253: Dirty Fix · Mar 13

  • OpenAI CEO Sam Altman now claims the term 'Artificial General Intelligence' has 'ceased to have much meaning,' which Dave Jones and Adam Curry frame as a retreat from concrete promises to vague corporate mysticism.
  • Altman proposed a new, fuzzy metric for AGI based on when data centers might contain more cognitive capacity than the world, and estimated this could happen by late 2028, with 'huge error bars'.
  • According to Dave Jones, Sam Altman outlined the explicit AI model business model as getting developers hooked on a tool, charging an initial $200 per month, then dramatically raising prices to $4,000 or $5,000 per month.
  • Jones describes the model as pure platform lock-in driven by addiction, not by revolutionary intelligence, comparing it to treating users like commodities.
  • Dave Jones described his experiments with local AI tooling and open-source agents as a 'big pile of stinking bullcrap,' a scam ecosystem propped up by influencers selling pre-configured servers.
  • After building a local AI setup and writing his own scripts, Jones concluded there was a lack of meaningful tasks for the system to perform, highlighting the gap between corporate hype and broken developer toolchains.

Also from this episode:

Models (1)
  • Jones criticized 'obliterated' models, which are attempts to remove censorship guardrails from others' work, and found local AI agents to be all chat with no practical utility.

Dylan Patel — Deep dive on the 3 big bottlenecks to scaling AI compute · Mar 13

  • Dylan Patel of SemiAnalysis explains that the roughly $600 billion in AI-related capital expenditure forecast for 2026 is not for immediate use; it funds multi-year infrastructure such as power capacity for 2028 and data center construction for 2027.
  • Anthropic's explosive revenue growth now requires it to find roughly $40 billion in annual compute spend, which translates to needing about four gigawatts of new inference capacity this year alone.
  • Patel says OpenAI secured a decisive first-mover advantage by signing aggressive, massive deals with cloud providers early, locking in compute capacity at cheaper rates and better terms despite skepticism about its ability to pay.
  • Anthropic's initially conservative financial strategy, which prioritized avoiding bankruptcy risk, has left it exposed, forcing it to chase last-minute compute deals in a tight market.
  • In the current scramble for AI chips, labs are paying significant premiums, such as $2.40 per hour for an Nvidia H100, a roughly 70 percent markup over the estimated $1.40 build cost.
  • The core strategic divergence is that OpenAI's early, aggressive bets gave it an advantage in a physical resource war, while Anthropic's later revenue success forces it into a costly scramble for a depreciating asset.

Also from this episode:

Models (1)
  • To secure necessary compute, AI labs like Anthropic are now forced to turn to lower-quality or newer infrastructure providers they had previously avoided.