03-27-2026

The Frontier

Your signal. Your price.

BUSINESS

AI's growth hits a wall of concrete and copper

Friday, March 27, 2026 · from 4 podcasts
  • The AI boom is hitting physical limits, shifting investor focus from software to infrastructure.
  • Elon Musk is betting billions on his own chip factory to bypass supply chain bottlenecks.
  • Political backlash and software efficiencies are new wildcards in the race to build.

The AI revolution is no longer just about code. It's about power grids, transformers, and data centers.

Citing Elon Musk's warning, economist Jordy Visser argued on the *Bitcoin And* podcast that AI has hit its “physical limits.” That constraint is forcing a massive investment pivot away from software-centric companies and toward the raw materials and energy infrastructure that power them. The era of easy growth fueled by algorithms is giving way to a capital-intensive buildout of the physical world.

Elon Musk is taking this to its logical extreme. Convinced the legacy semiconductor industry is too cautious, he plans to build a “Terafab” - a single facility the size of three Central Parks that would vertically integrate every step of chip production. Brett Winton of ARK Invest, speaking on *FYI*, explained that Musk sees access to chips as the primary bottleneck for building galaxy-spanning intelligence. The project is a high-stakes move to force the entire supply chain to expand.

Brett Winton, FYI - For Your Innovation:

- Access to chips is his anticipated choke point because he believes he can launch terawatts of energy into space.

- He just needs terawatts of chips to accompany that energy to train and infer massively intelligent AI models.

Just as this industrial mobilization begins, political resistance is mounting. On *The AI Daily Brief*, Nathaniel Whittemore detailed a bill from Bernie Sanders and Alexandria Ocasio-Cortez that would pause all new data center construction in the U.S. The proposal recasts the infrastructure buildout from a technical issue into a populist one, creating a new layer of uncertainty for investors and builders.

Meanwhile, the people actually building the infrastructure are signing long-term deals. Michael Intrator, CEO of cloud provider CoreWeave, dismissed fears of rapid GPU obsolescence on the *All-In* podcast. He called the argument “nonsense” pushed by short-sellers, noting his clients sign five-year contracts and that prices for older A100 chips are actually appreciating. For Intrator, the sustained demand for inference - the practical application of AI models - proves this is a long-term capital cycle.

Michael Intrator, All-In:

- My take on the GPU depreciation debate is that it's nonsense.

- It's a debate that is being brought to the forefront by traders with a short position in the stock who are trying to talk it down.

Some companies are trying to engineer their way around the problem. Google's “TurboQuant” algorithm and Apple’s strategy of “distilling” large models onto iPhones could reduce reliance on massive, centralized data centers. But these efficiencies are running up against an explosion in demand.
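To make the idea concrete, here is a minimal sketch of weight quantization, the general family of compression techniques a method like the reported “TurboQuant” belongs to. The summaries here don't describe Google's actual algorithm, so the symmetric int8 scheme, names, and numbers below are purely illustrative:

```python
# Illustrative sketch: symmetric int8 weight quantization.
# NOT Google's "TurboQuant" - just the generic technique it builds on.
import random

random.seed(1)
# Stand-in for a model's fp32 weights.
weights = [random.uniform(-0.5, 0.5) for _ in range(1000)]

# Map the range [-max|w|, +max|w|] onto the int8 range [-127, 127].
scale = max(abs(w) for w in weights) / 127

quantized = [round(w / scale) for w in weights]  # stored as 1-byte ints
restored = [q * scale for q in quantized]        # dequantized for compute

# Rounding error is bounded by half a quantization step.
max_err = max(abs(w - r) for w, r in zip(weights, restored))
print(max_err <= scale / 2)  # prints True
```

Shrinking each weight from four bytes to one cuts model memory roughly 4x, which is the kind of relief from the “memory wall” that the episode summary describes.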

The future of AI will be decided not just by better algorithms, but by who can secure the power and industrial capacity to run them.

Entities Mentioned

Anthropic (Company)
CoreWeave (Company)
Gemini (Product)
OpenAI (Trending)
Terafab (Product)
TSMC (Company)

Source Intelligence

What each podcast actually said

Why AI Needs Better Benchmarks · Mar 26

  • Senator Bernie Sanders and Rep. Alexandria Ocasio-Cortez introduced legislation calling for a moratorium on all U.S. data center construction.
  • The proposed data center moratorium would last until national standards for labor, environmental, and civil rights safeguards are established.
  • Senator Mark Warner calls the moratorium idea ridiculous, arguing it would only allow China to accelerate its own AI infrastructure.
  • Mark Warner predicts AI-driven economic disruption could push unemployment for recent college graduates to 35% by 2028.
  • Google's new 'TurboQuant' algorithm compresses model context to address the 'memory wall,' claiming an 8x speed boost for AI inference.
  • Apple is using distillation to train smaller, proprietary models for the iPhone based on the reasoning traces of Google's large Gemini models.
  • China blocked the co-founders of AI company Manus from leaving the country while reviewing Meta's $2 billion acquisition offer.
  • Chinese regulators view the loss of domestic AI talent to Western companies as 'selling young crops,' signaling a talent crackdown.

Also from this episode:

Models (3)
  • Google claims TurboQuant can reduce AI inference costs by 50% through efficient model compression with almost zero performance loss.
  • Cloudflare CEO Matthew Prince likened Google's breakthrough to 'Google's Deepseek,' highlighting optimization for speed, memory, and power.
  • Apple's goal with on-device AI is to keep user data local and bypass cloud latency, setting a standard for edge computing.
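The distillation strategy attributed to Apple above can be sketched in miniature. This toy example illustrates the general technique only, not Apple's pipeline: a tiny “student” model is fit to a “teacher” model's outputs rather than to ground-truth labels, so the student inherits the teacher's behavior at a fraction of the size.

```python
# Toy distillation sketch: a small student mimics a teacher's outputs.
# All models, data, and numbers here are illustrative assumptions.
import random

random.seed(0)

def teacher(x):
    # Stand-in for a large model's prediction on input x.
    return 3.0 * x + 1.0

# Record the teacher's outputs on sample inputs (the "reasoning traces").
inputs = [random.uniform(-1, 1) for _ in range(200)]
targets = [teacher(x) for x in inputs]

# Student: a tiny linear model w*x + b, trained by gradient descent
# on mean squared error against the teacher's outputs.
w, b, lr = 0.0, 0.0, 0.1
for _ in range(500):
    grad_w = grad_b = 0.0
    for x, t in zip(inputs, targets):
        err = (w * x + b) - t
        grad_w += 2 * err * x / len(inputs)
        grad_b += 2 * err / len(inputs)
    w -= lr * grad_w
    b -= lr * grad_b

print(round(w, 2), round(b, 2))  # prints 3.0 1.0 - student matches teacher
```

A real pipeline would distill a neural network's soft probability distributions rather than a linear function, but the mechanics are the same: the student's training signal comes from the larger model.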

Terafab: Elon’s Plan To Dominate Semiconductors | The Brainstorm EP 124 · Mar 26

  • Elon Musk sees civilization resting on three pillars: solar, space launch, and semiconductor chips.
  • Musk views the global semiconductor industry as broken due to legacy manufacturers scaling too cautiously.
  • According to Brett Winton, Musk's expected choke point is chip access, not energy, as he can launch terawatts into space.
  • Musk's goal is terawatts of compute to train AI models and power humanoid robots, not to protect industry margins.
  • Musk's reported $20 billion 'Terafab' would be a single building the size of three Central Parks housing every production step.
  • Brett Winton says the 'Terafab' facility's ambition and scale exceed anything in human history.
  • The 'Terafab' project requires 10 gigawatts of power, with the $20 billion price tag representing just the 'shovel in the ground' cost.
  • By committing massive capital to vertical chip integration, Musk pressures the entire supply chain to ramp up capacity.
  • Musk's move forces legacy manufacturers like TSMC to expand or risk becoming subscale compared to his conglomerate.
  • The strategy carries 'Grok risk': if Musk unlocks a chip supply glut, rivals like OpenAI and Anthropic could benefit more.
  • Sam Korus notes that OpenAI and Anthropic currently have the massive demand that could use any new supply.
  • Brett Winton argues Musk isn't afraid of subsidizing rivals; his goal is populating galaxies, not a 10% shareholder return.
  • For Musk, the risk of a chip supply glut is a small price for ensuring the compute he needs for AI actually exists.

Also from this episode:

Models (1)
  • Sam Korus argues Musk is wagering on infinite demand for intelligence and is far more risk-tolerant than his peers.

AI Agency | Bitcoin News · Mar 24

  • Economist Jordy Visser argues high GDP growth with zero net job creation and low inflation signals a fundamental fracture in traditional capitalism, driven by artificial intelligence.
  • Conventional economic models failed to predict that tariffs would not raise consumer prices, as Chinese manufacturers absorbed costs, a miscalculation Visser attributes to ignoring AI's deflationary force.
  • Elon Musk suggests AI could push global GDP growth to 10%, a figure Visser finds plausible given AI's rapid productivity gains and labor displacement.
  • Jordy Visser contends official GDP calculations likely understate AI's true impact because they fail to fully capture intangible productivity contributions.
  • Visser notes a shrinking US trade deficit is keeping capital within the country, potentially ending a long-standing cycle where foreign entities funded growth by buying US securities.
  • Citing Elon Musk's warning that AI has hit 'physical limits', Jordy Visser sees investor focus shifting from software to physical infrastructure like data centers, transformers, and energy grids.

Also from this episode:

AI & Tech (1)
  • AI is leveling the global competitive field for corporate profit margins, Visser argues, allowing bloated European firms to improve dramatically while lean US tech giants see less relative gain.

Four CEOs on the Future of AI: CoreWeave, Perplexity, Mistral, and IREN · Mar 23

  • CoreWeave CEO Michael Intrator told the All-In podcast that AI compute demand is shifting decisively from model training to inference, which he calls the 'monetization of the investment' where commercial value is realized.
  • Intrator built CoreWeave by first renting GPU cycles for crypto mining and rendering, treating compute as a flexible asset, before pivoting the infrastructure to AI.
  • Michael Intrator dismissed arguments about rapid GPU obsolescence as 'nonsense' driven by short-sellers, noting CoreWeave's average customer contract is five years and the firm uses a six-year depreciation schedule for its hardware.
  • Intrator cited appreciating prices for Nvidia's A100 chips as proof of enduring demand, arguing new market entrants blocked from buying the latest models create a secondary market for older hardware.
  • CoreWeave's strategy is to operate in a layer 'above the Nvidia GPUs but below the models,' delivering specialized AI compute, while hyperscalers like AWS handle general-purpose workloads.
  • Intrator claims CoreWeave's lead comes from being first to deploy each new Nvidia architecture at commercial scale, from H100s to the forthcoming GB300s.
  • The CoreWeave CEO framed the hardware lifecycle as bleeding-edge chips training new models, which then cycle down into long-term inference use, a trend he says is validated by customer contracts and pricing.

Also from this episode:

Models (1)
  • To learn AI infrastructure, CoreWeave purchased Nvidia A100 GPUs and donated them to an open-source research project, which Intrator called paying 'tuition'; when the researchers returned to enterprise jobs, they demanded the same setup, becoming CoreWeave's first customers.