04-18-2026

The Frontier

Your signal. Your price.

AI & TECH

Anthropic's enterprise model outpaces OpenAI

Saturday, April 18, 2026 · from 3 podcasts
  • Anthropic’s metered enterprise model drives 10x growth, eclipsing OpenAI’s flat-fee subscriptions.
  • AI labs now face a physical wall as populist backlash blocks essential data center construction.
  • AI leaders’ own doomsday rhetoric is fueling the real-world opposition now threatening their plans.

OpenAI is hitting a wall. While it chased consumer subscriptions, Anthropic quietly built a better business model, and the market is noticing.

On the `All-In` podcast, David Sacks laid out the numbers. OpenAI’s growth has slowed to a 3x annual rate, while Anthropic is on a 10x trajectory toward a potential $100 billion ARR. He attributes this to Anthropic’s metered “electricity model” for enterprise clients, which scales directly with usage. Secondary markets have reacted, valuing Anthropic higher than OpenAI for the first time.

This isn't just strategy; it's product. Anthropic’s Claude Managed Agents platform, detailed on `The AI Daily Brief`, handles the complex backend engineering for customers. It abstracts away the infrastructure challenges, letting companies deploy autonomous agents in days instead of months. This execution is winning over developers, with David Friedberg noting on `All-In` that Anthropic's rapid release cadence made its models dominant in his organization within six months.

But a superior business model now faces a physical bottleneck. Chamath Palihapitiya argued on `All-In` that frontier labs can no longer simply rent compute. To survive, they must own their own land and power to avoid being throttled by the hyperscalers who control the infrastructure.

Securing that power has become a ground war. As documented on `Hard Fork`, public opposition has escalated from zoning board fights to violence. A man threw a Molotov cocktail at Sam Altman’s home, and a local official in Indiana had his house shot at after a data center vote. The pushback is political, too, with Maine passing a moratorium on large data centers.

Hosts of `Hard Fork` argued the AI labs fueled this fire themselves. Kevin Roose and Casey Newton assert that years of existential risk rhetoric from executives like Altman radicalized public opinion. You cannot spend a decade warning the world about an apocalypse and then feign surprise when people treat you like you’re building the bomb.

David Sacks added a layer of irony, noting on `All-In` that Anthropic previously allied with the very AI safety groups who are now leading the charge against the data centers it needs to grow.

The race has fundamentally changed. The winner won't just have the best model, but the best strategy for navigating a world of power grids and populist anger.

Source Intelligence

- Deep dive into what was said in the episodes

OpenAI's Identity Crisis, Datacenter Wars, Market Up on Iran News, Mamdani's First Tax, Swalwell Out · Apr 17

  • New York City Mayor Zohran Mamdani is proposing a pied-à-terre tax of 3.9% annually on secondary homes valued over $5 million. David Sacks and Travis Kalanick argue the tax will crash demand for high-end real estate and stifle development by removing price-insensitive buyers.
  • David Sacks claims Austin demonstrates supply-side solutions to housing affordability, with rents declining for three consecutive years despite the city's population roughly doubling over the past decade. He argues Democratic cities and NIMBY policies prevent similar construction.
  • OpenAI and Anthropic both had roughly $30 billion in annual recurring revenue at the start of Q2, but Anthropic's growth rate is approximately 10x per year versus OpenAI's 3-4x. David Sacks argues this disparity could become insurmountable if OpenAI doesn't focus on enterprise coding.
  • Travis Kalanick states that in winner-take-all markets like AI, growth and scale create network effects around compute, token volume, and customer base. He argues that if Anthropic sustains a significantly faster growth rate than OpenAI at a similar size, it will win.
  • Chamath Palihapitiya argues frontier AI labs like OpenAI and Anthropic face a critical compute constraint. He cites a contested $6 billion data center project and Maine's moratorium on large data centers as evidence of rising NIMBY opposition fueled by negative public sentiment toward AI.
  • David Sacks asserts that AI doomer groups have astroturfed opposition to data centers, shifting arguments from existential risk to local issues like water usage. He notes Anthropic allied with these groups, a strategy that may backfire as the company now needs to build its own compute infrastructure.
  • Chamath Palihapitiya claims hyperscalers control 60% of all compute, creating a game-theoretic dynamic in which they could kneecap frontier AI labs by throttling access. He argues this forces labs to build their own infrastructure to avoid a 'Friendster effect' of being outcompeted due to poor performance.
  • Jason Calacanis argues AI-driven productivity gains are real but concentrated in startups and savvy teams, not yet translating to broad bottom-line results at large, complex enterprises where change management is a significant barrier.
Also from this episode: (7)

Media (1)

  • David Friedberg warns that public doxxing of wealthy individuals' homes, like Mayor Mamdani did with Ken Griffin's property, creates dangerous dog whistles. He cites the recent firebombing and shooting at Sam Altman's house as an example of real-world violence.

AI & Tech (2)

  • David Friedberg observes an unprecedented pace of innovation at Anthropic, with a rapid release cadence that has supplanted tools like Cursor and made its models dominant in his organization within six months.
  • Travis Kalanick states current AI agents are not AGI and lack taste or novel problem-solving ability, requiring heavy human-in-the-loop guidance. He confirms this from personal experience building investing agents that make basic logical errors.

Business (1)

  • Allbirds stock rose 450% in a week after pivoting from sneakers to AI, which the hosts cite as peak bubble behavior. The company sold its brand assets for $39 million after raising $350 million in its 2021 IPO.

Corruption (1)

  • David Friedberg recounts that multiple sources warned him of serious allegations against Congressman Eric Swalwell in December, which were then revealed in a coordinated manner months later. He finds it striking that this knowledge was held back for strategic political timing.

Markets (2)

  • David Sacks interprets the stock market's resilience during the Iran conflict as pricing in a near-term resolution, citing presidential statements that military objectives are almost achieved. The S&P 500 recovered all losses from the war's start by that Tuesday.
  • Chamath Palihapitiya notes market indicators like the Shiller PE and Buffett Index are at all-time highs, suggesting a risk-off posture. He sees dispersion where only a handful of stocks are driving gains and awaits major IPOs like SpaceX to deleverage.
Hard Fork

A.I. Backlash Turns Violent + Kara Swisher on Healthmaxxing + The Zuck Bot Is Coming · Apr 17

  • Public opposition to AI is turning violent, with a suspect arrested for throwing a Molotov cocktail at Sam Altman's house. He allegedly held anti-AI materials and a list of AI executives.
  • AI backlash is also manifesting as grassroots political resistance to data centers. Maine passed a moratorium on large data centers, and local referendums restricting them are spreading in states like Wisconsin, Ohio, and Indiana.
  • OpenAI lobbies against specific AI regulations while publicly advocating for governance. It killed a California transparency bill and backed an Illinois bill to limit its liability for model harms.
  • OpenAI's policy paper 'Industrial Policy for the Intelligence Age' proposes radical ideas like a public wealth fund for citizens and expanded safety nets, which contradicts its lobbying for smaller-government candidates.
  • Meta is building an AI avatar of Mark Zuckerberg trained on his mannerisms and strategic thinking to interact with employees. A separate 'CEO Agent' project gives Zuckerberg coding assistance.
Also from this episode: (6)

AI & Tech (4)

  • Polls show declining public trust in AI and its governance. A Stanford AI Index report found only 31% of Americans trust their government to responsibly regulate AI, compared to a 54% global average.
  • AI CEOs have historically escalated fears of existential risk from AI. Kevin argues their own rhetoric about superintelligence contributes to public anxiety more than critical journalism does.
  • Kevin and Casey identify the core public fear as economic: AI will take jobs and destabilize lives. They contrast Silicon Valley's enthusiasm for rapid change with a broader public desire for stability.
  • The AI boom is seen as a top-down, elitist project funded by a small group with capital and championed by figures like Donald Trump. This fuels resentment among those who feel they have no control.

Culture (1)

  • Kara Swisher explores Silicon Valley's longevity obsession in her CNN series but remains skeptical of biohacking fads like hyperbaric chambers and ketamine for optimization. She views the focus as narcissistic.

Health (1)

  • Swisher argues the most effective longevity intervention is universal healthcare, not fringe treatments. She notes U.S. healthcare costs $15,000 per person annually with worse outcomes than peer nations spending half that.

AI's Great Divergence · Apr 16

  • Anthropic has restricted its 'Mythos' model to about 40 partners for limited cybersecurity testing, reflecting a trend of staggered rollouts due to security risks. OpenAI is pursuing a similar rollout strategy for its new model.
  • Meta's new Muse Spark is a natively multimodal reasoning model designed primarily for personal agents, not enterprise use. The model supports tool use, visual chain-of-thought, and multi-agent orchestration.
  • Z.ai's Lu claims agents could complete only about 20 steps by the end of last year, while GLM 5.1 can now handle 1,700. The model's autonomous working time is cited as a critical new performance curve.
  • Anthropic released Claude Managed Agents to close a notable gap between model capability and business application, as argued by head of product Angela Jiang. The platform bundles an agent harness with production infrastructure, aiming to reduce engineering overhead.
  • Claude Managed Agents enables scheduled, event-triggered, and long-horizon tasks. It abstracts self-hosting complexity, but lacks persistent memory across sessions, making it best suited for discrete, transactional operations.
  • Google introduced 'notebooks in Gemini', integrating Notebook LM's resource management directly into the app. Google's Josh Woodward positions this as building 'a second brain' beyond basic AI chatbot projects.
  • Ethan Mollick notes Muse Spark is fine but doesn't match the big three models, displaying some strange language and looseness with facts. François Chollet criticizes Meta for over-optimizing for benchmarks at the expense of actual usefulness.
  • Alexander Wang of Meta responded to criticism by saying the lab is open to feedback and is upfront about the model's weaknesses, such as low performance on the ARC-AGI-2 benchmark.
  • GLM 5.1 was trained entirely on less powerful Huawei chips, demonstrating that China's hardware stack can produce powerful results. Its release two months after comparable US frontier models suggests the US lead over Chinese rivals is only a few months.
Also from this episode: (3)

Models (2)

  • On benchmarks, Muse Spark scored 52.4 on SWE-Bench Pro for coding, placing it near top models. It excels in visual comprehension, scoring a state-of-the-art 86.4 on CharXiv reasoning, beating Gemini 3.1 Pro by 6 points.
  • Z.ai's open source GLM 5.1, a 754B parameter model, outperforms leading Western models on coding benchmarks with a 58.4 SWE-Bench Pro score. The model demonstrates long-horizon task capability, completing an eight-hour autonomous Linux desktop build.

Agents (1)

  • Mark Zuckerberg positions Muse Spark for personal use areas like visual understanding, health, and social content. He frames it as a shift from assistant AI to agentic AI, enabling it to 'do things for you' like creating mini-games or troubleshooting appliances.