04-18-2026

The Frontier

Your signal. Your price.

AI & TECH

Altman home attacked amid AI backlash

Saturday, April 18, 2026 · from 2 podcasts
  • Violence against AI leaders is no longer theoretical - Molotov attacks and gunfire mark a dangerous escalation.
  • OpenAI’s lobbying clashes with its public stance on AI safety, fueling public distrust.
  • Compute scarcity and NIMBYism are forcing AI labs to build their own power and infrastructure.

Altman’s house burned. Not metaphorically - literally. A suspect was arrested after throwing a Molotov cocktail at his San Francisco home, carrying a list of other AI executives. In Indiana, a councilman’s home was shot up following a vote to approve a data center. This isn’t protest. It’s war.

The violence didn’t come from nowhere. OpenAI spent years selling AI as an existential threat - Sam Altman himself called it a potential 'extinction-level event.' Now, when public anxiety spikes, he blames journalists. Kevin Roose and Casey Newton call that disingenuous. The rhetoric came from the top. When leaders stoke fear for influence, they can’t feign surprise when it backfires.

David Friedberg on All-In warned that doxxing wealthy homes - like Mayor Adams exposing Ken Griffin’s property - turns addresses into targets. The Altman attack confirms it: anti-AI sentiment has crossed into real-world violence. And it’s not just elites. Grassroots resistance is spreading. Maine banned new data centers. Wisconsin, Ohio, and Indiana have local referendums. The fight isn’t just about AI - it’s about who controls it.

"You cannot spend a decade warning the world about a potential apocalypse and then feign surprise when people treat you like the person building the bomb."

- Kevin Roose, Hard Fork

The anger isn’t just philosophical. It’s economic. Polls show only 31% of Americans trust the government to regulate AI - far below the 54% global average. People see AI as a top-down project, driven by billionaires with no accountability. OpenAI lobbies against transparency laws while pushing radical safety policies abroad. It killed a California bill requiring model disclosures and backed an Illinois bill limiting liability. The hypocrisy is visible.

Meanwhile, the infrastructure wall is closing in. Chamath Palihapitiya argues frontier labs can’t rely on Amazon or Google for compute - those hyperscalers control 60% of capacity and could throttle access at will. Over 40% of contested data center builds are now canceled. Maine banned them outright. The era of renting scale is over.

"When you replace a leader with a chatbot, you don't get efficiency - you get a system that can be gamed."

- Casey Newton, Hard Fork

Enterprises are shifting fast. Anthropic’s metered 'electricity model' for coding tokens is outpacing OpenAI’s flat $20 subscriptions. David Sacks notes Anthropic could hit $100 billion in ARR by year-end - a number that defies current valuations. Secondary markets now price Anthropic above OpenAI. The winner may not be the smartest model, but the one that can deliver compute without political suicide.

The AI elite’s response? More insulation. Zuckerberg’s building an AI clone to avoid employee questions. Altman’s healthmaxxing with hyperbaric chambers. But as Swisher found, these stunts only deepen the divide. The public doesn’t want immortality hacks - they want jobs, stability, and a say. The backlash isn’t just about AI. It’s about power.

Source Intelligence

- Deep dive into what was said in the episodes

OpenAI's Identity Crisis, Datacenter Wars, Market Up on Iran News, Mamdani's First Tax, Swalwell Out · Apr 17

  • New York City Mayor Eric Adams is proposing a pied-à-terre tax of 3.9% annually on secondary homes valued over $5 million. David Sacks and Travis Kalanick argue the tax will crash demand for high-end real estate and stifle development by removing price-insensitive buyers.
  • David Sacks claims Austin demonstrates supply-side solutions to housing affordability, with rents declining for three consecutive years despite the city's population roughly doubling over the past decade. He argues Democratic cities and NIMBY policies prevent similar construction.
  • Travis Kalanick states that in winner-take-all markets like AI, growth and scale create network effects around compute, token volume, and customer base. He argues that if Anthropic sustains a significantly faster growth rate than OpenAI at a similar size, it will win.
  • Chamath Palihapitiya argues frontier AI labs like OpenAI and Anthropic face a critical compute constraint. He cites a contested $6 billion data center project and a Maine bill banning all data centers as evidence of rising NIMBY opposition fueled by negative public sentiment toward AI.
  • David Sacks asserts that AI doomer groups have astroturfed opposition to data centers, shifting arguments from existential risk to local issues like water usage. He notes Anthropic allied with these groups, a strategy that may backfire as the company now needs to build its own compute infrastructure.
Also from this episode: (8)

Media (1)

  • David Friedberg warns that public doxxing of wealthy individuals' homes, like Mayor Adams did with Ken Griffin's property, creates dangerous dog whistles. He cites the recent firebombing and shooting at Sam Altman's house as an example of real-world violence.

Startups (1)

  • OpenAI and Anthropic both had roughly $30 billion in annual recurring revenue at the start of Q2, but Anthropic's growth rate is approximately 10x per year versus OpenAI's 3-4x. David Sacks argues this disparity could become insurmountable if OpenAI doesn't focus on enterprise coding.

AI & Tech (4)

  • David Friedberg observes an unprecedented pace of innovation at Anthropic, with a rapid release cadence that has supplanted tools like Cursor and made its models dominant in his organization within six months.
  • Chamath Palihapitiya claims hyperscalers control 60% of all compute, creating game theory where they could kneecap frontier AI labs by throttling access. He argues this forces labs to build their own infrastructure to avoid a 'Friendster effect' of being outcompeted due to poor performance.
  • Jason Calacanis argues AI-driven productivity gains are real but concentrated in startups and savvy teams, not yet translating to broad bottom-line results at large, complex enterprises where change management is a significant barrier.
  • Travis Kalanick states current AI agents are not AGI and lack taste or novel problem-solving ability, requiring heavy human-in-the-loop guidance. He confirms this from personal experience building investing agents that make basic logical errors.

Business (1)

  • Allbirds stock rose 450% in a week after pivoting from sneakers to AI, which the hosts cite as peak bubble behavior. The company sold its brand assets for $39 million after raising $350 million in its 2021 IPO.

Corruption (1)

  • David Friedberg recounts that multiple sources warned him of serious allegations against Congressman Eric Swalwell in December, which were then revealed in a coordinated manner months later. He finds it striking that this knowledge was held back for strategic political timing.
Hard Fork

Casey Newton

A.I. Backlash Turns Violent + Kara Swisher on Healthmaxxing + The Zuck Bot Is Coming · Apr 17

  • Public opposition to AI is turning violent, with a suspect arrested for throwing a Molotov cocktail at Sam Altman's house. He allegedly held anti-AI materials and a list of AI executives.
  • Polls show declining public trust in AI and its governance. A Stanford AI Index report found only 31% of Americans trust their government to responsibly regulate AI, compared to a 54% global average.
  • AI CEOs have historically escalated fears of existential risk from AI. Kevin argues their own rhetoric about superintelligence contributes to public anxiety more than critical journalism does.
  • OpenAI lobbies against specific AI regulations while publicly advocating for governance. It killed a California transparency bill and backed an Illinois bill to limit its liability for model harms.
  • OpenAI's policy paper 'Industrial Policy for the Intelligence Age' proposes radical ideas like a public wealth fund for citizens and expanded safety nets, which contradicts its lobbying for smaller-government candidates.
Also from this episode: (6)

AI & Tech (4)

  • AI backlash is also manifesting as grassroots political resistance to data centers. Maine passed a moratorium on large data centers, and local referendums restricting them are spreading in states like Wisconsin, Ohio, and Indiana.
  • Kevin and Casey identify the core public fear as economic: AI will take jobs and destabilize lives. They contrast Silicon Valley's enthusiasm for rapid change with a broader public desire for stability.
  • The AI boom is seen as a top-down, elitist project funded by a small group with capital and championed by figures like Donald Trump. This fuels resentment among those who feel they have no control.
  • Meta is building an AI avatar of Mark Zuckerberg trained on his mannerisms and strategic thinking to interact with employees. A separate 'CEO Agent' project gives Zuckerberg coding assistance.

Culture (1)

  • Kara Swisher explores Silicon Valley's longevity obsession in her CNN series but remains skeptical of biohacking fads like hyperbaric chambers and ketamine for optimization. She views the focus as narcissistic.

Health (1)

  • Swisher argues the most effective longevity intervention is universal healthcare, not fringe treatments. She notes U.S. healthcare costs $15,000 per person annually with worse outcomes than peer nations spending half that.