04-25-2026

The Frontier

Your signal. Your price.

AI & TECH

Mythos leak exposes AI's security illusion

Saturday, April 25, 2026 · from 3 podcasts
  • Anthropic’s Mythos AI, designed to find critical software flaws, leaked to hackers via a third-party vendor.
  • Governments compare the threat to a Strait of Hormuz blockade, but no agency regulates such models.
  • The breach undermines Anthropic’s “safety-first” branding, giving rivals like Sam Altman ammunition.

Anthropic’s Mythos model was never supposed to leave the vault. Engineered to detect deep software vulnerabilities - like the 27-year-old OpenBSD bug it recently uncovered - it was shared with only 11 trusted entities: major banks, tech giants, and the UK government. Six weeks after Dario Amodei met White House staff to discuss AI governance, that containment failed. A hacker collective on Discord accessed the model through a compromised third-party vendor, exposing critical infrastructure to automated cyberattacks.

The fallout is immediate and global. Canada’s finance minister likened the leak to a blockade of the Strait of Hormuz - a systemic choke point for global stability. The Bank of England warned Mythos could “crack the whole cyber risk world open,” capable of mapping attack vectors on power grids and financial systems. Yet no regulatory body oversees models of this power. Unlike pharmaceuticals, which require years of trials, AI tools with comparable societal risk are managed solely by corporate discretion.

Krystal Ball argued on Breaking Points that the incident reveals a dangerous regulatory void. “We are betting the stability of the global financial system on the server security of a single company,” she said. The voluntary safety pledges favored by AI labs now look inadequate. Even Anthropic’s selective access list - intended to prevent rivals from cloning the model - has become a liability, creating a shadow class of insiders while leaving smaller entities exposed.

"We are betting the stability of the global financial system on the server security of a single company."

- Krystal Ball, Breaking Points

Sam Altman seized on the breach to undermine Anthropic’s safety-first narrative. In a recent interview, he mocked the company for building “a bomb” and then selling $100 million “bomb shelters” to select clients. The leak proves, he argued, that internal ethics mean little when supply chains are porous. The real vulnerability isn’t the model’s capability - it’s the illusion of control.

Six weeks after the White House meeting, the political response remains fragmented. No executive order or international framework has emerged to classify or control dual-use AI models. Meanwhile, the incident fuels broader skepticism about self-policing in AI. If a model this dangerous can slip through vendor defenses, the entire premise of trusted access is compromised.

"The leak suggests the industry's focus on internal safety protocols might be eclipsed by the simple failure of third-party security."

- Nathaniel Whittemore, The AI Daily Brief

The Mythos breach isn’t just a security failure. It’s a turning point - one that forces governments to choose between innovation and systemic risk. For now, the rules remain unwritten, and the most powerful tools are already in the wild.

Source Intelligence

- Deep dive into what was said in the episodes

Global Alarms Over New AI, Kalshi Insider Trading, Tucker Apologizes For Trump Support · Apr 23

  • Krystal argues that powerful AI models, unlike medical drugs, lack federal scrutiny, contrasting the rigorous approval process for pharmaceuticals with the hands-off approach to AI development.
  • Krystal advocates for a presidential advisory body to establish transparent review standards for powerful AI models, arguing against developers solely determining their safety for global impact.
  • Anthropic CEO Dario Amodei met with White House Chief of Staff Susie Wiles to discuss AI, including Mythos, even though the Trump administration blacklisted Anthropic's Claude AI model.
  • Saagar describes how a person used a hairdryer to manipulate a Paris airport weather sensor on Polymarket, winning $34,000 by artificially raising the temperature to 22 degrees Celsius twice.
  • Krystal challenges the "societal utilitarian benefit" of prediction markets like Kalshi, asserting they primarily enable speculative betting without offering broader public utility.
  • Krystal criticizes the Trump administration's removal of the $25,000 cash limit for day traders, citing empirical data that most retail traders are financially "wiped out" within two to three months.
  • Krystal notes the $25,000 day trading limit was implemented after the dot-com crash of the early 2000s, when numerous retail investors lost their savings, underscoring the purpose of consumer protection rules.
  • Krystal contends that the "democratization of finance" offered by platforms like Robinhood and Kalshi primarily enriches the companies through transaction fees, while most individual consumers lose money.
  • Saagar asserts that human psychological vulnerabilities, particularly the "get rich quick" desire, are easily exploited by addictive platforms, underscoring the need for consumer protection.
  • Mark Moran claimed he intentionally bet $100 on himself on Kalshi to expose corruption and insider trading on prediction markets, citing potential manipulation on Polymarket's New York City mayoral race.
  • Krystal asserts Kalshi's slow detection of Mark Moran's insider trading shows enforcement challenges, particularly in monitoring related parties like consultants or family members.
  • Jacob Wasserman clarifies TMZ pays for photos and videos, akin to other news outlets using Getty Images, but emphasizes they do not pay for information, relying on reporting and FOIA requests.
Also from this episode: (10)

Models (5)

  • Saagar explains Anthropic's Mythos AI model can identify and exploit vulnerabilities in critical infrastructure like banks and power grids, raising concerns about its potential for misuse.
  • Anthropic decided not to widely release its powerful Mythos model, sharing it only with eleven US organizations and Britain, triggering global alarm over potential security risks.
  • The Bank of England governor warned Mythos could "crack the whole cyber risk world open," while Canada's finance minister compared its threat to closing the Strait of Hormuz.
  • Saagar notes Bloomberg reported Mythos was accessed by an unauthorized Discord hacker collective, highlighting concerns about the model's security despite Anthropic's precautions.
  • Krystal emphasizes that AI's ability to perfectly spoof voices will escalate existing spam call and text issues, making individuals, especially public figures, vulnerable to sophisticated financial scams.

Safety (2)

  • Krystal rejects the idea that AI companies exaggerate dangers for marketing, pointing to global alarms from banks and intelligence agencies as proof of genuine concern.
  • Saagar recognizes Anthropic CEO Dario Amodei's credibility for prioritizing AI safety, noting his refusal of Pentagon demands and the company's research into model vulnerabilities.

Society (1)

  • Krystal observes that societal impacts of technology, like the iPhone's widespread adoption taking four years until 2011, suggest AI's full effects will also unfold over time.

Digital Sovereignty (1)

  • Krystal, having owned an iPhone 4 sixteen years ago, voices skepticism about technology's promises of improvement, stating she is not "better off" with a smartphone.

Banking (1)

  • Saagar warns a major cyber incident impacting digitally-reliant banking systems could destabilize the global financial system and erode public trust in monetary security.

What GPT Images 2 Unlocks · Apr 22

Also from this episode: (13)

Startups (3)

  • SpaceX partnered with Cursor, an AI coding tool, acquiring rights to purchase Cursor for $60 billion later this year; if the acquisition fails, SpaceX will pay Cursor $10 billion for their collaborative work.
  • The SpaceX-Cursor deal potentially solves Cursor's reported issue of losing money on every Claude and OpenAI token served, giving them access to xAI's Colossus training supercomputer with millions of H100-equivalent units for in-house model development.
  • xAI could benefit from Cursor by gaining a significant data pipeline to improve its models, especially since xAI has struggled to generate revenue or release impactful models, and lacks a footprint in the AI coding space.

Markets (1)

  • SpaceX's IPO disclosure documents reveal Elon Musk increased his stake by $1.4 billion and could receive a compensation package tied to market cap achievements ranging from $1.1 trillion to $6.6 trillion.

Safety (1)

  • An unauthorized group accessed Anthropic's Claude Mythos preview via a third-party vendor and information from the Merkle data breach, despite Anthropic's tight control measures for cybersecurity purposes.

Agents (3)

  • Sam Altman criticized Anthropic's promotion of Mythos, suggesting its fear-based marketing positions AI control as a justifiable purchase, rather than focusing on legitimate safety concerns.
  • Google released an upgrade to its Deep Research agents, now featuring MCP support for third-party data and the ability to output charts and infographics using Nano Banana models, with a Max version outperforming GPT 5.4 and Opus 4.6.
  • The improvements in Google's Deep Research agents, despite still using Gemini 3.1 Pro under the hood, stem entirely from harness upgrades and additional inference, not a more advanced base model.

Models (5)

  • OpenAI's new ChatGPT Images 2.0 model leads the Arena Elo score human preference board with a record-breaking 242-point lead over the previous leader, indicating a significant jump in quality.
  • GPT Images 2.0 offers enhanced precision and control, handling small text, UI elements, and dense compositions at resolutions up to 2K, along with multilingual capabilities for designs where language is integrated.
  • Nathaniel Whittemore argues ChatGPT Images 2.0 is the first image model for the 'agentic era' because its primary impact will come from integration with other systems, rather than standalone viral moments.
  • Users are already integrating GPT Images 2.0 with Codex, creating a pipeline to generate UI mockups and then convert them into working code, addressing Codex's previous limitations in UI design.
  • While GPT Images 2.0 shows vast improvements, Bojan Tunguz noted visual artifacts, and Sharon Goldman's sister found anatomical inaccuracies in medical images, highlighting the zero tolerance for errors in certain use cases.
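For a sense of what a 242-point Elo lead means in practice: the standard Elo formula converts a rating gap into an expected head-to-head win rate. A minimal sketch (plain Python; the function name is ours, not Arena's) shows that a 242-point gap implies human raters prefer the leader in roughly 80% of pairwise matchups:

```python
def elo_expected_score(rating_diff: float) -> float:
    """Expected win rate for the higher-rated model in a pairwise
    human-preference matchup, per the standard Elo formula."""
    return 1.0 / (1.0 + 10 ** (-rating_diff / 400.0))

# A 242-point leaderboard lead implies ~80% preference over the runner-up.
print(round(elo_expected_score(242), 3))  # → 0.801
```

By comparison, a 0-point gap yields exactly 0.5, so a lead this large is unusually decisive for a human-preference leaderboard.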

White hat, black box: AI’s next chapter · Apr 22

  • Alex Hern notes the dual-use nature of AI systems that are becoming capable hackers, which makes Anthropic's voluntary behind-closed-doors approach a potential model for government regulation of the sector.
  • Bassirou Diomaye Faye became Senegal's president in 2024, facing public debt at 130% of GDP, forcing him to raise taxes, cut agencies, and pause infrastructure projects to avoid default.
Also from this episode: (9)

Models (2)

  • Alex Hern reports Anthropic's new Mythos AI, a "superhuman hacker," is too dangerous for general release, leading the company to provide preview access to 11 named groups and 40 smaller organizations.
  • Alex Hern highlights Mythos's capabilities, citing its discovery of a complex bug in OpenBSD that had remained hidden for 27 years, demonstrating its advanced software engineering and hacking prowess.

Safety (1)

  • Anthropic's decision to restrict Mythos access is partly to present itself as safety-oriented, manage a compute crunch, and prevent other labs from using its intellectual property to develop "fast followers."

Elections (3)

  • Kira Huyu reports a significant shift in Indian elections, with women becoming a central electoral force whose turnout surpassed male turnout for the first time in 2019 and again in 2024.
  • Research by Sanjay Kumar and colleagues indicates Indian women vote pragmatically, driven by tangible welfare policies rather than ideology or culture war issues, which contrasts with male voting patterns.
  • Kira Huyu explains that at least 16 of India's 28 states have female-only direct cash transfer schemes, often introduced before elections, providing $9 to $27 monthly to women, who are half as likely as men to hold jobs.

Politics (1)

  • India's states spent over $18 billion on unconditional cash transfers last financial year; West Bengal’s flagship Lakshmi Bandha scheme consumes 10% of its revenue, raising concerns about crowding out education and healthcare investment.

Sports (2)

  • John Fasman notes Senegal is making its third consecutive and fourth overall World Cup appearance, reaching the quarterfinals in 2002 as one of only four African countries to achieve this.
  • Senegal won its second Africa Cup of Nations title by beating Morocco 1-0 in January, but the win was forfeited after players briefly left the pitch in protest of a penalty.