April 25, 2026

The Frontier

Your signal. Your price.

AI & TECH

Mythos leak forces AI safety reckoning

Saturday, April 25, 2026 · from 3 podcasts, 4 episodes
  • Anthropic’s ultra-powerful Mythos AI leaked to hackers, exposing power grids and banks to automated attacks.
  • Governments compare the threat to a Strait of Hormuz blockade - yet no agency regulates such models.
  • Sam Altman mocks Anthropic’s 'fear-based marketing' as the industry’s self-policing facade cracks.

Anthropic’s most advanced AI, Mythos, was never meant to escape. Designed to find hidden software vulnerabilities - like a 27-year-old OpenBSD bug - it was shared with just 11 US institutions and the UK government. That control failed. A hacker group on Discord accessed the model, weaponizing a tool built for defense into a blueprint for cyberwarfare.

The breach reveals a dangerous illusion: that AI safety can be maintained through secrecy and elite access. The Bank of England warns Mythos could "crack the whole cyber risk world open." Canada’s finance minister equates the threat to a blockade of the Strait of Hormuz. These are not hypotheticals. The model’s ability to automate attacks on critical infrastructure means the damage is already in motion.

On The AI Daily Brief, Nathaniel Whittemore noted the leak undercuts Anthropic’s "safety-first" branding. If the model is as dangerous as claimed, its exposure via a third-party vendor is a catastrophic failure. Sam Altman seized the moment, calling Anthropic’s strategy "fear-based marketing" - selling $100 million "bomb shelters" while building the bomb.

"We are betting the stability of the global financial system on the server security of a single company."

- Krystal Ball, Breaking Points

The regulatory void is glaring. No agency reviews or licenses models like Mythos. A medical drug requires years of trials before reaching a small patient group; a tool that can breach a bank’s firewall is governed by a startup CEO’s discretion. Krystal Ball argues for a presidential advisory body to set standards - "transparent review," not self-policing.

Meanwhile, the Pentagon labels Anthropic a supply chain risk, while the NSA uses Mythos in secret. This contradiction, reported by Axios, shows inter-agency chaos. President Trump signaled détente after a White House meeting, calling the team "very smart," even as his administration previously blacklisted Claude.

"If Mythos can map a roadmap for hackers to attack power grids, the time for voluntary safety pledges has passed."

- Saagar Enjeti, Breaking Points

The myth of containment is over. AI this powerful cannot be gated by promises. The leak proves the need for binding oversight - before the next exploit hits a live grid.

Source Intelligence

- Deep dive into what was said in the episodes

Global Alarms Over New AI, Kalshi Insider Trading, Tucker Apologizes For Trump Support · Apr 23

  • Saagar explains Anthropic's Mythos AI model can identify and exploit vulnerabilities in critical infrastructure like banks and power grids, raising concerns about its potential for misuse.
  • Anthropic decided not to widely release its powerful Mythos model, sharing it only with eleven US organizations and Britain, triggering global alarm over potential security risks.
  • The Bank of England governor warned Mythos could "crack the whole cyber risk world open," while Canada's finance minister compared its threat to closing the Strait of Hormuz.
  • Saagar notes Bloomberg reported Mythos was accessed by an unauthorized Discord hacker collective, highlighting concerns about the model's security despite Anthropic's precautions.
  • Krystal rejects the idea that AI companies exaggerate dangers for marketing, pointing to global alarms from banks and intelligence agencies as proof of genuine concern.
  • Krystal argues that powerful AI models, unlike medical drugs, lack federal scrutiny, contrasting the rigorous approval process for pharmaceuticals with the hands-off approach to AI development.
  • Krystal advocates for a presidential advisory body to establish transparent review standards for powerful AI models, arguing against developers solely determining their safety for global impact.
  • Saagar recognizes Anthropic CEO Dario Amodei's credibility for prioritizing AI safety, noting his refusal of Pentagon demands and the company's research into model vulnerabilities.
  • Anthropic CEO Dario Amodei met with White House Chief of Staff Susie Wiles to discuss AI, including Mythos, even though the Trump administration blacklisted Anthropic's Claude AI model.
  • Saagar warns a major cyber incident impacting digitally-reliant banking systems could destabilize the global financial system and erode public trust in monetary security.
  • Krystal emphasizes that AI's ability to perfectly spoof voices will escalate existing spam call and text issues, making individuals, especially public figures, vulnerable to sophisticated financial scams.
  • Saagar describes how a person used a hairdryer to manipulate a Paris airport weather sensor, winning $34,000 on Polymarket by artificially raising the recorded temperature to 22 degrees Celsius twice.
  • Krystal challenges the "societal utilitarian benefit" of prediction markets like Kalshi, asserting they primarily enable speculative betting without offering broader public utility.
  • Krystal criticizes the Trump administration's removal of the $25,000 cash limit for day traders, citing empirical data that most retail traders are financially "wiped out" within two to three months.
  • Krystal notes the $25,000 day trading limit was implemented after the 1990s dot-com crash, when numerous retail investors lost savings, underscoring the purpose of consumer protection rules.
  • Krystal contends that the "democratization of finance" offered by platforms like Robinhood and Kalshi primarily enriches the companies through transaction fees, while most individual consumers lose money.
  • Saagar asserts that human psychological vulnerabilities, particularly the "get rich quick" desire, are easily exploited by addictive platforms, underscoring the need for consumer protection.
  • Mark Moran claimed he intentionally bet $100 on himself on Kalshi to expose corruption and insider trading on prediction markets, citing potential manipulation on Polymarket's New York City mayoral race.
  • Krystal asserts Kalshi's slow detection of Mark Moran's insider trading shows enforcement challenges, particularly in monitoring related parties like consultants or family members.
  • Jacob Wasserman clarifies TMZ pays for photos and videos, akin to other news outlets using Getty Images, but emphasizes they do not pay for information, relying on reporting and FOIA requests.
Also from this episode: (7)

Society (1)

  • Krystal observes that societal impacts of technology, like the iPhone's widespread adoption taking four years until 2011, suggest AI's full effects will also unfold over time.

Digital Sovereignty (1)

  • Krystal, having owned an iPhone 4 sixteen years ago, voices skepticism about technology's promises of improvement, stating she is not "better off" with a smartphone.

Elections (3)

  • Saagar reports that Kalshi identified three instances of political insider trading, with candidates in Minnesota, Texas, and Virginia primaries betting on their own election outcomes.
  • Saagar introduces Mark Moran, a Virginia US Senate candidate who switched from Democrat to Independent, as one of the individuals identified by Kalshi for insider trading on his own race.
  • Krystal finds Mark Moran's explanation plausible, suggesting his $100 bet effectively achieved his goal of gaining attention as an unknown independent candidate.

Psychology (1)

  • Krystal explains that casinos and social media companies purposefully study and integrate "dopamine cycles" into their products to addict users and facilitate continuous monetary extraction.

Mental Health (1)

  • Krystal emphasizes that money problems are the leading cause of divorce in North America and a significant factor in suicides in the US, linking financial distress to severe personal consequences.

What GPT Images 2 Unlocks · Apr 22

  • An unauthorized group accessed Anthropic's Claude Mythos preview via a third-party vendor and information from the Merkle data breach, despite Anthropic's tight control measures for cybersecurity purposes.
  • Sam Altman criticized Anthropic's promotion of Mythos, suggesting its fear-based marketing positions AI control as a justifiable purchase, rather than focusing on legitimate safety concerns.
  • Google released an upgrade to its Deep Research agents, now featuring MCP support for third-party data and the ability to output charts and infographics using Nano Banana models, with a Max version outperforming GPT 5.4 and Opus 4.6.
  • The improvements in Google's Deep Research agents, despite still using Gemini 3.1 Pro under the hood, stem entirely from harness upgrades and additional inference, not a more advanced base model.
  • Nathaniel Whittemore argues ChatGPT Images 2.0 is the first image model for the 'agentic era' because its primary impact will come from integration with other systems, rather than standalone viral moments.
  • While GPT Images 2.0 shows vast improvements, Bojan Tunguz noted visual artifacts, and Sharon Goldman's sister found anatomical inaccuracies in medical images, highlighting a zero-tolerance for errors in certain use cases.
Also from this episode: (7)

Startups (3)

  • SpaceX partnered with Cursor, an AI coding tool, acquiring rights to purchase Cursor for $60 billion later this year; if the acquisition fails, SpaceX will pay Cursor $10 billion for their collaborative work.
  • The SpaceX-Cursor deal potentially solves Cursor's reported issue of losing money on every Claude and OpenAI token served, giving them access to xAI's Colossus training supercomputer with millions of H100-equivalent units for in-house model development.
  • xAI could benefit from Cursor by gaining a significant data pipeline to improve its models, especially since xAI has struggled to generate revenue or release impactful models, and lacks a footprint in the AI coding space.

Markets (1)

  • SpaceX's IPO disclosure documents reveal Elon Musk increased his stake by $1.4 billion and could receive a compensation package tied to market cap achievements ranging from $1.1 trillion to $6.6 trillion.

Models (3)

  • OpenAI's new ChatGPT Images 2.0 model leads the Arena Elo human-preference leaderboard with a record-breaking 242-point lead over the previous leader, indicating a significant jump in quality.
  • GPT Images 2.0 offers enhanced precision and control, handling small text, UI elements, and dense compositions at resolutions up to 2K, along with multilingual capabilities for designs where language is integrated.
  • Users are already integrating GPT Images 2.0 with Codex, creating a pipeline to generate UI mockups and then convert them into working code, addressing Codex's previous limitations in UI design.

How Apple's AI Strategy Changes with a New CEO · Apr 21

  • OpenAI released "Chronicle" for Codex, a memory feature using background screen captures to understand user workflows and improve interactions, though it consumes tokens and raises privacy concerns.
  • Anthropic's new "live artifacts" feature for Cowork enables users to build dynamic dashboards and trackers from live data feeds, demonstrated for personalized briefings and mission control.
  • Dario Amodei met with White House officials, including Susie Wiles and Scott Bessent, to discuss Mythos' cybersecurity implications, a meeting seen by Nathaniel Whittemore as a potential détente after recent hostile rhetoric.
  • Axios reported the NSA is actively using Anthropic's Mythos preview model, despite the Department of Defense classifying Anthropic as a supply chain risk, indicating cybersecurity needs may outweigh inter-agency disputes.
  • AI development platform Vercel disclosed a security incident where Shiny Hunters, a sophisticated criminal group, accessed systems via compromised employee credentials and exfiltrated user data; Guillermo Rauch suspects AI accelerated the attack.
  • Apple initially appeared to lag in AI, but Nathaniel Whittemore notes a "Mac mini renaissance" for open-source agents, and commentators like Ejaz suggest Apple's inaction, licensing Google's Gemini, proved a clever, profitable strategy.
  • Google established a "strike team," involving Sergey Brin, to improve AI coding and agentic execution, focusing on training models on Google's internal codebase to close the gap with Anthropic's 100% AI-written code.
Also from this episode: (6)

Startups (1)

  • DeepSeek is seeking its first outside investment of $600 million for a $10 billion valuation, while Cursor aims for $2 billion in funding at a $50 billion valuation, with Andreessen Horowitz leading and NVIDIA potentially participating.

Chips (1)

  • TSMC reported a 35% revenue boost and forecasts over 30% growth but faces capacity limits, with ASML unable to supply enough lithography machines. Nikkei Asia predicts memory chip shortages until at least 2027, meeting only 60% of demand.

Business (1)

  • Tim Cook is stepping down after 15 years as Apple CEO, having grown the company from $350 billion to $4 trillion. Polymath notes Apple's 11x market cap increase under Cook lagged other major tech companies during the same period.

AI & Tech (1)

  • Incoming Apple CEO John Ternus faces the "daunting task" of defining Apple's AI strategy, especially after Tim Cook's "lack of decisiveness" marred previous efforts, according to Mark Gurman's sources, despite Apple's hardware strength.

Big Tech (2)

  • Amazon expanded its Anthropic partnership with a $25 billion investment, providing 5 gigawatts of compute, including Trainium 3 chips, to resolve Anthropic's inference shortage and ensure Claude's availability via AWS.
  • Meta is reportedly planning 10% layoffs impacting approximately 8,000 workers, but also launched "Level Up," a free four-week program with CBRE to train fiber technicians for data center construction, addressing an acute labor shortage.

White hat, black box: AI’s next chapter · Apr 22

  • Alex Hern reports Anthropic's new Mythos AI, a "superhuman hacker," is too dangerous for general release, leading the company to provide preview access to 11 named groups and 40 smaller organizations.
  • Anthropic's decision to restrict Mythos access is partly to present itself as safety-oriented, manage a compute crunch, and prevent other labs from using its intellectual property to develop "fast followers."
  • Alex Hern notes the dual-use nature of AI systems that are becoming capable hackers, suggesting Anthropic's voluntary behind-closed-doors approach could become a model for government regulation in the sector.
  • Bassirou Diomaye Faye became Senegal's president in 2024, facing public debt at 130% of GDP, forcing him to raise taxes, cut agencies, and pause infrastructure projects to avoid default.
Also from this episode: (8)

Models (1)

  • Alex Hern highlights Mythos's capabilities, citing its discovery of a complex bug in OpenBSD that had remained hidden for 27 years, demonstrating its advanced software engineering and hacking prowess.

Elections (3)

  • Kira Huyu reports a significant shift in Indian elections, with women becoming a central electoral force whose turnout surpassed male turnout for the first time in 2019 and again in 2024.
  • Research by Sanjay Kumar and colleagues indicates Indian women vote pragmatically, driven by tangible welfare policies rather than ideology or culture war issues, which contrasts with male voting patterns.
  • Kira Huyu explains that at least 16 of India's 28 states have female-only direct cash transfer schemes, often introduced before elections, providing $9 to $27 monthly to women, who are half as likely as men to hold jobs.

Politics (1)

  • India's states spent over $18 billion on unconditional cash transfers last financial year; West Bengal’s flagship Lakshmir Bhandar scheme consumes 10% of its revenue, raising concerns about crowding out education and healthcare investment.

Sports (2)

  • John Fasman notes Senegal is making its third consecutive and fourth overall World Cup appearance, reaching the quarterfinals in 2002 as one of only four African countries to achieve this.
  • Senegal won its second Africa Cup of Nations title by beating Morocco 1-0 in January, but the win was forfeited after players briefly left the pitch in protest of a penalty.

Immigration (1)

  • Nearly 4% of Senegalese live abroad, with remittances accounting for 10% of the country's GDP, highlighting the diaspora's significant economic contribution.