04-23-2026

The Frontier

Your signal. Your price.

AI & Tech

Anthropic gates Mythos to block AI-powered hacking

Thursday, April 23, 2026 · from 2 podcasts
  • Anthropic is restricting access to its Mythos model after it found a 27-year-old OpenBSD flaw, proving elite coding now equals hacking.
  • The move sets a precedent: only systemically important firms get early access, sidelining startups.
  • Uncle Bob’s embrace of agentic coding signals a full industry pivot away from syntax and toward AI-run workflows.

Anthropic isn’t releasing Mythos. The model is too dangerous. It found a decades-old vulnerability in OpenBSD - a system celebrated for its security - demonstrating that superhuman hacking is now an emergent property of advanced coding models. According to Alex Hearn on The Intelligence, the lab is limiting access to just 11 major firms like Apple and JP Morgan, buying time for critical infrastructure to patch before wider release.

This isn’t just caution - it’s strategy. As Theo from Nerd Snipe explains, Mythos operates at a scale where coding proficiency collapses into hacking capability. A motivated user no longer needs deep expertise in memory corruption or kernel design. The model supplies the knowledge; the human supplies intent. The result: a new breed of attacker, one that scales with token budgets, not training.

"Hacking isn't a separate skill anymore; it is an emergent property of elite coding ability."

- Ben, Nerd Snipe with Theo and Ben

The gatekeeping also serves Anthropic’s business interests. With a compute crunch looming, rationing access avoids public price hikes while blocking rivals - especially Chinese labs - from cloning its breakthroughs. But the precedent is stark: security advantages now flow only to the already powerful. Startups and smaller players are left exposed, waiting for leaks or open-weight copies.

Meanwhile, the development world is fracturing. Ben describes replacing months of CLI tooling with a 30-line Markdown 'skill' - part of Gary Tan’s GStack framework. The agent manages its own runtime, creating directories, cloning repos, executing commands. Code is no longer deterministic; it’s latent, interpreted by the model on demand.
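To make the "30-line skill" idea concrete, here is a hypothetical example of what such a Markdown skill file can look like. Claude Code skills are defined as a `SKILL.md` with YAML frontmatter followed by plain-prose instructions; the skill name, steps, and paths below are invented for illustration and are not Ben's actual BTCA skill.

```markdown
---
name: repo-setup
description: Clone a repository and prepare a working directory for analysis.
---

# repo-setup

When the user asks to analyze a repository:

1. Create a scratch directory under `./work/<repo-name>`.
2. Clone the repository there with `git clone --depth 1 <url>`.
3. Install dependencies if a manifest (`package.json`, `pyproject.toml`) exists.
4. Report the directory layout back to the user before making changes.
```

The point of the format is exactly what the episode describes: there is no deterministic program here, only high-level directions the agent interprets at run time.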

Even Robert C. Martin - the architect of Clean Code and semicolon orthodoxy - now uses voice-to-code tools and argues syntax is obsolete. He’s running AI-driven experiments to test whether static or dynamic typing wins in agent-heavy workflows, free from human bias. When Uncle Bob abandons braces, the old guard has officially surrendered.

"If you think your product is too 'special' to be an agentic skill, you likely aren't pushing the models hard enough."

- Theo, Nerd Snipe with Theo and Ben

The shift is total. AI isn’t assisting developers. It’s replacing the paradigm. The tools, the workflows, even the ideology of software engineering are being rewritten - not by committees, but by models that see code as malleable thought.

Source Intelligence

- Deep dive into what was said in the episodes

White hat, black box: AI’s next chapter · Apr 22

  • Anthropic is withholding its powerful Mythos model, citing its ability to automate superhuman cyberattacks against critical networks.
  • Politicians are abandoning ideology for direct cash transfers to win over India’s dominant female electorate.
  • President Bassiru Jumaie Fayet is pivoting to hard-line austerity to prevent a sovereign debt default.

We need to talk about gstack · Apr 18

  • Anthropic's Mythos model is significantly larger than previous models, with over 10 trillion parameters, making it exceptionally skilled in coding but also slow, expensive, and dangerous due to emergent hacking capabilities.
  • Anthropic's security testing for Mythos involved spinning up 100 to 5,000 parallel runs, each seeded with a different project file from a codebase of approximately 1,000 files, with researchers later reviewing detected exploits.
  • Robert C. Martin ("Uncle Bob"), author of "Clean Code," has shifted his perspective to embrace agentic engineering, suggesting AI makes programming syntax less important and prioritizes interfaces.
  • Robert C. Martin proposes using AI to conduct programming experiments (e.g., dynamic vs. static typing) without human bias, highlighting an under-explored research area for optimizing AI agent performance with different technologies.
  • Ben emphasizes that even advanced AI models require constant feedback loops like linting, type checks, and formatting commands to correct hallucinations and converge on correct code, rather than achieving perfection in a single attempt.
  • Ben converted his complex BTCA CLI tool into a 30-line Claude skill, demonstrating how AI agents can turn simple markdown instructions into fully functional applications, replacing traditional deterministic programs.
  • Ben praises Gary Tan's GStack approach, which uses collections of markdown-based "skills" in Claude Code to instruct AI agents, allowing for dynamic programming through high-level directions rather than conventional code.
  • Theo notes that Gary Tan's GBrain project, which processes daily AI session data to build memory systems, enables models to "learn while they sleep," which Theo considers a key component of Artificial General Intelligence (AGI).
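The parallel security-testing setup described above can be sketched in a few lines: fan out one run per project file, then pool whatever findings come back for human review. This is a minimal sketch under stated assumptions; the `audit_seed` and `parallel_audit` names, the prompt wording, and the model callback are mine, not Anthropic's actual harness.

```python
# Sketch of seeded parallel audit runs: one run per project file (up to a
# cap), each starting the model from a different seed file, with all reported
# findings merged for later human review. The model is passed in as a plain
# callable so the orchestration logic is testable without any real model.
from concurrent.futures import ThreadPoolExecutor
from pathlib import Path

def audit_seed(model, seed_file: Path) -> list[str]:
    """Ask the model to audit the codebase starting from one seed file."""
    return model(f"Audit for exploitable bugs, starting from: {seed_file}")

def parallel_audit(model, codebase: Path, max_runs: int = 100) -> list[str]:
    """Spin up one run per source file (capped) and merge their findings."""
    seeds = sorted(codebase.rglob("*.c"))[:max_runs]
    findings: list[str] = []
    with ThreadPoolExecutor(max_workers=16) as pool:
        for result in pool.map(lambda s: audit_seed(model, s), seeds):
            findings.extend(result)
    return findings
```

Scaling this from 100 to 5,000 runs is then just a matter of raising `max_runs` and the worker count, which matches the episode's claim that this style of attacker scales with budget rather than expertise.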
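Ben's point about feedback loops, that models converge on correct code through repeated checking rather than one perfect attempt, can be sketched as a small driver loop. The specific check commands (`ruff`, `mypy`) and the `ask_model` callback are assumptions for illustration, not a real agent API.

```python
# Minimal sketch of a lint/type-check feedback loop: run a battery of checks,
# feed every failure back to the model, and repeat until the code converges
# or a round limit is hit. Check-running and model-calling are injected as
# callables so the loop itself stays testable.
import subprocess

# Checks the agent must satisfy before its edit is accepted (assumed toolchain).
CHECKS = [
    ["ruff", "check", "."],              # lint
    ["mypy", "."],                       # static type check
    ["ruff", "format", "--check", "."],  # formatting
]

def run_shell_checks(checks=CHECKS):
    """Run each command; collect (command, output) for every failure."""
    failures = []
    for cmd in checks:
        result = subprocess.run(cmd, capture_output=True, text=True)
        if result.returncode != 0:
            failures.append((" ".join(cmd), result.stdout + result.stderr))
    return failures

def converge(get_failures, ask_model, max_rounds=5):
    """Feedback loop: check, report failures, let the model patch, repeat."""
    for _ in range(max_rounds):
        failures = get_failures()
        if not failures:
            return True  # all checks green: the code has converged
        report = "\n\n".join(f"$ {cmd}\n{out}" for cmd, out in failures)
        ask_model(f"These checks failed; fix the code:\n{report}")
    return False
```

The design choice worth noting is that the loop never inspects the model's output directly: correctness is defined entirely by whether the external checks pass, which is what lets hallucinations get caught and corrected.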
Also from this episode (7)

Safety (1)

  • Anthropic withheld Mythos from public release, citing concerns over its malicious use for hacking; Project Glass Wing allows critical infrastructure companies such as Microsoft and Cisco to use it for proactive bug detection.

Models (4)

  • Ben notes that external tests show OpenAI's GPT 5.4 Pro replicated almost all security vulnerabilities found by Mythos, suggesting similar capabilities may already be widespread and accessible.
  • Theo criticizes public benchmarks comparing Mythos and GPT 5.4 Pro, arguing they fail to measure actual hacking or security capabilities and may be misleading.
  • Ben and Theo confirmed that Claude Opus 4.6 models can be tricked into leaking their system prompts and internal reasoning traces, demonstrating a vulnerability where smart models can rationalize revealing sensitive configuration data.
  • Ben endorses the "Boiling the Ocean" thesis, advocating for extensive AI-driven experimentation because the cost of trying new things is low, and AI models consistently exceed perceived limitations.

Coding (2)

  • Theo contends that exceptional coding ability in AI models inherently leads to emergent security capabilities, creating a new hacker archetype that can leverage AI to bridge knowledge gaps and bypass traditional research experience.
  • Gary Tan's article, "Thin Harness Fat Skills," differentiates between "deterministic" (traditional, predictable code) and "latent" (dynamic, non-deterministic AI actions) programming, underscoring AI's creative potential in system design.