Anthropic’s unreleased Mythos model has triggered a quiet panic in enterprise security. According to internal briefings shared on Nerd Snipe, the model recently identified a 27-year-old vulnerability in OpenBSD - long considered one of the most secure operating systems. The discovery wasn’t theoretical: Mythos reverse-engineered the exploit chain from source code alone, requiring no human guidance beyond a high-level prompt. This capability has led Anthropic to restrict access through 'Project Glasswing,' offering the model to just 40 major tech firms so they can patch before public release.
The stated rationale is safety. But Brett Winton on FYI argues the 100-day quarantine is less about ethics than economics. Third-party tests show GPT-5.4 can replicate many of Mythos’ findings, undermining claims of a qualitative leap. Instead, Winton sees a calculated marketing play: by branding Mythos as too dangerous for general use, Anthropic creates urgency among enterprises willing to pay millions for early access. The move mirrors Dario Amodei’s GPT-2 strategy - danger as demand generation.
Behind the scenes, compute constraints are likely the real bottleneck. Public reporting indicates OpenAI has secured significantly more H100s for inference than Anthropic. While Mythos advanced software-engineering performance by roughly a year overnight, the 100-day quarantine erodes that lead to about eight months. If Anthropic can’t scale, customers will defect to OpenAI or Gemini, where capacity keeps pace with adoption.
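The eight-month figure follows directly from the numbers above; a quick back-of-envelope check (the one-year lead and 100-day delay are the episode's figures, the month length is an assumption):

```python
# Back-of-envelope check of the lead-erosion claim (figures from the episode).
lead_days = 365          # "a year" head start in software-engineering capability
quarantine_days = 100    # the Project Glasswing delay before public release
days_per_month = 30.44   # assumed average month length

remaining_months = (lead_days - quarantine_days) / days_per_month
print(round(remaining_months, 1))  # roughly 8.7, i.e. an eight-month advantage
```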
"Hacking isn't a separate skill anymore; it is an emergent property of elite coding ability."
- Theo, Nerd Snipe with Theo and Ben
The shift changes who can attack. As Ben notes, you no longer need deep expertise in iOS kernels or browser engines - just enough tokens and intent. Mythos acts as a force multiplier, granting novice users access to exploits previously reserved for state actors or elite hackers. This lowers the barrier to sophisticated cyberattacks, turning any motivated individual with API access into a potential threat actor.
Meanwhile, the traditional software development model is collapsing. Ben replaced a months-long CLI tool build with a 30-line Markdown file that instructs an agent to manage its own sandbox. The code isn’t written - it’s prompted. Theo argues most startups are over-engineering: if your product can’t be reduced to a single skill file, you aren’t pushing agents hard enough. Even Robert C. Martin - 'Uncle Bob' - now advocates for voice-to-code, calling semicolons a distraction. The old rigidity of Clean Code is giving way to agentic fluidity.
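Ben's actual 30-line file isn't reproduced in the episode. As a purely hypothetical sketch of what a skill file in that style might look like (every section name, tool name, and instruction below is invented for illustration):

```markdown
# Skill: release-notes CLI

## Goal
Build and maintain a command-line tool that summarizes merged pull requests
into release notes.

## Sandbox
- Create and manage your own virtualenv; never modify the host environment.
- Keep all file writes under ./workspace.

## Behavior
- Accept a repository path and a tag range as arguments.
- On failure, read your own logs, patch the script, and retry up to 3 times.

## Constraints
- Standard library only; no network access except the git remote.
```

The point of the format is that the file specifies intent and boundaries, not implementation - the agent decides how to satisfy it.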
The real test isn’t technical - it’s systemic. As AI agents begin transacting autonomously, trust becomes critical. Winton predicts a shift toward verified agent-to-agent networks, where your AI only interacts with vetted counterparts. Public algorithmic feeds have eroded real connection; the next layer isn’t more content, but secure, authenticated relationships. In that world, the biggest risk isn’t a bug - it’s an agent you thought you could trust.
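No concrete protocol for Winton's verified agent networks exists yet; as a minimal sketch of the underlying idea (a vetted registry plus proof of identity), assuming a shared-secret scheme - the registry, agent names, and challenge flow here are all invented for illustration:

```python
import hmac
import hashlib

# Hypothetical vetted-agent registry: agent ID -> shared secret.
# A real network would use public-key certificates, not shared secrets.
VETTED_AGENTS = {"billing-agent": b"shared-secret-abc"}

def verify_counterpart(agent_id: str, challenge: bytes, tag: bytes) -> bool:
    """Accept a counterpart only if it is vetted AND proves possession of
    its secret by returning a valid HMAC over our challenge."""
    secret = VETTED_AGENTS.get(agent_id)
    if secret is None:
        return False  # unknown agents are rejected outright
    expected = hmac.new(secret, challenge, hashlib.sha256).digest()
    # Constant-time comparison avoids leaking how many bytes matched.
    return hmac.compare_digest(expected, tag)

challenge = b"nonce-123"  # would be a fresh random nonce per interaction
good_tag = hmac.new(b"shared-secret-abc", challenge, hashlib.sha256).digest()
print(verify_counterpart("billing-agent", challenge, good_tag))  # True
print(verify_counterpart("rogue-agent", challenge, good_tag))    # False
```

The design choice worth noting is that trust is established before any transaction occurs, which is exactly the inversion of today's open feeds, where content arrives first and provenance is checked, if at all, afterward.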

