Anthropic is not releasing its most powerful AI. The model, codenamed Mythos, can find 27-year-old bugs in OpenBSD and map attack paths through core financial and energy systems. Dario Amodei’s team isn’t selling it. They’re rationing it to just 11 systemically important firms - Apple, JPMorgan, Microsoft - and the UK government.
The containment strategy failed. According to Saagar Enjeti on Breaking Points, a hacker collective organized on Discord now has access to the model. The breach turns Anthropic’s safety-first posture into a cautionary tale: a single compromised server can unleash AI-driven cyberattacks at industrial scale.
"The Bank of England has already warned that Mythos could crack the entire cyber risk world open."
- Saagar Enjeti, Breaking Points with Krystal and Saagar
Alex Hearn on The Intelligence argues the gatekeeping serves more than safety. It masks a compute crunch and blocks Chinese rivals from cloning the model’s outputs. But the deeper shift is structural: AI has moved from writing code to finding zero-days at superhuman speed. If Mythos can dismantle OpenBSD, it can dismantle JPMorgan’s firewall.
Governments are unprepared. Canada’s finance minister likened the threat to a blockade of the Strait of Hormuz - same strategic impact, no military action required. Yet no agency regulates AI models that can collapse critical infrastructure. Medical drugs face years of trials. AI tools face a startup CEO’s judgment.
"We are betting the stability of the global financial system on the server security of a single company."
- Krystal Ball, Breaking Points with Krystal and Saagar
Cat Wu, Head of Product for Claude Code, says Anthropic ships features in 24-hour sprints. But when the feature is a model that can automate systemic attacks, speed becomes a liability. The leak proves that even the most safety-obsessed lab can’t secure what it chooses to build. The question isn’t whether Mythos should be released. It’s whether anyone should have been allowed to build it.