Anthropic is not releasing its latest AI model, Claude Mythos, to the public. The reason isn’t performance or cost - it’s danger. According to The Intelligence from The Economist, Mythos recently uncovered a critical vulnerability that had gone undetected for 27 years in OpenBSD, an operating system renowned for its security. That single feat demonstrates a capability far beyond standard code review: it signals superhuman penetration-testing potential.
The lab is restricting access to just 11 systemically important organizations, including Apple, JP Morgan, and Microsoft. This isn’t merely a safety play. As Alex Hern notes, Anthropic faces a compute shortage; rationing access spares it from raising prices publicly or imposing broad usage caps. More pointedly, it keeps the model’s outputs out of reach of rival AI labs, especially in China, which could use its findings to accelerate the development of copycat models.
"Anthropic is rationing access not just for safety, but to maintain competitive advantage and control narrative."
- Alex Hern, The Intelligence from The Economist
The shift is structural. AI is no longer merely assisting cybersecurity - it is autonomously conducting advanced vulnerability research. OpenBSD’s codebase is among the most rigorously audited in the world; typical enterprise software receives far less scrutiny. If Mythos can dismantle OpenBSD’s defenses, it can dismantle most enterprise networks. The threshold for automated cyber warfare has been crossed.
By gating the model so tightly, Anthropic is effectively writing the regulatory playbook in real time. It establishes a new norm: dual-use AI will be accessible only behind closed doors to institutions powerful enough to be deemed trustworthy. Startups and smaller defenders are locked out by default.
This creates a paradox. The tools needed to defend critical infrastructure are being held by the same firms most likely to be targeted - and most able to pay for early access. Security becomes a tiered service, not a public good. The precedent set by Mythos won’t be undone. Future models with deeper exploit capabilities will face the same logic: contain, restrict, and release only to a trusted few.
