Anthropic’s technical dominance in enterprise AI is matched by its political strategy: lobbying Washington for regulations that would lock out smaller competitors. On *All-In*, David Sacks accused the lab of using safety concerns as cover for regulatory capture. He argued that proposed permissioning regimes for model releases would create insurmountable moats, benefiting incumbents like Anthropic while crushing startups.
This political maneuvering runs parallel to a major technical leap. A leaked draft blog post confirmed the existence of Claude Mythos, a model Anthropic calls a “step change” in reasoning and coding. Nathaniel Whittemore reported on *The AI Daily Brief* that the compute-intensive model will initially be limited to security researchers, who will map its advanced cyber capabilities before a wider release. Its focus on heavy-duty coding reinforces Anthropic’s enterprise-first strategy.
David Sacks, All-In with Chamath, Jason, Sacks & Friedberg:
- Anthropic is sort of the most AGI-pilled of all the frontier labs.
- They made this bet on coding as their way to get to recursive self-improvement.
The labs are also diverging commercially. Chamath Palihapitiya noted that roughly three-quarters of OpenAI’s revenue comes from consumer subscriptions, while Anthropic’s mix is almost the opposite, dominated by developer API sales. These distinct business models mean the two aren’t yet direct substitutes, but the pressure for an IPO is pushing both to prioritize enterprise profitability over experimental features like OpenAI’s shelved “adult mode.”
The result is a two-front war: one for technical supremacy in coding and cybersecurity, and another in Washington to shape a regulatory landscape that determines who gets to compete.

