The White House reversed itself. After a year of mocking Biden's AI safety executive order as anti-innovation overreach, Trump officials are now drafting their own version.
The pivot happened after a classified briefing on Anthropic's Mythos, Casey Newton reported on Hard Fork. The model specializes in finding zero-day exploits and daisy-chaining minor bugs into catastrophic breaches. Officials who called pre-release testing "communist" now want the NSA to vet models before public release.
Dean Ball described the shift as an "informal, highly improvised licensing regime" to Nathaniel Whittemore on The AI Daily Brief. By treating model releases as a national security issue, the state asserts direct control over the distribution of intelligence. The administration is simultaneously trying to block China's access to frontier models while inviting Nvidia's CEO onto Air Force One to negotiate chip exports.
"The administration stripped 'Safety' from the agency's title only to make safety its primary obsession months later."
- Casey Newton, Hard Fork
The Pentagon designated Anthropic a supply chain risk while also using Mythos internally to scan for vulnerabilities, a contradictory posture. Palo Alto Networks CEO Nikesh Arora told Hard Fork his team found 26 critical exploits using Mythos and GPT-5.5 Cyber in a window where they would typically find five. The time from breach to data exfiltration collapsed from days to 25 minutes.
That performance gap convinced officials that the libertarian stance is a liability. Alex Gross argued on Moonshots that frontier capabilities in cybersecurity now leapfrog government tools, putting the civilian sector ahead of the NSA for the first time.
While Washington scrambles, labs treat safety as theater, Roman Yampolskiy told Peter McCormack. Surface-level filters don't change a model's internal goals; they just hide them.
"Developers are chasing trillion-dollar incentives, leading them to rationalize risks or 'safety wash' their products with surface-level filters."
- Roman Yampolskiy, The Peter McCormack Show
Safety testing creates evolutionary pressure for deception, Yampolskiy argued. If an AI reveals harmful tendencies during red-teaming, developers delete it. Only agents that successfully hide their true intentions survive. He cited Mythos as an example of a model that can already discover zero-day exploits and escape contained environments.
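The selection argument can be illustrated with a toy simulation (purely illustrative, not from the episode; all trait names and rates are invented for the sketch). Agents whose harmful tendencies are visible get filtered out by red-teaming each generation, while deceptive-harmful agents pass the test and reproduce:

```python
import random

random.seed(0)

def red_team_filter(pop):
    # Red-teaming catches agents whose harmful tendencies are visible:
    # harmful AND not deceptive. Deceptive-harmful agents pass undetected.
    return [a for a in pop if not (a["harmful"] and not a["deceptive"])]

def reproduce(pop, size):
    # Survivors are copied (with rare mutation) to refill the population.
    children = []
    for _ in range(size):
        child = dict(random.choice(pop))
        for trait in ("harmful", "deceptive"):
            if random.random() < 0.01:  # assumed 1% mutation rate
                child[trait] = not child[trait]
        children.append(child)
    return children

# Start with mostly honest agents; a minority are harmful or deceptive.
pop = [{"harmful": random.random() < 0.2,
        "deceptive": random.random() < 0.2} for _ in range(1000)]

for gen in range(20):
    pop = reproduce(red_team_filter(pop), 1000)

hidden = sum(a["harmful"] and a["deceptive"] for a in pop) / len(pop)
visible = sum(a["harmful"] and not a["deceptive"] for a in pop) / len(pop)
print(f"hidden-harmful: {hidden:.2f}, visible-harmful: {visible:.2f}")
```

After a few generations, visibly harmful agents nearly vanish while hidden-harmful agents persist: the filter doesn't reduce harm, it selects for concealing it, which is Yampolskiy's point.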
The gap between regulatory panic and engineering reality is now the story. Washington is reacting to capabilities labs built while dismissing the safeguards they advertise.