The Trump administration is reversing course on AI deregulation. According to the New York Times, officials are considering a mandatory federal review that could block the release of frontier models - a policy born of fear, not foresight. The catalyst was Anthropic’s private demo of its Mythos model, which reportedly could autonomously find software vulnerabilities, prompting White House fears of a catastrophic, AI-enabled cyberattack.
This is a sharp pivot from the administration’s earlier stance of championing speed over safety to beat China. According to Nathaniel Whittemore on The AI Daily Brief, the potential executive order directly contradicts public statements from Vice President J.D. Vance and President Trump, who have argued against “foolish rules” that could stifle the industry. Critics like Zach Lilly of NetChoice warn that government vetting will kill American competitiveness.
"This potential executive order is a stark reversal from the Trump administration's previous stance of removing power from the USAI Safety Institute and shifting safety testing to a voluntary system."
- Nathaniel Whittemore, The AI Daily Brief
Simultaneously, the labs are moving to solve a different problem. OpenAI is raising billions to launch a deployment company, while Anthropic has formed a $1.5 billion joint venture with Blackstone and Goldman Sachs. These aren’t side projects but formal consulting arms that embed engineers in client offices. Whittemore argues the move stems from a core realization: there is no AI transformation without organizational transformation. Microsoft data shows only 19% of companies have both high individual AI capability and high organizational readiness.
In a parallel diplomatic track, the State Department is dismantling its internal censorship apparatus. Undersecretary Sarah Rogers, speaking on The a16z Show, now promotes an “AI with a Western soul” - models that reason individualistically and prioritize user consent. She sees proliferating this stack as a top soft power tool and warns against foreign regulations, like those in the EU, that force architectural transparency or impose vague risk assessments for “hate speech.”
"She believes proliferating a Western AI stack is the top soft power tool for the US and a priority for anyone who values freedom, arguing AI's underlying reasoning model will dictate global communication and commerce."
- Sarah Rogers, The a16z Show
The military front shows where these priorities collide. According to The Intelligence, the Department of War blacklisted Anthropic as a supply chain risk after the lab stipulated that its models not be used for autonomous weapons. Regardless, Palantir is already using Claude models for classified military work, and SpaceX acquired xAI to pursue Pentagon contracts. The administration’s close ties to these “neoprimes” risk turning defense tech into a partisan wedge, threatening the bipartisan funding these firms rely on.
Washington’s emerging AI posture is a three-pronged scramble: regulate the scary unknowns, subsidize corporate adoption, and weaponize the technology for influence and war.