The Trump administration is moving to regulate AI at the point of creation, not just use. Alex Gross on Moonshots reports the White House is drafting an executive order for mandatory government vetting of frontier models before public release. The shift follows what Gross calls a 'sea change': models like Claude Mythos now outpace government agencies at finding cybersecurity vulnerabilities.
"This shift puts the civilian sector ahead of the NSA for the first time."
- Alex Gross, Moonshots with Peter Diamandis
The urgency stems from dual-use risks that officials now consider acute. On The Intelligence, analyst Arthur Holland Michel argues AI provides 'uplift,' compressing a decade of team-based pathogen research into a solo project for a mid-level biologist. Current model guardrails - refusal mechanisms - are brittle and easily jailbroken. The proposal would create a working group of tech leaders and officials to review models pre-market, but the policy has already split experts. Brian Elliott warns that gatekeeping could cause the US to fall behind geopolitically.
The Pentagon's own capacity to respond is structurally broken. Palantir CTO Shyam Sankar, speaking on American Optimist, says the US expended ten years of weapon production in ten weeks of fighting in Ukraine. He frames the factory as the ultimate weapon, but the defense industrial base has atrophied since the Cold War. In 1989, 94% of defense spending went to dual-use companies like Chrysler. Today, the military relies on a few specialized primes that cannot scale in a crisis.
"The bureaucracy feels threatened by outcomes it cannot control."
- Shyam Sankar, American Optimist
Innovation faces internal sabotage even when it works. Sankar points to Colonel Drew Cukor, who built Project Maven's AI enterprise in a Pentagon basement and faced internal investigations despite its success. He argues breakthroughs require 'heretics' who bypass standard procurement - a model at odds with a proposed federal pre-approval regime for all frontier AI. The Pentagon recently signed AI agreements with seven companies, including Google and OpenAI, triggering employee protests over military use.
The risk is a policy mismatch: accelerated threats met by a decelerated response. Holland Michel argues that without fundamental changes to training or access, we're gambling that no one uses AI to build a pathogen and then boards a plane. Sankar believes America's greatest risk is suicide, not homicide - a failure of will and capacity. The White House's move suggests officials see the gamble as too large to leave unregulated, even if it slows the pace of innovation.


