The U.S. military is forcing a choice on Silicon Valley: build for the Pentagon or be locked out. When AI lab Anthropic stipulated that its models could not be used for autonomous weapons or mass surveillance, Secretary of War Pete Hegseth blacklisted the company as a supply chain risk. According to The Intelligence, the administration now treats safety restrictions themselves as a threat to national security.
Other tech firms are embracing the demand. Venture capital is pouring into defense-focused “neoprimes” like Anduril, which consolidated various Army contracts into a single deal worth up to $20 billion. Palantir is already running Anthropic’s Claude models for classified military work despite the lab’s publicly stated restrictions. The goal is to push AI deeper into the “kill chain,” moving toward autonomous lethality.
“The administration views 'safety' restrictions as a direct threat to national security readiness.”
- Henry Trix, The Intelligence from The Economist
This pivot is accelerating an AI-first military doctrine. David Bennett, host of the Bitcoin And podcast, highlights that the Pentagon has certified eight tech giants, including Google and Microsoft, to run advanced models on secret networks. More than 1.3 million personnel have already submitted tens of millions of prompts as the department works to deploy AI agents, chasing what the military calls “decision superiority.”
On The Conversation, David Sacks defended the administration's hardline stance. He argued it was unrealistic for Anthropic to sell to the Department of Defense and then attempt to veto its lawful chain of command. Sacks frames the AI race as an infinite game against competitors like China, one the U.S. cannot afford to lose, and advocates “permissionless innovation” over a safety-first pause.
“Private companies cannot sell to the Department of Defense and then attempt to veto its lawful chain of command.”
- David Sacks, The Conversation with Dasha Burns
Critics see a dangerous revival of the “body count” fallacy. Robert Evans of Behind the Bastards points to Project Maven, a system designed to make 1,000 targeting decisions per hour, giving human operators just 72 seconds to vet each strike. He cites the 2026 bombing of the Monob Girls Elementary School in Iran, which killed 156 people, as evidence that algorithmic speed sacrifices strategic judgment for raw lethality.
The ethical schism is widening. As venture-backed firms lock in a future built on autonomous weapons, labs that insist on guardrails face exclusion, reshaping not only the defense industry but the foundational ethics of American AI.