AI has moved from the lab to the battlefield, directly informing missile strikes.
On Hard Fork, Casey Newton and Kevin Roose detailed how tools like Claude are integrated into U.S. military intelligence systems. AI now condenses weeks of battle planning into real-time operations, processing vast data to suggest targets and issue precise coordinates. Humans still give the final order, but the AI provides the target list.
This immediate military application carries broad geopolitical risks. On The Tucker Carlson Show, Colonel Douglas Macgregor warned that conflicts where AI is deployed, such as the current one involving Iran, could functionally close key chokepoints like the Strait of Hormuz, threatening the petrodollar. Macgregor argued the lesson for observing nations is simple: acquire nuclear weapons or risk regime change.
This sophisticated military use contrasts sharply with the consumer AI industry's reality. Podcasting 2.0 highlighted OpenAI CEO Sam Altman's vague redefinition of AGI, alongside a business model described as getting developers "addicted" to tools before dramatically raising prices. That dependency-based model echoes a broader dual-use concern: AI's potential for societal manipulation and dependency.
Kevin Roose on Hard Fork warned that military tools perfected abroad often come home, creating blueprints for domestic surveillance. Against that backdrop, Stacker News Live reported that former President Trump ordered federal agencies to immediately stop using specific AI vendors, a direct governmental push for control.
The line between battlefield AI and everyday technology is blurring, demanding urgent clarity on accountability and regulation.
Kevin Roose, Hard Fork:
- The use of Maven and Claude has turned weeks-long battle planning into real-time operations.
- This is not just a tool the military is using to handle routine office work.