Claude suggests targets. The U.S. military approves them.
On Hard Fork, the hosts detailed how the AI model at the center of the Maven Smart System now processes battlefield intelligence, logistics, and mission planning. It condenses weeks of work into real-time operations, offering commanders dashboards that track troops, supplies, and potential strikes. A human still gives the final order, but the system provides the list.
The recent strike on an Iranian elementary school, which killed over 175 people, offers a preview of how blame will be assigned. Initial reports suggest AI wasn't at fault, but Kevin Roose noted on Hard Fork that when a similar strike inevitably goes wrong, the first question will be whether the mistake was human or algorithmic.
This military integration exposes a truth often masked by conversational AI. On TFTC, Paul Itoi argued that people anthropomorphize language models because they speak our language. They are not reasoning. They are statistical engines processing data. The military use case strips away the illusion, applying AI precisely for what it is: a powerful tool for finding signal in noise.
The tools built for foreign wars have a history of coming home. Casey Newton warned on Hard Fork that the surveillance and targeting logic being deployed in Iran creates a direct blueprint for domestic use. Battlefield efficiency gains push commanders to automate more of the kill chain, while the same capabilities tempt governments to expand surveillance at home.
AI's first major role in warfare isn't killer robots. It's a system that makes the decision to kill faster, more precise, and increasingly algorithmic.
Paul Itoi, TFTC: A Bitcoin Podcast:
- I think people anthropomorphize LLMs a lot.
- Because it's speaking language to you, because you can talk to it, you think that it's actually reasoning.