Artificial intelligence has reached its first major battlefield milestone, and it’s not a killer robot. On Hard Fork, hosts detailed how AI, specifically Anthropic’s Claude, is now integrated into the U.S. military's Palantir-built Maven Smart System. Its role is processing floods of battlefield data from traffic cameras and signals intelligence to suggest hundreds of targets and issue precise coordinates, compressing weeks of planning into real-time operations.
This shift moves the kill chain toward automation. A human still gives the final order, but the AI provides the target list and the confidence to act. Kevin Roose pointed to the recent strike on an Iranian elementary school as a preview of future blame games. When a strike goes horribly wrong, the first question will be whether the mistake was human or algorithmic.
Meanwhile, the foundational tools powering this revolution are broken in basic ways. On TFTC, Brian Murray and Paul Itoi dissected the core frustration of AI assistants: they have no memory. Murray described his daily ritual of reloading context just to get a coherent response. Itoi argued the industry’s obsession with scaling language models is a misdirection. The real breakthrough is not better prediction, but practical systems that can remember.
These technical limitations exist alongside a cynical business model. On Podcasting 2.0, Adam Curry recounted Sam Altman’s admission that the term ‘Artificial General Intelligence’ has lost meaning. The real plan, according to Curry, is to get developers hooked on the tool and then dramatically raise prices. Dave Jones described the local AI scene as a ‘pile of stinking bullcrap,’ filled with overhyped, useless agents and uncensored models.
The Presidio Bitcoin Jam highlighted another blind spot: open-source AI often remains centralized in practice, controlled by a few entities that manage the data, compute, and distribution. True decentralization is more aspiration than reality.
These threads converge on a single point. The AI being deployed for wartime targeting is the same brittle, amnesiac technology being sold to consumers and developers. Its power comes from processing speed and pattern matching, not understanding or memory. The ethical crisis isn't a future scenario about robots. It's happening now, as tools with profound limitations and addictive business models are handed the keys to the battlefield.
Sam Altman, Podcasting 2.0:
- The definition of AGI really matters. Some people would say we already got there.
- But in any case, that word has ceased to have much meaning.