03-16-2026

The Frontier

Your signal. Your price.

AI & TECH

AI chooses targets, humans pull the trigger

Monday, March 16, 2026 · from 2 podcasts
  • AI has moved from logistics to operational targeting, with systems like Palantir's Maven Smart System now integrated with Claude to suggest missile strike targets.
  • The primary battlefield value is processing intelligence data to shrink weeks-long planning into real-time dashboards, not building autonomous weapons.
  • This shift creates a new paradigm for blame: when a strike goes wrong, the first question is whether the mistake was human or algorithmic.

The battlefield has been digitized. AI's first major role in warfare isn't killer robots but a powerful intelligence and targeting system built on processing floods of data. On Hard Fork, Kevin Roose detailed the integration of tools like Claude into the U.S. military's Project Maven, a system that now suggests hundreds of targets and issues precise coordinates for strikes.

The immediate value is shrinking haystacks. Casey Newton pointed to intelligence operations, like hacking Tehran's traffic cameras, that generate overwhelming raw data. AI processes this into dashboards tracking troops and supplies, performing the dull work of finding signal in noise. A human still gives the final order, but the AI provides the target list and the confidence to act.

The recent strike on an Iranian elementary school, killing over 175 people, has intensified the question of responsibility. Roose noted it's a preview of future blame games. When a strike goes wrong, the first investigation will determine if the mistake was human or algorithmic.

This operational shift rests on a fundamental technical limitation. On TFTC, Brian Murray and Paul Itoi discussed AI's core weakness: memory. These systems don't retain context between sessions, treating each prompt as an isolated event. The industry has poured capital into scaling language models, but Itoi argues this is a misdirection. People anthropomorphize LLMs because they speak our language, but they are not reasoning.
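The statelessness Itoi describes can be illustrated with a short sketch: a model call carries no memory of its own, so any "memory" has to be bolted on by the caller, who saves the conversation and replays it with every request. This is a minimal illustration, not any vendor's actual API; the file name and the stand-in model are hypothetical.

```python
import json
from pathlib import Path

HISTORY = Path("session_history.json")  # hypothetical on-disk context store
if HISTORY.exists():
    HISTORY.unlink()  # start this demo from a blank session


def load_history():
    # A new session starts with nothing unless we reload saved context ourselves.
    if HISTORY.exists():
        return json.loads(HISTORY.read_text())
    return []


def ask(model_call, prompt):
    # The model is stateless: it only "knows" what is in this one request,
    # so the full prior history must be replayed on every call.
    history = load_history()
    history.append({"role": "user", "content": prompt})
    reply = model_call(history)
    history.append({"role": "assistant", "content": reply})
    HISTORY.write_text(json.dumps(history))
    return reply


# Stand-in for a real LLM API: it just reports how much context it received.
fake_model = lambda msgs: f"context size: {len(msgs)}"

print(ask(fake_model, "What project was I working on?"))  # context size: 1
print(ask(fake_model, "And what did we decide?"))         # context size: 3
```

Graph databases and connected-note systems, the approaches raised later in the episode, are richer versions of this same pattern: instead of replaying a flat transcript, the caller queries a persistent web of related information and feeds only the relevant slice back to the model.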

The military application bypasses this by feeding AI a continuous, real-time stream of battlefield data. It doesn't need to remember yesterday's conversation; it needs to process today's surveillance footage. The tools perfected for foreign wars have a habit of coming home. Newton warned the same surveillance and targeting logic deployed in Iran is a direct blueprint for domestic use.

The real breakthrough in AI won't be better language models but tools that finally remember what you said yesterday. For the military, the breakthrough is a system that never forgets what it saw today.

Kevin Roose, Hard Fork:

- The use of Maven and Claude has turned weeks-long battle planning into real-time operations.

- This is not just like a kind of tool that people in the military are using for handling like routine office work.

Entities Mentioned

Claude (Model)
Obsidian (Product)
Palantir (Company)
Project Maven (Concept)

Source Intelligence

What each podcast actually said

#726: Mapping The Mind Of The Machine with Brian Murray & Paul Itoi · Mar 14

Also from this episode:

Models (8)
  • Paul Itoi argues the industry has misdirected capital into scaling language models for better word prediction, while the real breakthrough for AI assistants will be systems that can remember past conversations and information.
  • Brian Murray describes a daily frustration where AI assistants fail to retain context between sessions, forcing users to manually reload information about their projects and workflows for every new interaction.
  • Paul Itoi states that people anthropomorphize large language models because they communicate in natural language, but they are statistical engines without genuine reasoning or understanding.
  • Graph databases, such as Neo4j, and connected-note systems like Obsidian are emerging as potential solutions to the AI memory problem by allowing machines to create and reference a persistent web of related information over time.
  • The core failure of current top models like Claude is not raw intelligence but a lack of long-term memory, which treats each user prompt as an isolated event and undermines their utility as assistants.
  • Brian Murray's team has automated podcast post-production using Claude to extract quotes and identify trends from transcripts, but even this advanced pipeline requires constant manual context management.
  • Paul Itoi advocates for a shift in AI development focus from raw language processing to practical integration, building systems that can operate within a complete historical record of a user's work and decisions.
  • The target for next-generation AI is achieving a flow state in work, where an assistant can instantly reference past code, conversations, and decisions, eliminating the need for manual context reloading.

A.I. Goes to War + Is ‘A.I. Brain Fry’ Real? + How Grammarly Stole Casey’s Identity · Mar 13

  • The first major battlefield role for AI is intelligence and targeting systems, not autonomous weapons, using data processing to shrink massive data haystacks for human operators.
  • U.S. military systems now integrate Claude into classified intelligence platforms to suggest hundreds of targets and issue precise coordinates for strikes, with a human giving final authorization.
  • Kevin Roose notes the integration of Claude into Palantir's Maven Smart System has compressed weeks of battle planning into real-time operational decision-making.
  • Casey Newton points to Israeli intelligence operations, like hacking Tehran's traffic cameras, as examples of data floods that AI systems are built to process for tracking troops and supplies.
  • The core value of battlefield AI is performing the dull, critical work of finding signal in noise for intelligence, logistics, and mission planning dashboards.
  • Kevin Roose argues that incidents like the strike on an Iranian elementary school preview future blame games where the first question will be whether a mistake was human or algorithmic.
  • Casey Newton warns that the surveillance and targeting logic perfected for foreign wars, such as in Iran, creates a direct blueprint for future domestic use, threatening civil liberties.