March 15, 2026

The Frontier

Your signal. Your price.

AI & TECH

AI targets for U.S. military

Sunday, March 15, 2026 · from 2 podcasts
  • AI models like Claude are now integrated into U.S. military targeting systems, processing intelligence and suggesting strikes in real time.
  • The military adoption highlights AI's core function as a data processor, not a reasoning agent, despite its conversational interface.
  • The same surveillance and targeting logic developed for foreign wars creates a blueprint for domestic systems, linking battlefield efficiency to future civil liberties.

Claude suggests targets. The U.S. military approves them.

On Hard Fork, hosts detailed how the AI model central to the Maven Smart System now processes battlefield intelligence, logistics, and mission planning. It condenses weeks of work into real-time operations, offering commanders dashboards that track troops, supplies, and potential strikes. A human still gives the final order, but the system provides the list.

The recent strike on an Iranian elementary school, which killed over 175 people, offers a preview of the blame debates to come. Initial reports suggest AI wasn't at fault, but Kevin Roose noted on Hard Fork that when a similar strike inevitably does go wrong, the first question will be whether the mistake was human or algorithmic.

This military integration exposes a truth often masked by conversational AI. On TFTC, Paul Itoi argued that people anthropomorphize language models because they speak our language. They are not reasoning. They are statistical engines processing data. The military use case strips away the illusion, applying AI precisely for what it is: a powerful tool for finding signal in noise.

The tools built for foreign wars have a history of coming home. Casey Newton warned on Hard Fork that the surveillance and targeting logic being deployed in Iran creates a direct blueprint for domestic use. The efficiency gain on the battlefield pressures commanders to automate more of the kill chain, while the same capabilities pressure governments to expand surveillance.

AI's first major role in warfare isn't killer robots. It's a system that makes the decision to kill faster, more precise, and increasingly algorithmic.

Paul Itoi, TFTC: A Bitcoin Podcast:

- I think people anthropomorphize LLMs a lot.

- Because it's speaking language to you, because you can talk to it, you think that it's actually reasoning.

Entities Mentioned

Claude · Model
Obsidian · Product
Palantir · Company
Project Maven · Concept

Source Intelligence

What each podcast actually said

#726: Mapping The Mind Of The Machine with Brian Murray & Paul Itoi · Mar 14

  • Paul Itoi argues the industry has misdirected capital into scaling language models for better word prediction, while the real breakthrough for AI assistants will be systems that can remember past conversations and information.
  • Brian Murray describes a daily frustration where AI assistants fail to retain context between sessions, forcing users to manually reload information about their projects and workflows for every new interaction.
  • Paul Itoi states that people anthropomorphize large language models because they communicate in natural language, but they are statistical engines without genuine reasoning or understanding.
  • Graph databases, such as Neo4j, and connected-note systems like Obsidian are emerging as potential solutions to the AI memory problem by allowing machines to create and reference a persistent web of related information over time.
  • The core failure of current top models like Claude is not raw intelligence but a lack of long-term memory, which leaves each user prompt treated as an isolated event and undermines their utility as assistants.
  • Brian Murray's team has automated podcast post-production using Claude to extract quotes and identify trends from transcripts, but even this advanced pipeline requires constant manual context management.
  • Paul Itoi advocates for a shift in AI development focus from raw language processing to practical integration, building systems that can operate within a complete historical record of a user's work and decisions.
  • The target for next-generation AI is achieving a flow state in work, where an assistant can instantly reference past code, conversations, and decisions, eliminating the need for manual context reloading.

A.I. Goes to War + Is ‘A.I. Brain Fry’ Real? + How Grammarly Stole Casey’s Identity · Mar 13

  • The first major battlefield role for AI is intelligence and targeting systems, not autonomous weapons, using data processing to shrink massive data haystacks for human operators.
  • U.S. military systems now integrate Claude into classified intelligence platforms to suggest hundreds of targets and issue precise coordinates for strikes, with a human giving final authorization.
  • Kevin Roose notes the integration of Claude into Palantir's Maven Smart System has compressed weeks of battle planning into real-time operational decision-making.
  • Casey Newton points to Israeli intelligence operations, like hacking Tehran's traffic cameras, as examples of data floods that AI systems are built to process for tracking troops and supplies.
  • The core value of battlefield AI is performing the dull, critical work of finding signal in noise for intelligence, logistics, and mission planning dashboards.
  • Kevin Roose argues that incidents like the strike on an Iranian elementary school preview future blame games where the first question will be whether a mistake was human or algorithmic.
  • Casey Newton warns that the surveillance and targeting logic perfected for foreign wars, such as in Iran, creates a direct blueprint for future domestic use, threatening civil liberties.