03-16-2026

The Frontier

Your signal. Your price.

AI & TECH

AI's Warfare Breakthrough Is Memory, Not Thinking

Monday, March 16, 2026 · from 4 podcasts
  • AI is now a core military intelligence engine, processing sensor data to suggest and prioritize battlefield targets, moving kill-chain decisions toward automation.
  • The industry's foundational problems are memory and monetization: AI assistants cannot remember context between sessions, while the business model is designed to hook users before raising prices.
  • The open-source AI landscape is a broken ecosystem of hype and functionally useless tools, while corporate leaders like Sam Altman retreat from concrete promises into vague mysticism.

Artificial intelligence has reached its first major battlefield milestone, and it’s not a killer robot. On Hard Fork, hosts detailed how AI, specifically Anthropic’s Claude, is now integrated into the U.S. military's Palantir-built Maven Smart System. Its role is processing floods of battlefield data from traffic cameras and signals intelligence to suggest hundreds of targets and issue precise coordinates, compressing weeks of planning into real-time operations.

This shift moves the kill chain toward automation. A human still gives the final order, but the AI provides the target list and the confidence to act. Kevin Roose pointed to the recent strike on an Iranian elementary school as a preview of future blame games. When a strike goes horribly wrong, the first question will be whether the mistake was human or algorithmic.

Meanwhile, the foundational tools powering this revolution are broken in basic ways. On TFTC, Brian Murray and Paul Itoi dissected the core frustration of AI assistants: they have no memory. Murray described his daily ritual of reloading context just to get a coherent response. Itoi argued the industry’s obsession with scaling language models is a misdirection. The real breakthrough is not better prediction, but practical systems that can remember.
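Murray's daily "context reloading" ritual is easy to picture as code. The sketch below is purely illustrative, not any vendor's API: the file name and helper functions are invented. It shows the workaround users build by hand today, persisting conversation turns to disk so a new session can start with history instead of a blank slate:

```python
import json
from pathlib import Path

MEMORY_FILE = Path("session_memory.json")  # hypothetical local store

def load_context() -> list[dict]:
    """Reload prior turns so a new session starts with history."""
    if MEMORY_FILE.exists():
        return json.loads(MEMORY_FILE.read_text())
    return []

def remember(role: str, content: str) -> None:
    """Append a turn and persist it for the next session."""
    history = load_context()
    history.append({"role": role, "content": content})
    MEMORY_FILE.write_text(json.dumps(history, indent=2))

# A session would begin by prepending load_context() to the prompt,
# instead of asking the user to re-explain their project from scratch.
remember("user", "We are editing episode 726.")
```

The point of the sketch is Itoi's argument in miniature: nothing here requires a bigger language model, only plumbing that carries context across sessions.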

These technical limitations exist alongside a cynical business model. On Podcasting 2.0, Adam Curry recounted Sam Altman’s admission that the term ‘Artificial General Intelligence’ has lost meaning. The real plan, according to Curry, is to get developers hooked on the tool and then dramatically raise prices. Dave Jones described the local AI scene as a ‘pile of stinking bullcrap,’ filled with overhyped, useless agents and de-censored models.

The Presidio Bitcoin Jam highlighted another blind spot: open-source AI often remains centralized in practice, controlled by a few entities who manage the data, compute, and distribution. True decentralization is more aspiration than reality.

These threads converge on a single point. The AI being deployed for wartime targeting is the same brittle, amnesiac technology being sold to consumers and developers. Its power comes from processing speed and pattern matching, not understanding or memory. The ethical crisis isn't a future scenario about robots. It's happening now, as tools with profound limitations and addictive business models are handed the keys to the battlefield.

Sam Altman, Podcasting 2.0:

- The definition of AGI really matters. Some people would say we already got there.

- But in any case, that word has ceased to have much meaning.

Entities Mentioned

Aardvark (product)
Claude (model)
Obsidian (product)
OpenAI (trending)
Palantir (company)
Project Maven (concept)
Spiral (company)

Source Intelligence

What each podcast actually said

#726: Mapping The Mind Of The Machine with Brian Murray & Paul Itoi · Mar 14

Also from this episode:

Models (8)
  • Paul Itoi argues the industry has misdirected capital into scaling language models for better word prediction, while the real breakthrough for AI assistants will be systems that can remember past conversations and information.
  • Brian Murray describes a daily frustration where AI assistants fail to retain context between sessions, forcing users to manually reload information about their projects and workflows for every new interaction.
  • Paul Itoi states that people anthropomorphize large language models because they communicate in natural language, but they are statistical engines without genuine reasoning or understanding.
  • Graph databases, such as Neo4j, and connected-note systems like Obsidian are emerging as potential solutions to the AI memory problem by allowing machines to create and reference a persistent web of related information over time.
  • The core failure of current top models like Claude is not raw intelligence but a lack of long-term memory, which treats each user prompt as an isolated event and undermines their utility as assistants.
  • Brian Murray's team has automated podcast post-production using Claude to extract quotes and identify trends from transcripts, but even this advanced pipeline requires constant manual context management.
  • Paul Itoi advocates for a shift in AI development focus from raw language processing to practical integration, building systems that can operate within a complete historical record of a user's work and decisions.
  • The target for next-generation AI is achieving a flow state in work, where an assistant can instantly reference past code, conversations, and decisions, eliminating the need for manual context reloading.
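The connected-note idea behind tools like Neo4j and Obsidian can be reduced to a very small data structure. This is a toy sketch of the concept only, not either product's API; the class and example facts are invented. Entities become nodes, facts become labeled edges, and "memory" is just the ability to recall every edge ever attached to a node:

```python
from collections import defaultdict

class MemoryGraph:
    """Toy connected-note store: entities as nodes, facts as labeled edges."""

    def __init__(self):
        # node -> list of (relation, target_node) pairs
        self.edges = defaultdict(list)

    def link(self, src: str, relation: str, dst: str) -> None:
        """Record a fact as a labeled edge between two nodes."""
        self.edges[src].append((relation, dst))

    def recall(self, node: str) -> list[tuple[str, str]]:
        """Everything previously recorded about a node, across sessions."""
        return self.edges[node]

g = MemoryGraph()
g.link("episode_726", "discusses", "AI memory")
g.link("episode_726", "guest", "Paul Itoi")
```

A real system would persist this graph and traverse multiple hops, but the contrast with today's assistants is already visible: each prompt is no longer an isolated event, because prior facts remain addressable.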

Strategy's STRC Buying Spree, Open-Source AI Blind Spots, Bitcoin Stablecoins from Utexo & Ark · Mar 13

Also from this episode:

Lightning (1)
  • Spiral’s team hosted the first Builder event in New York at PubKey, signaling the expansion of grassroots Bitcoin development beyond Austin and into major financial centers.
Other (1)
  • The New York Builder event drew 50 attendees, reinforcing the growing momentum of in-person Bitcoin development meetups focused on open building, fast iteration, and stacking sats.
Nostr (1)
  • Steve from Presidio Bitcoin Jam credits Haley with the idea to launch the New York Builder event, noting the team has run monthly events for nine consecutive months in San Francisco.
Models (2)
  • Open-source AI models face centralization risks despite their decentralized appearance, as control over training data, compute resources, and distribution remains concentrated among a few well-funded entities.
  • Centralized bottlenecks in AI—data, compute, and distribution—undermine the promise of open-source decentralization, making true autonomy in AI development difficult to achieve.
Stablecoins (2)
  • Utxo and Ark introduced Bitcoin-native stablecoins that operate on Layer 2 solutions while maintaining settlement finality and censorship resistance on Bitcoin’s base layer.
  • Bitcoin-native stablecoins from Utxo and Ark aim to enable dollar-pegged utility without custodial intermediaries, offering a censorship-resistant alternative to Ethereum-style stablecoins.

Episode 253: Dirty Fix · Mar 13

Also from this episode:

Models (7)
  • OpenAI CEO Sam Altman now claims the term 'Artificial General Intelligence' has 'ceased to have much meaning,' which Dave Jones and Adam Curry frame as a retreat from concrete promises to vague corporate mysticism.
  • Altman proposed a new, fuzzy metric for AGI based on when data centers might contain more cognitive capacity than the world, and estimated this could happen by late 2028, with 'huge error bars'.
  • According to Dave Jones, Sam Altman outlined the explicit AI model business model as getting developers hooked on a tool, charging an initial $200 per month, then dramatically raising prices to $4,000 or $5,000 per month.
  • Jones describes the model as pure platform lock-in driven by addiction, not by revolutionary intelligence, comparing it to treating users like commodities.
  • Dave Jones described his experiments with local AI tooling and open-source agents as a 'big pile of stinking bullcrap,' a scam ecosystem propped up by influencers selling pre-configured servers.
  • Jones criticized 'obliterated' models, which are attempts to remove censorship guardrails from others' work, and found local AI agents to be all chat with no practical utility.
  • After building a local AI setup and writing his own scripts, Jones concluded there was a lack of meaningful tasks for the system to perform, highlighting the gap between corporate hype and broken developer toolchains.

A.I. Goes to War + Is ‘A.I. Brain Fry’ Real? + How Grammarly Stole Casey’s Identity · Mar 13

Also from this episode:

Models (4)
  • The first major battlefield role for AI is intelligence and targeting systems, not autonomous weapons, using data processing to shrink massive data haystacks for human operators.
  • U.S. military systems now integrate Claude into classified intelligence platforms to suggest hundreds of targets and issue precise coordinates for strikes, with a human giving final authorization.
  • Kevin Roose notes the integration of Claude into Palantir's Maven Smart System has compressed weeks of battle planning into real-time operational decision-making.
  • The core value of battlefield AI is performing the dull, critical work of finding signal in noise for intelligence, logistics, and mission planning dashboards.
War (3)
  • Casey Newton points to Israeli intelligence operations, like hacking Tehran's traffic cameras, as examples of data floods that AI systems are built to process for tracking troops and supplies.
  • Kevin Roose argues that incidents like the strike on an Iranian elementary school preview future blame games where the first question will be whether a mistake was human or algorithmic.
  • Casey Newton warns that the surveillance and targeting logic perfected for foreign wars, such as in Iran, creates a direct blueprint for future domestic use, threatening civil liberties.