
The Frontier

Your signal. Your price.

AI & TECH

AI's new battlefield isn't intelligence, it's target selection

Tuesday, March 17, 2026 · from 5 podcasts
  • The U.S. military is already deploying AI, specifically Claude, to process battlefield data and suggest targets, compressing weeks of planning into real-time operations.
  • The AI industry's internal struggles over compute bottlenecks, corporate vagueness, and broken tools reveal a disconnect between boardroom hype and real-world functionality.
  • Practical integration and memory are emerging as the critical bottlenecks for both military and civilian AI, more so than raw model capability.

AI is no longer theoretical. It's processing live battlefield data and ranking which targets to hit first.

On Hard Fork, hosts Casey Newton and Kevin Roose detailed the operational shift where Claude is integrated into the U.S. military's Palantir-built Maven Smart System. Its primary role isn't pulling a trigger but processing floods of intelligence, from hacked traffic cameras to eavesdropped communications, to generate target lists and precise coordinates for strikes. A human still gives the final order, but the AI provides the confidence to act.

This real-world deployment stands in stark contrast to the messy, overhyped state of the AI industry. Podcasting 2.0 dissected Sam Altman's evasive definitions of AGI, dismissing them as corporate mysticism, and highlighted the explicit business model of hooking users before dramatically raising prices. On the Dwarkesh Podcast, Dylan Patel explained that this high-stakes race is also a physical resource war: Big Tech plans capex years ahead, while AI labs like Anthropic, having been too financially conservative early on, are now scrambling for last-minute, overpriced compute.

The industry's focus on scaling raw language models may be a misdirection for both warfare and civilian use. On TFTC, Paul Itoi argued people anthropomorphize language models because they speak our language, but they are statistical engines, not reasoning entities. The real breakthrough needed isn't more parameters but memory. Users, from military planners to podcast producers, are forced to manually reload context into forgetful AI systems, acting as constant managers instead of leveraging persistent intelligence.

The pressure to defer to AI systems is mounting where the stakes are highest. The recent Iranian school strike, while not directly blamed on AI, is a preview of future blame games when a strike goes wrong. According to Kevin Roose, the integration of tools like Claude has turned weeks-long battle planning into real-time operations.

The tools perfected for foreign wars, and the broken tools frustrating developers, are two sides of the same coin. Both reveal an industry sprinting towards integration before solving the fundamental problems of context and reliability. The battlefield is just the highest-consequence test lab.

Kevin Roose, Hard Fork:

- The use of Maven and Claude has turned weeks-long battle planning into real-time operations.

- This is not just like a kind of tool that people in the military are using for handling like routine office work.

Entities Mentioned

Aardvark (Product)
Anthropic (Company)
Claude (Model)
Obsidian (Product)
OpenAI (Trending)
Palantir (Company)
Project Maven (Concept)
Spiral (Company)

Source Intelligence

What each podcast actually said

#726: Mapping The Mind Of The Machine with Brian Murray & Paul Itoi · Mar 14

  • Paul Itoi argues the industry has misdirected capital into scaling language models for better word prediction, while the real breakthrough for AI assistants will be systems that can remember past conversations and information.
  • Brian Murray describes a daily frustration where AI assistants fail to retain context between sessions, forcing users to manually reload information about their projects and workflows for every new interaction.
  • Paul Itoi states that people anthropomorphize large language models because they communicate in natural language, but they are statistical engines without genuine reasoning or understanding.
  • Graph databases, such as Neo4j, and connected-note systems like Obsidian are emerging as potential solutions to the AI memory problem by allowing machines to create and reference a persistent web of related information over time.
  • The core failure of current top models like Claude is not raw intelligence but a lack of long-term memory, which treats each user prompt as an isolated event and undermines their utility as assistants.
  • Brian Murray's team has automated podcast post-production using Claude to extract quotes and identify trends from transcripts, but even this advanced pipeline requires constant manual context management.
  • Paul Itoi advocates for a shift in AI development focus from raw language processing to practical integration, building systems that can operate within a complete historical record of a user's work and decisions.
  • The target for next-generation AI is achieving a flow state in work, where an assistant can instantly reference past code, conversations, and decisions, eliminating the need for manual context reloading.
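The graph-memory approach described above can be illustrated with a minimal sketch. Nothing here comes from the episode: the class, note IDs, and texts are hypothetical, and a real system would persist to a graph database such as Neo4j rather than an in-memory dict.

```python
# Minimal sketch of a graph-based "memory" store, in the spirit of the
# connected-note idea above. All names here are illustrative.

from collections import defaultdict

class MemoryGraph:
    """Notes as nodes, explicit links as edges; related context is
    recovered by walking the neighborhood of a note."""

    def __init__(self):
        self.notes = {}                 # note id -> text
        self.links = defaultdict(set)   # note id -> linked note ids

    def add(self, note_id, text):
        self.notes[note_id] = text

    def link(self, a, b):
        # Links are bidirectional, as in most connected-note tools.
        self.links[a].add(b)
        self.links[b].add(a)

    def context(self, note_id, depth=1):
        """Collect the note plus everything within `depth` hops,
        so an assistant can be primed without manual reloading."""
        seen, frontier = {note_id}, {note_id}
        for _ in range(depth):
            frontier = {n for f in frontier for n in self.links[f]} - seen
            seen |= frontier
        return [self.notes[n] for n in sorted(seen) if n in self.notes]

g = MemoryGraph()
g.add("proj", "Podcast pipeline uses Claude for quote extraction.")
g.add("fix", "Transcripts must be chunked before summarizing.")
g.link("proj", "fix")
print(g.context("proj"))
```

Walking the link neighborhood is what would let an assistant be primed with related history instead of asking the user to reload it each session.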

Strategy's STRC Buying Spree, Open-Source AI Blind Spots, Bitcoin Stablecoins from Utxo & Ark · Mar 13

  • Open-source AI models face centralization risks despite their decentralized appearance, as control over training data, compute resources, and distribution remains concentrated among a few well-funded entities.
  • Centralized bottlenecks in AI—data, compute, and distribution—undermine the promise of open-source decentralization, making true autonomy in AI development difficult to achieve.

Also from this episode:

Lightning (1)
  • Spiral’s team hosted the first Builder event in New York at PubKey, signaling the expansion of grassroots Bitcoin development beyond Austin and into major financial centers.
Other (1)
  • The New York Builder event drew 50 attendees, reinforcing the growing momentum of in-person Bitcoin development meetups focused on open building, fast iteration, and stacking sats.
Nostr (1)
  • Steve from Presidio Bitcoin Jam credits Haley with the idea to launch the New York Builder event, noting the team has run monthly events for nine consecutive months in San Francisco.
Stablecoins (2)
  • Utxo and Ark introduced Bitcoin-native stablecoins that operate on Layer 2 solutions while maintaining settlement finality and censorship resistance on Bitcoin’s base layer.
  • Bitcoin-native stablecoins from Utxo and Ark aim to enable dollar-pegged utility without custodial intermediaries, offering a censorship-resistant alternative to Ethereum-style stablecoins.
Philosophy (1)
  • The ethos of Bitcoin builders—autonomy, transparency, and permissionless innovation—is now influencing adjacent domains like AI and financial infrastructure, challenging centralized defaults.

Episode 253: Dirty Fix · Mar 13

  • OpenAI CEO Sam Altman now claims the term 'Artificial General Intelligence' has 'ceased to have much meaning,' which Dave Jones and Adam Curry frame as a retreat from concrete promises to vague corporate mysticism.
  • Altman proposed a new, fuzzy metric for AGI based on when data centers might contain more cognitive capacity than the world, and estimated this could happen by late 2028, with 'huge error bars'.
  • According to Dave Jones, Sam Altman outlined the explicit AI model business model as getting developers hooked on a tool, charging an initial $200 per month, then dramatically raising prices to $4,000 or $5,000 per month.
  • Jones describes the model as pure platform lock-in driven by addiction, not by revolutionary intelligence, comparing it to treating users like commodities.
  • Dave Jones described his experiments with local AI tooling and open-source agents as a 'big pile of stinking bullcrap,' a scam ecosystem propped up by influencers selling pre-configured servers.
  • Jones criticized 'obliterated' models, which are attempts to remove censorship guardrails from others' work, and found local AI agents to be all chat with no practical utility.
  • After building a local AI setup and writing his own scripts, Jones concluded there was a lack of meaningful tasks for the system to perform, highlighting the gap between corporate hype and broken developer toolchains.

Dylan Patel — Deep dive on the 3 big bottlenecks to scaling AI compute · Mar 13

  • Dylan Patel of SemiAnalysis explains that the $600 billion in AI-related capital expenditure forecasted for 2024 is not for immediate use, but funds multi-year infrastructure like power capacity for 2028 and data center construction for 2027.
  • Anthropic's explosive revenue growth now requires it to find roughly $40 billion in annual compute spend, which translates to needing about four gigawatts of new inference capacity this year alone.
  • Patel says OpenAI secured a decisive first-mover advantage by signing aggressive, massive deals with cloud providers early, locking in compute capacity at cheaper rates and better terms despite skepticism about its ability to pay.
  • Anthropic's initially conservative financial strategy, which prioritized avoiding bankruptcy risk, has left it exposed, forcing it to chase last-minute compute deals in a tight market.
  • In the current scramble for AI chips, labs are paying significant premiums, such as $2.40 per hour for an Nvidia H100, a markup of roughly 70 percent over the estimated $1.40-per-hour build cost.
  • To secure necessary compute, AI labs like Anthropic are now forced to turn to lower-quality or newer infrastructure providers they had previously avoided.
  • The core strategic divergence is that OpenAI's early, aggressive bets gave it an advantage in a physical resource war, while Anthropic's later revenue success forces it into a costly scramble for a depreciating asset.
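The per-GPU economics quoted above can be sanity-checked with back-of-the-envelope arithmetic. The only inputs are the two hourly figures from the episode; the annual extrapolation assumes continuous utilization, which is our simplification, not a figure from the podcast.

```python
# Back-of-the-envelope check of the H100 figures quoted above.
# The two hourly rates come from the episode summary; the rest is arithmetic.

rental_per_hr = 2.40   # quoted H100 rental price, $/hr
build_per_hr = 1.40    # quoted amortized build cost, $/hr

markup = (rental_per_hr - build_per_hr) / build_per_hr
print(f"markup: {markup:.0%}")   # ~71%

hours_per_year = 24 * 365
annual_per_gpu = rental_per_hr * hours_per_year   # assumes 100% utilization
print(f"annual rental per GPU: ${annual_per_gpu:,.0f}")   # ~$21,000
```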

A.I. Goes to War + Is ‘A.I. Brain Fry’ Real? + How Grammarly Stole Casey’s Identity · Mar 13

  • The first major battlefield role for AI is intelligence and targeting systems, not autonomous weapons, using data processing to shrink massive data haystacks for human operators.
  • U.S. military systems now integrate Claude into classified intelligence platforms to suggest hundreds of targets and issue precise coordinates for strikes, with a human giving final authorization.
  • Kevin Roose notes the integration of Claude into Palantir's Maven Smart System has compressed weeks of battle planning into real-time operational decision-making.
  • Casey Newton points to Israeli intelligence operations, like hacking Tehran's traffic cameras, as examples of data floods that AI systems are built to process for tracking troops and supplies.
  • The core value of battlefield AI is performing the dull, critical work of finding signal in noise for intelligence, logistics, and mission planning dashboards.
  • Kevin Roose argues that incidents like the strike on an Iranian elementary school preview future blame games where the first question will be whether a mistake was human or algorithmic.
  • Casey Newton warns that the surveillance and targeting logic perfected for foreign wars, such as in Iran, creates a direct blueprint for future domestic use, threatening civil liberties.