03-16-2026

The Frontier

Your signal. Your price.

AI & TECH

AI's Dual Fronts: War, Addiction, Washington

Monday, March 16, 2026 · from 5 podcasts
  • AI now directly informs military targeting systems, shrinking battle planning to real-time operations and raising accountability concerns.
  • The AI industry's business model leverages user "addiction" and vague promises, mirroring broader dual-use risks beyond warfare.
  • Governments are responding with explicit actions, like Trump's order barring Anthropic from federal use, as military tech threatens to "come home."

AI has moved from the lab to the battlefield, directly informing missile strikes.

On Hard Fork, Casey Newton and Kevin Roose detailed how tools like Claude are integrated into U.S. military intelligence systems. AI now condenses weeks of battle planning into real-time operations, processing vast data to suggest targets and issue precise coordinates. Humans still give the final order, but the AI provides the target list.

This immediate military application carries broad geopolitical risks. On The Tucker Carlson Show, Colonel Douglas McGregor warned that the current conflict in Iran, where AI is deployed, has functionally closed the Strait of Hormuz, a key chokepoint, threatening the petrodollar. McGregor argued the lesson for observing nations is simple: get nuclear weapons or risk regime change.

This sophisticated military use contrasts sharply with the consumer AI industry's reality. Podcasting 2.0 highlighted OpenAI CEO Sam Altman's vague redefinition of AGI, while describing a business model based on getting developers "addicted" to tools and then dramatically raising prices. This mirrors another dual-use concern: AI's potential for societal manipulation and dependency.

Kevin Roose on Hard Fork warned that military tools perfected abroad often come home, creating blueprints for domestic surveillance. In response to these capabilities, Stacker News Live reported on President Trump's order for federal agencies to "immediately cease" using Anthropic, signaling a direct governmental push for control.

The line between battlefield AI and everyday technology is blurring, demanding urgent clarity on accountability and regulation.

Kevin Roose, Hard Fork:

  • The use of Maven and Claude has turned weeks-long battle planning into real-time operations.
  • "This is not just like a kind of tool that people in the military are using for handling like routine office work."

Entities Mentioned

Claude (model)
OpenAI (trending)
Palantir (company)
Project Maven (concept)

Source Intelligence

What each podcast actually said

Episode 253: Dirty Fix · Mar 13

  • OpenAI CEO Sam Altman now claims the term 'Artificial General Intelligence' has 'ceased to have much meaning,' which Dave Jones and Adam Curry frame as a retreat from concrete promises to vague corporate mysticism.
  • Altman proposed a new, fuzzy metric for AGI based on when data centers might contain more cognitive capacity than the world, and estimated this could happen by late 2028, with 'huge error bars'.
  • According to Dave Jones, Sam Altman outlined the explicit AI model business model as getting developers hooked on a tool, charging an initial $200 per month, then dramatically raising prices to $4,000 or $5,000 per month.
  • Jones describes the model as pure platform lock-in driven by addiction, not by revolutionary intelligence, comparing it to treating users like commodities.

Also from this episode:

Models (3)
  • Dave Jones described his experiments with local AI tooling and open-source agents as a 'big pile of stinking bullcrap,' a scam ecosystem propped up by influencers selling pre-configured servers.
  • Jones criticized 'obliterated' models, which are attempts to remove censorship guardrails from others' work, and found local AI agents to be all chat with no practical utility.
  • After building a local AI setup and writing his own scripts, Jones concluded there was a lack of meaningful tasks for the system to perform, highlighting the gap between corporate hype and broken developer toolchains.

Iran War, Oil Shock, Off Ramps, AI's Revenue Explosion and PR Nightmare · Mar 13

  • The swift $30 drop in oil prices after President Trump hinted the Iran conflict would end soon revealed the market's dominant bet on a short conflict, not a prolonged war.
  • Brad Gerstner described the Trump doctrine as pragmatic destruction over democratic nation-building, focused on degrading threats to American security without the goal of spreading democracy.
  • A strategic release of 400 million barrels of petroleum is being used as a firebreak against sustained oil price spikes resulting from the conflict.
  • David Sacks warned that an escalatory faction could push for further conflict after seeing a degraded Iran, risking tit-for-tat attacks on Gulf energy infrastructure.
  • The market view assumes limited U.S. goals in the conflict: degrade threats, save face, and exit, rather than engaging in prolonged nation-building.

Also from this episode:

Markets (1)
  • Goldman Sachs updated its economic forecast to raise core PCE inflation expectations and lower GDP growth, accounting for both direct oil costs and the confidence shock from the conflict.

A.I. Goes to War + Is ‘A.I. Brain Fry’ Real? + How Grammarly Stole Casey’s Identity · Mar 13

  • The first major battlefield role for AI is intelligence and targeting systems, not autonomous weapons, using data processing to shrink massive data haystacks for human operators.
  • U.S. military systems now integrate Claude into classified intelligence platforms to suggest hundreds of targets and issue precise coordinates for strikes, with a human giving final authorization.
  • Kevin Roose notes the integration of Claude into Palantir's Maven Smart System has compressed weeks of battle planning into real-time operational decision-making.
  • Casey Newton points to Israeli intelligence operations, like hacking Tehran's traffic cameras, as examples of data floods that AI systems are built to process for tracking troops and supplies.
  • The core value of battlefield AI is performing the dull, critical work of finding signal in noise for intelligence, logistics, and mission planning dashboards.
  • Kevin Roose argues that incidents like the strike on an Iranian elementary school preview future blame games where the first question will be whether a mistake was human or algorithmic.
  • Casey Newton warns that the surveillance and targeting logic perfected for foreign wars, such as in Iran, creates a direct blueprint for future domestic use, threatening civil liberties.

Newest War Developments: AI Bombings, Advice to Trump, and the Nuclear Agenda to Reset the World · Mar 9

  • Colonel Douglas McGregor says the Strait of Hormuz is functionally closed by the conflict, threatening global oil markets and supply chains with a systemic shock.
  • McGregor warns the war-driven closure of the Strait of Hormuz directly risks the stability of the petrodollar system.
  • Colonel Douglas McGregor argues governments and media platforms have locked down casualty footage, creating a blackout on the war's effects for many Americans.
  • McGregor frames the war as driven by two competing belief systems: explicitly religious factions seeking apocalyptic ends, and secular planners envisioning a technological world reset.
  • Colonel Douglas McGregor says the primary lesson for nations watching the conflict is that any country without nuclear weapons now faces regime change, a dynamic that will accelerate global nuclear proliferation.
  • Tucker Carlson questions whether automated targeting or autonomous AI weapons contributed to civilian deaths, citing the bombing of a girls' school in Iran as an example.
  • McGregor acknowledges that while professional military targeting processes exist, political pressure from leadership can warp campaigns into strategy-free, destructive bombing.
  • As a solution, McGregor suggests reaching out to neutral, influential actors like Indian Prime Minister Narendra Modi to mediate, arguing the U.S. must act with honor to maintain credibility.
  • Colonel Douglas McGregor argues that lying during wartime destroys a nation's credibility abroad and at home, making future diplomacy impossible.
  • McGregor's final systemic warning is that continued escalation could drive economic catastrophe, domestic instability, and global realignments that permanently weaken American influence.

SNL #214: Trump Orders Federal Agencies to ‘immediately cease’ Using Anthropic · Mar 9

Also from this episode:

Protocol (4)
  • The Bitcoin++ hackathon in Floripa focused on exploits, with Minesploit winning for its tool that tests vulnerabilities in Stratum mining protocol servers.
  • The hackathon results demonstrate a maturation phase for Bitcoin, where builders are actively stress-testing and probing the network's foundational protocols for weaknesses.
  • The parallel trends of rigorous security testing and rapid merchant adoption indicate Bitcoin is strengthening technically as its utility in commerce widens.
  • Alex Lewin notes the exploit-focused theme of the Bitcoin++ hackathon represents a shift towards proactive security research within the ecosystem.
Privacy (1)
  • The second-place hackathon project, Local Probe, uncovered a Firefox-specific vulnerability that allows websites to detect if a user is running a local Bitcoin node.
Adoption (2)
  • According to River Financial's annual report, global Bitcoin merchant adoption grew 74% in 2025, with over 4,000 new locations added in North America alone.
  • North America led merchant adoption growth with a 192% increase, while Africa followed with 116% growth, according to River Financial's data.