03-20-2026

The Frontier

Your signal. Your price.

AI & TECH

AI agents and robots start doing real work

Friday, March 20, 2026 · from 7 podcasts, 11 episodes
  • AI is moving from language models to physical, task-oriented agents that execute workflows and manipulate the real world.
  • The industry faces a widening chasm between soaring technical capability and public anxiety over jobs and control.
  • Leaders from Nvidia, Tesla, and startups predict economic hypergrowth driven by an imminent explosion in robotic and agentic productivity.

AI is growing hands.

For years, artificial intelligence excelled at conversation. Now, it’s shifting to action. Agents, once niche experiments, are becoming daily tools that organize files, review code, and manage email. Physical robotics, led by Tesla’s Optimus, is moving from demo to production line.

Jensen Huang, All-In with Chamath, Jason, Sacks & Friedberg:

- Many of our strategies are presented in broad daylight at GTC years in advance of when we do it.

- 2.5 years ago, I introduced the operating system of the AI factory and it's called Dynamo.

This transition from chat to execution is triggering an infrastructure race. On the All-In podcast, Nvidia CEO Jensen Huang framed his company’s evolution from a GPU supplier to an 'AI factory' architect, building systems for diverse agentic workloads. Startups are racing to build the connective tissue. Tempo launched a Machine Payments Protocol so agents can transact. Travis Kalanick’s Atoms is treating manufacturing and logistics as computing resources for an 'atoms-based computer.'

Public sentiment, however, is souring. Nathaniel Whittemore on The AI Daily Brief argues the arrival of workable agents has caused a more intense 'freakout' than ChatGPT’s debut. The industry’s terrible messaging - 'this will take your job' - collides with visible corporate layoffs and Wall Street’s massive bets.

Technical leaders see a different trajectory: hypergrowth. Elon Musk told Moonshots that recursive self-improvement is already underway and predicts the economy will be ten times larger in a decade, driven by AI and robotics. He says Optimus 3 production starts this summer. Paradromics is weeks away from human trials of a brain-computer interface to restore speech, a technology that could eventually let minds control machines directly.

Georgios Konstantopoulos, Bankless:

- And I think the best way to think about it is it is like the payment form for agents.

The gap isn’t just public relations. It’s capability. As agents move from demos to daily use, a core technical flaw persists: they have no memory. Brian Murray and Paul Itoi discussed on TFTC how AI assistants forget everything between sessions, forcing users to reload context constantly. The next leap won’t be smarter models, but systems that remember.
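At its simplest, a "system that remembers" is just a store of facts that survives between sessions and is prepended to the next session's context. The sketch below is a loose illustration of that idea, not any product's implementation; the class and file name are invented.

```python
import json
from pathlib import Path

class SessionMemory:
    """Toy persistent memory: facts written in one session are reloaded
    in the next, so the user never has to restate their context."""

    def __init__(self, path="memory.json"):
        self.path = Path(path)
        self.facts = json.loads(self.path.read_text()) if self.path.exists() else []

    def remember(self, fact):
        # Append the fact and flush to disk immediately so nothing is lost.
        self.facts.append(fact)
        self.path.write_text(json.dumps(self.facts))

    def context_prefix(self):
        # Render remembered facts as a block to prepend to a new session's prompt.
        if not self.facts:
            return ""
        return "Known context:\n" + "\n".join(f"- {f}" for f in self.facts) + "\n"
```

A second instance pointed at the same file starts with the first instance's facts, which is the whole point: context reload becomes automatic instead of manual.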

The frontier is no longer what AI can say, but what it can do - and how soon society adapts to the hands it’s growing.

Entities Mentioned

Anthropic (Company)
Claude (Model)
Claude Code (Product)
Grok (Product)
Nvidia (Company)
Obsidian (Product)
OpenClaw (Framework)
Optimus (Product)
Perplexity (Company)
Perplexity Computer (Concept)
Slack (Product)
Tesla (Company)
Waymo (Company)

Source Intelligence

What each podcast actually said

Jensen Huang LIVE: Nvidia's Future, Physical AI, Rise of the Agent, Inference Explosion, AI PR Crisis · Mar 19

  • Jensen Huang states Nvidia has evolved from a GPU company into an AI factory company, building integrated systems like its Dynamo architecture.
  • Huang defines three core future computing systems: AI training, simulation via Omniverse, and edge robotics encompassing everything from self-driving cars to toys.
  • Jensen Huang sees physical AI, digital biology, and agriculture as trillion-dollar industries just beginning their inflection points, with biology nearing its own 'ChatGPT moment.'

Also from this episode:

Models (4)
  • Nvidia's Dynamo architecture is a heterogeneous computing system that coordinates GPUs, CPUs, switches, and storage processors for specialized parts of the AI inference pipeline.
  • Huang identifies inference, not training, as the new computational bottleneck, driven by the shift from single models to complex multi-agent systems.
  • Nvidia's Vera Rubin data center platform expands its total addressable market by 33-50% by being designed to handle diverse agentic workloads.
  • Huang dismisses the threat of cheaper custom ASICs, arguing a $50B Nvidia inference factory will produce lower-cost tokens than a competitor's $30B build due to superior throughput and efficiency.
Enterprise (1)
  • Nvidia's strategy positions it not just as a chip vendor but as the foundational operating system for a world where all infrastructure, from warehouses to base stations, becomes part of the AI fabric.

Travis Kalanick & Michael Dell Live from Austin, Texas · Mar 17

  • Travis Kalanick's new company Atoms treats manufacturing, real estate, and logistics as the core resources of an 'atoms-based computer' analogous to the CPU, storage, and network of a traditional computer.
  • Atoms' initial 'food computer' project automates kitchens and delivery logistics with the goal of making prepared meals as cheap as grocery store staples, a shift Kalanick compares to Uber's impact on cars.
  • Beyond food, Atoms is expanding its infrastructure into mining automation and robotics wheelbases, and is acquiring San Francisco-based mining automation firm Pronto.
  • Travis Kalanick asserts that automation enables mining in previously inaccessible locations by reducing labor requirements and safety risks.
  • Kalanick sees Tesla as the dominant 'Google of this era' in physical automation, forcing other startups to first ask if Tesla will execute their idea instead.
  • On autonomous vehicles, Travis Kalanick believes Waymo leads in technology but struggles with manufacturing and scale, while Tesla faces fundamental scientific challenges that could be solved 'tomorrow or in five years.'
  • Kalanick states the breakthrough for autonomous vehicles will be a 'ChatGPT moment for vision,' a sudden leap in AI-powered visual understanding.

Also from this episode:

Startups (1)
  • Kalanick argues the food industry lacks the high-capacity infrastructure needed for e-commerce-scale production, a gap that Atoms aims to fill by building new physical systems from the ground up.

Tempo Mainnet: The Race to Agentic Commerce · Mar 19

Also from this episode:

Models (3)
  • Tempo's mainnet launch pivots its narrative from stablecoin and cross-border payments to a focus on its Machine Payments Protocol (MPP) for AI agents.
  • The Machine Payments Protocol (MPP) is designed as a payment-method agnostic standard for machine-to-machine transactions, competing directly with Coinbase's X.402 protocol.
  • Tempo argues its MPP is a more flexible standard for agentic commerce than existing alternatives like Coinbase's X.402.
Adoption (1)
  • The protocol already supports payment extensions for Stripe, Visa cards, and Bitcoin Lightning, aiming to function as a universal payment form for autonomous agents.
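The episode does not describe MPP's actual wire format, so the following is only a loose illustration of what "payment-method agnostic" could mean in practice: a shared core request, with rail-specific details (Stripe, Lightning, etc.) confined to an opaque extension field. All names and fields here are invented, not taken from the Tempo spec.

```python
from dataclasses import dataclass, field

@dataclass
class PaymentRequest:
    """Hypothetical rail-agnostic payment request: core fields are shared,
    while rail-specific data lives in an opaque extension payload."""
    payer_agent: str
    payee: str
    amount: str          # decimal string to avoid float rounding issues
    currency: str
    method: str          # e.g. "stripe", "visa", "lightning"
    extension: dict = field(default_factory=dict)

def route(req):
    """Dispatch to a rail-specific handler; unknown rails fail loudly."""
    handlers = {
        "stripe": lambda r: f"stripe charge {r.amount} {r.currency}",
        "lightning": lambda r: f"ln invoice {r.extension.get('invoice', '?')}",
    }
    if req.method not in handlers:
        raise ValueError(f"unsupported rail: {req.method}")
    return handlers[req.method](req)
```

The design point is that adding a new payment extension means registering one handler, not changing the shared request shape the agents exchange.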

How to Use Agent Skills · Mar 18

  • Nathaniel Whittemore explains that agent skills solve the context bloat problem by allowing dynamic, just-in-time loading of expertise, rather than loading all instructions upfront.
  • Anthropic's Tariq describes the core principle as progressive disclosure, where agents start with a skill's name and description and pull deeper layers only if relevant.
  • Anthropic identifies nine core categories for agent skills, with verification and code review emerging as the highest-ROI categories.
  • Tariq clarifies that skills are not just markdown files but are folders that bundle scripts, credentials, assets, and data, turning static instructions into executable, modular knowledge.
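The progressive-disclosure idea described above can be sketched in a few lines: the agent keeps only each skill's cheap name-plus-description in context, and a skill's full body is loaded only after its description matches the task. This is a toy word-overlap matcher for illustration (all names invented), not Anthropic's actual selection logic.

```python
from dataclasses import dataclass

@dataclass
class Skill:
    name: str
    description: str   # lightweight metadata, always in context
    body_path: str     # full instructions, loaded only on demand

def select_skill(skills, task):
    """Match the task against descriptions only; the winner's body_path
    would then be read into context, keeping the prompt small."""
    task_words = set(task.lower().split())
    best, best_hits = None, 0
    for s in skills:
        hits = len(task_words & set(s.description.lower().split()))
        if hits > best_hits:
            best, best_hits = s, hits
    return best
```

Because descriptions are a few words each, an agent can carry hundreds of skills without the context bloat of loading every instruction set upfront.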

Also from this episode:

Models (3)
  • A specific verification tactic developed by Anthropic involves having Claude record a video of its output to provide transparent auditability of what is being tested.
  • Nathaniel Whittemore discusses new tooling like Skill Creator, which brings testing and benchmarking to non-engineers by running A/B tests and scoring performance.
  • Skill Creator also rewrites skill descriptions to trigger more reliably, addressing one of the three biggest pain points in skill adoption.

The Race to Put AI Agents Everywhere · Mar 17

  • Nathaniel Whittemore reports that OpenClaw's launch demonstrated a market preference for AI that executes real work over another chat interface, triggering a rush to build enterprise and desktop agent clones.
  • The competitive landscape has split, with one front focused on security via sandboxed offerings like Nvidia's Nemo Claw, which adds policy-based guardrails to address enterprise safety concerns.
  • Nvidia's Nemo Claw is praised by commentators for its isolated sandboxes, a move seen as potentially making AI agents viable for corporate adoption.
  • A second competitive front champions deep local desktop integration, with companies like Manus launching 'My Computer,' an agent that runs locally to organize files, rename documents, and even build Swift applications.
  • Adaptive introduced 'Adaptive Computer,' an always-on personal agent designed to learn workflows, such as uploading a hardware store's spreadsheet directly to Square.
  • Perplexity has reimagined its product as 'Perplexity Computer,' a full problem-solving system, reflecting a philosophy that the chat UI is a bottleneck for agent potential.
  • Perplexity's CEO argues the true potential of AI agents requires access to the full canvas of a user's computer, bridging local files, cloud systems, and applications.
  • The stated endgame is an agentic workforce that uses more software than humans, automating entire business workflows from end to end.
  • Kevin Simbach notes that before OpenClaw, AI agents were mostly technical experiments producing little of substance, often just 'timeline slop.'
  • Simbach states that after OpenClaw and with models like Opus 4.5 and 4.6, agents became accessible, always-on tools 'just a telegram message away' that kickstarted a new generation of digital opportunities.

A Guy Used AI to Cure His Dog's Cancer* · Mar 16

  • Nathaniel Whittemore says generative AI's 'second moment' is underway, characterized by workable agentic systems, and is causing a more intense public reaction than the initial ChatGPT launch.

Also from this episode:

Models (3)
  • Six factors are escalating public anxiety: a leap in capabilities from chatbots to multi-agent systems, a user base that has grown from millions to billions, immediate and visible high-stakes economic activity like Anthropic's $19 billion run rate, companies citing AI as a reason for layoffs, the technology's collision with global political volatility, and what Whittemore calls a catastrophic failure of industry messaging.
  • The reaction to Andrej Karpathy's data visualization project demonstrated the chasm between perception and capability. His simple 'job exposure' map was misinterpreted by many on Twitter as a definitive diagnosis, not a rough predictive tool, leading to widespread declarations that entire professions were doomed.
  • Karpathy clarified his project was a two-hour exploration using LLM estimates, not rigorous economic predictions. Economists noted that job exposure to automation can sometimes lead to increased hiring in those fields, but this nuance was lost in the public discourse.
Society (2)
  • Whittemore argues the AI industry's core message has failed, essentially telling the public that a miracle is coming to take their job, and hoping they'll be grateful for potential handouts or the promise of better jobs in the future.
  • Public sentiment is growing increasingly negative, fueled by poor industry communication and a flood of sensationalized headlines about job displacement, widening the gap between perception and practical reality.

The Coolest Agents I've Built So Far · Mar 14

  • Nathaniel Whittemore's experiment testing 16 personal AI projects finds the most useful agents solve specific, recurring productivity problems like email triage and work recommendations.
  • The most successful agent, Holmes, operates in Slack and the web, conducting interviews to build persistent case files on individual users for continuous, personalized workflow suggestions.
  • Nathaniel Whittemore argues agent utility depends on persistence and learning from ongoing user interaction, not delivering a one-time static report.
  • An OpenClaw agent designed for vibe coding from a gym was rendered obsolete by the remote control capabilities of newer tools like Claude Code.
  • A Perplexity-built AI research library effectively aggregated studies but failed because it lacked generative search, forcing users to browse data rather than query it naturally.
  • Nathaniel Whittemore identifies generative search, letting users explore data with natural queries, as a critical missing feature in current agent development.
  • Whittemore is prototyping an agent named Chucky that serves as an interactive professional portfolio, allowing potential clients or employers to conversationally explore a creator's past work.
  • Nathaniel Whittemore suggests the experiment points to a future where conversational agent ambassadors could replace traditional resumes for AI builders.
  • Technical complexity does not guarantee an agent's adoption, with the field maturing from novelty to utility based on clear problem solving and continuous learning.

AI, Supply Chains, and the Future of Economic Power · Mar 18

  • The speaker on the a16z Show argued that visual spatial intelligence, or AI that understands 3D space and time, is as fundamental a technological leap as language.
  • Unlocking spatial intelligence is seen as the key to new applications, from transforming digital experiences into interactive 3D worlds to enabling physical robotics.
  • The a16z Show framed spatial intelligence as the foundational capacity for machines to perceive, reason, and act within three-dimensional space and time, understanding object interactions.
  • The end goal of developing spatial intelligence, per the a16z Show, is creating machines that can build and operate in the physical world, not just analyze data.
  • Advancements in spatial AI are positioned to translate the arc of biological intelligence, the ability to move and interact with the physical world, into technology.

Also from this episode:

Models (2)
  • A convergence of compute power, deeper data understanding, and algorithmic advances has created a moment where a major investment in spatial intelligence is viable, according to the a16z Show speaker.
  • The a16z Show presenter stated that this technology moves beyond niche computer vision to a foundational capacity for reasoning about space, time, and interaction.

Elon Musk: Optimus 3 Is Coming, Recursive Self-Improvement Is Already Here, and the Singularity | #239 · Mar 17

  • Tesla's Optimus 3 is in its final stages, with initial production slated to begin this summer and ramping to high volume by the following summer.
  • Musk claims no other robot demo he's seen comes close to Optimus 3's capabilities and calls it the most advanced robot in the world.
  • Tesla is building a dedicated 10-million-square-foot factory for Optimus production.
  • Musk says productivity at Tesla will become 'nutty high' due to robotics, but he foresees increasing headcount rather than layoffs.

Also from this episode:

Models (6)
  • Elon Musk predicts the economy will grow tenfold within a decade, a 'comfortable prediction' driven by AI and robotics, assuming no major disruptions like a world war.
  • Musk states that AI progress is on overlapping S-curves and recursive self-improvement has been underway for a while, arguing that xAI's Grok is currently behind competitors in coding but expects to catch up by mid-year.
  • Musk believes full automation of AI development, removing humans from the loop, could arrive by the end of this year and certainly no later than next, triggering a hard takeoff.
  • Musk frames the AI economy's scale in terms of energy, stating that an AI system using a million times more electricity than all of civilization today would still only capture a millionth of the sun's output.
  • Musk claims the intelligence hosted by such a scaled AI economy would be many orders of magnitude beyond human comprehension.
  • Musk puts the probability of a great outcome from this AI and robotics transition at 80% or higher, but warns against complacency, acknowledging a range of possible futures.
Macro (1)
  • Musk sees the path forward involving deflation and abundance driven by AI and robotics, leading to what he calls universal high income.

Are Brain-Computer Interfaces Actually Ready for Humans? · Mar 16

  • The dime-sized Paradromics device sits on the brain's surface, using micro-wires thinner than hair to record electrical activity from large populations of neurons.
  • The same neural recording technology underpinning speech restoration is already in clinical trials for controlling robotic arms, using AI to predict sequences of physical movements.

Also from this episode:

Brain (5)
  • Paradromics CEO Matt Angle says the company will implant its first human patients within weeks, aiming to decode neural signals from the motor cortex to restore speech.
  • Matt Angle describes the recording process as dropping microphones into a neuronal cocktail party, capturing noisy signals that require AI decoding.
  • Paradromics has tested the sensory reconstruction concept in sheep, decoding what sounds the animal hears directly from its auditory cortex.
  • Matt Angle claims the principle could extend beyond motor control to reconstruct sensory experiences, including what a person is seeing or dreaming.
  • Paradromics received an FDA investigational device exemption last fall, allowing the imminent human surgeries which will generate real-world data on thought decoding.
Models (1)
  • Large language models clean up the neural noise to generate text, accelerating the decoder training which relies on paired data of attempted speech and corresponding brain activity.

#726: Mapping The Mind Of The Machine with Brian Murray & Paul Itoi · Mar 14

Also from this episode:

Models (8)
  • Paul Itoi argues the industry has misdirected capital into scaling language models for better word prediction, while the real breakthrough for AI assistants will be systems that can remember past conversations and information.
  • Brian Murray describes a daily frustration where AI assistants fail to retain context between sessions, forcing users to manually reload information about their projects and workflows for every new interaction.
  • Paul Itoi states that people anthropomorphize large language models because they communicate in natural language, but they are statistical engines without genuine reasoning or understanding.
  • Graph databases, such as Neo4j, and connected-note systems like Obsidian are emerging as potential solutions to the AI memory problem by allowing machines to create and reference a persistent web of related information over time.
  • The core failure of current top models like Claude is not raw intelligence but a lack of long-term memory, which treats each user prompt as an isolated event and undermines their utility as assistants.
  • Brian Murray's team has automated podcast post-production using Claude to extract quotes and identify trends from transcripts, but even this advanced pipeline requires constant manual context management.
  • Paul Itoi advocates for a shift in AI development focus from raw language processing to practical integration, building systems that can operate within a complete historical record of a user's work and decisions.
  • The target for next-generation AI is achieving a flow state in work, where an assistant can instantly reference past code, conversations, and decisions, eliminating the need for manual context reloading.
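The graph-memory idea above (Neo4j-style databases, Obsidian-style linked notes) reduces to a simple structure: facts become nodes, relations become typed edges, and later queries walk the web of connections. A toy sketch of that structure, with invented names and no claim to match either product's API:

```python
from collections import defaultdict

class MemoryGraph:
    """Toy connected-notes memory: nodes are facts, edges are typed
    relations, so an assistant can traverse prior context instead of
    being handed it fresh every session."""

    def __init__(self):
        self.edges = defaultdict(list)   # node -> [(relation, node)]

    def link(self, src, relation, dst):
        # Record a directed, typed edge between two facts.
        self.edges[src].append((relation, dst))

    def related(self, node, relation=None):
        # Return neighbors of a node, optionally filtered by edge type.
        return [d for r, d in self.edges[node] if relation in (None, r)]
```

A real system would persist this graph and let an LLM translate natural-language questions into traversals; the point here is only that "memory" becomes a queryable structure rather than a transcript to re-paste.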