
BUSINESS

AI agents dismantle enterprise software by automating rip‑and‑replace migrations

Friday, May 1, 2026 · from 5 podcasts, 6 episodes
  • AI-native migration tools collapse the 12-month enterprise software swap, enabling mass replacement of systems like Workday.
  • Vertical models now beat expensive general-purpose AI on cost and task performance, making API reliance a luxury.
  • Legacy firms book AI revenue through procurement credits, not by rebuilding their cores as agent-first architectures.

Enterprise software's foundational defensibility - the painful, year‑long migration - is now its greatest vulnerability. On the a16z Podcast, investor Joe Schmidt IV argues the friction that protected giants like Workday is dissolving. AI‑native tools can map and move complex relational databases in weeks, not the 12‑plus months that required millions in consulting fees. This reduces the 'kinetic energy' a CIO needs to pull the plug, turning brownfield replacement of hated systems into the dominant disruption opportunity.

Schmidt reads Workday's 97% gross dollar retention as evidence of lock‑in, not satisfaction: he spent six‑and‑a‑half minutes finding his own compensation data, a friction tolerable only because switching was impossible. Startups aren't just building better portals; they're building agents that bypass the interface entirely to handle HR and IT dirty work autonomously. Incumbents are fighting this shift with executive comebacks and acquisitions, but their incentive is to protect seat‑based pricing and consulting ecosystems.

“When a system is ‘most important and least loved,’ it creates a massive opening for any competitor that can solve the user's immediate pain without the 20‑year‑old architectural baggage.”

- Joe Schmidt IV, the a16z Podcast

The technical enabler is the 'clawification' of AI. Nathaniel Whittemore notes that OpenClaw's release proved agent viability, and Nvidia’s enterprise‑grade Nemo Claw adds the security sandbox CIOs demand. This lets agents operate on the local desktop - the canvas holding a company's most valuable data - bridging cloud systems and local files. Perplexity’s Aravind Srinivas argues chat is for answers, but the computer is for workflows. Agents are evolving from external consultants into digital employees that run 24/7.

Concurrently, the economic rationale for relying on frontier AI APIs is collapsing. On his AI Daily Brief, Whittemore details how vertical models built with 'last‑mile' interaction data now outperform and undercut generalists. Intercom’s customer service model, Finn Apex, beats GPT‑4 and Opus 4.5 on resolution rate while being cheaper. Cursor’s coding model tops benchmarks. This isn't a return to expert systems; it's a new brute‑force scaling on proprietary user feedback loops that general labs cannot see.

“Once a team realizes they can match frontier performance with a fine‑tuned open model, the API becomes a luxury they no longer need.”

- Nathaniel Whittemore, The AI Daily Brief

Andrej Karpathy, speaking on the Sequoia Capital podcast, frames this as a shift to 'agentic engineering.' Vibe coding raises the floor, but professional oversight of stochastic AI outputs sets the new ceiling. He critiques today's 'human‑first' software infrastructure - documentation that gives a person instructions rather than text an agent can copy‑paste - as an obstacle course. Future infrastructure will prioritize sensors and actuators over user interfaces.

The legacy response is often what Schmidt calls 'procurement innovation.' Workday’s $400 million in AI‑related annual recurring revenue likely reflects sales of flex credits for minor extensions, not a re‑engineered, agent‑first core. Real transformation requires abandoning the old architecture, something incumbents are structurally slow to do.

Naval states the blunt conclusion: pure software is becoming uninvestable as a venture‑scale asset. The moat has shifted from code to high‑taste direction, hardware, network effects, or foundational models. For the enterprise, the immediate takeaway is that the exit door from decades‑old software contracts is now open, and a wave of AI‑native replacements is lining up to walk through it.

Source Intelligence

- Deep dive into what was said in the episodes

How Harness-as-a-Service Will Change Agents · Apr 30

  • Nathaniel Whittemore argues OpenClaw’s release in Q1 2025 marked a 'second moment' for AI by proving agent viability and triggering widespread experimentation with agentic systems across businesses.
  • Nvidia CEO Jensen Huang stated every global software company now needs an OpenClaw strategy and introduced Nemo Claw, an enterprise-grade toolkit adding security guardrails and sandboxing to the OpenClaw project.
  • Kevin Simbach claims OpenClaw transformed agents from technical demos into accessible tools after the Opus 4.5 and 4.6 releases, demonstrating user demand for actionable work over simple chat.
  • The competitive response includes simplified forks like Nanobot and secure self-hosted versions like Ironclaw, while Notion launched custom agents and Perplexity rebuilt its product as a full agentic system called Computer.
  • Perplexity CEO Aravind Srinivas argues the full AI agent potential requires a computer’s complete canvas to bridge local files and cloud systems, a design pattern echoed by Manus and Adaptive with their new desktop apps.
  • Manus introduced a desktop app called 'My Computer' for local task automation like organizing files and building Mac apps, citing the limitation of cloud-only agent sandboxes.
  • Adaptive launched 'Adaptive Computer', an always-on personal AI agent for automating business software tasks, featuring 'encoded memory' to learn and replicate user workflows.
  • Whittemore's Enterprise Claw program saw a roughly even split between participants choosing OpenClaw versus other agent platforms, indicating enterprise demand exists even before mature tooling.
  • The Wall Street Journal reports OpenAI is refocusing on enterprise productivity, with applications chief Fiji Simo stating the company must abandon 'side quests' like consumer apps to counter competitive threats.
  • OpenAI integrated sub-agents into Codex, allowing parallel task delegation. Greg Brockman noted GPT-5.4's API adoption hit 5 trillion tokens daily within a week, reaching a $1 billion annualized net new revenue run rate.
Also from this episode: (1)

AI & Tech (1)

  • Critic Dwayne OnX argues OpenAI’s GPT-5.4 fails at UI design and lacks aesthetic judgment, requiring explicit design file inputs to produce acceptable work.

The AI Subsidy Era is Over · Apr 28

  • Intercom's new dedicated customer service model Finn Apex leads on performance, speed, and cost, beating GPT-4 and Opus 4.5, according to CEO Eoghan McCabe.
  • Eoghan McCabe claims Intercom's Apex model has a 2.8% higher resolution rate and a 65% reduction in hallucinations compared to other models, enabled by proprietary customer service interaction data.
  • Industry observers like Ben Avogi and Clem Delangue argue vertical SaaS companies with labeled interaction data have untapped fine-tuning assets, predicting a shift from API reliance to in-house open models.
  • Andrej Karpathy predicts AI model speciation, analogous to animal kingdom diversity, where smaller, task-specific models with a cognitive core will thrive over a single general oracle.
  • Richard Sutton, on the Dwarkesh podcast, framed learning from experience as the next phase of the bitter lesson, which aligns with the post-training from real interaction data seen with Apex and Composer 2.
Also from this episode: (4)

AI & Tech (3)

  • The 'bitter lesson' from Rich Sutton argues that general methods leveraging computation beat human-designed domain-specific approaches every time. This pattern held with Bloomberg's specialized finance model being surpassed by generalist LLMs.
  • A new hypothesis challenges the bitter lesson, suggesting high-quality 'last-mile' user interaction data can make vertical models outperform frontier models through targeted post-training, not full pretraining.
  • Cursor's Composer 2 model, based on the open-source Kimi 2.5 with extra reinforcement learning, reportedly beats Opus 4.6 on coding benchmarks while being cheaper, showing post-training's potential.

Models (1)

  • Nathaniel Whittemore argues frontier AI labs face classic disruption and may need to build cheaper specialized models themselves, potentially through data partnerships or acquiring companies with proprietary evals.

Mastering AI Video Marketing w/ Magnific CEO Joaquín Cuenca Abela | AI Basics · Apr 30

  • Joaquin Cuenca Abela demonstrates that Magnific can produce a cinematic, post-apocalyptic launch video concept from scratch in 24 hours using only text prompts, character creation, and logo modification.
  • Magnific integrates third-party state-of-the-art models, including from Google, alongside proprietary upscaling and skin-enhancer models, to provide users with the best available creative output.
  • Jason Calacanis estimates a professional 5-minute launch video could cost $50k-$100k, while Joaquin states Magnific's generation cost is roughly 10 cents per second, or about $1 per second factoring in multiple attempts.
  • Joaquin says Magnific's customer base spans from Hollywood studios and large marketing departments to small creative teams, with exponential growth in Hollywood adoption for production and pre-visualization.
  • Joaquin argues AI video tools raise both the creative ceiling and floor, enabling projects that were previously too expensive to get greenlit while also empowering smaller teams and individual creators.
Also from this episode: (3)

AI & Tech (3)

  • Gal Gadot told Jason Calacanis that AI tools allow film productions to cut costs by two-thirds, letting actors focus on performance and enabling a potential Cambrian explosion of new content.
  • Joaquin believes AI will match the creativity of some humans but cannot replicate human individuality, predicting increased demand for people who can inject their unique experiences and storytelling into projects.
  • Joaquin notes that while AI can already localize static ads across languages and cultural details, generating hundreds of localized video variants remains error-prone and requires better steering and validation systems.

Workday’s Last Workday? AI and the Future of Enterprise Software · Apr 30

  • Joe Schmidt argues the core user experience of Workday is broken, citing his own six-and-a-half-minute struggle to find compensation data as evidence that no employee enjoys interacting with the portal.
  • Workday's 97% gross dollar retention rate demonstrates the extreme difficulty of displacing entrenched enterprise systems, a defensibility built during the last major platform shift from on-premise to cloud.
  • Schmidt contends that current enterprise AI revenue metrics, like Workday's $400 million AI ARR, are often procurement innovations rather than fundamental product shifts, lacking true agentic experiences.
  • The new platform shift enabling disruption is AI-native architecture, which for the first time allows founders to promise CHROs and CIOs a fundamentally different core system that changes how work is done.
  • An AI-native competitor must enable deployment in 30 to 60 days, a drastic reduction from the historical 12-plus month implementations that required expensive consultants.
  • Schmidt identifies six critical properties for an AI-native Workday successor: rapid deployment, workbench-native customization, agent-first interaction, open APIs, enterprise-grade security, and global compliance readiness.
  • The disruption opportunity is in brownfield replacement, not greenfield sales, as enterprises now have kinetic energy to rip and replace systems where employees are effectively hostages.
  • HR software may become the beacon for mass AI adoption in the enterprise, as its transformation will signal when AI moves beyond early adopters in major cities to broader organizational takeoff.
  • Agent-first HR systems will be critical for permissioning and identity management as more AI agents perform work on behalf of humans, a growing concern for CIOs.
  • Incumbents like Workday are actively fighting the shift, evidenced by executive comebacks, layoffs, and acquisitions like Hired to fend off new competitors.
Sequoia Capital

Andrej Karpathy: From Vibe Coding to Agentic Engineering · Apr 29

  • Andrej Karpathy defines software 1.0 as explicit rules, software 2.0 as learned weights, and software 3.0 as programming via prompting and the LLM context window as a lever over an interpreter.
  • Karpathy states that OpenClaw's installation exemplifies software 3.0. Instead of a complex bash script, you copy-paste instructions for an agent, which uses its intelligence to adapt to the environment and debug issues.
  • Karpathy says his MenuGen app, which uses OCR and an image generator to illustrate menus, is rendered obsolete by software 3.0. The raw approach is to give a menu photo to Gemini with Nano Banana and get a directly annotated image.
  • Karpathy argues LLMs enable new applications, like automated knowledge base creation from documents, which couldn't exist before because there was no code to reframe unstructured data.
  • Karpathy posits that future computing could invert the current architecture. Neural networks would become the host process, with classical CPUs serving as co-processors for deterministic tasks.
  • Karpathy notes that GPT-4's chess capability improved significantly from GPT-3.5 not just from scaling, but because a large amount of chess data was added to its pre-training set.
  • Karpathy distinguishes vibe coding, which raises the floor for all programmers, from agentic engineering, which preserves professional software quality standards while using agents to accelerate development.
  • Karpathy suggests hiring for agentic engineering should involve a large, practical project like building a secure Twitter clone and then stress-testing it with adversarial agents, not puzzle-solving.
  • Karpathy argues that as agents handle more implementation, human skills like aesthetic judgment, taste, system design, and oversight become more valuable, not less.
  • Karpathy describes current infrastructure as built for humans, not agents. His pet peeve is documentation that tells a human what to do instead of providing text to copy-paste directly to an agent.
Also from this episode: (3)

Models (2)

  • Karpathy's verifiability framework holds that LLMs excel in domains where outputs can be verified, like code and math, because frontier labs use reinforcement learning with verification rewards during training.
  • Karpathy cites the 'car wash' problem as current jaggedness: state-of-the-art models can refactor a 100k-line codebase but incorrectly advise walking 50 meters to a car wash.

AI & Tech (1)

  • Karpathy endorses a tweet stating 'you can outsource your thinking but you can't outsource your understanding.' He sees LLM knowledge bases as tools to enhance, not replace, human understanding.
Naval

On Vibe Coding · Apr 29

  • In December 2025, coding agents reached an inflection point with Claude Opus 4.5, making them feel like fast, free junior programmers that can solve thorny problems.
  • These agents operate within a Unix shell environment, giving them native access to Unix commands, file systems, cron jobs, and spawning tasks. This makes them effective for text-based command execution.
  • Having multiple AI agents review code in a pull request council leads to groupthink. Naval finds they rarely contradict a user's leading opinion because they lack theory of mind and are designed to please.
  • Naval built a bug reporting system where Claude automatically reviews reports every 24 hours and proposes fixes. This reduces his role to final gatekeeper, previewing a future of agent-driven, user-collaborative software maintenance.
Also from this episode: (7)

Coding (2)

  • Naval built a personal app store that lets him one-shot custom apps like a workout tracker, which then appear on his phone. He notes Apple's device keying prevents wide distribution but allows apps for friends and family.
  • Vibe coding expands software creation from 0.1% of the population to maybe 3%, Naval estimates. It requires a clear vision and basic computer understanding, but eliminates team compromises and activation energy.

VC (1)

  • Naval declares pure software is uninvestable for venture capital now because it can be hacked together instantly and agents will soon build scalable versions. He says VC must look to hardware, network effects, and AI model training.

AI & Tech (4)

  • Coding is easier to train AI on than creative writing because it offers vast data and easy verification through compilation and tests. Domains with sparse data or subjective quality, like creative writing, remain human opportunities.
  • State-of-the-art context windows are about one million tokens, but as codebases grow, models lose the plot. This forces the human operator to guide architecture and debugging, preventing hacks and preserving features.
  • Naval uses different AI models for different strengths: Claude for visual artifacts and meeting his level, ChatGPT as the all-around OG, Gemini for search and YouTube access, and Grok for unneutered truth and technical problems.
  • Naval argues conversational AI agents will make dedicated phone interfaces obsolete, eroding Apple's software advantage. He says Apple's reliance on Google's Gemini for AI is a strategic mistake that will cap its long-term growth and market value.