
AI & TECH

Coding agents shift economic power from software moats to infrastructure owners

Friday, May 1, 2026 · from 6 podcasts
  • AI agents turn pure software features into commodities, erasing traditional tech moats and leaving pure-software companies uninvestable.
  • New business cohorts show explosive productivity, a phase transition Stripe data pins to late 2025.
  • The value shifts to physical infrastructure, to high-speed payment rails for agents, and to human oversight of AI-generated 'slop'.

A permanent shift in economic power is underway, driven not by AI hype but by its tangible, disruptive outputs. Stripe’s internal data reveals the 2025 cohort of new businesses is larger and more productive than any before it. Patrick Collison argues this isn't a gradual improvement but a phase transition, suggesting Q1 2026 may be remembered as the start of an economic singularity.

This acceleration is powered by the collapse of software's activation energy. Naval states that with models like Claude Opus 4.5, coding agents now act as fast, free junior programmers. This 'vibe coding' expands creation from 0.1% to roughly 3% of the population, turning pure software into a hackable commodity. Naval concludes this makes pure software uninvestable for venture capital, which must now hunt for hardware, network effects, or foundational models.

"Pure software is uninvestable for venture capital now because it can be hacked together instantly and agents will soon build scalable versions."

- Naval, Naval

The explosion of agent-generated code is creating a compounding complexity problem. A senior engineer from The Pragmatic Engineer notes that agents, lacking the pain of maintenance, say yes to every prompt, prioritizing recovery over proper failure. This builds 'vibe slop' - code that looks correct but lacks structural integrity, creating long-term maintenance nightmares and pushing open-source projects toward chaos.

Andrej Karpathy frames the professional response as 'agentic engineering.' The modern programmer is a director managing a fleet of stochastic 'intern entities.' The goal is to coordinate them to maintain a professional bar for security and resilience that vibes alone cannot guarantee. As agents handle implementation, human skills in aesthetic judgment, system design, and oversight become more valuable.

"Vibe coding raises the floor, but agentic engineering preserves the professional quality ceiling."

- Andrej Karpathy, Sequoia Capital

The endgame requires entirely new infrastructure. John Collison argues that for agentic commerce to reach its potential, the world needs blockchains capable of billions of transactions per second - a volume no legacy payment rail can handle. This is the missing link between AI and stablecoins, moving past agents 'hacking' human interfaces.

Companies that control the underlying physical stack are positioning to capture this new value. ARK Invest's analysis of SpaceX's potential $60 billion play for Cursor illustrates the strategy: owning the application layer monetizes massive energy and compute investments. In an era of looming compute scarcity, owning infrastructure separates winners from renters. The software model itself is flipping from a mass-produced product to a bespoke service, 'cooked fresh' with real inference costs, as Patrick Collison described it. The monopoly power of big software incumbents is under threat, but a new oligopoly of infrastructure owners is rising to take its place.

Source Intelligence

- Deep dive into what was said in the episodes

Sequoia Capital

Andrej Karpathy: From Vibe Coding to Agentic Engineering · Apr 29

  • Andrej Karpathy defines software 1.0 as explicit rules, software 2.0 as learned weights, and software 3.0 as programming via prompting and the LLM context window as a lever over an interpreter.
  • Karpathy states that OpenClaw's installation exemplifies software 3.0. Instead of a complex bash script, you copy-paste instructions for an agent, which uses its intelligence to adapt to the environment and debug issues.
  • Karpathy says his MenuGen app, which uses OCR and an image generator to illustrate menus, is rendered obsolete by software 3.0. The raw approach is to give a menu photo to Gemini with Nano Banana and get a directly annotated image.
  • Karpathy's verifiability framework holds that LLMs excel in domains where outputs can be verified, like code and math, because frontier labs use reinforcement learning with verification rewards during training.
  • Karpathy cites the 'car wash' problem as an example of current jaggedness: state-of-the-art models can refactor a 100k-line codebase but incorrectly advise walking 50 meters to a car wash.
  • Karpathy distinguishes vibe coding, which raises the floor for all programmers, from agentic engineering, which preserves professional software quality standards while using agents to accelerate development.
  • Karpathy suggests hiring for agentic engineering should involve a large, practical project like building a secure Twitter clone and then stress-testing it with adversarial agents, not puzzle-solving.
  • Karpathy argues that as agents handle more implementation, human skills like aesthetic judgment, taste, system design, and oversight become more valuable, not less.
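Karpathy's verifiability framework above can be sketched as a toy reward loop: candidate outputs earn reward only when an automatic check passes, which is why code and math (where such checks exist) train well under RL. The candidate functions and test cases below are hypothetical stand-ins for model samples, not anything from the episode.

```python
# Toy verification-based reward: a "generator" proposes candidate
# implementations, and a verifier runs tests to assign a binary reward.
# Domains with a reliable verifier are where this loop works.

def verify(candidate_fn, test_cases):
    """Reward 1 if the candidate passes every test, else 0."""
    try:
        return int(all(candidate_fn(x) == want for x, want in test_cases))
    except Exception:
        return 0  # crashing candidates earn no reward

# Hypothetical candidates for "absolute value" (stand-ins for model samples).
candidates = [
    lambda x: x,                    # looks plausible, wrong for negatives
    lambda x: -x if x < 0 else x,   # correct
]
tests = [(-3, 3), (0, 0), (5, 5)]

rewards = [verify(c, tests) for c in candidates]
print(rewards)  # [0, 1] - the verifier separates correct from plausible-looking
```

The point of the sketch is the asymmetry: a verifier like this exists for code and math but not for, say, essay quality, which is the boundary Karpathy's framework draws.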
Also from this episode: (5)

Models (3)

  • Karpathy argues LLMs enable new applications, like automated knowledge base creation from documents, which couldn't exist before because there was no code to reframe unstructured data.
  • Karpathy posits that future computing could invert the current architecture. Neural networks would become the host process, with classical CPUs serving as co-processors for deterministic tasks.
  • Karpathy notes that GPT-4's chess capability improved significantly from GPT-3.5 not just from scaling, but because a large amount of chess data was added to its pre-training set.

AI & Tech (2)

  • Karpathy describes current infrastructure as built for humans, not agents. His pet peeve is documentation that tells a human what to do instead of providing text to copy-paste directly to an agent.
  • Karpathy endorses a tweet stating 'you can outsource your thinking but you can't outsource your understanding.' He sees LLM knowledge bases as tools to enhance, not replace, human understanding.
Naval

On Vibe Coding · Apr 29

  • In December 2025, coding agents reached an inflection point with Claude Opus 4.5, making them feel like fast, free junior programmers that can solve thorny problems.
  • These agents operate within a Unix shell environment, giving them native access to Unix commands, file systems, cron jobs, and spawning tasks. This makes them effective for text-based command execution.
  • Naval built a personal app store that lets him oneshot custom apps like a workout tracker, which then appear on his phone. He notes Apple's device keying prevents wide distribution but allows apps for friends and family.
  • Vibe coding expands software creation from 0.1% of the population to maybe 3%, Naval estimates. It requires a clear vision and basic computer understanding, but eliminates team compromises and activation energy.
  • Naval declares pure software is uninvestable for venture capital now because it can be hacked together instantly and agents will soon build scalable versions. He says VC must look to hardware, network effects, and AI model training.
  • Coding is easier to train AI on than creative writing because it offers vast data and easy verification through compilation and tests. Domains with sparse data or subjective quality, like creative writing, remain human opportunities.
  • State-of-the-art context windows are about one million tokens, but as codebases grow, models lose the plot. This forces the human operator to guide architecture and debugging, preventing hacks and preserving features.
  • Having multiple AI agents review code in a pull request council leads to groupthink. Naval finds they rarely contradict a user's leading opinion because they lack theory of mind and are designed to please.
  • Naval built a bug reporting system where Claude automatically reviews reports every 24 hours and proposes fixes. This reduces his role to final gatekeeper, previewing a future of agent-driven, user-collaborative software maintenance.
  • Naval argues conversational AI agents will make dedicated phone interfaces obsolete, eroding Apple's software advantage. He says Apple's reliance on Google's Gemini for AI is a strategic mistake that will cap its long-term growth and market value.
Also from this episode: (1)

AI & Tech (1)

  • Naval uses different AI models for different strengths: Claude for visual artifacts and meeting his level, ChatGPT as the all-around OG, Gemini for search and YouTube access, and Grok for unneutered truth and technical problems.
The Pragmatic Engineer

Building Pi, and what makes self-modifying software so fascinating · Apr 29

  • Mario Zechner built Pi because he wanted a simple, stable agent after Claude Code became unreliable. He reverse-engineered Claude Code and found its system prompts and tool definitions changed with every release, breaking his workflows.
  • Pi is a minimalist, self-modifiable coding agent. Its core provides read, write, edit, and bash tools with extensive hooks, allowing users to ask Pi to modify its own TUI, add features like MCP support, or tailor it for specific workflows like game development.
  • Armin Ronacher interviewed over 30 engineering teams and found AI agent adoption exploded after holiday breaks like Christmas 2024. He says adoption requires a two-to-three-week learning period that is difficult during normal work sprints.
  • Armin Ronacher argues AI-generated code lacks a human's pain feedback loop. Senior engineers say no to avoid future complexity pain, but agents and junior engineers empowered by agents say yes, accelerating codebase bloat and deterioration.
  • Non-engineers like product managers now directly submit AI-generated pull requests. Armin Ronacher cites cases where marketing teams modify websites and sales teams build non-existent features into demos that land in repositories.
  • Mario Zechner auto-closes all first-time pull requests to filter out AI-generated spam. His GitHub workflow posts a comment asking for a human-written issue; agents ignore the comment, but humans respond, earning future PR privileges.
  • Mario Zechner believes MCP is overly complex and non-composable for developer tasks, favoring CLI-like code execution. He argues agents are creative with CLI pipes, but MCP servers that dump entire API specs create useless tool sprawl.
  • Armin Ronacher warns the industry's 'dark factory' approach of deploying armies of agents with vague specs will produce low-quality software. The output quality is bounded by the mediocre training data the models use to fill specification gaps.
  • Armin Ronacher sees a future reckoning where engineering teams realize they cannot maintain their codebases without AI providers, creating dangerous vendor lock-in. He expects this dependency and its cost to become a major industry conversation.
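The first-time-PR gate described above has a simple core: unknown authors get their PR closed with a comment asking for a human-written issue, and only a human reply earns future PR privileges. A sketch of that decision logic (the in-memory trust set and function names are illustrative, not the actual GitHub workflow):

```python
# Sketch of the first-time-PR gate: agents ignore the posted comment,
# so replying to it is a cheap proof-of-human that unlocks future PRs.

trusted_authors: set[str] = set()

def handle_pull_request(author: str) -> str:
    if author in trusted_authors:
        return "open"    # known human: PR stays open for review
    return "closed"      # first-timer: auto-close and post the comment

def handle_comment_reply(author: str) -> None:
    # Agents ignore the comment; a human reply earns trust.
    trusted_authors.add(author)

assert handle_pull_request("unknown-bot") == "closed"
handle_comment_reply("new-contributor")
assert handle_pull_request("new-contributor") == "open"
```

In the real setup this logic would live in a GitHub Actions workflow reacting to pull-request and comment events rather than an in-process set.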
Also from this episode: (1)

AI & Tech (1)

  • Both hosts argue the real value of AI agents is automating tedious work to free up human time for design and polish, not maximizing token output. They say the current hype pushes for unsustainable speed at the cost of quality and engineer well-being.

$60 Billion SpaceX Cursor Deal? | The Brainstorm EP 129 · Apr 29

Also from this episode: (12)

Other (12)

  • SpaceX is reportedly acquiring Cursor, a front-end coding interface company, for $60 billion. Brett argues this gives Cursor's team access to more compute and a non-competitive model provider, while giving xAI better coding tools and developer distribution.
  • Cursor reportedly has $2 billion in annual revenue and has been doubling year-over-year. The risk to Cursor is its suppliers, OpenAI and Anthropic, are developing competing coding applications.
  • Sam describes the AI stack as a five-layer cake: energy, chips, AI models, interfaces (like Cursor), and applications. He says progress requires all layers to advance simultaneously.
  • Coding is the initial high-value focus for AI because it was talent-constrained and offers a rich self-training loop. Brett argues the success in coding unlocked broader capabilities for general knowledge work.
  • Brett notes SpaceX's acquisitions of xAI ($250B) and Cursor ($60B) total $310 billion in 12 months. He frames the bet as a return-on-capital calculation tied to revenue per gigawatt in space, not a simple revenue multiple.
  • Sam argues compute constraints will become an acute pain point for enterprises within two years, manifesting as slower model speeds, expensive agents, or token limits. Brett says individuals aren't constrained because they aren't using AI tools to their full potential.
  • Apple CEO Tim Cook is stepping down, with hardware engineering SVP John Ternus taking over on September 1st. Nick argues Ternus's mandate is to deeply integrate AI into Apple's hardware ecosystem, leveraging its billion-person install base.
  • Brett is skeptical Apple can lead in AI integration, citing its lack of control over performant AI models and underinvestment in AI talent. He points to Microsoft's struggles to deeply integrate AI despite its OpenAI partnership as a cautionary parallel.
  • Sam argues Apple's hardware footprint and consumer trust position it uniquely for agentic AI, suggesting AirPods could be a low-risk entry point. He contends the high upgrade cost of iPhones makes the hardware space hard for new entrants to displace Apple.
  • Brett believes a consumer AI 'threshold event' is looming, similar to how Claude transformed enterprise work. He worries Apple's heavy-handed App Store and degraded software services are eroding its ecosystem lock-in, creating vulnerability.
  • OpenAI released GPT-5.5 and improved its code model to be more competitive with Anthropic's Claude Co-Work. Brett notes OpenAI is throwing more compute at training than Anthropic, which may accelerate its product capability.
  • Rumors suggest OpenAI is working with Qualcomm on supply chains for an 'agentic phone experience' by 2028. Sam argues phones are primarily entertainment devices, and the AI opportunity is a separate 'manage your life' function that doesn't require a screen.

The AI Subsidy Era is Over · Apr 28

  • A new hypothesis challenges the bitter lesson, suggesting high-quality 'last-mile' user interaction data can make vertical models outperform frontier models through targeted post-training, not full pretraining.
  • Cursor's Composer 2 model, based on the open-source Kimi 2.5 with extra reinforcement learning, reportedly beats Opus 4.6 on coding benchmarks while being cheaper, showing post-training's potential.
  • Industry observers like Ben Avogi and Clem Delangue argue vertical SaaS companies with labeled interaction data have untapped fine-tuning assets, predicting a shift from API reliance to in-house open models.
  • Andrej Karpathy predicts AI model speciation, analogous to animal kingdom diversity, where smaller, task-specific models with a cognitive core will thrive over a single general oracle.
  • Nathaniel Whittemore argues frontier AI labs face classic disruption and may need to build cheaper specialized models themselves, potentially through data partnerships or acquiring companies with proprietary evals.
Also from this episode: (4)

AI & Tech (3)

  • Intercom's new dedicated customer service model Fin Apex achieves the highest performance, speed, and cost metrics, beating GPT-4 and Opus 4.5, according to CEO Eoghan McCabe.
  • The 'bitter lesson' from Rich Sutton argues that general methods leveraging computation beat human-designed domain-specific approaches every time. This pattern held with Bloomberg's specialized finance model being surpassed by generalist LLMs.
  • Eoghan McCabe claims Intercom's Apex model has a 2.8% higher resolution rate and a 65% reduction in hallucinations compared to other models, enabled by proprietary customer service interaction data.

Models (1)

  • Richard Sutton, on the Dwarkesh podcast, framed learning from experience as the next phase of the bitter lesson, which aligns with the post-training from real interaction data seen with Apex and Composer 2.

John and Patrick Collison on Stripe's Growth, Agent Commerce, and the Future of Software · Apr 28

  • Patrick Collison says Stripe processed over a trillion dollars in payments in 2024, growing by 34% that year.
  • The 2025 cohort of businesses on Stripe is larger and performs better per business than any prior cohort. Patrick Collison says this trend is accelerating into early 2026.
  • Patrick Collison argues 2026 Q1 may be looked back on as the first quarter of the 'singularity,' citing the dramatic performance of new 2025 business cohorts on Stripe.
  • Patrick Collison distinguishes between current AI noise and real economic impact, stating the tangible shift in business creation and performance only became visible in late 2025 and early 2026.
  • The Collison brothers believe agentic commerce will require new blockchains supporting billions of transactions per second, a need current payment rails cannot meet.
  • Stripe incubated Tempo to solve for high-throughput blockchain payments, anticipating infrastructure needs for a torrent of agentic commerce.
  • John Collison argues you cannot use MBA-style TAM analysis for new products, citing Atlas as a success that began by solving a specific founder pain point rather than targeting a percentage of GDP.
  • Stripe launched Atlas in 2014-2015, and after a decade of compounding, John Collison describes it as a meaningful overnight success.
  • Stripe's work on agentic commerce involves boring API and protocol infrastructure to make retailer product catalogs viable within AI apps for discovery and purchase.
Also from this episode: (4)

Models (2)

  • Patrick Collison reframes future software as 'pizza' - bespoke, cooked fresh at the moment of use - shifting from a fixed-cost, mass-produced economic model to one with inference costs and custom creation.
  • He argues this shift to on-demand, customized software creation disrupts winner-take-all dynamics and creates a 'non-Walrasian' software regime.
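Collison's 'pizza' framing above flips software's cost structure: a fixed build cost amortized over identical copies becomes a real per-use inference cost. A toy comparison makes the flip concrete; every number here is a purely illustrative assumption, not data from the episode.

```python
# Hypothetical unit economics, for illustration only.
build_cost = 1_000_000       # assumed fixed cost of mass-produced software
users = 100_000
mass_produced_cost_per_user = build_cost / users   # shrinks as users grow

tokens_per_bespoke_app = 2_000_000   # assumed agent usage per "fresh" build
price_per_million_tokens = 5.00      # assumed inference price, $/1M tokens
bespoke_cost_per_user = (tokens_per_bespoke_app / 1_000_000
                         * price_per_million_tokens)  # constant per use

print(mass_produced_cost_per_user)  # 10.0 - halves every time users double
print(bespoke_cost_per_user)        # 10.0 - identical at this scale, but flat
```

At these assumed numbers the two models cost the same per user; the economic difference is the slope. Mass-produced software gets cheaper per user with scale, while bespoke, freshly-cooked software carries the same inference bill for the millionth user as for the first, which is what undermines winner-take-all dynamics.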

AI & Tech (1)

  • Patrick Collison dismisses an executive survey claiming 80% see no AI value, suggesting leaders may be unaware of tool usage buried in workflows or not feeling the acceleration themselves.

Business (1)

  • Stripe Press sold 1.1 million books as of early 2026, having announced its millionth sale in the annual letter.