04-12-2026

The Frontier

Your signal. Your price.

AI & TECH

AI agents replace junior developers, pressuring software valuations

Sunday, April 12, 2026 · from 5 podcasts, 6 episodes
  • DHH's 37signals and other startups are using AI coding agents instead of hiring junior developers or QAs, shifting costs from salaries to token budgets.
  • Box CEO Aaron Levie warns enterprise adoption will lag due to security and permission risks, but agent-first API design is now mandatory.
  • A 'distillation' trend sees workers building AI to automate colleagues' jobs, while founders use agents to replace entire functions like marketing.

David Heinemeier Hansson’s pivot is a bellwether. Six months after criticizing AI tools, the 37signals CTO now describes engineering as “the high-level supervision of autonomous AI agents.” His team uses them to tackle projects previously deemed too time-consuming, effectively substituting agent oversight for junior developer labor.

This isn't just a productivity boost; it’s a direct swap. Ryan Carson, after closing a seed round, refused to hire human staff, deploying an AI agent as a chief of staff and preparing another for marketing. On This Week in Startups, he argued human employees are fallible and leave, while agents offer compounding improvements.

The economic model is shifting from fixed payroll to variable compute. Martin Casado noted on the a16z Podcast that infrastructure spend is exploding across his portfolio because AI enables a massive increase in software output. CFOs now face a new line item: the engineering compute budget for tokens, which directly impacts earnings.
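The shift from fixed payroll to metered compute can be made concrete with a back-of-the-envelope model. Every figure below is an illustrative assumption, not a number reported in the episodes:

```python
# Back-of-the-envelope comparison: junior-developer payroll vs. agent token spend.
# Every number here is an illustrative assumption, not a figure from the episodes.

def annual_agent_cost(tokens_per_day: int, price_per_million: float, workdays: int = 250) -> float:
    """Annual token spend for one supervised agent workload."""
    return tokens_per_day / 1_000_000 * price_per_million * workdays

# Assumed: an agent consuming 50M tokens/day at a blended $5 per million tokens.
agent = annual_agent_cost(tokens_per_day=50_000_000, price_per_million=5.0)

# Assumed fully loaded cost of one junior developer.
junior = 120_000.0

print(f"agent:  ${agent:,.0f}/yr")   # variable: scales with usage and token prices
print(f"junior: ${junior:,.0f}/yr")  # fixed: salary plus overhead
```

The point is not the specific totals but the cost structure: the agent line item moves with usage and with falling token prices, while payroll does not.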

“If 99% of your traffic comes from autonomous systems, the software’s value is no longer in how it looks, but how reliably an agent can navigate its logic.”

- Aaron Levie, The a16z Show

Enterprise adoption faces a wall of legacy risk. Aaron Levie argues that while a startup can deploy agents with total context, a bank like JPMorgan faces existential security threats from prompt injection or a rogue agent deleting data in a loop. He predicts a prolonged “read-only” era for corporate AI, where agents can report but not act, creating a massive agility gap.

The permission model is broken. Agents are legal extensions of their users, requiring total oversight and lacking a right to privacy. This breaks traditional role-based access controls, forcing a redesign of security frameworks before widespread deployment.
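To see concretely why role-based access breaks, consider a minimal delegation model in which the agent holds no role of its own: every action is authorized against the delegating user's permissions and logged to that user. All names and rules here are hypothetical:

```python
# Minimal sketch of agent permission delegation: the agent has no role of its
# own; every action is authorized against the human principal and audited.
# All names and rules here are hypothetical illustrations.

from dataclasses import dataclass, field

ROLE_PERMISSIONS = {          # classic RBAC table, keyed by the human's role
    "analyst": {"read"},
    "admin": {"read", "write", "delete"},
}

@dataclass
class Agent:
    principal: str            # the human user the agent legally extends
    role_of_principal: str
    audit_log: list = field(default_factory=list)

    def act(self, action: str, resource: str) -> bool:
        allowed = action in ROLE_PERMISSIONS[self.role_of_principal]
        # every attempt is logged against the principal, not the agent
        self.audit_log.append((self.principal, action, resource, allowed))
        return allowed

bot = Agent(principal="alice", role_of_principal="analyst")
print(bot.act("read", "q3_report"))     # inherited from alice
print(bot.act("delete", "q3_report"))   # alice cannot delete, so neither can her agent
```

Note what is missing compared to traditional RBAC: there is no separate service account for the agent, because granting it one would sever the legal chain of accountability back to the user.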

A darker trend is emerging in competitive job markets. On This Week in Startups, reports detailed a “distillation” trend in China, where employees build AI agents to perform their colleagues' tasks, aiming to make rivals redundant before layoffs. This turns the workplace into an automation arms race.

“He described the shift as a transition from manual labor to intoxicating supervision.”

- The Pragmatic Engineer, on DHH

The long-term lock-in risk is significant. Kanjun Qiu of Imbue warned on This Week in AI that closed AI agents let companies like Anthropic or OpenAI own a user’s data, memories, and workflows, creating a future where users rent back their digital lives. Her company is building open-source infrastructure to commoditize the model layer and preserve user sovereignty.

As execution commoditizes, value shifts upstream. DHH argues that with AI handling syntax, human taste in design becomes the scarce resource. Peter Yang echoed this, noting that AI lets founders “vibe code” internal tools, churning expensive SaaS subscriptions. The bottleneck is no longer writing code but defining what to build and ensuring it is correct, a shift that will pressure software companies built on labor-intensive services.

Source Intelligence

What each podcast actually said

#495 – Vikings, Ragnar, Berserkers, Valhalla & the Warriors of the Viking Age · Apr 9

Also from this episode:

History (14)
  • The Viking Age began with the raid on the monastic island of Lindisfarne on June 8, 793 AD. The attack was psychologically devastating because it violated the sacred sanctuary of the church, a core tenet of medieval European society.
  • Lars Brownworth states the Vikings were not full-time warriors but mostly farmers and merchants. The term 'Viking' likely derives from 'vík,' Old Norse for a bay or inlet, and was their activity, not their identity.
  • Viking longships were a revolutionary technology. They were clinker-built, could cross the Atlantic with a draft of less than two feet, and travel up shallow rivers. Their speed of 70-120 miles per day gave them a massive military advantage over land armies.
  • Brownworth argues terror was a deliberate Viking weapon. They would attack on high holy days like Easter and Christmas, used intelligence gathered while trading, and understood the Christian calendar to maximize impact and plunder.
  • The historical Ragnar Lothbrok is likely a composite figure drawn from several 9th-century leaders. His legendary sons, including Ivar the Boneless and Bjorn Ironside, were historical figures who led the Great Heathen Army that invaded England in 865.
  • Brownworth describes the Viking 'blood eagle' execution as a ritual where a victim's lungs were removed through the back while still alive, causing them to flutter like wings. It was reportedly performed on King Ælla of Northumbria.
  • Viking social and military organization was decentralized and meritocratic. A Viking famously told a Frankish ambassador 'we have no king, we are all kings,' reflecting a flat structure where leadership was earned through success and 'ring-giving.'
  • The Viking Rollo (Hrólfr) made the Treaty of Saint-Clair-sur-Epte with Frankish King Charles the Simple in 911, founding Normandy. Within a generation, his descendants shed Viking language and religion, integrating into Frankish culture but retaining a vital, ambitious spirit.
  • Brownworth views the Normans as the pivotal force that transformed a backward, inward-looking Europe into a confident, outward-looking civilization. They created powerful states like England and Sicily and led the First Crusade.
  • The Norse religion centered on an eternal struggle between order (gods) and chaos (monsters), which chaos would eventually win at Ragnarök. Odin was the god of elites, war, and poetry, while Thor was the god of farmers and common people.
  • Valhalla was the Viking afterlife for warriors who died in battle. There, they would fight all day, have their wounds healed at night, and feast endlessly, preparing for the final battle of Ragnarök.
  • Leif Erikson, son of Eric the Red, landed in North America around the year 1000, naming it Vinland. The Viking settlement at L'Anse aux Meadows lasted only about three years before conflicts with Native peoples and failure to adapt their farming practices led them to abandon it.
  • Swedish Vikings (Varangians) traveled east, establishing trade routes down Russian rivers to the Black and Caspian Seas. After failing to sack Constantinople, many joined the Byzantine Emperor's Varangian Guard, an elite mercenary unit.
  • Brownworth credits the Byzantine Empire with protecting Western Europe for centuries by acting as a buffer against eastern invasions. Its fall also helped jumpstart the Italian Renaissance, as Greek scholars fleeing Constantinople reintroduced classical knowledge to the West.
Media (1)
  • Lars Brownworth created '12 Byzantine Rulers,' one of the first history podcasts, in June 2005. He recorded it initially as a framework for his brother after realizing his spoken explanations of Byzantine history were unclear.
The Pragmatic Engineer

DHH's new way of writing code · Apr 9

  • DHH argues that aesthetically beautiful software is more likely to be correct, a principle he finds true in mathematics, physics, and other domains.
  • DHH switched from being skeptical of AI coding tools to using them extensively, a 180-degree turn in his workflow after a few weeks of experimentation.
  • AI agents allow his team to tackle internal projects they would never have started before, making engineers more ambitious and productive than ever.
  • He finds supervising AI agents for one hour can be highly effective and intoxicating, leading people to work harder than before.
  • DHH built the Linux distribution Omarchy from scratch on Arch and Hyprland as a personal itch-scratching project, and it quickly gained a community.
  • He sees Ruby on Rails having a renaissance due to its token efficiency, making it ideal for AI agent workflows that still require human-readable code.
  • DHH started programming on the internet in 1994 and began building Ruby on Rails in 2003 when he chose Ruby to build Basecamp without external mandates.
  • He believes your unique spin on an idea matters more than its novelty, proven by projects like Rails, Kamal, and Omarchy finding large audiences.

What's Left for Humans When AI Builds Everything? · Apr 8

  • Kanjun Qiu argues AI agents represent a dangerous future where companies like Anthropic or OpenAI, once they own a user's data, memories, and life's work, can exert excessive influence and lock users into their ecosystems.
  • Kanjun Qiu's company Imbue is building open-source infrastructure to run agents in parallel, aiming to commoditize the underlying model layer and give users control to swap out providers and retain their data.
  • Carina Hong argues that verifying AI-generated code is critical for safety, citing the formally verified Paris Métro automatic switching system and the European Space Agency's Ariane rocket as precedents.
  • Hong's company Axiom built an AI mathematician that achieved a perfect score (120/120) on the Putnam exam, the first AI to do so in the competition's 100-year history.
  • Jonathan Siddharth says Turing sells specialized data to frontier AI labs to train models on coding, enterprise workflows, and STEM tasks, then uses insights from enterprise deployments to create a feedback loop for model improvement.
  • The group discusses Anthropic's explosive revenue growth to a $30 billion run rate, which reportedly surpassed OpenAI's token sales, driven largely by its strength in AI-assisted coding tools like Claude Code.
  • Siddharth and Hong assert that training AI models on code improves their general reasoning abilities, likely because coding provides clear, verifiable feedback and teaches algorithmic, step-by-step thinking.
  • Kanjun Qiu says Imbue's engineering workflow has been transformed by coding agents, with one team lead autonomously generating 60-70 pull requests overnight, drastically increasing code output.
  • Siddharth describes automating the CEO role at Turing by building a 'virtual chief of staff' AI that aggregates data from Salesforce, Jira, and GitHub to create executive briefs on company status.
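The "virtual chief of staff" pattern Siddharth describes can be sketched as a simple aggregator. The data sources and fields below are hypothetical stand-ins, not Turing's actual integration:

```python
# Minimal sketch of a 'virtual chief of staff' brief generator: pull status
# snapshots from several systems and merge them into one executive summary.
# The sources and fields here are hypothetical stand-ins, not Turing's
# actual integration.

def build_brief(snapshots: dict) -> str:
    """Merge per-system status snapshots into a one-line-per-source brief."""
    lines = ["Executive brief:"]
    for source, status in sorted(snapshots.items()):
        summary = ", ".join(f"{k}={v}" for k, v in status.items())
        lines.append(f"- {source}: {summary}")
    return "\n".join(lines)

# Hypothetical snapshots a scheduler would refresh from each system's API.
brief = build_brief({
    "salesforce": {"open_deals": 42, "at_risk": 3},
    "jira": {"sprint_done_pct": 78},
    "github": {"open_prs": 15, "failing_ci": 2},
})
print(brief)
```

In a real deployment each snapshot would come from an authenticated API pull, and the agent's job is summarization and anomaly flagging on top of this merge, not the data plumbing itself.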

Also from this episode:

AI & Tech (5)
  • Siddharth claims there is unlimited demand for high-quality training data as models improve, requiring hiring expert humans across industries to generate data for imitation or reinforcement learning.
  • The hosts critique Meta's reported internal policy of measuring team output by tokens consumed, which Kanjun Qiu says leads to gaming the system, like writing bots to burn tokens in a loop.
  • The group debates workplace surveillance, with Jason Calacanis arguing that tracking work computers is necessary for elite performance and security, drawing a parallel to NBA teams monitoring player biometrics.
  • Kanjun Qiu warns of a default future path where verticalized AI companies (OpenAI, Anthropic, Google) lock users in, renting back their 'digital selves,' versus an open-source path where users own and control their agents.
  • Carina Hong envisions a future with 'a billion AI mathematicians' accelerating discovery, shortening the timeline from mathematical breakthrough to applied science from centuries to days.

The Agent Era: Building Software Beyond Chat with Box CEO Aaron Levie · Apr 8

  • Aaron Levie argues that the diffusion of AI capability across enterprises will be slower than Silicon Valley expects, citing entrenched domain knowledge in systems like SAP and new security and operational complexities.
  • The central enterprise question is how to build software for a future where AI agents outnumber human users by factors of 100 or 1000 to one. This shifts focus to designing robust APIs, access controls, and monetization for agents.
  • A successful emerging paradigm gives coding agents access to SaaS tools and internal workflows, enabling them to both read information and use APIs or write code to execute tasks. This is exemplified by tools like OpenAI's 'super app' and Perplexity Computer.
  • Steve Sinofsky observes that agents do not seek simpler interfaces but choose backends based on cost, durability, and reliability. He contends the industry's focus on marketing to agents via APIs is wrong, as agents select systems based on underlying quality, not interface polish.
  • A major operational challenge is coordinating thousands of autonomous agents acting on shared systems, like a Box repository, which risks creating conflicting operations, performance issues, and security vulnerabilities that CFOs and CIOs must manage.
  • The permission model for agents is complex. While the 'end-to-end argument' suggests treating them like separate humans with their own accounts, agents are legally extensions of their users, requiring full oversight and lacking a right to privacy, which breaks traditional RBAC models.
  • Current AI agents struggle with information containment, as data in the context window can potentially be extracted via prompt injection. This makes it difficult to securely grant agents access to highly confidential resources like M&A data rooms.
  • Sinofsky predicts a widening gap in adoption speed between startups, which can adopt agents freely, and large enterprises like JP Morgan, which face significant legacy system and risk constraints, slowing AI diffusion.
  • There is tension between legacy SaaS vendors and the agent ecosystem, as agents want unlimited API access to data for operations, while vendors have traditionally monetized intelligence and domain expertise through UI-based subscriptions, not pure data licensing.
  • Martin Casado notes that every infrastructure company in his portfolio of about 50 has seen asymptotic growth in the last six months due to an unprecedented increase in software being written, driven by AI agent development.
  • A key friction is the current high cost of tokens, which pushes the industry toward usage-based pricing. This creates a short-term budgeting nightmare for engineering teams deciding between experimental waste and perfect optimization.

Also from this episode:

AI & Tech (3)
  • The engineering compute budget for AI tokens is becoming a critical financial debate. CFOs must decide what percentage of R&D spend should go to tokens, a decision that directly impacts earnings per share given R&D typically constitutes 14% to 30% of tech company revenue.
  • Sinofsky argues Wall Street is mis-modeling the AI economic opportunity by assuming a fixed revenue pie. He draws parallels to the PC and cloud eras, where new usage models created demand orders of magnitude larger than initially projected.
  • Sinofsky contends the token cost issue is transitional, comparing it to historical transitions like mainframe MIPS pricing. He believes the cost will plummet due to increased supply, algorithmic improvements, or hardware changes, making compute abundant.
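The EPS stakes in the CFO debate above follow from simple arithmetic. The inputs below are hypothetical round numbers chosen within the ranges cited:

```python
# Illustrative arithmetic for the CFO debate: how a token budget carved out of
# R&D flows through to earnings. All inputs are hypothetical round numbers.

revenue   = 10_000_000_000   # $10B annual revenue
rd_share  = 0.20             # R&D at 20% of revenue (within the 14-30% range cited)
token_pct = 0.10             # 10% of the R&D budget reallocated to tokens
shares    = 1_000_000_000    # shares outstanding

token_budget = revenue * rd_share * token_pct   # the new token line item
eps_impact   = token_budget / shares            # pre-tax EPS drag if the spend
                                                # is purely additive to R&D
print(f"token budget: ${token_budget / 1e6:.0f}M")
print(f"EPS impact:   ${eps_impact:.2f}/share")
```

Under these assumptions a $200M token budget costs $0.20 of pre-tax EPS, which is why the decision lands on the CFO rather than the engineering manager; if tokens substitute for salaries instead of adding to R&D, the net drag shrinks accordingly.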

Peter Yang on Small Teams, Coding Agents, and Why Human Ambition Has No Ceiling · Apr 6

  • Peter Yang argues that coding, through agents, will consume all knowledge work as the technology allows for direct task automation. He points to tools like Lovable and Replit as examples of this trend.
  • OpenClaw's primary appeal for Yang is its personal interface, which he estimates is 80% of its value. The mobile messaging and voice features make it feel more human than traditional AI chatbots.
  • Yang believes applications used for completing specific tasks will decline first as users shift to asking agents to perform those tasks directly. He sees this as more efficient than opening separate apps.
  • He argues that large companies become worse places to work due to alignment overhead. Yang hopes the rise of agents allows more companies to stay small with tiny product teams augmented by AI.
  • For content creation, Yang's workflow now begins with AI generating the first 80% of a document. He then provides feedback and edits to refine the output rather than starting from a blank page.
  • Coding agents create a variable-schedule reward system similar to social media, where the time to complete a task and the quality of output are unpredictable. Yang compares this dynamic to a slot machine.
  • He observes that product managers in large corporations aspire to be creators and innovators, but most lack the skill. Many PMs are now learning to code with AI tools on nights and weekends.
  • Yang sees a shift where a tough job market pushes people toward entrepreneurship. He views agents and no-code tools as enabling solopreneurs to build small, viable businesses.
  • The emerging agent stack includes new primitives for identity, payments, marketing, and connections like MCP. Yang and Anish Acharya agree this requires a new playbook beyond traditional SaaS models.
  • He distinguishes between Claude Code for exploratory, chatty coding and Cursor for more precise, thoughtful work. He finds Claude Code's UI features, like pasting screenshots directly, superior for flow.
  • Acharya sees AI products rarely achieving 100% automation of a job. Most provide dramatic productivity lift but leave a final percentage for humans, making them expensive software rather than cheap labor.
  • OpenClaw's default memory system uses a daily-updated text file and is prone to forgetting. Yang uses a complex third-party memory system to improve recall by forcing the agent to search before answering.

3 AI Agents That Actually Replaced Human Jobs | E2272 · Apr 7

  • Ryan Carson used funding from a closed seed round not to hire people, but to deploy his AI agent 'Claw Chief' as a chief of staff and is preparing another to act as marketing manager.
  • Alex Finn argues the corporate strategy of automating co-workers is misguided. He advocates using AI agents to automate one's own role to build an external business, thereby escaping corporate constraints.
  • Ryan Carson disclosed that running his 'Claw Chief' agent on Claude Opus for one day would cost between $100 and $200, highlighting the massive subsidies and cash burn by AI labs for power users.
  • A method called 'Caveman Claude', which reduces prompt token use by 75% by stripping language to basic verbs, went viral. Own Patel demonstrated it could complete a web search task using only 45 tokens versus 180.
  • Yazin Ali Raheem demoed 'Sidecast', an AI sidebar for live podcasts that uses personas like a fact-checker and archivist to provide real-time insights and citations during a broadcast.
  • Ryan Carson open-sourced 'Claw Chief', an OpenClaw protocol designed to function as an executive assistant. It uses cron jobs and detailed skill markdown files to autonomously handle email, scheduling, and business development.
  • Brex built a system called 'Crab Trap' where one LLM monitors another agent's network traffic in real-time, intercepting and blocking harmful actions before they execute, creating an adversarial safety layer.
  • Alex Finn announced 'Henry Intelligent Machines', a system of autonomous agent swarms that scour sites like Reddit and X to identify business challenges, then autonomously build and launch ventures to solve them.
  • OpenClaw released a new version with a 'dreaming' feature that consolidates memories overnight, analogous to human sleep, and is reportedly optimized for GPT-5.4.
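The 'Caveman' idea of stripping prompts down to bare content words can be sketched as a toy filter. The actual viral method is not reproduced here, and a crude whitespace word count stands in for real token counting:

```python
# Toy illustration of 'Caveman'-style prompt compression: drop filler words and
# keep only content-bearing terms. The real method and its reported token
# counts are not reproduced here; word counts are a crude stand-in for tokens.

FILLER = {
    "please", "could", "you", "kindly", "i", "would", "like", "to", "the",
    "a", "an", "for", "me", "and", "then", "of", "about", "some",
}

def caveman(prompt: str) -> str:
    """Strip filler words, keeping only content-bearing terms in order."""
    kept = [w for w in prompt.split() if w.lower().strip(",.") not in FILLER]
    return " ".join(kept)

verbose = ("Could you please search the web for me and then summarize "
           "some recent news about token pricing")
terse = caveman(verbose)

print(terse)
print(len(verbose.split()), "->", len(terse.split()), "words")
```

The savings compound because the stripped prompt is sent on every agent turn; production systems would use a real tokenizer and a learned or curated stop list rather than this fixed set.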

Also from this episode:

Enterprise (1)
  • Jason Calacanis notes a counternarrative to AI-driven job loss, citing Marc Andreessen's tweet that AI-driven productivity gains will create a massive jobs boom, but believes it will still require fewer humans in the loop.
AI Infrastructure (3)
  • Anthropic announced it will stop allowing Claude subscriptions to cover third-party tool access like OpenClaw, switching to a pay-as-you-go API model. Exec Boris Cherny cited unsustainable usage patterns and a need to prioritize direct customers.
  • Alex Finn predicts AI labs like Anthropic and OpenAI will introduce $2,000 per month consumer subscription plans within the year, arguing they have hooked users on productivity and will now appropriately price it.
  • Jason Calacanis forecasts the LLM industry's total investment 'J-curve' will reach $500 billion, which companies must repay by becoming profitable within three to four years.
Models (1)
  • Alex Finn argues that model quality is the only metric that matters for AI companies, citing how people still use Claude Opus despite Anthropic's poor developer relations because it remains the best model.