David Heinemeier Hansson’s pivot is a bellwether. Six months after criticizing AI tools, the 37signals CTO now describes engineering as “the high-level supervision of autonomous AI agents.” His team uses them to tackle projects previously deemed too time-consuming, effectively substituting agent oversight for junior developer labor.
This isn’t just a productivity boost; it’s a direct swap. Ryan Carson, after closing a seed round, refused to hire human staff, deploying an AI agent as a chief of staff and preparing another for marketing. On This Week in Startups, he argued human employees are fallible and leave, while agents offer compounding improvements.
The economic model is shifting from fixed payroll to variable compute. Martin Casado noted on the a16z Podcast that infrastructure spend is exploding across his portfolio because AI enables a massive increase in software output. CFOs now face a new line item: the engineering compute budget for tokens, which directly impacts earnings.
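The shape of that new line item is easy to sketch. The following Python snippet is purely illustrative: the per-token prices and usage figures are assumptions, not numbers from any vendor or from the podcast.

```python
# Hypothetical pricing: illustrative only, not any provider's real rates.
INPUT_PRICE_PER_MTOK = 3.00    # $ per million input tokens (assumed)
OUTPUT_PRICE_PER_MTOK = 15.00  # $ per million output tokens (assumed)

def monthly_token_spend(num_agents: int,
                        input_mtok_each: float,
                        output_mtok_each: float) -> float:
    """Estimate a monthly 'engineering compute budget' in dollars.

    Unlike payroll, this cost is variable: it scales with how many
    agents run and how many tokens each one consumes per month.
    """
    per_agent = (input_mtok_each * INPUT_PRICE_PER_MTOK
                 + output_mtok_each * OUTPUT_PRICE_PER_MTOK)
    return num_agents * per_agent
```

Under these assumed rates, 20 agents each consuming 500M input and 100M output tokens a month would cost $60,000 — a payroll-sized number that now moves with usage rather than headcount.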
“If 99% of your traffic comes from autonomous systems, the software’s value is no longer in how it looks, but how reliably an agent can navigate its logic.”
- Aaron Levie, The a16z Show
Enterprise adoption faces a wall of legacy risk. Aaron Levie argues that while a startup can deploy agents with total context, a bank like JPMorgan faces existential security threats from prompt injection or a rogue agent deleting data in a loop. He predicts a prolonged “read-only” era for corporate AI, where agents can report but not act, creating a massive agility gap.
The permission model is broken. Because an agent acts as a legal extension of its user, it requires total oversight and has no expectation of privacy. This breaks traditional role-based access control: an agent running on a user’s delegated credentials inherits every permission that user holds, far more than the task requires, forcing a redesign of security frameworks before widespread deployment.
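The gap can be made concrete with a minimal sketch, entirely hypothetical (the names `Principal`, `AgentDelegation`, and `ROLE_PERMISSIONS` are illustrative, not any vendor’s API). A plain RBAC check grants the agent everything its human principal can do; Levie’s “read-only era” amounts to a second layer that caps delegated credentials at reporting actions regardless of the user’s roles:

```python
from dataclasses import dataclass

# Actions an agent may take in the "read-only era": report, never act.
READ_ACTIONS = {"read", "list", "report"}

# Traditional RBAC: permissions attach to roles, roles attach to users.
ROLE_PERMISSIONS = {
    "analyst": {"read", "list", "report", "write"},
}

@dataclass
class Principal:
    name: str
    roles: set

@dataclass
class AgentDelegation:
    """An agent credential derived from a human user's session."""
    user: Principal
    read_only: bool = True  # default agents to report-only

def is_allowed(delegation: AgentDelegation, action: str) -> bool:
    # Layer 1 -- classic RBAC: does any of the user's roles grant the action?
    user_permits = any(action in ROLE_PERMISSIONS.get(role, set())
                       for role in delegation.user.roles)
    if not user_permits:
        return False
    # Layer 2 -- the part RBAC alone lacks: cap the delegated agent at
    # read-only, even when the human principal could act directly.
    if delegation.read_only and action not in READ_ACTIONS:
        return False
    return True
```

With this scheme, an agent delegated by an analyst can `read` but is refused `write`, while the analyst herself retains full access; the point is that the restriction lives on the delegation, not on the role.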
A darker trend is emerging in competitive job markets. On This Week in Startups, reports detailed a “distillation” trend in China, where employees build AI agents to perform their colleagues' tasks, aiming to make rivals redundant before layoffs. This turns the workplace into an automation arms race.
“He described the shift as a transition from manual labor to intoxicating supervision.”
- The Pragmatic Engineer, on DHH
The long-term lock-in risk is significant. Kanjun Qiu of Imbue warned on This Week in AI that closed AI agents let companies like Anthropic or OpenAI own a user’s data, memories, and workflows, creating a future where users rent back their digital lives. Her company is building open-source infrastructure to commoditize the model layer and preserve user sovereignty.
As execution commoditizes, value shifts upstream. DHH argues that with AI handling syntax, human taste in design becomes the scarce resource. Peter Yang echoed this, noting that AI lets founders “vibe code” internal tools and churn expensive SaaS subscriptions. The bottleneck is no longer writing code, but defining what to build and ensuring it’s correct: a shift that will pressure software companies built on labor-intensive services.