The ‘AI subsidy’ era is over. Six weeks after a prior report predicted AI’s disruption of corporate moats, the trend is playing out in real time.
Intercom CEO Eoghan McCabe just claimed the top spot in customer service AI. His new Apex model reportedly beats GPT-4 and Opus 4.5 on resolution rates while slashing costs. Cursor followed a similar path in coding: its Composer 2 model, built on open-source weights, now rivals Anthropic's Opus 4.6.
“We're seeing a shift from the generalist era to the vertical era.”
- Nathaniel Whittemore, The AI Daily Brief
This is a classic disruption pattern. Frontier labs like OpenAI over-serve the market with expensive, general intelligence, while leaner companies pick off profitable verticals. According to Nathaniel Whittemore on The AI Daily Brief, companies like Pinterest and Notion already find it faster and cheaper to train and run open models themselves.
The source of power has shifted. Andrej Karpathy predicted this speciation of AI models, in which smaller, task-specific models thrive. The key is not raw compute or internet-scale data, but high-quality 'last-mile' interaction data that frontier labs cannot access. Intercom's Apex trains on proprietary customer service logs, and Cursor's Composer 2 leverages developer feedback loops.
This redefines the 'bitter lesson', Rich Sutton's axiom that general methods leveraging computation beat human-designed knowledge. Sutton now argues that learning from experience is the next phase. Vertical winners are scaling learning on human feedback, not just scaling parameters on static text.
The economic logic is clear. Decagon co-founder Ashwin Sreenivas reports that 80% of the company's traffic runs on internal models. Paying the API tax to resell another company's compute is now a luxury, not a necessity. The frontier labs face a choice: build cheaper specialized models themselves, or watch their most lucrative customers walk away.

