The era of dumping 5,000 tokens of instructions into an AI at once is over. Nathaniel Whittemore on The AI Daily Brief calls it 'context bloat' - a 2025 hangover in which agents slowed to a crawl and then failed under the weight of their own instructions. The fix isn't better prompting. It's software engineering.
Anthropic’s Claude Code team pioneered a shift to modular 'Agent Skills' using progressive disclosure. Instead of loading every rule upfront, agents now parse lightweight metadata and pull in specific skills - markdown files, scripts, or assets - only when needed. This keeps context clean and execution focused. OpenAI and GitHub Copilot have already adopted similar architectures.
"The era of the massive system prompt is over."
- Nathaniel Whittemore, The AI Daily Brief
Reliability is now the bottleneck. Whittemore notes that model updates routinely break old prompts, making maintenance unsustainable. Anthropic’s new Skill Creator tool addresses this by automatically rewriting vague skill descriptions, improving triggering accuracy in five out of six tests. It also enables A/B testing and benchmarking - treating skills like software, not text.
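Skill Creator's internals aren't public, but the "skills as software" idea is easy to picture: score each description variant against labeled test prompts and keep the one that triggers correctly more often. The harness below is a hypothetical sketch of that loop; the keyword-overlap trigger stands in for the model's actual skill-selection step.

```python
# Hypothetical A/B harness for skill-description triggering accuracy.
# Not Anthropic's API; the trigger function stands in for a model call.
def triggering_accuracy(description, cases, trigger):
    """Fraction of (prompt, should_fire) cases the trigger gets right."""
    hits = sum(trigger(description, prompt) == expected
               for prompt, expected in cases)
    return hits / len(cases)

def keyword_trigger(description, prompt):
    """Crude stand-in for model skill selection: word overlap >= 2."""
    desc_words = set(description.lower().split())
    return len(desc_words & set(prompt.lower().split())) >= 2

cases = [
    ("summarize this quarterly sales report", True),
    ("summarize the sales figures", True),
    ("write a poem about spring", False),
]
vague = "helps with documents"
specific = "summarize sales report documents into key figures"
score_a = triggering_accuracy(vague, cases, keyword_trigger)
score_b = triggering_accuracy(specific, cases, keyword_trigger)
```

Here the sharper description wins on the same test set - the same benchmark-and-compare discipline you'd apply to any other piece of software.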
The mental model is shifting from one-off conversations to libraries of repeatable capabilities. Notion now lets users turn any page into a reusable skill. Replit’s Amjad Masad sees the same trend: AI agents now perform at the level of a mid-level Google engineer, but only if they don’t get lost in their own instructions. The advantage now belongs to those who manage agents like codebases.
"Building true wealth requires prioritizing equity over high-salary 'bullshit work.'"
- Amjad Masad, The a16z Show
Prompts are no longer disposable. They’re durable assets. Whether it’s a custom workflow in Notion or a verification skill that checks an agent’s output, the goal is to teach the AI once and reuse it everywhere. The best skills encode human preferences - team processes, tone, decision logic - not just technical functions. These don’t expire when models improve.
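A verification skill of the kind described above can be as plain as a reusable checklist run against an agent's output before it ships. The check names and rules below are illustrative assumptions, not a published format - the durable asset is the encoded team preference, not the code.

```python
# Hypothetical 'verification skill': reusable checks applied to an
# agent's output. Check names and rules are illustrative assumptions.
def verify_output(text: str) -> list[str]:
    """Return the names of the checks this output fails."""
    checks = {
        "nonempty": lambda t: bool(t.strip()),
        "no_placeholder": lambda t: "TODO" not in t and "TBD" not in t,
        "has_summary": lambda t: t.lower().startswith("summary:"),
    }
    return [name for name, ok in checks.items() if not ok(text)]
```

Teach the AI the checklist once, and every future agent run can be gated on it.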