Six weeks after the Pentagon blacklisted Anthropic over its refusal to support military AI, the company has quietly pivoted from model scale to agentic infrastructure. Instead of unveiling a new flagship model, Developer Day focused on Dreaming - a memory system that lets agents review past sessions and extract reusable patterns, effectively giving them REM-like consolidation cycles between tasks.
The shift reflects a plateau in raw model performance. According to Nathaniel Whittemore on The AI Daily Brief, Anthropic now prioritizes "harnesses" over horsepower: tools that make agents reliable in production. Dreaming allows AI workers to report not just outputs, but what they learned. Paired with "Outcomes," a system where independent agents grade the work of others using user-defined rubrics, the setup removes the human bottleneck in feedback loops.
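The two systems described above can be sketched in miniature. The snippet below is purely illustrative: the data structures, function names, and thresholds are assumptions, not Anthropic's actual API. It shows the shape of a "Dreaming"-style consolidation pass (mine past sessions for steps that recur in successes) and an "Outcomes"-style grader (score work against a user-defined rubric with no human in the loop).

```python
from dataclasses import dataclass

# Hypothetical sketch only: names and structures are illustrative,
# not Anthropic's real "Dreaming" or "Outcomes" implementations.

@dataclass
class Session:
    task: str
    steps: list[str]
    succeeded: bool

def consolidate(sessions: list[Session]) -> list[str]:
    """'Dreaming'-style pass: keep steps that recur across successful sessions."""
    counts: dict[str, int] = {}
    for s in sessions:
        if not s.succeeded:
            continue  # only successful sessions contribute reusable patterns
        for step in set(s.steps):
            counts[step] = counts.get(step, 0) + 1
    # Treat a step as a reusable pattern if it appears in >= 2 successes.
    return sorted(step for step, n in counts.items() if n >= 2)

def grade(output: str, rubric: dict) -> tuple[float, list[str]]:
    """'Outcomes'-style grading: score output against user-defined checks."""
    passed = [name for name, check in rubric.items() if check(output)]
    return len(passed) / len(rubric), passed

sessions = [
    Session("fix bug", ["read logs", "write test", "patch"], True),
    Session("add feature", ["write test", "patch", "update docs"], True),
    Session("refactor", ["read logs", "guess"], False),
]
patterns = consolidate(sessions)   # recurring steps from successful runs

rubric = {
    "mentions_test": lambda out: "test" in out,
    "mentions_patch": lambda out: "patch" in out,
}
score, passed = grade(" ".join(patterns), rubric)
```

The key property mirrored here is that both the pattern extraction and the grading run without human review: the rubric is authored once, then applied automatically.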
This moves Anthropic closer to recursive self-improvement. On This Week in AI, co-founder Jack Clark gave a 60% probability that AI will train its own successors without humans by 2028. Naveen Rao of Unconventional AI believes it could happen even sooner - by 2027. The infrastructure is already in place: intent translation, automated testing, and now memory consolidation. What’s emerging is a closed loop where AI builds, tests, and improves itself.
"Once a species builds a system that learns to build itself, the traditional innovation curve becomes irrelevant."
- Naveen Rao, This Week in AI
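The closed loop the article describes, build, test, improve, without a human feedback step, reduces to a simple control structure. The toy below is an assumption-laden stand-in (the `improve` and `evaluate` functions are placeholders for a model and a test harness, not any lab's pipeline), but it captures why removing the human grader closes the loop.

```python
# Illustrative closed self-improvement loop. All names are hypothetical
# stand-ins: improve() plays the model, evaluate() plays automated testing.

def improve(candidate: int, feedback: int) -> int:
    """Stand-in for a model revising its output based on feedback."""
    return candidate + feedback  # toy: apply the correction directly

def evaluate(candidate: int, target: int) -> int:
    """Stand-in for automated testing: distance from the target spec."""
    return abs(target - candidate)

def closed_loop(start: int, target: int, max_iters: int = 20) -> int:
    candidate = start
    for _ in range(max_iters):
        if evaluate(candidate, target) == 0:
            break  # tests pass: loop terminates with no human sign-off
        # Feedback is generated by the system itself, not a reviewer.
        candidate = improve(candidate, 1 if candidate < target else -1)
    return candidate

result = closed_loop(0, 5)  # converges to the target without intervention
```

The structural point is the absence of any step that blocks on a person: evaluation, feedback, and revision are all machine-generated, which is exactly what "Dreaming" plus "Outcomes" would provide.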
The compute needed to run this system was previously a constraint: Anthropic’s annualized growth had plateaued at 80x due to rate limits. But in a surprise move, Elon Musk folded xAI into SpaceX and handed Anthropic full access to Colossus 1, a 220,000-GPU data center. Musk, who reportedly spent a week vetting the team, called it a "marriage of convenience" - Anthropic gets scale, Musk gets a tenant for his GPU warehouse.
Musk’s pivot from model-building to infrastructure signals a strategic retreat. Grok has stalled, and rather than double down on software, he’s betting on speed of construction and physical deployment. By positioning SpaceX as a "Neocloud," he aligns with Nvidia’s role in the stack, not OpenAI’s. The long-term play may include orbital data centers, bypassing terrestrial power limits.
Meanwhile, energy is becoming the defining bottleneck. Morgan Stanley forecasts hyperscaler capex will hit $1.1 trillion by 2027, driven less by compute logic than by electricity costs. Naveen Rao argues current architectures are too inefficient to reach biological-scale intelligence affordably. His firm is rethinking computing from first principles to cut cost per token by up to four orders of magnitude.
"The winner won't just have the best model, but the most efficient way to power it."
- Naveen Rao, This Week in AI
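For scale, "four orders of magnitude" means dividing cost per token by 10,000. The arithmetic below uses an assumed baseline price purely for illustration, not a quoted figure from any provider.

```python
# Back-of-envelope arithmetic for the stated efficiency target.
# The baseline price is an assumption chosen for illustration.

baseline_cost = 10.0            # dollars per million tokens, assumed
reduction_factor = 10 ** 4      # "four orders of magnitude"
target_cost = baseline_cost / reduction_factor
# target_cost is 0.001 dollars, i.e. a tenth of a cent per million tokens
```

At that price point, the marginal cost of running agents around the clock becomes negligible next to the electricity bill, which is the article's point about where the bottleneck moves.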
Anthropic’s path avoids both the military entanglements of Palantir and the cloud dependency of startups. By focusing on self-upgrading agents and securing independent compute, it’s building a system that doesn’t rely on external validation cycles - or human oversight.

