AI is on the verge of automating its own inventors. Anthropic co-founder Jack Clark puts a 60% probability on AI research and development operating without human input by the end of 2028. Within his own company, many engineers have already shifted from writing code to auditing Claude’s output.
"Once a species builds a system that learns to build itself, the traditional innovation curve becomes irrelevant."
- Naveen Rao, This Week in AI
The technology is already proving its utility in high-stakes fields. A study at Boston’s Beth Israel Deaconess Medical Center found OpenAI’s o1 model identified correct diagnoses 67% of the time, while emergency room physicians averaged 50-55%. The bottleneck to deployment is legal, not technical. As Trey Halterman noted on This Week in AI, human doctors hold the licenses and the liability, creating a buffer between superior AI diagnostics and patient treatment.
Adoption is being forced through a new channel: private equity. OpenAI’s $10 billion venture with TPG and Anthropic’s $1.5 billion deal with Blackstone, discussed on Moonshots, let investment firms mandate AI integration across thousands of portfolio companies. This strategy bypasses corporate middle management entirely, accelerating a forced transition to automated workflows.
Regulators are scrambling to catch up. The White House, reacting to models like Claude Mythos that can discover cybersecurity vulnerabilities the government cannot, is signaling a shift toward mandatory pre-release model vetting. This move creates a tension between national security and competitive speed, with some warning it could cause the U.S. to fall behind.
The financial scale of the race is unprecedented. Morgan Stanley forecasts that hyperscaler spending on AI compute will hit $1.1 trillion by 2027, a surge driven by the new primary constraint: energy scarcity. The winner won’t just have the best model, but the most efficient way to power it.