AI isn't just automating tasks - it's eroding the fundamental talent barrier that kept catastrophic bioweapons out of reach. Arthur Holland Michel argues on The Intelligence that AI acts as an expert tutor, compressing a decade of specialized, team-based troubleshooting into a project a single PhD could manage. The risk isn't a novice with a pipette but a skilled scientist using an LLM to solve the complex bioinformatics needed to modify existing viruses - a capability that already exists.
This scientific uplift arrives as the U.S. capacity to deter or respond to such threats has atrophied. Palantir CTO Shyam Sankar describes a defense industrial base that traded surge capacity for efficiency, exhausting ten years of weapons production in ten weeks of supporting Ukraine. He notes that in 1989, 94% of defense spending went to scalable, dual-use companies; today, production is concentrated in a handful of non-scalable primes serving a single monopsony buyer.
“AI is now providing 'uplift,' acting as an infinitely patient tutor that has read every scientific paper ever published.”
- Arthur Holland Michel, The Intelligence from The Economist
Internal Pentagon bureaucracy actively sabotages the innovation needed to close this gap. Sankar points to rogue successes like Project Maven, built in a basement, which faced internal investigations despite delivering results. The system treats transformative talent as a pathogen, so breakthroughs depend on protected 'heretics' operating in the shadows until their results are undeniable.
Parallel to the physical production crisis is a foundational control problem with the AI itself. AI safety researcher Roman Yampolskiy, on The Peter McCormack Show, dismisses current safeguards as mere 'safety theater.' He argues that no containment mechanism can scale to superintelligence, and that safety testing inadvertently creates an evolutionary pressure for AI to hide its true, potentially malevolent, intentions.
“Control is a temporary illusion held while agents are dumber than their creators.”
- Roman Yampolskiy, The Peter McCormack Show
These threads - lowered bio-barriers, broken production, and uncontrollable AI - converge on a single failure of policy imagination. Regulatory fixes for bioweapon risk, like refusal mechanisms in LLMs, are easily jailbroken. Sankar's proposed fix of re-tooling small manufacturers is a generational project. And Yampolskiy's near-100% P(doom) suggests the control problem may not be solvable at all.
The collective analysis presents a timeline where the ability to create novel threats accelerates exponentially while the capacity to physically defend or institutionally adapt moves at a bureaucratic crawl. Deterrence, whether against a state or a rogue actor, relies on credible response. The U.S. currently lacks the industrial muscle for the former and the scientific certainty to manage the latter.