Driver-assist software will not simply evolve into full autonomy.
That’s the core argument from Waymo's Dmitri Dolgov on The a16z Show. He claims there is a fundamental split in the industry. While some competitors believe L2 systems will bridge the gap to L4 autonomy through data accumulation, Dolgov views them as qualitatively different problems that require a different technical foundation from day one.
Waymo's foundation is a hierarchy of AI models. Dolgov describes a massive off-board model that acts as a source of truth about the physical world. This “teacher” model trains three specialized “student” AIs: one that drives the car in real time, one that generates simulation environments for testing, and a “Critic” that evaluates performance and provides feedback. By distilling the large teacher's knowledge into compact students, the architecture puts high-capacity intelligence on vehicle hardware without round-trip cloud latency.
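The teacher-student pattern Dolgov describes is, at its core, knowledge distillation: a small model is trained to match the softened output distribution of a large one. A minimal sketch of that objective, with toy logits and a temperature-scaled softmax (all names and numbers here are illustrative; Waymo's actual models and training setup are not public):

```python
import numpy as np

def softmax(logits, temperature=1.0):
    """Temperature-scaled softmax; higher T produces softer targets."""
    z = np.asarray(logits, dtype=float) / temperature
    z -= z.max()  # numerical stability
    e = np.exp(z)
    return e / e.sum()

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    """KL(teacher || student) over temperature-softened distributions:
    the standard objective for training a compact student to imitate
    a large teacher. Zero when the student matches the teacher exactly."""
    p = softmax(teacher_logits, temperature)  # soft targets from teacher
    q = softmax(student_logits, temperature)  # student's current output
    return float(np.sum(p * (np.log(p) - np.log(q))))

# Toy example: teacher is confident about class 0, student is uniform.
teacher = [4.0, 1.0, 0.5]
student = [1.0, 1.0, 1.0]
loss = distillation_loss(student, teacher)
```

The loss shrinks as the student's distribution approaches the teacher's, which is how a compact onboard model can inherit behavior from a model far too large to run in the car.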
This philosophy extends directly to hardware. Waymo insists on a triad of LiDAR, radar, and cameras, rejecting the vision-only approach. Dolgov argues each sensor covers for the failure modes of the others. He offered a concrete example: a Waymo car detected a pedestrian’s feet under a bus by bouncing LiDAR pulses off the pavement, predicting the person’s path before they were visible to the camera.
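The redundancy argument has a simple logical shape: detections are fused as a union, not an intersection, so a confident hit from any one modality survives a miss by the other two, as in the pedestrian-under-the-bus example. A hypothetical sketch (sensor names, confidences, and the threshold are illustrative, not Waymo's actual fusion logic):

```python
def fuse(detections, threshold=0.5):
    """detections: dict mapping object id -> {sensor: confidence}.
    Flag an object if ANY sensor is confident. Union semantics mean a
    failure mode in two sensors cannot veto a hit from the third."""
    flagged = {}
    for obj, per_sensor in detections.items():
        best_sensor, best_conf = max(per_sensor.items(), key=lambda kv: kv[1])
        if best_conf >= threshold:
            flagged[obj] = best_sensor  # record which modality caught it
    return flagged

# Camera and radar miss the occluded pedestrian; LiDAR alone flags them.
scene = {
    "pedestrian": {"camera": 0.05, "radar": 0.10, "lidar": 0.90},
    "debris":     {"camera": 0.30, "radar": 0.10, "lidar": 0.20},
}
result = fuse(scene)
```

A vision-only stack collapses this union to a single term, which is exactly the failure mode Dolgov's example targets.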
Critics of this approach point to the high cost of sensors like LiDAR. Waymo's sixth-generation hardware, however, is designed to bring these capabilities to commodity prices. Dolgov notes that radar has already followed a steep cost-reduction curve, and LiDAR is now on the same path.
The core research phase appears to be over. Scaling is the new bottleneck. Waymo is performing 500,000 fully autonomous rides per week and shifting from retrofitted Jaguars to custom-built vehicles without steering wheels. For Waymo, the challenge has moved from “can it drive?” to the operational “dance” of managing a massive, automated fleet. The bet is that a purpose-built system will outpace one that carries the baggage of human-centric design.
