AI hallucinations aren't just digital errors; in a refinery, they can cause explosions that kill people. This physical risk is driving a quiet but urgent pivot away from generalist AI demos and toward deterministic, purpose-built systems.
Jake Loosararian, CEO of Gecko Robotics, has spent 13 years building robots that inspect critical infrastructure like bridges and power plants. He argues the recent AI boom only intensifies the need for high-fidelity, non-hallucinatory data from the physical world. The real economic value, he says, lies in predicting a pipe failure, not in building a bipedal robot for social media.
Jake Loosararian, This Week in AI:
- The models are putting a huge spotlight on the importance of valuable data sets that don't hallucinate.
- Especially with things that if they do hallucinate, could cause an explosion and kill people.
This demand for precision runs headlong into a fractured hardware ecosystem. Chris Lattner, CEO of Modular, describes the current AI software stack as "duct tape and baling wire." NVIDIA, Apple, and AMD each build proprietary layers for their chips, forcing developers into vendor lock-in and stifling the portability needed for specialized, real-world deployment.
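The cost of that fragmentation shows up even in a single operation. Below is a minimal Python sketch of the problem, not Modular's approach: one matrix multiply, routed to a different vendor stack per device. The `matmul` dispatch shim is hypothetical, written purely for illustration; CuPy (CUDA) and MLX (Metal) are real libraries, each bound to one vendor's proprietary layer.

```python
import numpy as np

def matmul(a: np.ndarray, b: np.ndarray, device: str = "cpu") -> np.ndarray:
    """Hypothetical shim: one matmul, a different proprietary stack per vendor."""
    if device == "cuda":
        # NVIDIA path: CuPy sits on the proprietary CUDA/cuBLAS stack.
        import cupy as cp
        return cp.asnumpy(cp.asarray(a) @ cp.asarray(b))
    if device == "metal":
        # Apple path: MLX sits on the proprietary Metal stack.
        import mlx.core as mx
        return np.array(mx.array(a) @ mx.array(b))
    # CPU fallback. An AMD GPU would need yet another stack
    # (e.g., a ROCm build of CuPy), multiplying the branches again.
    return a @ b

if __name__ == "__main__":
    a, b = np.eye(2), np.ones((2, 2))
    print(matmul(a, b))  # CPU path runs anywhere; the others need a vendor SDK
```

Every new chip means another branch, another dependency, another failure mode; Modular's pitch is to replace this per-vendor branching with a single portable layer.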
The convergence is clear: the applied AI future is being built by companies solving for reliability over novelty, connecting accurate models to physical reality through specialized robots and unifying software layers.

