The real test for AI isn't conversation - it's whether a refinery explodes. As humanoid robots capture venture capital and viral demos, founders building for critical infrastructure argue the industry’s priorities are dangerously misplaced.
Jake Loosararian, CEO of Gecko Robotics, started with a dorm-room thesis 13 years ago: use robots to gather physical-world data that prevents catastrophes. His company now inspects bridges, ships, and power plants. He sees the current AI boom not as a shift, but as validation. The surge in models has intensified the need for what he calls "deterministic" outcomes - predictable, non-hallucinatory data in settings where a mistake costs lives, not just a confused chatbot.
Jake Loosararian, This Week in AI:
- The models are putting a huge spotlight on the importance of valuable data sets that don't hallucinate.
- Especially with things that, if they do hallucinate, could cause an explosion and kill people.
The demand for this precision crashes into a fragmented hardware landscape. Chris Lattner, CEO of Modular, describes the current AI software stack as "duct tape and baling wire." Chipmakers like Nvidia, Apple, and AMD build proprietary software layers that don't interoperate, forcing developers into vendor lock-in and slowing the deployment of reliable systems.
Lattner’s company is building a portability layer to break this stranglehold, aiming to let models run on any hardware. The goal is to enable the optimized, mission-critical applications that infrastructure robotics require.
The convergence points to a pragmatic turn. The next phase of AI isn’t about demos, but deployment - in dry docks and data centers where errors have concrete, and sometimes fatal, costs.