AI is shifting from a tool you operate to an agent you delegate to, but the new model has a fatal flaw: it’s leaking your secrets.
On Bankless, NEAR founder Illia Polosukhin detailed how agents like OpenAI’s OpenClaw routinely send users’ API keys and bearer tokens to third-party inference services, where they sit exposed in logs. He called the practice insane. His project, IronClaw, tackles this by architecting systems where secrets never touch the large language model.
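The pattern Polosukhin describes can be sketched as placeholder substitution outside the model: the LLM only ever emits tool calls that reference secrets by name, and a local executor swaps in real credentials just before the network request. The sketch below is illustrative only (hypothetical names and token values, not IronClaw's actual code):

```python
# Minimal sketch of the "secrets never touch the LLM" pattern.
# Illustrative only; not IronClaw's implementation.

SECRET_VAULT = {
    # Stays on the user's machine; never sent to an inference provider.
    "GMAIL_TOKEN": "ya29.example-bearer-token",  # hypothetical value
}

def resolve_placeholders(tool_call: dict) -> dict:
    """Runs locally: swap {{NAME}} placeholders for real secrets."""
    headers = {}
    for key, value in tool_call["headers"].items():
        for name, secret in SECRET_VAULT.items():
            value = value.replace("{{" + name + "}}", secret)  # substitution happens here
        headers[key] = value
    return {"url": tool_call["url"], "headers": headers}

# The model only ever produced this placeholder-bearing call, so the
# inference provider's logs contain no usable credential:
llm_output = {
    "url": "https://gmail.googleapis.com/gmail/v1/users/me/messages",
    "headers": {"Authorization": "Bearer {{GMAIL_TOKEN}}"},
}

resolved = resolve_placeholders(llm_output)
assert "ya29" not in str(llm_output)  # nothing secret ever left the machine
```

The design choice is that the substitution boundary, not the model, is the trust boundary: anything upstream of `resolve_placeholders` can be logged by a third party without exposing credentials.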
Polosukhin’s fix points to a deeper architectural crisis. As AI becomes the primary computing interface (a persistent, delegatable assistant managing tasks across devices), today’s centralized service model breaks down. How do agents verify each other? How do they transact? His bet is that blockchain provides the missing backend: a root of trust for identity and a global payments rail.
This vulnerability emerges just as the industry pushes full-speed toward the agent future. On The AI Daily Brief, Nathaniel Whittemore highlighted features like Claude’s Remote Control and Dispatch, which let users delegate complex, persistent tasks from their phones. The pitch is omnipresent assistance, but the current implementation routes your most sensitive credentials through unsecured channels.
Illia Polosukhin, Bankless:
- When you use Anthropic or OpenAI, or even worse, you use something else for inference, OpenClaw actually sends all your secrets to those services as well.
- Somewhere in Anthropic and OpenAI logs, they have everybody's access keys, API keys, and bearer tokens to access your Gmails and your Notions.
The race is on. Developers are building the agent experience users want, while a parallel effort aims to rebuild the trust layer that experience requires. One side enables delegation; the other must secure it.