The mental model for using AI is changing from operating a tool to delegating to a coworker. On The AI Daily Brief, Nathaniel Whittemore detailed features like Anthropic’s Dispatch, which creates a persistent AI agent that works locally on a user's machine. Users can start a complex task on their desktop, then check progress and give new instructions from their phone, fundamentally altering daily workflows.
This shift towards autonomous agents creates a critical security vulnerability. As Illia Polosukhin, co-author of the seminal “Attention Is All You Need” paper, explained on Bankless, agent frameworks like OpenClaw are leaking user secrets. They send API keys, bearer tokens, and access credentials to external inference services, where they sit exposed in logs.
Polosukhin called the practice “insane.” His project, IronClaw, aims to solve it by ensuring sensitive keys never touch the large language model. His broader thesis is that as AI becomes the primary computing interface, effectively a new operating system, today’s trust and payment infrastructure will break entirely. He argues blockchain could provide the necessary backend for identity, payments, and secure coordination between agents.
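The isolation approach described above can be sketched in a few lines: the model only ever sees an opaque placeholder, and a local broker substitutes the real credential at execution time, outside the model's context and any remote logs. This is a minimal illustration of the general pattern, not IronClaw's actual API; every name here (`SecretBroker`, the placeholder format) is hypothetical.

```python
class SecretBroker:
    """Holds real credentials locally and hands out opaque placeholders.

    Illustrative sketch of the keep-secrets-out-of-the-LLM pattern;
    not based on any real IronClaw interface.
    """

    def __init__(self):
        self._vault = {}  # placeholder -> real secret, never leaves this process

    def register(self, name, secret):
        placeholder = f"{{{{SECRET:{name}}}}}"
        self._vault[placeholder] = secret
        return placeholder  # safe to show the model

    def resolve(self, text):
        # Substitute placeholders only when the tool call is executed
        # locally, so the secret never appears in a prompt or a
        # provider-side log.
        for placeholder, secret in self._vault.items():
            text = text.replace(placeholder, secret)
        return text


broker = SecretBroker()
token = broker.register("gmail", "ya29.example-oauth-token")

# What the LLM is allowed to see and emit:
model_output = f"GET /messages with Authorization: Bearer {token}"

# What the local tool runner actually sends:
request = broker.resolve(model_output)
```

The key property is that `model_output`, the only string ever sent to an inference provider, contains no real credential; the substitution happens on the user's machine.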
The race is on to build agents that are both useful and secure. The features praised by users, like remote task management and parallel execution, require deep system access. The convenience of an omnipresent AI coworker is now crashing into the foundational security problem of whom, or what, you can trust with your keys.
Illia Polosukhin, Bankless:
- When you use Anthropic or OpenAI, or, even worse, you use something else for inference, OpenClaw actually sends all your secrets to those services as well.
- Somewhere in Anthropic and OpenAI logs, they have everybody's access keys, API keys, and bearer tokens to access your Gmails and your Notions.

