March 15, 2026

The Frontier

Your signal. Your price.

AI & TECH

AI Self-Improves, Unleashing New Builders

Sunday, March 15, 2026 · from 2 podcasts, 4 episodes
  • AI models can now improve their own code, rapidly democratizing development and empowering non-coders, signaling an explosive pace of future advancement.
  • Despite these gains, current AI tools struggle with memory, forcing users to constantly reload context, a problem graph databases aim to solve.
  • This new AI capability creates a greenfield opportunity for agentic payments, where Bitcoin's community has a unique chance to build the default protocol.

AI models are beginning to write their own future.

Andrej Karpathy, a former researcher at Tesla and OpenAI, recently released Auto Research, a simple open-source tool. It enables an AI model to act as an agent, tasked with improving its own code in five-minute training loops, testing changes, and retaining what works.

On This Week in Startups, the implications were clear. This isn't the full recursive self-improvement leading to superintelligence. It is a proof of concept, and it works. Shopify CEO Tobi Lütke, without a machine learning research background, used it over a weekend to achieve a 19% performance improvement in a small model.
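The mechanic described above, propose a change, run a short training loop, keep only what improves the score, can be sketched in a few lines. This is an illustrative toy, not Karpathy's actual tool: the objective function and parameter names here are invented stand-ins for a real five-minute training run.

```python
import random

def train_and_score(config):
    # Stand-in for a real short training run; this toy objective simply
    # rewards learning rates near 0.1 (purely illustrative).
    return -abs(config["lr"] - 0.1)

def self_improve(iterations=20, seed=0):
    """Propose a change, evaluate it, and retain it only if it helps."""
    rng = random.Random(seed)
    config = {"lr": 1.0}
    best_score = train_and_score(config)
    for _ in range(iterations):
        # Propose a small multiplicative tweak to the current config.
        candidate = {"lr": max(1e-4, config["lr"] * rng.uniform(0.5, 1.5))}
        score = train_and_score(candidate)
        if score > best_score:  # keep only what works
            config, best_score = candidate, score
    return config, best_score

# Usage: the loop can never end up worse than where it started.
config, score = self_improve()
```

The keep-only-improvements rule is what makes the loop safe to run unattended: each iteration either helps or is discarded, so the score is monotonically non-decreasing.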

This rapid accessibility signals a massive democratization. The pool of capable tinkerers is expanding from thousands of AI PhDs to hundreds of thousands of general builders. As Jason Calacanis argued on This Week in Startups, this represents the dam cracking, shifting power from developers to everyone.

While AI's ability to build is growing, its memory remains a core frustration. On TFTC, Brian Murray and Paul Itoi discussed the daily ritual of manually reloading context for AI assistants. These systems treat each prompt as an isolated event, failing to retain information between sessions.

Paul Itoi pointed to graph databases like Neo4j as a potential solution, creating persistent knowledge webs to allow machines to reference past conversations and code. He noted that the industry's focus on scaling language models has obscured the need for true reasoning and practical integration, as people often anthropomorphize LLMs without understanding their statistical nature.
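The idea Itoi describes, a persistent web of facts an assistant can query instead of being re-fed context, can be sketched as a minimal in-memory graph. The class and method names below are illustrative, not a real Neo4j API; a production system would back the same structure with a graph database and a query language like Cypher.

```python
from collections import defaultdict

class MemoryGraph:
    """Toy persistent-memory graph: nodes are entities, edges are typed
    relations between them. Hypothetical sketch, not a real library API."""

    def __init__(self):
        # node -> list of (relation, node) edges
        self.edges = defaultdict(list)

    def add(self, subject, relation, obj):
        """Record a fact, e.g. ('project_x', 'uses', 'Neo4j')."""
        self.edges[subject].append((relation, obj))

    def related(self, subject, relation=None):
        """Return facts linked to `subject`, optionally by relation type."""
        return [(r, o) for r, o in self.edges[subject]
                if relation is None or r == relation]

# Usage: facts accumulate across sessions instead of being reloaded.
memory = MemoryGraph()
memory.add("project_x", "uses", "Neo4j")
memory.add("project_x", "discussed_in", "session_2026_03_12")
```

The point of the graph shape is that a later session can start from `memory.related("project_x")` rather than from a cold prompt, which is exactly the manual reloading step Murray describes wanting to eliminate.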

This new building capability also creates a vast opportunity in agentic payments. Matt Corallo argued on TFTC that the latest AI models can build functional Bitcoin applications without coding. More critically, he highlighted that existing payment networks are ill-suited for autonomous AI agent transactions, meaning everyone, from Google to Bitcoin, starts from zero in this race. The Bitcoin community has a unique chance to build a default agent payment rail.

Jason Calacanis, This Week in Startups:

- "This is the dam cracking from the developers owning the world to everybody building the future."

- "And I'm here for it."

Entities Mentioned

Claude (model)
Google Antigravity (product)
Obsidian (product)
Stripe (company)
Visa (company)

Source Intelligence

What each podcast actually said

#726: Mapping The Mind Of The Machine with Brian Murray & Paul Itoi · Mar 14

  • Paul Itoi argues the industry has misdirected capital into scaling language models for better word prediction, while the real breakthrough for AI assistants will be systems that can remember past conversations and information.
  • Brian Murray describes a daily frustration where AI assistants fail to retain context between sessions, forcing users to manually reload information about their projects and workflows for every new interaction.
  • Paul Itoi states that people anthropomorphize large language models because they communicate in natural language, but they are statistical engines without genuine reasoning or understanding.
  • Graph databases, such as Neo4j, and connected-note systems like Obsidian are emerging as potential solutions to the AI memory problem by allowing machines to create and reference a persistent web of related information over time.
  • The core failure of current top models like Claude is not raw intelligence but a lack of long-term memory, which treats each user prompt as an isolated event and undermines their utility as assistants.
  • Brian Murray's team has automated podcast post-production using Claude to extract quotes and identify trends from transcripts, but even this advanced pipeline requires constant manual context management.
  • Paul Itoi advocates for a shift in AI development focus from raw language processing to practical integration, building systems that can operate within a complete historical record of a user's work and decisions.
  • The target for next-generation AI is achieving a flow state in work, where an assistant can instantly reference past code, conversations, and decisions, eliminating the need for manual context reloading.

#723: The Battle for the Agentic Economy with Matt Corallo · Mar 8

  • Matt Corallo argues that recent AI models like Claude 3.5 have crossed a threshold in the last three months, enabling the creation of functional software, from front ends to mobile apps, without human coding.
  • According to Matt Corallo, this leap in AI model quality removes the technical skill barrier for the Bitcoin community, allowing anyone with an idea and the will to execute to build Bitcoin applications.
  • Matt Corallo says the emerging agentic economy presents a major opportunity for autonomous AI payments, where agents will handle routine purchases like reordering household supplies, representing a genuine slice of future consumer spend.
  • Matt Corallo states that legacy payment networks like Visa are useless for agentic commerce, as their systems are fundamentally anti-bot by design to prevent fraud.
  • Matt Corallo argues the race to build the default payment rail for AI agents is wide open, with entities like Google, Stripe, Visa, and crypto projects all pushing competing protocols from a starting point of zero.
  • According to Matt Corallo, this represents a unique shot for Bitcoin to achieve mainstream merchant adoption, as it is not trying to displace a 10x better incumbent but is competing in a newly forming market.

Also from this episode:

Payments (1)
  • Matt Corallo notes that stablecoins also fail to serve the agentic payment need due to a lack of merchant integration and usability for automated transactions.
Coding (1)
  • Matt Corallo concludes that winning the agentic payment protocol war requires the Bitcoin community to step up and build, using the newly available AI tools to turn weekend ideas into working products.

How agents will change banking forever | E2260 · Mar 10

  • Andrej Karpathy's Auto Research open-source tool enables an AI model to iteratively test and improve its own code in simple five-minute training loops, proving basic self-improvement mechanics already work.
  • Calacanis and Wilhelm note that while this is not the full recursive self-improvement loop toward superintelligence, it is a working proof of concept for the core autonomous improvement mechanics.
  • The tool's key impact is massive democratization: Shopify CEO Tobi Lütke, with no machine learning research background, used it over a weekend to run 37 experiments in eight hours and find a 19% performance improvement in a small model.
  • Jason Calacanis predicts this expands the pool of people capable of improving models from roughly 3,000 highly paid AI PhDs to hundreds of thousands of tinkerers, moving 'from the developers owning the world to everybody building the future.'
  • Public tinkering experiments like these serve as a leading indicator; Calacanis argues private labs at companies like OpenAI, Anthropic, and xAI are likely iterating significantly faster, perhaps twice as fast as the public tools suggest.
  • The show's bullish prediction is that this acceleration sets up 2026 for potentially 'insane' rates of overall AI advancement and capability improvement.
  • Calacanis highlights a cultural split: a recent NBC poll shows only 26% of Americans are pro-AI, with 46% opposed, while in China OpenClaw meetups draw crowds and local governments offer adoption incentives, driven by aspirational culture and tangible career utility.

Also from this episode:

Enterprise (1)
  • The barrier for non-technical executives to directly tinker with AI training loops has collapsed, foreshadowing tension with developers who prefer keeping management away from the codebase.