AI assistants have amnesia. They forget yesterday's conversation, forcing users to re-explain their work with each new prompt.
Brian Murray and Paul Itoi discussed this fundamental failure on TFTC. Murray described the daily ritual of reloading context into his AI assistant just to have a coherent exchange. The problem isn't raw intelligence; top models can parse complex requests. The failure is memory: the model treats every prompt as an isolated event, leaving the user as the context manager.
The industry has focused on scaling language models, pouring capital into predicting the next word. Itoi argues this is misdirection. People anthropomorphize LLMs because they speak our language, but they are not reasoning. The real breakthrough comes from tools that remember. Graph databases like Neo4j or note-taking systems like Obsidian create a persistent knowledge web, enabling connection over recollection.
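To make that concrete, here is a minimal sketch, assuming nothing from the episode and not using Neo4j's or Obsidian's actual APIs, of what "connection over recollection" can look like: facts from past conversations are saved as a small graph on disk, and later prompts are grounded by reading a topic's connections instead of replaying the whole transcript. The KnowledgeGraph class and its triples are illustrative placeholders.

```python
# Illustrative sketch only: a tiny persistent "memory graph" for an assistant.
# Facts are (subject, relation, object) triples saved to disk between sessions,
# so context is retrieved by following connections instead of re-pasting history.
import json
from collections import defaultdict
from pathlib import Path


class KnowledgeGraph:
    def __init__(self, path="memory.json"):
        self.path = Path(path)
        self.edges = defaultdict(list)            # subject -> [(relation, object), ...]
        if self.path.exists():                    # reload yesterday's context
            for s, r, o in json.loads(self.path.read_text()):
                self.edges[s].append((r, o))

    def remember(self, subject, relation, obj):
        self.edges[subject].append((relation, obj))
        triples = [[s, r, o] for s, ros in self.edges.items() for r, o in ros]
        self.path.write_text(json.dumps(triples))  # persist across sessions

    def context_for(self, subject):
        """Return stored facts about a topic, ready to prepend to a prompt."""
        return [f"{subject} {r} {o}" for r, o in self.edges[subject]]


graph = KnowledgeGraph()
graph.remember("Q3 launch plan", "owned by", "Brian")
graph.remember("Q3 launch plan", "blocked on", "pricing review")
print(graph.context_for("Q3 launch plan"))
# ['Q3 launch plan owned by Brian', 'Q3 launch plan blocked on pricing review']
```

The point of the sketch is the retrieval step: instead of asking the user to restate history, the assistant pulls a topic's neighborhood from storage and injects it into the prompt.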
Building that persistent memory requires unprecedented physical infrastructure, and the search for it is hitting real-world constraints. Cities like Tucson, Arizona, are rejecting gigawatt-scale data center projects over water and energy concerns. Philip Johnston of Aethero argues the solution is to move the problem to space, where continuous solar power and vacuum cooling bypass terrestrial limits.
Reusable rockets like SpaceX's Starship could drop launch costs to $500 per kilogram, making orbital solar cheaper than ground-based farms. The first test launch is scheduled for next week, sending an Nvidia H100 GPU to space as a proof of concept for a five-gigawatt orbital cluster.
Back on Earth, the financial war for chips is equally fierce. Dylan Patel of SemiAnalysis explained on the Dwarkesh Podcast that Big Tech's $600 billion in capex funds compute years in advance, while AI labs need capacity now. OpenAI's early, aggressive deal-making locked in cheaper capacity. Anthropic, which took a more conservative financial stance, now hunts for spare compute at premium prices, paying $2.40 per H100 hour against a build cost of roughly $1.40 per hour, about a 70 percent premium.
While the physical race accelerates, the software side is democratizing. Andrej Karpathy released Auto-Research, a simple tool that lets an AI model iterate on its own code in five-minute loops. Shopify CEO Tobi Lütke, who has no prior machine learning background, used it over a weekend and boosted a model's performance by 19%.
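The episode doesn't detail Auto-Research's internals, but the pattern it describes, a model editing its own code in short, benchmark-gated cycles, can be sketched roughly as below. propose_patch and evaluate are hypothetical stand-ins for the model call and the scoring harness; this is an assumption about the general technique, not the tool's actual code.

```python
# Hedged sketch of a time-boxed self-improvement loop, in the spirit of the
# "iterate on its own code in five-minute loops" description above.
# propose_patch() and evaluate() are hypothetical stand-ins for the model call
# and the benchmark; they are not part of any real tool's API.
import time


def auto_iterate(code, propose_patch, evaluate, cycle_seconds=300, max_cycles=12):
    best_code, best_score = code, evaluate(code)
    for _ in range(max_cycles):
        deadline = time.monotonic() + cycle_seconds   # one five-minute cycle
        while time.monotonic() < deadline:
            candidate = propose_patch(best_code)      # model edits its own code
            try:
                score = evaluate(candidate)           # run the benchmark
            except Exception:
                continue                              # broken patch: discard it
            if score > best_score:                    # keep only improvements
                best_code, best_score = candidate, score
                break                                 # move on to the next cycle
    return best_code, best_score
```

The key design choice is that every candidate must clear the benchmark before it replaces the current version, so the loop can only ratchet a score upward; the per-cycle budget simply bounds how long each attempt runs.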
On This Week in Startups, Jason Calacanis framed this as the dam cracking: the elite circle of AI PhDs is about to be flooded by hundreds of thousands of new tinkerers. If public tools are already yielding gains like this, the private labs are likely moving twice as fast. The bullish take is that 2026's pace of AI improvement could be insane.
This cultural moment is split. In China, OpenClaw meetups thrive and governments incentivize adoption. In the U.S., a recent NBC poll shows only 26% of Americans are pro-AI, with 46% opposed. The race isn't just about building smarter machines. It's about who builds them, where they get the power, and whether society will embrace the outcome.
Paul Itoi, TFTC: A Bitcoin Podcast:
- I think people anthropomorphize LLMs a lot.
- Because it's speaking language to you, because you can talk to it, you think that it's actually reasoning.