
The Frontier

Your signal. Your price.

AI & TECH

Bittensor crisis proves decentralized AI needs new hardware and governance

Sunday, April 12, 2026 · from 3 podcasts
  • A major Bittensor exit shows founder control can still cripple 'decentralized' AI networks.
  • Enterprise AI fails because 93% of budgets go to tools, only 7% to training humans.
  • Next-generation superconducting hardware could break the cloud's monopoly on intelligence.

Bittensor’s governance cracked this week when a major subnet operator, Covenant AI, pulled its three subnets, accused co-founder Jacob Steeves of sabotage, and triggered a 15% price drop. On This Week in Startups, Jason Calacanis called it a “trip and fall” but acknowledged the core tension: anonymous, global talent networks still depend on founder-run infrastructure. The protocol now needs staking mechanisms to prevent future “rug pulls.”

“If you randomly set one weight to nine billion, you don't get a better model; you get junk.”

- Vitalik Buterin, The a16z Show
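Buterin's point is literal, not just rhetorical: blowing up a single parameter saturates a network and drowns out every other input. A toy sketch in Python (a hypothetical one-neuron model with hand-picked weights, purely illustrative):

```python
import math

# Toy "model": one neuron mapping three inputs to a score,
# with small hand-picked weights (purely illustrative).
weights = [0.5, -0.3, 0.8]

def predict(x, w):
    # Squash the weighted sum into (0, 1) with a sigmoid.
    s = sum(wi * xi for wi, xi in zip(w, x))
    return 1 / (1 + math.exp(-s))

x = [1.0, 2.0, 3.0]
print(predict(x, weights))   # ≈ 0.909: a moderate, informative score

# "Randomly set one weight to nine billion," as in the quote:
broken = list(weights)
broken[0] = 9e9
print(predict(x, broken))    # 1.0 exactly: saturated, all input signal lost
```

However the other inputs vary, the blown-up weight dominates the sum, which is the sense in which accelerating one parameter indiscriminately yields junk rather than a better model.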

The Bittensor drama is a microcosm of a larger failure. According to The AI Daily Brief, companies are dumping 93% of their AI budgets into infrastructure and models, leaving just 7% for training the humans expected to use them. A KPMG study found 75% of CEOs admit their AI strategy is “more for show,” while 44% of Gen Z workers actively sabotage corporate AI tools they find useless.

This human failure coincides with a Wall Street reassessment. Fears that AI would destroy incumbent SaaS giants have faded. AWS CEO Matt Garman argued on The AI Daily Brief that deep domain knowledge gives existing firms an edge, not a death sentence. The cybersecurity sector, in particular, is seen as a winner as AI expands the attack surface.

The real bottleneck to decentralization isn't software but physics. On The a16z Show, Guillaume Verdon argued that as long as state-of-the-art AI requires hundred-kilowatt data centers, power will stay centralized. His company, Extropic, is betting on superconducting hardware to achieve a 10,000x efficiency gain. Vitalik Buterin agreed, advocating for “verifiable hardware” with cryptographic attestation to prevent surveillance.

“Acceleration without intentionality breaks the delicate structures that make human life valuable.”

- Vitalik Buterin, The a16z Show

The path forward requires new hardware to enable personal sovereignty and new governance to manage human incentives. Bittensor’s stumble is the first test of whether decentralized AI can build systems robust enough to survive its own founders.

Mentioned: A16Z, Amazon, KPMG

Source Intelligence

What each podcast actually said

Bittensor Drama! TAO down 15%! | E2274 · Apr 11

  • Vidio's technology can reduce video file sizes by 60% with no perceptual quality loss, offering cost efficiencies for storage and content delivery networks, especially vital for low-connectivity markets like Africa. The video upscaling market is projected to grow from $175 million in 2025 to $1.1 billion by 2032, with video comprising 85% of internet traffic.
  • Jason highlights that Bittensor's permissionless nature allows global tech talent, like a Vietnamese student team, to contribute to subnets and earn Tao anonymously, bypassing traditional hiring, visa, and payment frictions. This empowers a global workforce to compete on price and service, fostering unconstrained free markets.

Also from this episode:

Protocol (1)
  • Bittensor operates on a distributed network with 128 subnets, similar to Bitcoin, designed for deflationary services through competition, with one example being a coding co-pilot. Jason has invested in the Tao token and its subnets.
AI & Tech (7)
  • Covenant AI (subnets 3, 39, 81), led by Sam Dar, developed a 72-billion parameter decentralized AI model, Templar, which initially boosted Tao's price but later claimed Bittensor was not truly decentralized. Covenant AI accused co-founder Jacob Steeves of blocking operations by suspending subnet emissions and depreciating infrastructure.
  • Jason notes that Bittensor needs robust governance to prevent "rug pulls" and bad actors, proposing a system where subnets stake collateral (like a franchise) to balance ownership with preventing token theft. He anticipates future improvements will solidify handling such incidents.
  • Gareth Howles's Vidio (Subnet 85), incubated by Talstat's Moog, offers video processing services like compression, upscaling, and optimization for archives (e.g., BBC, Getty Images) and streaming. Vidio uses AI agents to enhance video quality, convert formats, and add metadata, leveraging a "winner takes all" model where miners provide and optimize AI models.
  • Ola Layman developed an "LLM council" skill using Claude Opus 4.6 with five distinct personas, inspired by Andrej Karpathy's concept of anonymized, peer-reviewed LLM responses. This tool assists non-technical users with business and life advice, exemplified by its detailed recommendation for engineering VP equity in a seed-stage startup.
  • Jason offered a $1,000 bounty for an OpenClaude skill by May 1st that can generate "enhanced show notes," drawing a parallel to the "demo or die" ethos of the Homebrew Computer Club, founded in Menlo Park in 1975 by figures like Steve Wozniak.
  • Ola Layman described Claude Mythos as "Hiroshima for software" due to potential advanced capabilities, emphasizing the critical need for individuals to implement basic security measures in an uncertain AI landscape. Ola is a German founder based in Cyprus, attracted by its 12.5% corporate tax rate compared to Germany's approximately 50%.
  • Jason advocates for the $3,500 14-inch MacBook Pro with 48GB RAM for running local LLMs, while Alex highlights the $600 2.7-pound MacBook Neo as a strategic move by Apple to capture the Chromebook market. The Neo, despite feeling "cheap," aims to bring new users into the Apple ecosystem for future services.
Markets (1)
  • Following Covenant AI's claims, Tao's market cap declined to $2.93 billion, with its price dropping from approximately $335 to $271, a significant but not catastrophic loss. Gareth Howles suggested investor fear and Sam Dar's token sales, not a fundamental system flaw, primarily drove the price drop.
Culture (2)
  • Jason recommends Disney's animated "Maul" series, noting its unique watercolor-influenced, cyberpunk animation style and its role in re-establishing George Lucas's original vision for Episodes 7, 8, and 9. He praises it as an attempt to rectify the "disastrous" sequels under Kathleen Kennedy and J.J. Abrams.
  • Jason recommends "Designer's Guide to Creating Charts and Diagrams" by Nigel Holmes (1983/1984), citing him as the "godfather of infographics," alongside "My Life in Advertising" and "Scientific Advertising" by Claude Hopkins, for timeless marketing inspiration. Alex recommends the science fiction novel "Hyperion" by Dan Simmons and "The Kindom Trilogy" by Bethany Jacobs.

Why Enterprise AI Has a Leadership Problem · Apr 10

  • The narrative of AI disruption impacting incumbent SaaS companies is fading on Wall Street, with initial fears that caused software indices to sell off by 20% now replaced by optimism.
  • AWS CEO Matt Garman dismissed claims that AI coding tools like Claude Code would disrupt major SaaS firms, arguing AI presents a significant opportunity for existing companies to build next-generation products due to their deep domain knowledge.
  • Goldman Sachs analyst Peter Oppenheimer believes the worst is over for tech stocks, citing opportunities created by their valuation relative to expected growth falling below the global aggregate market, following one of the weakest performances in 50 years.
  • Anthropic's recent tender offer saw few employees cashing out, indicating optimism that the company's value will continue to rise towards an anticipated IPO, despite some secondary markets valuing stock as high as $600 billion.
  • Anthropic is actively poaching top talent, hiring Eric Boyd, an 18-year Microsoft veteran and former Azure AI hardware/software lead, as its head of infrastructure to manage surging demand and lead a new team of cloud enterprise veterans.
  • Anthropic sealed a deal with Google and Broadcom to build 3.5 gigawatts of dedicated inference capacity starting next year, shifting from outsourcing infrastructure to taking a more active, in-house management role.
  • Elon Musk amended his lawsuit against OpenAI, asking the judge to unwind the company's for-profit conversion and remove Sam Altman and Greg Brockman from the non-profit board, clarifying that he seeks no monetary damages for himself, only relief for the non-profit.
  • Intel has partnered with Tesla and SpaceX on the TerraFab facility in Austin, Texas, to produce domestic AI chips, aiming for one terawatt per year and positioning it as the world's largest fab, with Intel overseeing crucial manufacturing steps.
  • Approximately 93% of all enterprise AI spending goes to infrastructure, models, compute, and tools, with only 7% invested in the humans using these technologies, creating a recipe for disaster in AI adoption and value realization.

Also from this episode:

AI & Tech (7)
  • The cybersecurity sector is an area where AI disruption fears were overblown; analysts like Manthan Shah and Rob Owens argue AI will increase the attack surface, creating a multi-billion dollar opportunity and compounding the need for security, rather than reducing budgets.
  • A16Z research indicates 19% of Global 2000 companies and 29% of Fortune 500 are live-paying customers of leading AI startups, with coding, support, and search dominating enterprise AI adoption, and tech, legal, and healthcare leading industry uptake.
  • KPMG's quarterly survey shows average anticipated AI spend among companies with over $1 billion in revenue jumped from $114 million to $207 million over the past year, reflecting the rapid increase in agent deployment from 11% to 54% of organizations.
  • KPMG's data reveals an increasing concern over AI risks, with cybersecurity and employee misuse cited by 44% of executives as the most difficult societal challenge by 2030, up from 32%.
  • Organizations are prioritizing internal talent development for AI skills, with 87% focusing on upskilling/reskilling current employees, 68% hiring for new roles like AI architects, and 55% redesigning existing roles.
  • A Writer study found 73% of CEOs experience stress or anxiety from their company's AI strategy, with 61% fearing job loss if they fail to lead the AI transition, highlighting a significant leadership problem exacerbated by 39% lacking a formal AI revenue strategy.
  • Employee sabotage poses a serious threat to AI strategies, with 29% of employees (44% of Gen Z) admitting to it, and two-thirds of executives believing their company has suffered a data leak or security breach due to unapproved AI tool use.
Enterprise (2)
  • A significant leadership gap exists, with only 35% of employees viewing their manager as an AI champion, and 75% trusting AI more than their manager for certain work tasks, contributing to a two-tier workplace where 92% of C-suite cultivate an "AI elite."
  • The State of Digital Adoption report from WalkMe identified a 52-point trust gap between executives and employees regarding AI for complex decisions (61% vs. 9%) and a 67-point gap on having adequate AI tools (88% vs. 21%).

Who Controls AI Acceleration? Vitalik Buterin and Guillaume Verdon Debate · Apr 9

  • Buterin defines defensive/decentralized acceleration (d/acc) as a path that accelerates technology while managing unipolar and multipolar risks, including permanent AI dictatorship.
  • Both Verdon and Buterin champion open source and open hardware to diffuse AI power, preventing a dangerous capability gap between individuals and centralized entities.
  • Verdon's company, Extropic, is developing superconducting hardware he claims will be 10,000x more energy efficient, aiming to densify intelligence and make it personally ownable.
  • Buterin advocates for verifiable hardware with cryptographic attestation, allowing surveillance cameras to prove they only detect violence and nothing else.
  • Buterin suggests hardware restrictions could be a feasible, non-dystopian way to manage AI pace, citing that Taiwan produces over 70% of all chips.
  • Verdon argues that delaying AI via chip restrictions is geopolitically unenforceable: competing nations would simply out-produce the restrictions, and alternative computing substrates would bypass the bottleneck.
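Buterin's "verifiable hardware" proposal above can be sketched as signed attestation: the device signs a minimal claim about what it computed, together with its firmware identity, and a verifier accepts nothing else. A toy Python sketch, assuming a symmetric device key for brevity (a real design would use an asymmetric key in a hardware root of trust; every name and key here is hypothetical):

```python
import hashlib
import hmac
import json

# Hypothetical secret provisioned into the device at manufacture.
DEVICE_KEY = b"provisioned-at-manufacture"

def attest(firmware_hash: str, claim: dict) -> dict:
    """Device side: sign the firmware identity plus the minimal claim."""
    payload = json.dumps({"fw": firmware_hash, "claim": claim}, sort_keys=True)
    tag = hmac.new(DEVICE_KEY, payload.encode(), hashlib.sha256).hexdigest()
    return {"payload": payload, "tag": tag}

def verify(report: dict) -> bool:
    """Verifier side: accept only reports signed by the trusted device key."""
    expected = hmac.new(DEVICE_KEY, report["payload"].encode(),
                        hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, report["tag"])

# A camera attests that it ran audited firmware and detected no violence.
report = attest("fw-1.0-audited", {"violence_detected": False})
assert verify(report)

# Tampering with the claim invalidates the attestation.
forged = dict(report, payload=report["payload"].replace("false", "true"))
assert not verify(forged)
```

The point of the sketch is the shape of the trust: the verifier learns only the signed claim, not the raw sensor feed, which is how attestation lets a camera "prove it only detects violence and nothing else."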

Also from this episode:

AI & Tech (6)
  • Vitalik Buterin frames technological acceleration as a century-long reality, accelerated by cycles of world wars, postmodernism, and shattered beliefs.
  • Verdon says e/acc emerged in 2022 as a counterculture to AI doomerism, which he calls a weaponization of anxiety for political purposes.
  • Buterin warns accelerating any single parameter indiscriminately, like an LLM weight, risks destroying all value in a complex system like society.
  • Buterin sees cryptocurrency as a crucial coupling mechanism, providing a shared property rights system for commerce between humans, AI, and hybrid entities.
  • Verdon envisions a 10-year optimistic future with personalized AI as a cognitive extension we own, and a billion-year future of biosynthetic hybrids exploring the stars.
  • Buterin's pessimistic 10-year scenario involves over-centralization of AI power and a collapse in cultural and technological variance, which he calls entropy collapse.
Science (3)
  • Guillaume Verdon defines Effective Accelerationism (e/acc) as the observation that systems self-organize to capture free energy and dissipate heat.
  • Verdon argues e/acc is a meta-cultural prescription for maximizing ascent on the Kardashev scale, claiming it grants higher fitness to those who adopt it.
  • Vitalik Buterin explains entropy as subjective ignorance about a system, using a gas analogy to show entropy increases when hot and cold mix.