
The Frontier

Your signal. Your price.

BUSINESS

Anthropic trades safety caution for corporate lock-in

Friday, April 17, 2026 · from 6 podcasts
  • Anthropic’s $30B revenue explosion proves enterprise AI demand isn't theoretical.
  • Its 100-day safety pause for the Mythos model creates a market for zero-day vulnerability patches.
  • The company squeezed the open-source agent project OpenClaw before launching its own competing managed agents.

Anthropic’s revenue run rate hit $30 billion this month. Brad Gerstner reported the company added more revenue in March than Databricks and Palantir’s combined annual total. David Sacks argues this justifies the industry’s massive data center bets. The bubble narrative died.

The company’s newfound scale is paired with a gatekeeping strategy. Anthropic restricted its ‘Mythos’ model to 40 companies through Project Glasswing for a 100-day security review. The model reportedly found a 27-year-old bug in OpenBSD and a 16-year-old flaw in FFmpeg missed by millions of automated scans. Brett Winton at ARK Invest calls the delay a marketing tactic. He notes many of the same exploits are detectable by GPT-5.4. The safety narrative papers over compute shortages and creates a premium market for the cure.

“The logic is simple: tell the world a tool is too dangerous for general release, then charge a premium for the cure.”

- Brett Winton, FYI - For Your Innovation

Chamath Palihapitiya dismisses the threat as theater. He argues sophisticated hackers can already use existing models like Opus to achieve similar results. David Sacks grants the cyber risk is legitimate but notes Anthropic has a pattern of coupling product releases with scare tactics, citing a 2024 blackmail study they prompted over 200 times to get a desired result.

The company’s commercial moves are aggressive. Days before launching its own ‘managed agents,’ Anthropic forced the open-source project OpenClaw off flat-rate subscriptions onto expensive metered APIs. Jason Calacanis views this as an anti-competitive move to ankle the leading open-source agent project. He predicts open-source agents will capture 90% of token usage and undercut proprietary models.

“Anthropic effectively ‘ankled’ OpenClaw, the most successful open-source agent project on GitHub.”

- David Sacks, All-In

The corporate panic is real. Treasury Secretary Bessent and Fed Chair Powell summoned bank CEOs for an emergency meeting last week citing the Mythos threat. Marty Bent suggests this was a red herring. The real topic was likely the $1 trillion hole in the private credit market, where insurance companies face a liquidity crisis worse than 2008.

The market is now a game of musical chairs played with H100 GPUs. Brett Winton says compute determines market share. OpenAI holds a hardware advantage and can release models broadly. Anthropic’s throttled access forces users to try competitors. Meta looms as a formidable competitor because its advertising business lets it deliver consumer AI without directly monetizing the model.

The Mythos saga reveals the new rules. Superior product matters, but controlling the compute supply to fulfill demand matters more. Revenue is a tool to secure future silicon. Anthropic’s play isn't about safety. It's about securing a seat at the table.

Source Intelligence

- Deep dive into what was said in the episodes

Podcasting 2.0

Adam Curry

Episode 257: Slop Factory · Apr 17

  • Dave Jones blocked abusive traffic hitting the Podcast Index's unauthenticated PubNotify API after Fountain was pinged millions of times daily by a bot, creating a 500,000-podcast backlog in the aggregation queue.
  • The Podcast Index infrastructure handled 9.2 million API requests and 322GB of data in 24 hours, and 75 million requests and 2TB of data over the last seven days.
  • Adam Curry advises podcasters to trademark their show names and use the Lanham Act for legal action against impersonators, as it specifically covers brand impersonation and cloned content designed to cause consumer confusion.
  • Dave Jones is developing an AI spam classifier using a Gemma model via Llama.cpp on a Mac Mini to scan new podcast feeds, marking them as 'bad' or 'good' based on metadata to help hosting companies combat abuse.
  • Dave Jones plans to create a second downloadable SQLite database of all Podcast Index feeds, including those marked dead with reason codes, to serve as a dataset for fine-tuning a future spam detection model.
  • The Podcast Index's main database server is at 75% disk capacity, prompting a planned upgrade from a $192/month VM to a $384/month instance with double the RAM and storage.
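The feed classifier described above can be sketched in outline. This is a speculative sketch, not Dave Jones's actual code: the prompt wording, the label parsing, and the idea of injecting the model call as a callable are all assumptions, and a real setup would route the prompt through a Gemma model loaded in llama.cpp rather than the offline stand-in shown here.

```python
# Hypothetical sketch of a feed-spam classifier in the style described:
# a small local model labels new podcast feeds 'good' or 'bad' from
# metadata alone. The model call is passed in as a callable so the
# surrounding logic stays testable without a model file.
from typing import Callable

def build_prompt(title: str, author: str, description: str) -> str:
    """Pack feed metadata into a short classification prompt (assumed format)."""
    return (
        "Classify this podcast feed as GOOD or BAD (spam/SEO placeholder).\n"
        f"Title: {title}\nAuthor: {author}\nDescription: {description}\n"
        "Answer with one word:"
    )

def parse_label(completion: str) -> str:
    """Map the model's free-text answer onto the two feed flags."""
    return "bad" if "BAD" in completion.upper() else "good"

def classify_feed(feed: dict, generate: Callable[[str], str]) -> str:
    """Classify one feed's metadata using any text-generation callable."""
    prompt = build_prompt(feed.get("title", ""), feed.get("author", ""),
                          feed.get("description", ""))
    return parse_label(generate(prompt))

# Offline stand-in for the LLM call; a real deployment would wrap the
# llama.cpp-hosted Gemma model here instead.
def fake_generate(prompt: str) -> str:
    spam_markers = ("black magic", "guaranteed seo", "call now")
    return "BAD" if any(m in prompt.lower() for m in spam_markers) else "GOOD"

print(classify_feed({"title": "Best Black Magic Services", "author": "x",
                     "description": "call now"}, fake_generate))  # bad
```

Keeping the model behind a plain callable also makes it trivial to swap in the fine-tuned spam model Jones plans to train on the SQLite dataset later.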
Also from this episode: (4)

AI & Tech (4)

  • Adam Curry identifies two primary motives for AI slop podcasts: generating ad revenue from dynamic ad insertion and executing SEO spam campaigns, often for local businesses or scams like black magic services.
  • Hosting companies' free trial periods are a major vector for abuse, as scammers create placeholder podcasts for SEO across multiple platforms. Dave Jones suggests hosts should deny web presence to trial accounts and mark them clearly in feeds.
  • Dave Jones states the Podcast Index has no explicit content rules beyond keeping the platform free and open, arguing that blocking structurally abusive feeds is self-defense for the ecosystem, not censorship.
  • A major GitHub Actions breach in March, where a compromised security scanner injected info-stealers into builds, led to the theft of Cisco's entire private code repository.

AI's Great Divergence · Apr 16

  • Anthropic has restricted its 'Mythos' model to about 40 partners for limited cybersecurity testing, reflecting a trend of staggered rollouts due to security risks. OpenAI is pursuing a similar rollout strategy for its new model.
  • Z.ai leader Lu claims agents could handle about 20 steps at the end of last year, while GLM 5.1 can now execute 1,700. The model's autonomous work time is cited as a critical new performance curve.
  • Anthropic released Claude Managed Agents to close a notable gap between model capability and business application, as argued by head of product Angela Jiang. The platform bundles an agent harness with production infrastructure, aiming to reduce engineering overhead.
  • Claude Managed Agents enables scheduled, event-triggered, and long-horizon tasks. It abstracts self-hosting complexity, but lacks persistent memory across sessions, making it best suited for discrete, transactional operations.
  • Google introduced 'notebooks in Gemini', integrating Notebook LM's resource management directly into the app. Google's Josh Woodward positions this as building 'a second brain' beyond basic AI chatbot projects.
  • GLM 5.1 was trained entirely on less powerful Huawei chips, demonstrating China's hardware stack can produce powerful results. Its release two months after leading US models suggests the US lead over Chinese rivals is only a few months.
Also from this episode: (6)

Models (4)

  • Meta's new Muse Spark is a natively multimodal reasoning model designed primarily for personal agents, not enterprise use. The model supports tool use, visual chain-of-thought, and multi-agent orchestration.
  • On benchmarks, Muse Spark scored 52.4 on SweetBench Pro for coding, placing it near top models. It excels in visual comprehension, scoring a state-of-the-art 86.4 on CharViC's reasoning, beating Gemini 3.1 Pro by 6 points.
  • Z.ai's open source GLM 5.1, a 754B parameter model, outperforms leading Western models on coding benchmarks with a 58.4 SweetBench Pro score. The model demonstrates long-horizon task capability, completing an eight-hour autonomous Linux desktop build.
  • Ethan Mollick notes Muse Spark is fine but doesn't match the big three models, displaying some strange language and looseness with facts. François Chollet criticizes Meta for over-optimizing for benchmarks at the expense of actual usefulness.

Agents (1)

  • Mark Zuckerberg positions Muse Spark for personal use areas like visual understanding, health, and social content. He frames it as a shift from assistant AI to agentic AI, enabling it to 'do things for you' like creating mini-games or troubleshooting appliances.

Big Tech (1)

  • Alexander Wang of Meta responded to criticism by saying the lab is open to feedback and is upfront about the model's weaknesses, such as low performance on the ARB GI 2 benchmark.

Mythos And AI Safety | The Brainstorm EP 127 · Apr 15

  • Anthropic is restricting access to its new AI model Mythos for 100 days, offering it only to the top 40 companies through Project Glasswing so they can patch zero-day vulnerabilities the model discovered.
  • Brett interprets Anthropic's Mythos release as a marketing and supply tactic, not genuine safety, arguing it's meant to induce enterprises to pay for early access to fix their code while the company is compute-constrained.
  • Claude's consumer usage is catching up to ChatGPT, which Brett attributes to workplace adoption spilling over into personal use as people recognize its power.
  • The core strategic debate is whether winning in AI depends on having the best product or controlling the compute supply needed to build the best product.
  • Nick argues product and distribution ultimately win in AI, citing Cohere's enterprise success based on product fit rather than model capability.
  • Consumer AI use cases have changed little in three years despite model improvements, while enterprise use has diversified as workers actively seek tools to lighten their workloads.
  • On the enterprise side, Brett argues market share will stabilize around compute supply because if a provider like Anthropic signs too many customers and lacks capacity, customers will churn to a competitor.
Also from this episode: (7)

AI & Tech (7)

  • Brett says third-party tests have shown many software exploits detected by Anthropic's Mythos can also be found by GPT-5.4, undermining claims of Mythos's unique vulnerability-finding capability.
  • ARK's analysis positions Mythos as materially better at software engineering benchmarks, advancing performance they expected a year from now to today, but the 100-day delay reduces that lead to an 8-month advantage.
  • OpenAI is rumored to have a similarly performant model developed over two years that it will release broadly because it currently has more abundant compute than Anthropic.
  • Brett argues AI companies make allocation decisions between training, enterprise service, and consumer business to maximize valuation ahead of a public market entry, securing capital for future compute.
  • Nick sees Meta as a formidable competitor in AI because its advertising business lets it deliver a consumer experience without directly monetizing the model, and it doesn't have to sell compute to others.
  • Brett notes OpenAI invests more in model training and has better medium-term compute access than Anthropic, per public reports, which affects their product roadmaps.
  • The group discusses a concept for a new trust-based social network where AI agents interact only with agents of vetted contacts, arguing current algorithmic social media adulterates real friendship.

SNL #219: Killing Satoshi · Apr 13

  • The hosts discuss a New Yorker article characterizing Sam Altman as dishonest, citing his firing from OpenAI's board and claims of misleading Anthropic's founder about AI safety commitments.
  • Anthropic is working with 40 companies through 'Project Glasswing' to test its new AI model, Mythos, for cybersecurity vulnerabilities before a public release.
  • Keon sees the open-agents movement, where people sell compute for Bitcoin, as a bullish counterbalance to centralized AI power and a potential defense against models like Mythos.
Also from this episode: (9)

War (1)

  • Keon discusses a story about an F-15E Strike Eagle with two airmen aboard being shot down over Iran.

Mining (3)

  • Dan, a Bitcoiner in Iceland, shares his experience with a home Bitcoin mining heater called the Open Two from a company called 21 Energy.
  • Dan reports his mining unit achieved 43 terahash per second but was too loud, and that his total household power consumption was nearly 4,000 kilowatt hours over three months at a cost equivalent to $681.
  • Dan earned 115,000 sats, worth about $80, from his mining heater over the same period, projecting a 26-month payback period for the device.
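Dan's payback projection is easy to reproduce. A minimal sketch of the arithmetic, using only the figures reported on the show (the device's price was not stated; the final number is the price implied by the reported earnings and the 26-month projection):

```python
# Reproduce the mining-heater payback math from the episode's numbers.
earnings_usd = 80        # ~115,000 sats earned over the period
months = 3               # length of the reporting period

monthly_usd = earnings_usd / months          # average monthly earnings
payback_months = 26                          # Dan's stated projection

# Device price implied by those numbers (not stated on the show).
implied_price = monthly_usd * payback_months
print(f"monthly: ${monthly_usd:.2f}, implied device price: ${implied_price:.0f}")
# monthly: $26.67, implied device price: $693
```

Note this ignores the roughly $681 in electricity over the same period; since the unit doubles as a heater, Dan's framing treats that power as heating cost rather than mining cost.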

Adoption (1)

  • NeedCreations launched btcedu.app, a Bitcoin education archive where users can earn points and withdraw 100 sats after accumulating 1,000 points.

Protocol (3)

  • Keon cites Brian Quintin's Myers-Briggs survey showing Bitcoiners heavily skew toward INTJ (34%) and INTP (22%) personality types, diverging significantly from the general population.
  • Aardvark proposes a quantum-safe Bitcoin transaction scheme using Lamport signatures, which results in a 10,000-byte script size and requires 150 dummy signatures with hash commitments.
  • The hosts discuss the upcoming movie 'Killing Satoshi,' directed by Doug Liman and starring Pete Davidson, Casey Affleck, and Gal Gadot, which fictionalizes an investigator trying to expose Bitcoin's creator.
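Lamport signatures, the primitive behind Aardvark's proposal, are simple to sketch. Below is a minimal one-time signature over a 256-bit message hash, assuming SHA-256 throughout; the 150 dummy signatures and Bitcoin script packaging from the actual proposal are not modeled here.

```python
# Minimal Lamport one-time signature: for each of the 256 message-hash
# bits the private key holds two random preimages and the public key
# their hashes; signing reveals one preimage per bit.
import hashlib
import secrets

def H(b: bytes) -> bytes:
    return hashlib.sha256(b).digest()

def keygen():
    sk = [(secrets.token_bytes(32), secrets.token_bytes(32)) for _ in range(256)]
    pk = [(H(a), H(b)) for a, b in sk]
    return sk, pk

def bits(msg: bytes):
    digest = int.from_bytes(H(msg), "big")
    return [(digest >> i) & 1 for i in range(256)]

def sign(msg: bytes, sk):
    # Reveal the preimage matching each bit of the message hash.
    return [pair[bit] for pair, bit in zip(sk, bits(msg))]

def verify(msg: bytes, sig, pk) -> bool:
    return all(H(s) == pair[bit] for s, pair, bit in zip(sig, pk, bits(msg)))

sk, pk = keygen()
sig = sign(b"quantum-safe tx", sk)
print(verify(b"quantum-safe tx", sig, pk))   # True
print(verify(b"tampered tx", sig, pk))       # False (overwhelmingly likely)
```

The size problem is visible immediately: the signature alone is 256 × 32 = 8,192 bytes, which is why the on-chain script in the proposal balloons toward the cited 10,000 bytes.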

AI & Tech (1)

  • The hosts express concern that Mythos could find zero-day vulnerabilities in critical open-source software, including Bitcoin Core, posing a significant security threat if capabilities are locked away.

Ten31 Timestamp: You Say Ceasefire, and I Say Escalation · Apr 13

  • Anthropic's Mythos AI model is presented as a significant step function improvement, with reports of it finding zero-day bugs in critical software, prompting national security concerns and government attention.
  • Marty highlights warnings from the Treasury about private equity and credit exposure for insurance companies, identifying a potential 'trillion-dollar hole' as a slow-moving liquidity crisis.
  • An AM Best report indicates annuity-selling insurance funds are in a significantly worse financial position than before the 2008 crisis due to private credit exposure.
Also from this episode: (6)

War (1)

  • Marty Bent notes the US Navy blockaded Iranian ports in the Strait of Hormuz, following brief talks between JD Vance and an Iranian faction, leading to oil market escalation.

Markets (1)

  • John highlights a map from Rory Johnson showing a significant redirection of Very Large Crude Carriers (VLCCs) to the US Gulf, indicating a shift in oil market leverage towards the US amid global artery closures.

Trade (1)

  • China is curbing sulfuric acid exports starting in May in response to perceived US leverage, a move that threatens metal processing, phosphate fertilizers, and fiber production.

BTC Markets (2)

  • Marty and John observe Bitcoin's relative strength, trading around $71,800, acting as a risk-off asset during geopolitical and financial uncertainty, contrary to past liquidity crises.
  • John suggests a fractured, multipolar global order, where just-in-time supply chains falter and trust diminishes, creates an ideal environment for Bitcoin as a neutral, sovereign store of value.

Politics (1)

  • John theorizes the urgent meeting of Wall Street leaders with Treasury and Fed officials, ostensibly about Mythos' cybersecurity risks, might be a 'red herring' to discuss broader systemic financial issues.

Anthropic's $30B Ramp, Mythos Doomsday, OpenClaw Ankled, Iran War Ceasefire, Israel's Influence · Apr 10

  • Brad Gerstner credits Anthropic for choosing to sandbox Mythos rather than release it, establishing Project Glasswing, a 100-day coalition with 40 companies including Apple and JP Morgan to preemptively find and patch vulnerabilities. He argues this self-regulation shows market forces can coordinate with government without top-down mandates.
  • Anthropic cut off OpenClaw's access to flat-rate subscriptions, forcing users to its more expensive API, shortly before launching its own competing agent technology. Jason Calacanis views this as an anti-competitive move to ankle the leading open-source agent project.
  • Chamath Palihapitiya contends AI-generated code is still marginal for core enterprise systems, citing customers who rely on 60-year-old COBOL programmers and stating the long-horizon ability of models to build enterprise-grade software is still poor.
  • Anthropic's revenue run rate exploded from $1B at the end of 2024 to $30B by April 2026, driven by over a thousand enterprises paying over $1M annually. Brad Gerstner calls it the largest revenue explosion in tech history, evidence of a near-infinite TAM for intelligence.
  • Brad Gerstner states Anthropic and OpenAI are not gross margin negative; inference costs have plummeted 90% year-over-year and their small headcounts (2,500 at Anthropic) could lead to 'accidental profitability' as revenue outpaces compute spend.
  • David Sacks frames Anthropic's revenue explosion as justification for Silicon Valley's massive AI infrastructure bets, countering early-2025 bubble narratives and proving the foundational bet on intelligence scaling was correct.
  • The hosts debate where AI value will be captured. Sacks notes it's expanded from chips to hyperscalers to models, questioning if the application layer will be eaten by model companies or see its own explosion, citing Palantir as an early turbocharged example.
Also from this episode: (8)

AI & Tech (4)

  • Anthropic's new model Mythos autonomously discovered thousands of critical vulnerabilities, including a 27-year-old bug in OpenBSD firewalls and a 16-year-old bug in FFmpeg missed by 5 million automated scans.
  • David Sacks notes Anthropic has a pattern of coupling product releases with scare tactics, citing a 2024 blackmail study they prompted over 200 times to get a desired result. However, he grants the cyber risk from advanced coding models is likely legitimate and requires a pre-release patching period.
  • Chamath Palihapitiya dismisses the Mythos threat as theater, arguing sophisticated hackers could achieve similar exploits today with Opus and that truly patching all vulnerabilities would require shutting down the internet for years.
  • Jason Calacanis argues open-source models and agents like OpenClaw represent the biggest competitive threat to frontier AI companies, predicting they will capture 90% of token usage and undercut proprietary models.

War (1)

  • Regarding the Iran ceasefire, David Sacks praises the two-week pause and upcoming Islamabad talks as crucial to de-escalation, giving Trump credit for negotiating a halt to a conflict prone to dangerous escalation ladders.

Markets (1)

  • Brad Gerstner cites market resilience during the Iran conflict, with only a 5-7% drawdown on indices, as evidence investors trust Trump's 'destroy capabilities and get out' doctrine and see upside if Middle East and Ukraine deals are finalized.

Israel (1)

  • Chamath Palihapitiya argues Israel should be concerned about losing America as a steadfast ally if it doesn't help find a swift off-ramp, noting American public sentiment is turning against perceived Israeli over-influence on U.S. foreign policy.

Social Media (1)

  • Jason Calacanis highlights X's auto-translate feature as a transformative truth mechanism, enabling real-time, nuanced cross-border dialogue in languages like Japanese, Hebrew, and Russian that journalists often don't cover.