
Anthropic grew its annual recurring revenue from $1 billion to $19 billion in 14 months, maintaining a 10x year-over-year growth rate.
Anthropic's growth team is structured into horizontal groups like growth platform and monetization, and vertical audience-focused pods for B2B, Claude Code, knowledge workers, and API growth.
The team dedicates roughly 70% of its effort to firefighting 'success disasters' caused by rapid scaling and 30% to proactive growth strategy and optimization.
Amole argues that in AI-first companies, growth strategy should skew toward large, transformative bets rather than small optimizations because future product value grows exponentially.
Anthropic's 'CASH' initiative uses Claude to automate growth experimentation by identifying opportunities, building features, testing, and analyzing results, achieving a win rate comparable to a junior PM.
Anthropic prioritizes activation flows with intentional friction, asking users about their interests to route them to the right product, which they found drives higher conversion than minimal-friction signups.
Amole advocates adding productive friction to onboarding flows, citing successful tests at Masterclass and Mercury where quizzes or multi-step forms increased user comprehension and long-term revenue.
AI's leverage is currently greatest for engineers, straining PM and designer ratios; Anthropic's growth team addresses this by having engineers act as 'mini PMs', owning any project scoped at under two weeks of work.
Amole uses Claude and Co-Work to automate managerial tasks like identifying team misalignment, summarizing key metrics, and generating self-critiques modeled on his manager's feedback style.
Anthropic's early strategic focus on AI coding was driven by a dual belief in its commercial potential and its ability to create a feedback loop accelerating their own AI research.
The company's culture of openness includes internal 'notebook' channels where employees, including leadership, share thoughts publicly, which Amole believes scales beliefs and aids AI agents with context.
Amole advises PMs to double down on their unique interdisciplinary spikes, like combining finance or sales with product skills, to maintain a competitive edge in an AI-augmented workplace.
Anthropic operates as a Public Benefit Corporation, legally prioritizing public benefit over shareholder value maximization, which informs growth decisions to forgo controversial tests for safety.
Simon Willison identifies November 2025 as an AI inflection point when GPT-5.1 and Claude Opus 4.5 crossed a threshold to become reliable coding agents.
Willison says 95% of the code he now produces is typed by AI agents, not by himself.
AI-powered 'vibe coding' enables non-programmers to build prototypes by describing what they want, democratizing basic software creation.
Willison distinguishes professional 'agentic engineering' from amateur vibe coding, arguing the former requires deep software engineering experience to deploy safely.
The 'dark factory' pattern describes fully automated software production where no human reads the code, only reviewing outputs from simulated tests.
StrongDM spent $10,000 daily on tokens to run a 24/7 swarm of AI agents simulating end-users for testing their security software.
AI models are now credible security researchers; Anthropic discovered and responsibly reported around 100 potential vulnerabilities in Firefox.
Willison finds that using four coding agents in parallel is mentally exhausting, often leaving him cognitively wiped out by 11 a.m.
He argues AI amplifies the skills of senior engineers and accelerates junior engineer onboarding, but creates uncertainty for mid-career professionals.
Cloudflare and Shopify hired 1,000 interns in 2025 because AI assistants reduced their onboarding time from a month to a week.
The core challenge of AI is that code generation is now cheap, forcing a rethink of software development processes and bottlenecks.
Willison advocates 'red/green TDD' as a prompting technique: instruct coding agents to write the tests first, run them to confirm they fail (red), then implement code until the tests pass (green).
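The red/green loop can be sketched in miniature; `slugify` here is a hypothetical example function, not one from the episode, and the sketch just shows the order an agent is asked to follow:

```python
import re


# Step 1 (red): the agent writes the test before any implementation exists.
# Running it at this point would fail, since slugify is not yet defined.
def test_slugify():
    assert slugify("Hello, World!") == "hello-world"


# Step 2 (green): implement just enough code to make the failing test pass.
def slugify(text: str) -> str:
    """Lowercase text and collapse non-alphanumeric runs into single hyphens."""
    return re.sub(r"[^a-z0-9]+", "-", text.lower()).strip("-")


# Step 3: rerun the test to confirm the transition from red to green.
test_slugify()
print("green")
```

The value of the ordering is that the failing run proves the test actually exercises the code, so the agent cannot "pass" by writing a vacuous test.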
He recommends starting projects with a thin, opinionated code template so AI agents infer and adhere to preferred coding patterns.
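A thin template can be as small as one seed module plus one test, written in the style the agent should imitate; the file names, function, and style choices below are illustrative assumptions, not Willison's actual template:

```python
# app.py -- a deliberately tiny "seed" file whose only job is to show the
# conventions new code should copy: type hints, docstrings, pure functions.
def greet(name: str) -> str:
    """Return a greeting; every public function gets a one-line docstring."""
    return f"Hello, {name}!"


# test_app.py -- one passing pytest-style test kept next to the code, so the
# agent infers that every new function ships with a test in the same pattern.
def test_greet() -> None:
    assert greet("world") == "Hello, world!"


if __name__ == "__main__":
    test_greet()
```

Because agents pattern-match aggressively on existing code, even this much structure is usually enough to steer naming, typing, and test placement for everything they add afterward.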
Willison coined the term 'prompt injection' but regrets it, since it misleadingly implies a straightforward fix akin to parameterizing SQL queries; no such fix exists.
He defines the 'lethal trifecta' as any system where an AI agent combines access to private data, exposure to untrusted content that may carry malicious instructions, and the ability to exfiltrate data.
Willison predicts a 'Challenger disaster of AI' due to the normalization of deviance around unsafe AI usage, though it hasn't materialized yet.
He uses Claude Code for web rather than running agents locally, because agents sandboxed on Anthropic's servers cannot touch his own machine, containing the security risk.
Willison created the 'pelican riding a bicycle' SVG benchmark, finding a strong correlation between drawing quality and overall model capability.
He maintains public GitHub repos like 'tools' and 'research' as a hoard of proven code snippets and agent-run experiments for future reuse.
Data labeling companies are buying pre-2022 GitHub repositories to train models on purely human-written 'artisanal' code.