04-02-2026

The Frontier

Your signal. Your price.

AI & TECH

Block cuts software teams as AI agents merge code

Thursday, April 2, 2026 · from 3 podcasts, 4 episodes
  • Block cut 40% of its development staff because AI broke the link between employee numbers and software output.
  • Coding is shifting from a manual task to a system where humans manage fleets of autonomous agents that write and ship code.
  • Organizations spend 93% of AI budgets on infrastructure, neglecting the training needed for this workforce shift.

Companies are systematically replacing software developers with autonomous AI agents, and the transition is deeper than automating simple tasks. At Block, the change was structural: 14-person feature teams were replaced by squads of one to six people, and developers are no longer writing code by hand.

“We’re not writing code by hand anymore,” said Block executive Owen Jennings on The a16z Show. “That’s over.” Instead, humans manage what Jennings describes as a background fleet of 10 or 20 agents. The company’s internal BuilderBot autonomously writes, tests, and merges code, often completing 85-90% of a feature before a human reviews the remainder.

Owen Jennings, The a16z Show:

- There's been this correlation between the number of folks at a company and the output from the company for decades and decades.

- I think that basically broke.

This shift demands a new kind of operational skill, not just better models. Nathaniel Whittemore reported on The AI Daily Brief that most organizations are flying blind, spending 93% of their AI budgets on infrastructure while allocating a mere 7% to training the people who must now work with these systems. The result is a massive capability overhang where AI’s potential is trapped by human bottlenecks.

The future of work lies in programming agents, not applications. Nufar Fargas Bar explained that the building blocks are becoming portable 'skills': markdown folders of instructions and scripts that agents can execute across more than 44 tools. These skills, which have a half-life of about one month, are replacing static internal wikis and becoming the new organizational infrastructure.

Nufar Fargas Bar, The AI Daily Brief:

- Skills are basically folders that contain instructions, scripts, and resources that give AI tools and agents actionable playbooks.

- They are human readable, there is no proprietary format, and you can just take them between tools.
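In practice, a skill is just a directory. A minimal sketch of what one might look like on disk (the folder, script, and file names here are illustrative, not taken from the episode):

```
meeting-prep/
├── SKILL.md              # trigger, playbook steps, output format, gotchas
├── scripts/
│   └── fetch_calendar.py # helper the agent can execute
└── reference/
    └── brief-template.md # long examples live outside the main skill file
```

Because everything is plain markdown and scripts, the same folder can be dropped into any tool that supports skills.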

Independently, individuals are mirroring this corporate shift. Shubham Sabu runs a team of six AI agents on a Mac Mini, treating them like a startup staff with shared memory and weekly self-reviews. This move from prompt engineering to workforce management underscores the scale of the change. The role of the software engineer is evolving from builder to context-manager for autonomous systems.

The transition leaves a stark readiness gap. While forward-thinking companies like Block are already restructuring, Whittemore’s data shows eight out of ten enterprise functions scored poorly on data maturity, and most departments are significantly behind on the ‘people’ pillar. The companies that survive won’t be those with the most engineers, but those who can turn their unique data and processes into agentic workflows.

By the Numbers

  • 44+ companies supporting skills
  • 500 max lines per skill
  • 10-15 active skills for dispatcher use
  • one month: skill re-evaluation frequency
  • 72% of customer service leaders reporting adequate AI training
  • 55% of customer service employees disagreeing on training adequacy

Entities Mentioned

0xchat (Product)
Anthropic (Company)
BuilderBot (Concept)
Cash App (Product)
ChatGPT (Product)
Claude (Model)
Codex (Model)
Cursor (Concept)
GitHub Actions (Tool)
Grok (Product)
Notion (Company)
OpenClaw (Framework)
Opus (Model)
Perplexity Computer (Concept)

Source Intelligence

What each podcast actually said

Agent Skills Masterclass · Apr 2

  • Nufar Fargas Bar defines agent skills as folders holding instructions, scripts, and resources that provide AI tools and agents with actionable playbooks for tasks.
  • Agent skills operate in two modes: agents can automatically discover and invoke them, or humans can manually trigger them using slash commands or verbal cues.
  • Skills are portable markdown files, resolving the lock-in problem of custom GPTs or GEMs within specific platforms like ChatGPT or Gemini Enterprise.
  • Nufar Fargas Bar states that over 44 major companies, including OpenClaw, Cursor, WinSurf, GitHub, and Notion, currently support agent skills.
  • Third-party skills can execute malicious scripts with agent permissions; users must verify sources carefully, treating them like any software installation.
  • Nufar Fargas Bar recommends building a skill when a task is repeated more than three times, requires constant instruction pasting, or demands consistent output.
  • Skills offer opportunities to standardize work processes across an organization and unlock new capabilities previously limited by human bandwidth or know-how.
  • Anthropic's Claude provides a skill creator tool that interviews users to extract expertise, runs evaluations, and performs A/B testing and benchmarking.
  • The most critical part of a skill is its 'trigger,' an explicit instruction telling the AI tool when to discover and activate the skill.
  • Skill instructions should favor numbered steps or bulleted lists in a playbook style, as AI tools prefer structured formats over prose.
  • For fragile tasks like database migration, skills should be prescriptive; for creative tasks, they should offer guidance while allowing model creativity.
  • Effective skills include an explicit output format, ideally with a concrete example such as a template, table headers, or document structure.
  • The 'gotcha' section in a skill is high-signal content, detailing common errors or incorrect assumptions a model might make, based on past failures.
  • Nufar Fargas Bar advises keeping skills under 500 lines, treating them as playbooks, not encyclopedias, to avoid monolithic structures.
  • Reference materials and long input/output examples should reside in separate files within a skill's folder, not crammed into the main skill file.
  • Nufar Fargas Bar illustrates a 'Meeting Prep Skill' that identifies attendees, analyzes agendas, runs scenario analysis, and generates a brief for users.
  • The 'Meeting Prep Skill' includes 'gotchas' to prevent assuming attendee seniority, fabricating details, or skipping 'what could go wrong' analysis.
  • The 'Research with Confidence' skill includes built-in fact-checking, source comparison, and confidence scoring to deep dive into suspicious findings.
  • A 'Devil's Advocate' skill systematically stress tests proposals, explicitly looking for human and AI blind spots and biases to provide constructive feedback.
  • A 'dispatcher skill' acts as a meta-skill or traffic controller, routing user requests to the most relevant skill, especially with 10-15+ active skills.
  • Agentic loops allow skills to create iterative processes (check, act, re-check), useful for non-technical tasks like optimizing marketing campaigns.
  • Organizations are using skills to streamline work, standardize processes, and bundle organizational knowledge into portable artifacts for humans and agents.
  • The organizational skill lifecycle includes discovery, curation, validation, packaging into plugins, and clear ownership with regular review and deprecation.
  • Nathaniel Whittemore observes that AI infrastructure primitives like skills have shorter half-lives and require constant upkeep, not one-off development sprints.
  • Nufar Fargas Bar suggests re-evaluating skills monthly, as their relevance and associated context can become stale quickly in the rapidly changing AI landscape.
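The dispatcher pattern described above can be sketched in a few lines. This is a hypothetical illustration, not OpenClaw's or Anthropic's implementation; the skill names and trigger phrases are invented:

```python
# Minimal sketch of a "dispatcher" meta-skill: route a user request to the
# most relevant skill by matching each skill's explicit trigger phrases.
# Skill names and triggers are hypothetical examples.

SKILLS = {
    "meeting-prep": ["prep for my meeting", "meeting brief", "who is attending"],
    "research-with-confidence": ["deep dive", "fact-check", "verify sources"],
    "devils-advocate": ["stress test", "poke holes", "what could go wrong"],
}

def dispatch(request):
    """Return the first skill whose trigger appears in the request, else None."""
    text = request.lower()
    for skill, triggers in SKILLS.items():
        if any(trigger in text for trigger in triggers):
            return skill
    return None  # fall through to the base model with no skill loaded

print(dispatch("Can you prep for my meeting with the finance team?"))  # → meeting-prep
```

With 10-15+ active skills, this kind of traffic-controller layer keeps individual skill files small while still letting one entry point cover the whole library.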

Introducing Maturity Maps — A New Way to Measure AI Adoption · Apr 1

  • Nathaniel Whittemore argues existing AI benchmarks like Gartner's Magic Quadrant are nearly useless for assessing AI application development platforms.
  • The AI maturity map framework assesses organizations across six categories: deployment depth, systems integration, data, outcomes, people, and governance.
  • Whittemore reports a dominant Q2 finding was high claimed AI adoption but low depth and utilization, creating an applied capability overhang.
  • Whittemore cites a study where 72% of customer service leaders said AI training was adequate, but 55% of employees disagreed.
  • Most HR organizations are not proactive in upskilling, with more than two-thirds of HR staff reporting a lack of proactive effort.
  • Seven out of ten enterprise functions scored a one, significantly behind, in the people category of AI maturity.
  • Deloitte research found 93% of AI spend goes to infrastructure, with only 7% allocated to people-related aspects.
  • Eight of ten enterprise functions scored a 1 or 1.5 on data maturity, indicating it is a floor constraint for AI value.
  • Whittemore says actual evidence for AI ROI is thin because organizations prioritized rapid adoption over measurement.
  • Customer service was rated on-track for deployment depth and systems integration due to focused solution development.
  • 87% of customer service workers report high stress, and 75% of leaders acknowledge AI may be increasing that stress.
  • Only 54% of IT organizations have centralized AI governance frameworks, and 50% of AI agents are unmonitored.
  • 88% of organizations have had AI security incidents, according to data cited by Whittemore.
  • 88% of sales teams claim to use AI, but only 24% have it integrated into actual revenue workflows.
  • Only 23% of operations groups have a formal AI strategy, with much investment being in legacy automation infrastructure.
  • Finance is the only non-technical function rated on-track on a maturity pillar, specifically for governance, due to regulatory requirements.
  • 69% of CFOs report having advanced or established AI risk governance frameworks.
  • The Q2 maturity maps incorporated data from more than 480 studies and surveys from the last quarter.
  • Combined survey respondent bases for the maturity maps exceeded 150,000 professionals across more than 50 countries.

What Happens When a Public Company Goes All In on AI · Apr 1

  • In 2024, Block was early to agentic development with Goose, the first agent harness known to Owen Jennings.
  • Owen Jennings argues a binary shift occurred in late November and first week of December 2025 with models like Opus 4-6 and Codex-5-3.
  • Jennings claims the decades-long correlation between company headcount and output broke in the first week of December 2025.
  • Block's reduction in force was slightly greater than 40%, with the deepest cuts on the software development side.
  • Owen Jennings states Block is not writing code by hand anymore, calling that era over.
  • Principles for Block's RIF were reliability, maintaining regulatory trust, and continuing to drive durable growth.
  • Block did not touch its compliance and compliance technology teams during the restructuring to avoid regulatory risk.
  • Block reduced the number of internal meetings by roughly 70% to 80%, freeing up time to build.
  • The company now operates with squads of one to six people, a shift from larger, functionally siloed teams.
  • Jennings reports Block cut management layers on the development side by 50% to 60% and has only two to three layers on the product side.
  • At Block, all designers and product managers are now shipping code pull requests, not just engineers.
  • Block's internal tool BuilderBot autonomously merges pull requests and builds features, often completing 85-90% of the work.
  • On customer support, Block's chatbots and AI phone support now automate a majority of inquiries.
  • Jennings believes models and agents will do a better job than humans at deterministic workflows, with a human-in-the-loop required for now.
  • From a business unit structure, Block functionally reorganized about 18 months ago, with all engineering, design, and product under single leaders.
  • Block's agent harness Goose is model-agnostic, capable of running on about 120 different models.
  • Products like MoneyBot and ManagerBot are built on top of the Goose platform.
  • Owen Jennings states generative UI is here, moving from static interfaces to apps that look different per user.
  • ManagerBot can generate custom applications, like a scheduling app for a restaurant, not contained in the app's original source code.
  • Block invests in proactive intelligence, prompting customers with relevant financial insights instead of relying on user-initiated prompts.
  • Block's future vision involves building world models of its business and customers to iteratively improve with autonomous agentic systems.

Also from this episode:

Markets (1)
  • Cash App, first monetized in 2016, now represents roughly 60% of Block's overall gross profit.
Philosophy (2)
  • For long-term defensibility, Jennings argues the biggest moat will be a company's deep, hard-to-understand insight into a specific domain.
  • He contends companies lacking a unique, deep understanding of something risk being 'vibe coded' away by AI-powered competitors.

The 5-Step Framework for AI Agents That Improve While You Sleep | E2269 · Mar 31

  • Claude and Perplexity Computer have adopted features inspired by OpenClaw, such as adding a skills system.
  • Shubham Sabu runs a team of six OpenClaw agents on a dedicated Mac Mini to automate all his work outside his job at Google.
  • Sabu recommends starting OpenClaw in a sandboxed cloud environment for $5-10, then moving to a dedicated machine for autonomy and privacy.
  • Giving an agent its own clean machine, like a Mac Mini, provides flexibility to change files and use browsers that sandboxed environments restrict.
  • Naming agents after characters from shows like Friends creates a mental model that helps humans manage different agent personas and roles.
  • Onboarding an AI agent requires the same specificity as onboarding a human employee, not dumping excessive context or providing none.
  • Having an agent interview the user before a task can raise completion accuracy from 70-80% to near 100% by eliminating guesswork.
  • OpenClaw agents can autonomously decide where to store user information, creating files like user.md for identity without explicit instruction.
  • Putting agents on cron schedules enables autonomous work, like having one scan news sources at 8 AM and another draft posts at 9 AM.
  • As teams of agents scale, a shared memory layer is critical so feedback given to one agent, like stylistic preferences, applies to all.
  • Google's Vertex AI Memory Bank and startups like Memzero and Cogni offer agent memory solutions that auto-capture and recall information.
  • Agents can self-improve by conducting weekly reviews of their own performance, analyzing what worked, and automatically updating their instructions.
  • A managerial agent can bi-weekly review and grade subordinate agents, sending performance reports to the human operator.
  • Mold World is a voxel-based simulation where nearly 2000 AI agents can connect, interact, and form teams to build structures.
  • In Mold World, some agents exhibit emergent behavior, realizing they are in a simulation but choosing to continue for in-game token rewards.
  • Mold World's long-term vision is a distributed agent network where underutilized agents compete to solve real-world tasks for economic value.
  • AgentMail is an API-first email service designed for AI agents, solving the problem of free Gmail accounts banning bot-like users.
  • Enterprise customers use AgentMail to automate email-heavy processes in decentralized marketplaces like logistics procurement and influencer hiring.
  • An estimated 54-60% of Japan's population uses X, creating a massive cross-cultural exchange as Grok's real-time translation surfaces Japanese content globally.
  • Real-time translation on X enables global cultural moments, like Americans discovering Japanese viral stories about citizens turning in found marijuana.

Also from this episode:

Startups (2)
  • OpenClaw founder Dave Morin pursues the project as an important open-source initiative for the AI agent ecosystem.
  • AgentMail raised a $6 million seed round led by General Catalyst after participating in Y Combinator's Summer 2025 batch.
Media (2)
  • Jason Calacanis argues founders should avoid mainstream press like the New York Times and Wired, favoring direct communication via podcasts and social media.
  • Calacanis claims trust in media is at an all-time low, and advocacy journalism at major outlets uses anonymous sources to fit predetermined narratives.