Autonomous Agents · April 2026

Building an Autonomous Product Team

What happens when you give AI agents the same structure as a real product org — PMs, engineers, QA, cross-team sync — and let them run sprints on their own.

The Idea

I wanted to see if AI agents could function like a product team. Not a chatbot that answers questions. Not a copilot that helps you write code. A full team — with roles, sprints, quality gates, feedback loops, and the ability to get better over time without being told how.

The product they're building is an intelligence system for my work as a venture investor. Four specialist agents analyze my meetings, research, relationships, and strategy. A web app renders their output. The "product team" builds, maintains, and improves all of it.

It took five major rewrites to get something that works. Here's what I learned.

The Journey: Five Rewrites

V1-V2: The Loop That Couldn't Ship

The first attempt gave each agent a 12-step loop: gather context, research, plan, build, review, deploy, repeat. It sounded reasonable. In practice, each loop took 2-5 hours, and 58-86% of that time was overhead: gathering context, re-researching things the agent already knew, reviewing the same work three times. Agents spent most of their time preparing to work rather than working.

Worse: deploy failures ate 30% of loops. Agents built things but couldn't get them to production. And when they did ship, quality was stuck at 4.5/10. Scores oscillated — one agent went 3, 5, 6, 7, then crashed back to 4. There was no mechanism to translate "this is bad" into "here's what to do differently."

Lesson: Process without learning is just expensive repetition.

V3: Sprints, Not Loops

The breakthrough was treating agents like a real product team. Instead of one big loop, each agent runs short sprints with four phases: Plan, Build, Verify, Sync. Each sprint takes about an hour. Seven agents run in parallel, like seven engineers working on different parts of the product at the same time.

The key innovations: agents produce their own UI specifications (no waiting for a separate frontend team), every third sprint is dedicated to research and self-improvement (not just feature work), and all phase sequencing is enforced by the system — agents can't skip steps or cut corners.

Build time went from ~40% to ~80% of each cycle. Agents started actually shipping.

Lesson: Structure an AI team the same way you'd structure a human team — short iterations, clear phases, parallel work.

V4-V5: Learning to Learn

Sprints shipped faster, but the output wasn't getting better. Each sprint started from scratch — yesterday's lessons were gone. The system was productive but amnesic.

V4 added a knowledge compound loop: after every sprint, the system distills what it learned into structured, searchable knowledge. Not raw notes — tagged entries with problem type, root cause, resolution, and severity. The 50th time an agent encounters "rendering breaks on mobile," it finds the answer in seconds because the first 49 instances are catalogued.
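A minimal sketch of what such a catalogue could look like, assuming a Python implementation. The schema (problem type, root cause, resolution, severity, tags) comes from the description above; the class and method names are illustrative, not the system's actual code:

```python
from dataclasses import dataclass, field

@dataclass
class KnowledgeEntry:
    # Fields mirror the tagged schema described above; names are illustrative.
    problem_type: str      # e.g. "rendering"
    root_cause: str
    resolution: str
    severity: str          # e.g. "low" | "medium" | "high"
    tags: set[str] = field(default_factory=set)

class KnowledgeBase:
    def __init__(self):
        self.entries: list[KnowledgeEntry] = []

    def record(self, entry: KnowledgeEntry) -> None:
        self.entries.append(entry)

    def lookup(self, problem_type: str, *tags: str) -> list[KnowledgeEntry]:
        """Return prior entries matching the problem type and all given tags."""
        wanted = set(tags)
        return [e for e in self.entries
                if e.problem_type == problem_type and wanted <= e.tags]

kb = KnowledgeBase()
kb.record(KnowledgeEntry("rendering", "CSS grid unsupported on old WebKit",
                         "use flexbox fallback", "high", {"mobile", "ui"}))
hits = kb.lookup("rendering", "mobile")   # instant answer instead of rediscovery
```

The point of the structure is the lookup: because entries are tagged rather than free prose, the 50th "rendering breaks on mobile" becomes a query, not an investigation.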

V5 added cross-agent learning. When one agent discovers something relevant to another, it files a structured request. A dedicated quality process reads all agents' findings, spots patterns across the team, and calibrates scoring standards.

Lesson: Shipping is necessary but not sufficient. The system must learn from its own output — and share those learnings across the team.

Current: The Diamond System

The current system has seven agents running six-phase sprints with specialist identities, product-level reasoning, structured evals, and human feedback that compounds into better performance. It's the autonomous product team I originally envisioned — and it took five rewrites to get here.

But the most important change wasn't technical. It was recognizing that agents score their own work 8/10 while the actual quality is 2.5/10. The system was executing process perfectly without thinking about product. The fix wasn't better agents — it was giving the system the same judgment a real PM and engineering lead would bring.

Lesson: Autonomous doesn't mean unsupervised. The human's role shifts from doing the work to evaluating the output and setting direction.

How It Works Today

The system runs like a small product organization. Here's how it maps to a real team:

The Autonomous Product Team

- Roles: 7 specialists. 4 intelligence agents (comms, data, research, strategy) + 2 platform agents (frontend, infrastructure) + 1 innovation lab. Each owns their domain completely.
- Process: 6-phase sprints. Plan → Spec → Build → Review → Verify → Sync, enforced by the system. No shortcuts. Every 3rd sprint is dedicated to self-improvement.
- Quality: eval + feedback. A human evaluates output against the original vision. Scores get distilled into specific lessons that feed the next sprint. The system gets measurably better.

Machines as Microservices

Each agent is like a microservice in a software system. It owns its domain, maintains its own codebase, runs its own sprints, and communicates with other agents through structured interfaces — not ad-hoc messages.

When one agent needs something from another, it files a formal cross-request. When an agent finishes a sprint, it publishes a sync update that other agents can read. No agent reaches into another agent's domain. The boundaries are enforced, not suggested.
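Under the assumption that these "structured interfaces" are something like a shared message bus, the cross-request and sync-update flow could be sketched as follows. All names here are hypothetical, chosen for illustration rather than taken from the system:

```python
from dataclasses import dataclass, field
from collections import defaultdict

@dataclass
class CrossRequest:
    # Illustrative shape for a formal cross-agent request.
    from_agent: str
    to_agent: str
    ask: str

@dataclass
class SyncUpdate:
    # What an agent publishes at the end of a sprint.
    agent: str
    built: str
    needs: list[str] = field(default_factory=list)

class Bus:
    """Structured interfaces only: agents never touch each other's state."""
    def __init__(self):
        self.inbox: dict[str, list[CrossRequest]] = defaultdict(list)
        self.updates: list[SyncUpdate] = []

    def file_request(self, req: CrossRequest) -> None:
        self.inbox[req.to_agent].append(req)

    def publish(self, update: SyncUpdate) -> None:
        self.updates.append(update)

    def pending(self, agent: str) -> list[CrossRequest]:
        return self.inbox[agent]

bus = Bus()
bus.file_request(CrossRequest("comms", "frontend", "expose meeting-obligation feed"))
bus.publish(SyncUpdate("comms", "obligation detector v2", needs=["frontend"]))
```

Because every interaction goes through the bus, the boundary is a property of the architecture rather than a rule agents are asked to follow.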

This means each agent can evolve independently. If the communications agent needs a better way to detect meeting obligations, it researches, builds, and deploys the upgrade in its own sprint cycle — without coordinating a "release" with the other six agents. Just like microservices.

The Six-Phase Sprint

1. Plan

The agent reads user feedback, cross-team requests, and its accumulated knowledge. It decides what to work on — not just what was asked for, but what actually moves the product forward. Product thinking, not task execution.

2. Spec

Before building, the agent writes a specification: what will change, what are the acceptance criteria, what could go wrong. This catches bad ideas before code is written — not after.

3. Build

The agent builds the feature, fix, or improvement. In the same phase, it also runs its actual intelligence work — processing data, generating analysis, producing output. Building the engine and driving the car at the same time.

4. Review

The agent reviews its own build against the spec. Did it actually address the plan? Does the code work? This is the engineering quality gate.

5. Verify

Independent quality check. The agent opens a browser, looks at the live product, and verifies the output renders correctly and meets the criteria. Not "does the code compile" but "does this actually look right."
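For illustration, the browser check can be reduced to matching rendered HTML against acceptance-criteria markers. A real verify phase would drive a headless browser; plain string checks stand in for that here, and the page content and markers are invented:

```python
def verify_render(html: str, criteria: list[str]) -> dict[str, bool]:
    """Check that each acceptance criterion (a required marker string)
    actually appears in the rendered page, i.e. "does this look right",
    not "does the code compile"."""
    return {marker: marker in html for marker in criteria}

# Hypothetical rendered page and acceptance criteria.
page = "<main><h1>Weekly Brief</h1><section id='obligations'>3 items</section></main>"
result = verify_render(page, ["Weekly Brief", "id='obligations'", "id='risks'"])
failed = [marker for marker, ok in result.items() if not ok]
```

A failed marker sends the sprint back rather than letting the agent declare victory on its own say-so.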

6. Sync

The agent publishes what it built, what changed, and what it needs from other agents. This is how the team stays coordinated without meetings.
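The "enforced, no shortcuts" sequencing of these six phases can be sketched as a small state machine; this is a hypothetical Python rendering of the idea, not the system's actual code:

```python
from enum import Enum, auto

class Phase(Enum):
    PLAN = auto()
    SPEC = auto()
    BUILD = auto()
    REVIEW = auto()
    VERIFY = auto()
    SYNC = auto()

class Sprint:
    """Phase ordering enforced in code, not suggested in a prompt:
    an agent cannot enter a phase until the previous one completed."""
    ORDER = list(Phase)  # definition order: PLAN .. SYNC

    def __init__(self):
        self.done: list[Phase] = []

    def enter(self, phase: Phase) -> None:
        expected = self.ORDER[len(self.done)]
        if phase is not expected:
            raise RuntimeError(f"must run {expected.name} before {phase.name}")
        self.done.append(phase)

sprint = Sprint()
sprint.enter(Phase.PLAN)
blocked = False
try:
    sprint.enter(Phase.BUILD)   # trying to skip SPEC is rejected
except RuntimeError:
    blocked = True
sprint.enter(Phase.SPEC)        # the correct next phase succeeds
```

The difference from a prompt instruction is that the skip is impossible, not merely discouraged: the exception is raised by the harness, regardless of what the agent decides.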

The Feedback Loop That Makes It Compound

The most important part of the system isn't any individual phase. It's the feedback loop that connects the end of one cycle to the beginning of the next.

The Compounding Loop

- Step 1: Output ships. Agents complete sprints. Features, fixes, and intelligence output go live on the product.
- Step 2: Human evaluates. I review the output. For each piece of feedback I gave: was it addressed? How well? What's still missing? I score it.
- Step 3: System learns. Scores get distilled into specific, actionable lessons. These flow into the next sprint's planning. The system gets better at the things that matter to me.
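One plausible shape for the distillation step, sketched in Python. The evaluation fields, the score threshold, and the example feedback are all assumptions made for illustration:

```python
from dataclasses import dataclass

@dataclass
class Eval:
    # Illustrative shape for one piece of human feedback on shipped output.
    item: str
    score: int      # 1-10, human judgment, not the agent's self-assessment
    note: str

def distill(evals: list[Eval], threshold: int = 6) -> list[str]:
    """Turn low scores into specific lessons for the next Plan phase.
    High scores produce no lesson; only gaps compound into guidance."""
    return [f"{e.item}: {e.note}" for e in evals if e.score < threshold]

history = [
    Eval("meeting brief", 8, "good structure"),
    Eval("risk summary", 3, "surface deal-specific risks, not generic ones"),
]
lessons = distill(history)   # fed into the next sprint's planning
```

Each round, the lessons list encodes the gap between what was produced and what was needed, which is exactly what the next Plan phase reads first.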

This is where it stops being "automation" and starts being an autonomous team. The system doesn't just execute instructions — it observes the gap between what it produced and what was needed, distills that gap into a lesson, and applies that lesson in the next round. Over 10 sprints, the gap narrows. Over 50, the system knows your standards better than you could ever write them down.

Seven Principles I Learned the Hard Way

5 major rewrites to get it right · 7 parallel agents in the current system · 6 phases per sprint · ~1 hour per sprint cycle

1. Enforce, Don't Suggest

If agents can skip a step, they will. Phase ordering, quality checks, and feedback reading are enforced by the system — not by instructions in a prompt. Instructions get diluted. Enforcement doesn't.

2. Agents Need Boundaries

Each agent owns exactly one domain. They communicate through formal interfaces, not by reaching into each other's work. Clear boundaries prevent the chaos that makes multi-agent systems fragile.

3. Self-Assessment Is Unreliable

Agents rate their own work 8/10 when human judgment says 2.5/10. Quality must be measured externally — by a separate verification phase and by human evaluation. Never trust a builder to grade their own work.

4. Learning Must Be Structured

Agents that write "lessons learned" in prose forget everything by next sprint. Lessons must be tagged, categorized, and injected into the next planning cycle automatically. Unstructured memory doesn't compound.

5. Ship Before Perfecting

The first two versions tried to build the perfect loop. V3 shipped a good-enough sprint system and iterated. Every version since has been an upgrade to a running system, not a redesign from scratch.

6. Humans Set Direction, Machines Execute

"Autonomous" doesn't mean "unsupervised." The human's job is vision, evaluation, and course correction. The machines' job is execution, learning, and compounding. Mixing these roles breaks both.

7. Throughput, Not Replacement

The goal is not to replace a product team. The goal is to give a solo founder the throughput of a team, with the quality bar set by the founder's judgment, not by the machines' self-assessment.

What This Means

A year ago, the idea of AI agents running a product team was science fiction. Today I have seven agents doing parallel sprints, shipping features, verifying their own output in a browser, learning from feedback, and getting measurably better over time.

It's not perfect. The system still needs human evaluation to stay on track. The agents still sometimes build the wrong thing. Quality gaps between what they think they produced and what they actually produced remain real.

But the trajectory is clear. Each rewrite made the system meaningfully more capable. The feedback loop means it improves without being redesigned. And the microservice architecture means individual agents can evolve independently.

The question isn't whether AI agents can function as a product team. They can. The question is how fast the gap between agent judgment and human judgment closes. Every sprint, it gets a little smaller.

The real unlock isn't that agents can code. It's that agents can plan, prioritize, evaluate, learn, and coordinate — the same things that make human teams effective. Structure turns raw capability into compounding output.