Stage 2: Organize
If Stage 1 is where you form the right question, Stage 2 is where you answer it — function by function, accountability by accountability, before a single agent is deployed.
This is the most operationally demanding stage, because it asks the organization to do something that most companies have never done: make explicit decisions about who is accountable for what, and whether that accountability belongs to a human, an AI agent, or a designed combination. That work feels uncomfortable. There is no software that does it for you. It requires leaders who are willing to sit with ambiguity long enough to make a real design decision.
The organizations that do this work build the foundation that makes every subsequent stage faster and cheaper. The organizations that skip it are back in the headcount paradox by Stage 3.
Accountability First, Then Data
The traditional instinct in Stage 2 is to get organized: document everything, build a prompt library, standardize the team's AI usage, create training materials. This is not wrong. Structure matters. But the structure you need to build first is not a documentation system. It is a workforce architecture.
Before you organize your data, you have to organize your accountability. You cannot design a useful knowledge system until you know who is using it, for what purpose, and whether that user is a human or a machine. You cannot design an agent team until you know what accountability the agent team owns and who supervises it. All of the organizational work that Stage 2 traditionally asks for — data hygiene, process standardization, capability building — is downstream of the fundamental design question: who owns what?
The Hybrid Accountability Chart
The Hybrid Accountability Chart is the primary deliverable of Stage 2. It is a named, teachable instrument in the Compound methodology, and it deserves its own section here because it is unlike anything in the traditional management toolkit.
If you run on EOS, you have an Accountability Chart — a visual map of who owns what in the organization, with a single name against each accountability. The Hybrid Accountability Chart extends this concept to include AI agent teams alongside human roles.
For each function or accountability in the business, the Hybrid Accountability Chart answers four questions:
- What role or function does this team perform?
- What is the name of the agent team, if applicable?
- Who is the human supervisor?
- Is this accountability AI-assisted — meaning a human is in the loop reviewing every output — or is it fully automated, meaning the agent executes and the human reviews results on a schedule?
Every agent team has a human supervisor. There are no unowned teams. The chart makes the design decision explicit and visible before anything is built.
The chart looks like this in practice:
| Role / Function | Agent Team Name | Human Supervisor | AI-Assisted or Automated |
|---|---|---|---|
| Sales Quoting | Quote Generation Team | Sales Director | AI-Assisted |
| Content Production | Content Ops Team | Marketing Lead | Automated |
| Client Onboarding | Onboarding Agent | Ops Manager | AI-Assisted |
| Financial Reporting | Reporting Team | CFO | AI-Assisted |
The specific agent teams and their level of automation are design decisions — not technology decisions. They depend on the stakes involved, the maturity of the data, and the organization’s comfort with agent-level judgment in that function. High-stakes decisions with ambiguous inputs call for a human in the loop. Repetitive, well-defined work with clean data is a candidate for automation. Most accountabilities start AI-assisted and move toward automation as the design matures.
This is not a soft “humans and AI work together” platitude. It is a specific organizational decision that needs to be made, documented, and owned by someone. The Hybrid Accountability Chart is what makes that decision concrete and visible.
The four questions are not a template to fill out once and forget. They are a design discipline: every single agent team needs a human supervisor, in the same way every employee needs a human supervisor. The chart is what makes that explicit — and what ensures there are no teams, human or AI, operating without named accountability.
The chart is built in the Design phase of the Compound Sprint, for the specific accountability being worked on in that sprint. Over time, as the organization runs more sprints, the chart fills in. Eventually it becomes a full picture of the organization’s Human+AI workforce — not just the human org chart, but the whole operating architecture.
Each row you add to this chart is a structural decision about how your organization operates — and what you are building, row by row, is the Orchestrated Organization itself. Not a finished state arrived at all at once, but a picture that becomes more complete and more deliberate with each sprint. The chart is what makes the Orchestrated Organization visible. Without it, you have a set of agent deployments. With it, you have a designed workforce.
Before You Organize Data, Organize Accountability
The traditional impulse in Stage 2 is to audit your data: what do you have, where does it live, what naming conventions are you using, what is the quality? That work is real and will need to be done. But it is in the wrong sequence.
The reason data audits fail to produce useful AI infrastructure is that they are conducted without a design question. You inventory your data without knowing what decision it needs to support, which agent team will be using it, or what format that agent team needs it in. So you end up with a clean, well-organized data library that the agent team cannot actually use — because the data was organized for humans, not designed for the agent’s specific accountability.
Accountability design precedes data design. Once you know which accountability you are building for, you know which data matters. You know who the user is. You know what format the agent needs. The Source phase of the Compound Sprint — the phase immediately after Signal — is where this data mapping happens, in service of a specific design decision already made. It is not a general-purpose audit.
Signal → Source → Design: Act One of the Sprint
Stage 2 is where the first act of the Compound Sprint becomes operational. Act One — Signal → Source → Design — is the diagnostic and design sequence that every sprint begins with. The full Sprint is covered in its own section later in this book; what matters here is the sequence and why it matters in Stage 2.
Signal is finding the real constraint: the specific operational problem that costs the most in dollars, time, or margin. Not the symptom. The root cause, quantified. The discipline of Signal is in refusing to move to Source until the constraint is named precisely and validated against real data. Organizations that skip Signal spend the rest of the sprint solving the wrong problem.
Source is mapping the knowledge and data infrastructure that currently exists around the constraint. What do you know? Where does it live? What is missing? This phase makes the invisible visible — including the data gaps that would cause an AI deployment to fail if they were not addressed before building.
Design is laying out the Human+AI workflow. This is where the Hybrid Accountability Chart gets built for the specific accountability being addressed. Who owns the outcome? What does the agent team execute? Where are the handoffs? What does the human supervise? The output of the Design phase is a complete workflow specification — specific enough that the Build phase can execute against it without ambiguity.
That discipline — nothing built until Design is locked — is not bureaucratic. It is the reason Act Two produces results when other approaches produce nothing. You cannot build what has not been designed.
Warning Signs You Are Stuck Here
The warning signs in Stage 2 are different from Stage 1. The organization is not failing to engage with AI; it is building the wrong things. You are stuck in Stage 2 if:
- Your team is building automations and agent workflows without a completed Design phase, meaning they are making it up as they go.
- You have a prompt library and a data audit but no Hybrid Accountability Chart.
- Your champion is building training materials when they should be facilitating design sessions.
- New AI deployments start with "which tool should we use?" rather than "what constraint are we solving?"
These are signs of task-orientation bleeding into Stage 2. The frame has not fully shifted. The organization is still asking “how do we use AI?” rather than “what are we designing, and for whom?”
Ready for Stage 3?
You are ready to move to Stage 3 when you have completed at least one full sprint through the Design phase — meaning you have a Hybrid Accountability Chart entry for at least one accountability, with an agent team named, a human supervisor assigned, and the workflow designed. You have also done the Source work for that accountability, so you know the state of your data infrastructure for that function.
You do not need a complete Hybrid Accountability Chart for the entire organization. One completed, well-designed entry is enough. That is the foundation. Stage 3 is where humans learn to direct the teams described in that chart.