Work Registry
The feedback email came in at 11:31 AM. I saw it in my inbox, but an automation had already created a Linear issue from it. It’s triggered whenever this specific email address is used—the one I tell everyone to send feedback to. It creates the issue in Linear Triage, a dedicated staging area before the issue is routed to its next destination. Each issue is either:
- Accepted and moved to a list.
- Declined and canceled.
- Marked as a duplicate.
- Deleted outright.
This particular issue was a regression. I knew the culprit immediately. We made a change to hide functionality from anonymous users, which inadvertently hid it from certain organization accounts as well. I started a Claude Code session to fix the regression. It identified the issue, I confirmed it, and the Pull Request (PR) was up within minutes. Our build process ran automatically on the PR: it ran tests and code linters (style, formatting) and created a deployment URL. Each PR gets its own staging environment for testing. This is a massive improvement for rapid development.
I tested the fix on the deployment URL, re-reviewed the changes in the PR, and let another AI agent automatically review the code. It didn’t have any comments. I requested a review from the engineering team. It was approved, so I merged it in. It didn’t require full QA, so I skipped that step. The same build and deployment steps run again when merging it into the main branch, which deploys the change to production once it successfully completes. The change was live by 12:29 PM.
Less than an hour from feedback received to fix live in production.
The Linear timeline on the issue told the story of the entire process, too:
- Drew Barontini created the issue
- Linear assigned Drew Barontini
- Drew Barontini set to priority High and added labels (Fix, etc.)
- Drew Barontini moved from Triage to Queue
- GitHub moved from Queue to In Progress
- GitHub linked (title of PR)
- GitHub moved from In Progress to In Review
- GitHub moved from In Review to Done
You know what’s crazy? The only step above that wasn’t automated was step 3. Everything else was orchestrated by automations within Linear and other tools, like auto-creating the issue from the original email. And step 3 actually has some automation because of a Linear feature called Triage Intelligence that uses AI to suggest attributes (assignees, priority, labels, etc.) to apply to issues in Triage. I still manually apply these, but the suggestions speed up the process. I can set it to auto-apply the suggested attributes, but this is an area where I want to be the human in the loop. Defining the taxonomy of work is of critical importance because it supports all the ways you interact with the information.
If you’ll indulge me for a moment, I can illustrate the impact this has on a team’s output. I measured code throughput for our team for the five months I’ve been here against the prior five months before I started.
The results?
- 60% more code commits
- 5x higher engineering throughput
- 12x higher net code growth
There’s an art to managing work on a team. It starts with building the system of work.
This idea is called Work Registry, which lives in the Clarity Codex of the Claritorium and Value Creation of Equilio.
This idea is part one of a three-part series:
- Work Registry: How to make work visible, intentional, and actionable.
- Intelligence Orchestration: How to combine human judgment and AI leverage.
- Human Calibration: How to preserve shared understanding and quality judgment.
Let’s focus on the Work Registry, which is made up of five key parts of the process:
- Capture
- Classify
- Route
- Evolve
- Elevate
Capture
When I started my new role, I tried to find the source of truth for the work. I wanted to know where work was tracked, discussed, and adjusted as priorities change.
But I couldn’t find it—at least not really.
There was Jira. My complete disdain for Jira aside for a moment, it’s still a place you can track the work. Does it make it infinitely more painful and difficult to do so? Yes, but it’s still possible. So I looked there. The PM told me it was in Jira, but activity was low, the comments were mostly from said PM, and there were a lot of “ghosts” in the system—stale work with unknown status reduced to noise.
Systems are dynamic. A current moves through them like electricity, stimulating the flow of information as context evolves. When you reduce a system to a static entity, it erodes trust in the work. Nobody knows where to go to figure out what’s going on.
That was the biggest problem I noticed: work was lost. And priorities can’t exist when the work is invisible. There’s nothing to compare. “Should we work on this or that?” becomes an impossible question. If you want to figure out what to work on, you first need to know what you could work on. That requires a centralized system of data.
Capture is the first step in the Work Registry because, without context, nothing downstream can happen efficiently or effectively. The job of Capture is to make sure nothing valuable gets lost. It turns scattered signals into registered artifacts inside the Work Registry. What should you capture? Anything valuable: ideas, requests, bugs, tasks, feedback. If it feels important, add it.
Capture is where you collect information before classifying it. I mentioned we were using Jira before, but now everything lives in Linear. I made this choice for a number of reasons, but a particular one was how easy Linear makes it to create issues from anywhere. Capture needs to be frictionless so the entire team can take part in tracking all of these signals. Good decisions come from rich context, and Capture is how you start the context thread for all the work.
Principles
- Easy to add. Make adding work as easy as possible for anyone on the team.
- One door in. Work can come from anywhere, but it ends up in one place.
- Neutral by default. Nothing coming in is judged upon creation. That’s next.
Practices
- Add to Linear: Build a team norm of adding the work to your registry, no matter what system you choose. Make it a meme. If it’s work to do, it goes in Linear.
- Minimum Viable Ticket (MVT): Reduce the required information to a formatted title, clear description, and source for where the work originated. That’s it. Make it easy to create tickets for the work.
- Capture Automations: Figure out how to automate tracking work in every channel where you discuss work: email, meetings, chats, forms, support.
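The Minimum Viable Ticket can be sketched as a tiny data model. This is a minimal illustration of the three required fields, not Linear’s actual schema—the names here are my own:

```python
from dataclasses import dataclass


@dataclass
class MinimumViableTicket:
    """The three required fields for capture -- nothing more."""

    title: str        # formatted, scannable title
    description: str  # enough context to classify the work later
    source: str       # where the work originated: email, meeting, chat, form, support


def is_capturable(ticket: MinimumViableTicket) -> bool:
    """A ticket is ready to enter the registry once all three fields are filled in."""
    return all(field.strip() for field in (ticket.title, ticket.description, ticket.source))
```

The point of keeping the model this small is that anyone on the team can create a ticket in seconds; everything else gets layered on during Classify.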
Classify
Every issue added to Linear starts in Triage, a dedicated staging area like an email inbox for captured work. There are three techniques I use to improve this process:
- AI Summaries: When emails are automatically forwarded into Triage, I run them through an LLM (using an automation app called Relay) to synthesize and summarize the email thread at the top of the issue. This makes it easier to scan, label, and prioritize.
- Triage Intelligence: Linear’s Triage Intelligence feature suggests labels, assignees, projects, related issues, and priority levels. They can be auto-applied or manually selected. I love tagging things, so I manually apply them, but the suggestions speed up the process.
- Product Landscape: The Product Landscape is reflected as a set of label groups and labels, which create a clear taxonomy to identify each issue. The landscape makes it easy to identify what belongs where, and to apply the right labels to route the work to the right place.
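The email-to-Triage flow above can be sketched in a few lines. This is a hedged illustration of the shape of the automation, not the actual Relay or Linear integration; `summarize_thread` is a stand-in for the LLM call, and the payload keys are invented for the example:

```python
def summarize_thread(email_body: str, llm=None) -> str:
    """Stand-in for the LLM summarization step (the author uses Relay).
    Falls back to the first line of the email so the sketch stays runnable."""
    if llm is not None:
        return llm(f"Summarize this email thread in three bullets:\n{email_body}")
    return email_body.strip().splitlines()[0]


def build_triage_issue(email_subject: str, email_body: str) -> dict:
    """Assemble an issue payload with the AI summary pinned above the raw thread."""
    summary = summarize_thread(email_body)
    return {
        "title": email_subject,
        "description": f"**Summary**\n{summary}\n\n---\n\n{email_body}",
        "status": "Triage",  # every captured issue starts in Triage
    }
```

The summary sits at the top of the description so the issue is easy to scan, label, and prioritize without reading the whole thread.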
Linear contains everything, so there needs to be a clear classification for each issue in the system to determine where it goes. I use three layers of classification:
- Type: What kind of work is this?
- Landscape: Where does this live?
- Priority: How much attention does this deserve?
Note: A clear title and description is the prerequisite. Whether you’re handing off work to humans or AI agents, clarity through context matters. It’s often the difference between completing something and not.
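The three layers can be modeled as a small record. The type and priority values are the taxonomy from this article; the code shape itself is illustrative, not how Linear stores issues:

```python
from dataclasses import dataclass

TYPES = {"Idea", "Task", "Fix", "Improvement", "Bet"}
PRIORITIES = {"Urgent", "High", "Medium", "Low", "No Priority"}


@dataclass
class Classification:
    type: str       # What kind of work is this?
    landscape: str  # Where does this live? (region / zone from the Product Landscape)
    priority: str   # How much attention does this deserve?


def is_complete(c: Classification) -> bool:
    """A classification is intentional and complete only when all three layers are set."""
    return c.type in TYPES and bool(c.landscape.strip()) and c.priority in PRIORITIES
```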
Type
This layer isn’t about urgency or importance. It’s about the type of work.
Each item gets exactly one type:
- Idea
- Task
- Fix
- Improvement
- Bet
The type is critical, so let’s talk about each type and what the work looks like.
Idea
An idea is vague but directional. It’s often framed as a hunch, a question, or an observation. Ideas exist to be explored, not executed. They transform over time or fade out when they lose meaning.
Example: Add AI summaries to report pages
Task
A task is a discrete unit of effort. It’s not directly actionable in the product itself; it surrounds and supports the work. Tasks create clarity and enable progress. I often remind engineers that an issue can be a task to figure something out, to surface the unknowns.
I also label “task types” as descriptors:
- Design Task
- Strategy Task
- Research Task
Example: Define the first releasable iteration
This was an actual task I used as a sub-issue for a larger bet. I used Claude Code to take the comments and context on the Linear issue to review the code and draft an implementation plan. Oh, and it did this while I worked on other things. It’s not the end state, but it’s movement, motion, progress with directional velocity. I can then convert that energy with my own ideas, further conversations with the team, and sketching and ideating while creating working prototypes in code. It’s a wonderful experience as a Product Builder.
That’s a strategy task. You can also create design tasks, research tasks, or develop your own task types to build a rich taxonomy. I encourage you to do so. It increases the fidelity of your task definitions and holistic understanding of the work. And the tagging improves the context when working with AI.
Fix
A fix addresses known behavior that doesn’t work as expected. Bugs, broken flows, and regressions are fixes.
Example: Exporting a search throws a 500
Improvement
An improvement is a known effort to make something better. They are often tied to quality, usability, or performance. They can roll up into bets, and they are one of the primary artifacts of bets.
Example: Simplify the search filters layout
Bet
A bet is an intentional investment of time and attention, accepting trade-offs to create value in the product. There are uncertainties and unknowns to address. Bets are projects—self-contained bodies of work that live inside the Project Engine. And they spawn ideas, tasks, fixes, and improvements. They’re generative.
Example: Report AI Summaries
The earlier idea generated tasks, transformed into a bet, and spawned more tasks and improvements. Through the build, QA, and release, it generated more ideas, fixes, and improvements. It’s generative and cyclical.
Landscape
I discussed the Product Landscape extensively in Issue #73, so I won’t go too deep here. I’ll just reiterate the importance of mapping the landscape of your product to identify the regions, zones, and context. Doing so enriches the taxonomy in the Work Registry. Labeling every issue in Linear with the region, zone, and context is like neatly filing your important documents in a filing cabinet. It’s a system. You can then more easily find what you’re looking for when you need it. That’s exactly why the Work Registry is so critical. I put a lot of energy into the system of work because I know what it gets me in return: clarity.
Priority
The last layer of classification is priority. I’m not going to pretend there’s a simple formula to set priority. Sadly, it’s more art than science. The more you develop a feel for the work, the better and faster you get at setting priority. But don’t fool yourself: priority is judgement, not math. I will always trust someone with good intuition saying I think that’s high priority over whatever a priority matrix in a spreadsheet spits out. As AI evolves, your calculus becomes asking yourself where you, the human in the loop, should be involved.
Where is your unique leverage?
I manually prioritize every issue that comes through Triage. Why? Because it’s important, and each rep makes me better, like steadily increasing weight and repetitions when you’re improving your physical fitness. This is your mental fitness for intuition. And intuition is the key ingredient in strategic AI use without sacrificing critical thinking and quality.
There are, however, some questions you can ask yourself as you make the call:
- Pressure: What happens if we don’t act?
- Leverage: What changes if we do act?
- Timing: Why now?
Linear has five priority levels. They’re standard and easy to grasp, so I stick with them.
- Urgent (P0) — Do it now.
- High (P1) — Do it soon.
- Medium (P2) — Do it when you can.
- Low (P3) — Do it at some point (maybe).
- No Priority (P4) — Don’t do it.
Everything that moves out of Triage must have a priority. Even P4s are tracked. And there are few P2s and P3s. Linear will auto-close stale issues, which helps trim the list. You won’t forget major bugs or great ideas, so the natural contraction is good.
Priority changes. Sometimes I mark something as a P2, but then more signals surface to increase its priority. Work is meant to be dynamic, fluid, and progressive. The Work Registry is not just a system of record; it’s a living, breathing ecosystem of knowledge.
Principles
- Order creates clarity. You can’t decide how much something matters unless you know what it is and where it lives.
- Understanding, not commitment. Classifying work is about making it legible, not making a commitment to doing anything.
- Priority is judgement. Tune your intuition to make judgement calls about the priority of work. Don’t overthink it.
Practices
- The Triple-Layer Cake: When classifying an item, apply the three layers of type, landscape, and priority. Make the classification intentional and complete.
- The One-Sentence Test: If you can say “This is a [Type] in [Region / Zone] with [Priority] priority” and it makes sense, then the work is classified clearly. If it sounds wrong or confusing, adjust it.
- Decision Reclassification: If new information emerges, scope changes, or a new decision is made, then you should reassess the classification.
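The One-Sentence Test is literally a string template, which makes it easy to automate as a sanity check. A minimal sketch:

```python
def one_sentence_test(type_: str, region_zone: str, priority: str) -> str:
    """Render the classification as a sentence. If reading it aloud sounds
    wrong or confusing, the classification needs adjusting."""
    return f"This is a {type_} in {region_zone} with {priority} priority"
```

For example, `one_sentence_test("Fix", "Search / Export", "High")` reads cleanly, which suggests the item is classified well.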
Route
Classification provides structure, but the critical moment is putting the work in the right place. That’s exactly what you do when you Route the work. No matter what system I use for tracking work, I maintain the same lanes:
- Triage
- Backlog
- Queue
- In Progress
- In Review
- Done
Linear calls this lane the Backlog, which is standard nomenclature. It’s a loaded term, so I’m tempted to rename it to Radar. But given how pervasive “backlog” is, I stick with it for clarity’s sake. I don’t like the term because it’s become symbolic of a list of work you’re never going to get to. While some people view it as job justification—there’s endless work to do!—it can be demoralizing when you collect too much noise in the system. But there are mitigation strategies we’ll discuss soon.
When I’m triaging issues—ones not discarded already—they either go to the Backlog or the Queue. I use a simple test:
Do we need to complete this item this week?
Yes → Queue
No → Backlog
When a P0 issue comes in, it will often bypass the steps and go straight to In Progress.
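The routing rule above is simple enough to write down directly. A sketch, using the lane and priority names from this article:

```python
def route(needed_this_week: bool, priority: str) -> str:
    """Time-Horizon Test: P0s bypass the lanes entirely; for everything else,
    the one question is whether the item must be completed this week."""
    if priority == "Urgent":  # P0 goes straight to active work
        return "In Progress"
    return "Queue" if needed_this_week else "Backlog"
```

Keeping the rule this small is the point: a consistent, almost mechanical test keeps routing decisions fast and predictable.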
The type of work held in each lane is:
- Backlog for the list of someday and maybe items. I leave a lot of ideas and bets here to marinate and develop over time.
- Queue for the list of work this week. You can expand the time horizon beyond a week, but I’ve found it to be the right unit of time to constrain the list. And it should be right-sized to your team’s capacity. You can track how much work moves through the Queue in a week. Everything in the Queue should be assigned to someone, and only one person per issue.
- In Progress for only the issues actively being worked on. When something is blocked, move it back to the Queue or the Backlog, depending on the priority. Be mindful of how much work is in flight at one time. Match your team’s capacity. A good rule of thumb is to keep each person on no more than one In Progress issue at a time. I break the rule frequently, but for engineers, technical review still counts as active work. It’s important to see a task through to completion without splitting focus across work.
- In Review covers work in technical review and whatever additional QA process your team has. We add a “QA Review” label to issues in review that are being QA-ed, which pulls those issues into its own dedicated view outside technical review.
- Done is for completed work. Linear has a GitHub automation that automatically updates the issue status as work moves through the pipeline. For tasks, it’s marked complete manually.
Done is the ideal end state for work, but you also need to route specific types of work to the right locations:
- Canceled when it’s not valid.
- Duplicated when it’s already tracked.
- Deleted when it’s unnecessary noise.
Linear has these mappings built into the product as part of Triage. And Triage really is the routing system. Once you classify the work, you move it to the right place.
Determining who and how to assign work is a big topic. It’s not easy. Classification helps route the work, but you need to understand your team’s skills, domain knowledge, and the highest and best use for each person. I have a weekly meeting with the VP of Engineering to discuss this. It’s nuanced. Lean into the nuance and human intuition. The more reps, the better you get. It’s not an exact science.
Principles
- Awareness, not action. Routing is about putting the work in the right place so the right energy and attention comes to it.
- Lanes are commitments. As work moves through the lanes, commitment grows from acknowledgment to completion.
- Flow beats throughput. Healthy routing limits the amount of work so your team can deliver with excellence.
Practices
- Time-Horizon Test: When triaging new work, ask if you need to work on it this week. Use that to move into the Backlog or the Queue. Keep routing consistent.
- Queued Ownership: Assign a single, direct owner when an item is moved into the Queue. The Queue is the commitment boundary for the work.
- Move Blocked Work: If it’s not able to move forward, move it to the Queue or the Backlog. Don’t clog the lanes. All work should be actionable and moving. Then you can see bottlenecks.
Evolve
A system is a living, breathing entity. The Work Registry has a pulse. Once you’ve routed work into the right places, you need to keep the system flowing as information evolves. The job of Evolve is to let work change shape as understanding increases, while keeping the history of the work in check. The output of this step is for each work item to have the correct type, scope, and intent that matches the current understanding of the work.
Tracking every work item in Linear makes this easy, but it requires discipline. During a product brainstorming meeting, we discussed an idea we were tracking. I had Granola running to take notes. I took those notes and summarized key points, refined them, and added them as a comment on the issue. There was enough information to change the shape of the issue from an idea to a bet. Based on the discussion, I saw a path to bet on an iteration of the work. I created a new task to investigate, running Claude Code to work off the issue context and scope the potential changes. I completed the strategy task, converted the issue into a project, and used the plan as the project description. When the work begins, the team has rich context to build on. That’s the key: building context, evolving the shape of work, and maintaining the historical value.
Linear auto-closes issues after 30 days of inactivity. If you’re gearing up to work on something, you’re talking about it—a lot. And if you collected, classified, and routed the work to the right place, your job is to keep it fresh with the latest information. That’s natural attrition and emergence of the right signals.
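The staleness rule is a simple time delta. A sketch mirroring Linear’s 30-day auto-close window (the function and threshold names are mine):

```python
from datetime import datetime, timedelta

STALE_AFTER = timedelta(days=30)  # mirrors Linear's auto-close window


def is_stale(last_activity: datetime, now: datetime) -> bool:
    """An issue with no activity for 30 days is a candidate for auto-closing --
    the natural attrition that keeps the registry trimmed."""
    return now - last_activity > STALE_AFTER
```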
Principles
- Work is alive, not static. A healthy Work Registry has a pulse. Work should naturally change shape with your learnings.
- Evolution preserves history. Be diligent as you track the evolution of work. Write down each detail to see the history.
- Clarity is the output. Whether an idea becomes a bet or is canceled doesn’t matter when the goal is clarity.
Practices
- Learning Capture: Whenever you learn something new, write it down. Leave notes as a breadcrumb trail of knowledge.
- Type Evolution: Update the type of the work to match current understanding.
- Progress Aids: Create supporting work to move larger bodies of work forward, like a strategy task to investigate your options.
Elevate
Elevate is the final step. It turns clarity into commitment. The idea becomes an investment of resources—time, energy, money—delivered with quality to create value. Work is no longer just being explored or refined; it’s intentionally invested in. You answer:
Is this worth betting on now?
If it’s a standard fix or improvement, then it moves through the pipeline. Elevate is for your investment decisions. They’re made up of an isolated set of work that flows through the same lanes, but within a concentrated body of work to deliver intentional value in the product.
But Elevate isn’t about execution—that’s what the Project Engine and Delivery Confidence are for. Those move elevated work into the delivery system. Elevate is uniquely focused on the hardest problem in software: figuring out what to build. AI can help you synthesize information, but you, the human in the loop, still need to use your intuition to make a decision. You choose what to work on.
Another key question:
If we say yes to this, what are we implicitly saying no to?
Elevation comes with trade-offs. Evolving work becomes a clearly defined bet, chosen by human judgement, and moved into the Project Engine to deliver within clear constraints.
Principles
- Elevation is an explicit decision. Once you know what the work is and why it’s important, you make a clear commitment.
- Not all work should be elevated. Fixes and improvements flow through the system without being elevated to something more.
- Clarity is the admission price. Elevation can never require certainty, but it does require understanding through a clear articulation of the problem.
Practices
- The Tradeoff Test: Ask yourself what you’re saying no to. Work doesn’t exist in a vacuum. You need to consider it all.
- Flip the Type: When you’re ready, flip the type to a bet and consider how it feels. Are you ready to make a decision?
- Project Engine Handoff: Once you elevate, move the work into execution as a dedicated project in the Project Engine.
The Throughline
Time will forever be the critical and limited resource. AI reduces risk because you can deliver working products faster than ever before. But there’s still a time investment when working with AI to maintain a level of quality and coherence in the system you’re building. I’m building features while shipping fixes and improvements, synthesizing mass amounts of information, and strategically shaping upcoming bets. It’s thrilling. I told my wife the other day that I’ve never had so much fun building in my entire career. How we build software is changing, but what we build still remains a critical decision point.
Creating tangible artifacts of work is a craft within the craft. The current of work requires a steady flow of information, continuous cultivation, and discipline in discovery. I spend a lot of my time on the Work Registry, but it’s a team effort. For information fidelity to grow, you need the collective intelligence of the team to flow through the system—the electric current like synapses carrying bits of information to regions of the brain.
It starts with Capture. Funnel all the information to a single source of truth.
Then you Classify. Layer attributes of meaning to each work item by defining the type, the landscape, and the priority.
Route the work to the right lane, and continually evaluate its position.
Evolve the work through increasing the fidelity of information. When you learn something new, write it down. Keep the history.
Finally, Elevate strategic bets into the Project Engine to execute with precision.
Build a Work Registry for intentional execution that creates value and refines quality.