Human Calibration
The Work Registry is a sustainable system for tracking work to make it visible. Bringing work to the surface is the only way you can begin to reason about it, much like writing your thoughts out is how you interrogate your own thinking.
Intelligence Orchestration is how you leverage the collective intelligences—human and artificial—to create value and refine quality. If you work intentionally, you can do more work while carving out space to do the highest-leverage work you most enjoy.
But making work visible and guiding intelligence doesn’t matter if the humans at the center quietly decay. The effort of execution shrinks while output expands exponentially, creating an environment where understanding is lost and judgment is degraded. Your product becomes a sterile shell, drowning in a sea of mediocre vibe-coded software.
So the real question becomes:
What happens to humans when execution gets easier?
Or, said in another way:
How can you preserve shared understanding and quality judgment at scale?
In Intelligence Orchestration, I posited the distribution of work as follows:
Direction (D) + Execution (E) + Review (R)
The old model:
D (10%) + E (80%) + R (10%)
The new model:
D (40%) + E (20%) + R (40%)
It’s a redistribution of leverage. The value of human intelligence is choosing what to build and making sure quality work comes out the other side with AI as an accelerant.
Judgment is the new leverage.
Judgment = Intuition × Taste × Decisiveness
Intuition is your feel for the work.
Taste is your discernment of quality.
Decisiveness is your willingness to choose.
Together, they refine judgment, making it easier to know:
- What’s worth paying attention to.
- What’s worth doing.
- What’s not good enough.
AI amplifies execution, but execution alone doesn’t guarantee successful outcomes.
Judgment does. And judgment is a nuanced, messy, human endeavor.
This idea is called Human Calibration, which lives in the Clarity Climate of the Claritorium and Quality Refinement of Equilio.
It’s part three of a three-part series:
- Work Registry: How to make work visible, intentional, and actionable.
- Intelligence Orchestration: How to combine human judgment and AI leverage.
- Human Calibration: How to preserve shared understanding and quality judgment.
The three pillars are:
- Shared Understanding is how you maintain your team’s clarity.
- Judgment Integrity is how you maintain the soundness of your team’s judgment.
- Product Immersion is how you maintain quality in the product you build.
Shared Understanding
Real people use your products—living, breathing human beings with emotions, feelings, and motivations. To know and understand what they need, you need human builders to dig into the chaos of feedback.
I was reviewing user feedback recently. As always, it was a mixed bag: some insightful, some downright unnecessary. Leading with your intuition only works when you’re open to the signals. That’s the only way to feel the work—to sense what’s worth paying attention to. The more you do it, the better intuition gets, like compounding interest.
So when I saw the signals of similar feedback, I turned to Claude. I have a special project loaded with custom instructions and curated context. I use it for strategy work. I have the Linear MCP connected, which allows me to read all of our Linear data and make changes inside of Claude. It’s magical. And since Linear is our Work Registry, the context is organized and intentionally designed. It’s collected in one place to remove the “where should this go?” decision. It’s an automatic answer, so the team never has to guess. If it’s relevant to our work, we add it.
I flagged feedback and told Claude:
Take this feedback and pull out issues documented in Linear; or create new issues to track.
It found four issues related to the feedback I provided and added comments as additional context to each issue. This forms a chain of thought on each work artifact. It exposes the thought process and causal links created with new inputs. The issue becomes the atomic unit of work, showcasing the historical progression of understanding—a timeline of clarity.
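That triage loop can be sketched in miniature. This is a hypothetical illustration, not the real workflow (which runs through Claude and the Linear MCP): feedback either attaches as a comment to overlapping existing issues or opens a new one. The `Issue` record, the naive keyword-overlap match, and the threshold are all assumptions.

```python
from dataclasses import dataclass, field

@dataclass
class Issue:
    """Stand-in for a Linear issue: the atomic unit of work."""
    title: str
    comments: list[str] = field(default_factory=list)

def keywords(text: str) -> set[str]:
    # Naive tokenizer: lowercase words longer than three characters.
    return {w.strip(".,!?").lower() for w in text.split() if len(w) > 3}

def triage(feedback: str, issues: list[Issue], threshold: int = 2) -> list[Issue]:
    """Attach feedback to related issues, or create a new one to track it."""
    fb = keywords(feedback)
    matched = [i for i in issues if len(fb & keywords(i.title)) >= threshold]
    if matched:
        for issue in matched:
            # Each comment extends the chain of thought on the artifact.
            issue.comments.append(feedback)
        return matched
    new_issue = Issue(title=feedback[:60], comments=[feedback])
    issues.append(new_issue)
    return [new_issue]
```

A real agent matches semantically rather than by keyword overlap, but the shape is the same: feedback always lands on a work artifact, extending its timeline of clarity, never disappearing into a void.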
When work passes through phases of discovery, design, and development, understanding must come with it. If, at any point, understanding is lost or broken, then work decays. Knowing how you got there is the best way to create shared understanding. To make better decisions throughout the lifecycle of work, you need a holistic understanding of the decisions made up to that point.
Shared Understanding protects direction.
The goal is alignment while answering the question: are we building the right thing?
It starts with a clear intention, framed with structure, and enriched with context.
Intent Clarity
Intent Architecture within Intelligence Orchestration forms the same foundation to create a shared understanding with your team:
- Define the problem space.
- Set clear constraints to work in.
- Decide what success really means.
Your intention is what you want to do. It’s not a list of requirements unnecessarily boxing your team into making bad decisions.
An intention shapes decisions through direction, not dictation.
You know where you need to go, but you can fill in the details along the way. It’s all guesswork until you do the work, so make sure your team is clear on the intention:
Motivation + Constraints + Outcomes
The motivation tells you the why.
The constraints tell you the how.
The outcomes tell you the what and the where.
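To make that triad explicit, here is a minimal sketch of an intention as a structured record. The `Intent` type and its `brief()` renderer are hypothetical, illustrating one way to capture motivation, constraints, and outcomes on a Work Registry entry, not a prescribed schema.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Intent:
    motivation: str         # the why
    constraints: list[str]  # the how: boundaries, not step-by-step orders
    outcomes: list[str]     # the what and the where: what success means

    def brief(self) -> str:
        """Render the intention as a short block for an issue description."""
        lines = [f"Why: {self.motivation}", "Constraints:"]
        lines += [f"- {c}" for c in self.constraints]
        lines.append("Outcomes:")
        lines += [f"- {o}" for o in self.outcomes]
        return "\n".join(lines)
```

Direction without dictation: the constraints bound the search space, while the details stay open for the team (or the agent) to fill in.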
If AI accelerates execution, then there’s less time for understanding to naturally emerge. If you track this in your Work Registry, then you build a coherent story—a clear intention—to form the foundation of work.
Using AI, you can push context into one place, keep it updated, and surface themes. With Claude and connected MCPs like Granola, Linear, and PostHog, I can quickly surface context in a single Linear issue. The team can communicate asynchronously or in a real-time collaboration session. Either way, the artifact is enriched with more context, meaning, and understanding. The messy human debates and tensions are still an important and enjoyable part of the process; humans are social creatures after all. The AI tools just help alleviate the pain of capturing what was said and putting it in the right place. Too many productive conversations in chats and meetings end without next steps. AI makes that part painless. It’s transformative.
Intent Clarity is how you define the direction of work with clear parameters.
If execution is fast, intention must be explicit.
Shape Framing
As you develop your intention, the “how will we do this?” questions naturally emerge. That’s a good thing! And this is another area where AI is changing the equation.
There’s one particular feature in our product that needs some love. It’s still used, but not how it was originally designed. We know we need to pivot it, but a lot of the foundation is already in place. I knew there had to be a quick win: use what already exists, but slightly alter it to cover the real-world use.
So I turned to Claude Code.
I started a new session, detailing the idea and intention for the work. Inherent to the intention were the associated constraints. Like humans, if you let AI run amok without constraints, it will do bad things. I told Claude what I wanted. I used Plan Mode, so it came back with an implementation plan. I went back and forth, iterating with it.
I added the plan to the Linear issue and shared it with the engineers to discuss. Oh, and this plan was a detailed outline of what files to change, how many lines of code need to be added, and the exact process Claude would take. That level of fidelity is unmatched. And the Work Registry made it easy to track the latest context in one place. Then we could have a conversation to hash out the nuances of the plan. The engineers hold deep tribal knowledge and experience, which increases the fidelity of the plan before you hand off the work to an AI agent and/or engineer, because AI will still make unnecessary changes or miss small nuances. That’s where humans can correct. But it saved days of time doing it this way versus sending an engineer off on a blind spelunking mission. For a human engineer to come up with a plan that detailed, they’d have to write every line of code while building the feature.
Human skill, experience, and knowledge are still necessary, but deployed in a new way.
Shape Framing is exploring options, solutions, and strategies before execution.
Explore widely before you build quickly.
Context Embedding
An engineer shared the deploy URL of a new feature. I reviewed the work and, as I’m known to do, shared a Loom video with all of my feedback. I like to think out loud. Why? It’s a way of exposing your thinking in a way that brings others along. Context is shared freely. And ideas just have a different feel when you say the words out loud.
I downloaded the transcript of the Loom video, gave it to Claude, and worked with it to draft a detailed Linear issue with all of the feedback: fixes, improvements, questions. I created sub-issues in Linear for each piece of feedback to address. Not everything needs to be fixed; sometimes we cancel an issue or move it to the backlog. It’s a discussion. AI pulled out the information I shared in the Loom video so we could work through it. It’s easier than ever to capture context. You should do so in order to have productive debates, discussions, and live sessions that make the work better. AI can write the code, but it needs you for the direction and the review.
I was reviewing the Pull Request—code changes and a live URL to test. It wasn’t a Figma mockup or a document. It was live, working code where the entire team can converge on the real thing. They can use it, discuss it, and make it better. Context is information, but also environment. You should always avoid simulation whenever possible. Work in the real environment.
Context Embedding is when relevant information is visible, usable, and reliable.
Make thinking visible where the work lives.
Judgment Integrity
If I told you that your team could ship 10 new features tomorrow, what would you say? If it were me answering that question, I’d point out the two bottlenecks:
- Direction: How to choose what to build.
- Review: How to evaluate the work.
AI increases speed, but also:
- Output Volume: Amount of work.
- Surface Area: Breadth of changes.
- Options: Possibilities to build.
When execution is cheap, direction and review become expensive. That introduces new risk, but also new opportunities for humans to engage with the work in new ways.
Every single engineering team I’ve worked on has shared the same problem: it takes too long to review Pull Requests—a trope so common, I jokingly use it as an example in every retrospective I’ve facilitated. It’s always true.
AI and agentic engineering aren’t helping.
It’s increasing the burden, even with AI review tools like Graphite or GitHub Copilot. In fact, those tools can add to the burden when they hallucinate or give false-positive feedback. I don’t have a solution today, but I believe it’s the same as it’s always been: find ways for your engineers to invest quality time in reviewing code regularly. Work it into your rhythms, individually and as a team. As more and more code is written by AI, engineering time can focus on thorough reviews to maintain code quality.
Judgment Integrity protects evaluation.
The goal is discernment while answering the question: is this built well?
It is the shift from focusing on execution to focusing on evaluation. The craft output remains the same, but the process and interactions of the team change.
Lens Rotation
Product work has long lived inside of intentional specializations like design, engineering, and strategy. The “product trio” of a PM, designer, and engineer became the standard team composition. Each role on the team represents their discipline, working in a cross-functional capacity to create value.
The PM talks to users and stakeholders, aligning user desires and business objectives.
The designer creates visuals and flows to bring shape and structure to the work.
The engineer vets the technical feasibility and builds the real solution in code.
They work cross-functionally, but through a series of hand-offs and translations, passing work through the stages. In the process, work often degrades and value drifts. Why? Because you can’t slice up a holistic, connected entity like software into separate segments. It’s much like medicine: every organ has its own doctor, yet the entire human body works as one complex system. It’s reductionism.
This all stems from the Industrial Revolution and the rise of assembly lines. Each person has a job, and work moves step by step to completion. But this is knowledge work, not manual labor. And strategy doesn’t end when it moves to design; design doesn’t end when it moves to engineering; engineering doesn’t end when the work is delivered. It’s cyclical. Each part of the process weaves together. The best teams and products know this. They work together on converged artifacts to bring the work to life and deliver coherent value.
Lens Rotation is a multi-dimensional evaluation of the visual, the functional, and the structural. Judgment expands beyond specialization. There aren’t silos of design, engineering, and strategy. It’s a collective of holistic perspectives.
The Visual Lens is how it appears. This perspective protects perceived quality.
The Functional Lens is how it behaves. This perspective protects experiential integrity.
The Structural Lens is how it’s built. This perspective protects long-term resilience.
User Perception + User Interaction + System Sustainability
Instead of your team reviewing through their primary lens, the work is now viewed as a consistent whole. It’s collectivism. And it matters more than ever with AI because the output is created so fast. Designers can’t over-index on visual, engineers on structural, PMs and strategists on functional. If the review is too narrow, AI amplifies blind spots.
Quality requires multiple ways of seeing the same work.
Intent Distance
Most of our larger product changes go through QA, which runs through a dedicated team and process. There is one QA person assigned to our product, and I work with them on what to prioritize in the review process. When a feature is ready for them to review, I record a Loom walkthrough of the feature. I explain what’s changed, why, and how to test it. Providing context up front means fewer reported issues I have to decline as unrelated. This helps, but there’s also value in QA ignorance. You want them to know what to look at, but also to think like a user experiencing it for the first time. It’s a balance you have to strike.
There are two distortions when reviewing work:
- Over-context bias: you know too much about how and why something was built, so you ignore friction because you understand the intention.
- Under-context blindness: you don’t know enough to evaluate meaningfully, so you fixate on surface issues.
Intent Distance is about regulating proximity to intention. You don’t want zero context, but you also don’t want complete understanding.
If AI accelerates output and collects context more easily, you can miss the inherent friction users experience when using a feature for the first time. This is The Curse of Knowledge, where you assume others possess the same knowledge you do.
You want QA (and other testers) to understand the intention, but still behave like a user. This increases Delivery Confidence in an AI world.
Try working in one of two modes:
- Context-Aware Mode where you evaluate the intention against the reality. Does this do what we set out to do?
- Context-Free Exploration where you use it without thinking of the intention. Does it feel intuitive without explanation?
Understand the intention, but test the experience without it.
Reality Priority
I’ve worked with many talented designers who, over time, develop their own version of the product as static mockups.
I call this environment “Fake Figma”.
A designer shared a mockup as an example adjustment to a feature being built. The problem? The mockup was of an old version of the interface. Fake Figma is either behind or ahead, living in a simulation outside the reality of the live environment.
AI is making this worse. Even as designers leverage AI tooling in Figma, the speed of creation outpaces their ability to design in Figma before something is live. Yet designers are still critical to both direction and review.
But living in a Fake Figma simulation is not the way. The mockups clean up all the messy edges and caked-on dust of a live product. A production software product always has edges and corners that are not quite where you want them to be. That’s part of the process. And everyone on the team, including designers, needs to live in that reality while striving for improved quality.
Reality Priority is about reviewing and living in the thing that pushes back—not the Fake Figma, but a live version of the product.
Figma still has its place, but you can’t view it as the source of truth. It’s a guess. And this extends even beyond designers.
Engineers should test the live environment like a user, not just by reviewing the code.
QA should explore behavior beyond the script and process they’ve written.
And designers should use the live environment, not their own Figma branch.
Simulation is comfortable; reality is corrective. It doesn’t mean you shouldn’t use Figma or prototype. You just need to move quickly into reality where tension, friction, and trade-offs are visible, tangible, and felt by the team.
Review what exists under constraint, not what thrives in simulation.
Product Perspective
As AI accelerates outputs, the risk is fragmentation. When I work on a product, I view my job as holding the line on quality and coherence. Shipping new features is great, but ship too many—and the wrong ones—and you risk quality drift and value loss. You also have to balance adding new value against refining quality with fixes and improvements. This happens because teams, no matter what, end up isolated. Even if you pair up designers and engineers on a feature, there’s always segmented work happening in different lanes. One of our engineers refactored how data is loaded in one part of the product to improve performance. Another engineer worked in the same area on another feature. When the new feature was demoed in a team meeting, the engineer who worked on the refactor asked how the data was being loaded. Why? Because he had changed it to improve performance. That forced an additional week of work to get the new feature working. I should have caught it, but this is a natural part of the process. Products drift and lose coherence, like shifting tectonic plates. You need to be mindful and avoid an earthquake.
Product Perspective protects coherence.
The goal is visibility while answering the question: how does this fit?
It’s how you bring holistic awareness to the team. Everyone starts to look at each other’s work and collaborate more closely. Even if I view coherence as my job, it’s really a team effort. It takes everyone working together as a single, collective unit.
Product Immersion
Everyone on the team should use the product. “Dogfooding” is the common term in the software world, referring to using your product while you build it. Doing so builds your product knowledge and increases your sensitivity to the friction users experience. One of my favorite new features in our product came from an engineer experiencing friction while using the product. When you immerse yourself in the product, opportunities arise. It seems so simple, but it’s easy to get lost in a myopic focus on individual features, bugs, and continuous improvements. Fix one bug, move on to the next.
Product Immersion is living inside the product—not a version of the product in Fake Figma, upcoming branched versions, or entrenched in known intentions.
Regularly walk through your product as a team, use it individually, and leverage agentic engineering to rapidly prototype ideas. Letting AI generate code still requires human judgment and shepherding of quality, but concepts can come from anyone. If someone has an idea, create the concept as a working demo, because you’ll only know if something works when it’s living in the real product, not a simulation.
Perspective deepens when you live inside the product, not outside it.
Intuitive Exploration
When you immerse yourself in the product, you should explore. Don’t stick to predetermined specs and flows. Click around, break things, find friction. That’s what a user would do. I’m just shy of six months into my experience with our product and I’m still finding new pages and features.
Intuitive Exploration relates to Intent Distance, but it’s more systemic. Instead of focusing on a single artifact, you work across boundaries and explore openly. The deepest friction points emerge from the interplay of features. Explore how features interact, and find the workflows in the product. That’s where the value resides.
Let curiosity roam the system; friction reveals what plans conceal.
Shared Friction
It’s astonishing how rapidly agentic engineering can produce something usable. The one-shot examples are often overstated. You have to iterate, but the time saved is still incredible. I’ve found success with the iterative approach: break the problem into smaller, releasable chunks of value. The smaller the scope, the more focused the changes, and the easier the review. That reduces the review bottleneck, too. Win!
I took this approach with some key interface changes in the product to move it towards a more consistent and unified experience. Instead of building it all at once, we’ve been slowly introducing the change in parts, testing them, iterating, and moving on to the next. Living in the friction and awkwardness of the process is important.
Shared Friction is when everyone experiences the same tensions.
The designers know about technical debt in the product. They don’t live in Fake Figma.
The engineers know about user feedback and pain points. They don’t ignore reality.
Everyone reviews metrics—positive and negative—to understand the current state.
Quality strengthens when constraints are felt collectively, not privately.
The Throughline
The Work Registry makes work visible as an externalized memory.
Intelligence Orchestration guides intelligence in a leveraged direction.
Human Calibration keeps humans sharp through preserved judgment.
Memory → Coordination → Judgment
Together, they form The AI Layer of modern product work.
When the cost of execution collapses, it’s time to rethink the model. We need to live in a world where human and artificial intelligences work in tandem to deliver value. When execution is a commodity everyone possesses, your ability to define direction and judge quality is what matters.
One thing we know for sure: building software is changing. What matters is your ability to adapt, evolve, and co-create valuable products with AI. The good news is the same fundamental principles still matter. How those principles are expressed changes daily, and this architecture forms a foundational thinking model for what’s ahead.
Are you ready?