Intelligence Orchestration
I wrote a Markdown file detailing what I wanted to build: an email digest that pulls from a range of data sources—product, engineering, revenue, reliability systems—to provide a daily snapshot of metrics.
I started a Claude Code session and pointed it at this Markdown file. I put it in Plan Mode, in which it doesn’t edit anything; it just drafts its own Markdown plan, asking you clarifying questions along the way. I’ve found much better results starting this way. That’s also true for humans: you need high-quality inputs to create high-quality outputs. If you take the time to capture your reasoning before you execute, you’re more likely to generate the results you’re hoping for.
I answered the follow-up questions and let Claude draft the plan. I let that run while I worked on other tasks for the day. I reviewed the plan, made edits, and then let Claude execute the plan we agreed to. It came back with the first iteration for review. In this case, I cared little about the code. This is an internal email, not a production-grade application. Normally I’d deeply vet the code, but not for this. So most of the back-and-forth conversation was to iterate on the setup, metrics, and visuals.
I knew I could run the scheduled email as a GitHub Action. And since we already use SendGrid for our emails, I could piggyback on that account for sending. My engineering knowledge streamlined the process: I made opinionated decisions based on my personal experience. It was a melding of human and artificial intelligence. I didn’t have to wrestle with HTML email templates by hand, deal with the nuances of each data source’s API, or fight different email clients to create a consistent design.
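To make the shape of that output concrete, here’s a minimal sketch of what such a digest script can look like. The metric names, addresses, and subject line are hypothetical; the send step assumes the official `sendgrid` Python client. The real script Claude generated would differ.

```python
import os
from datetime import date

def build_digest_html(metrics: dict[str, float]) -> str:
    """Render a basic HTML table of metrics; email clients vary, so keep markup simple."""
    rows = "".join(
        f"<tr><td>{name}</td><td align='right'>{value:,.0f}</td></tr>"
        for name, value in sorted(metrics.items())
    )
    return (
        f"<h2>Daily snapshot for {date.today().isoformat()}</h2>"
        f"<table border='1' cellpadding='6'>{rows}</table>"
    )

def send_digest(html: str) -> None:
    # Requires `pip install sendgrid` and a SENDGRID_API_KEY environment variable.
    from sendgrid import SendGridAPIClient
    from sendgrid.helpers.mail import Mail

    message = Mail(
        from_email="digest@example.com",   # placeholder sender
        to_emails="team@example.com",      # placeholder recipient
        subject="Daily metrics digest",
        html_content=html,
    )
    SendGridAPIClient(os.environ["SENDGRID_API_KEY"]).send(message)
```

A GitHub Actions cron trigger can invoke a script like this every morning; keeping the HTML minimal sidesteps most of the client-rendering quirks mentioned above.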
I wasn’t writing code. I was designing and conducting the work like an orchestrator.
I focused on the parts of the process where my intuition, taste, and judgment were most important. The rest was execution against those decisions: working collaboratively, iteratively, and with singular focus as a collective intelligence. Doing so enabled me to build this in a day while working on a range of other things. I had a live email sent to me the following morning. I took the email and set up a new workflow in Relay to read the email, run it through an LLM to summarize and analyze it, and then post the formatted response in our team chat.
This isn’t an example of vibe-coding something into existence—of using AI to accelerate output and make builders obsolete. It’s about combining your uniquely human intelligence with artificial intelligence to create something beyond the sum of the parts. Because that’s what it feels like now: orchestrating different elements and fitting them together like puzzle pieces. Before, you did this through a collection of diversely skilled individuals. Now, it’s about leveraging your own skills, knowledge, and experience alongside AI agents working in parallel. Compound that within a team and you can build more, better, and faster than ever before.
This idea is called Intelligence Orchestration, which lives in the Clarity Current of the Claritorium and Strategic Momentum of Equilio.
This idea is part two of a three-part series:
- Work Registry: How to make work visible, intentional, and actionable.
- Intelligence Orchestration: How to combine human judgment and AI leverage.
- Human Calibration: How to preserve shared understanding and quality judgment.
Intelligence Orchestration is the practice of shaping intent, assembling context, and conducting flow so multiple intelligences produce coherent, high-fidelity outcomes.
The three pillars are:
- Intent Architecture defines what should be created with constraints that matter.
- Context Assembly curates inputs so intelligence operates with meaning.
- Flow Conducting guides parallel execution so outputs converge with coherence.
Intent Architecture
What must be true for this to be worth doing?
This is the question of Intent Architecture.
You’re trying to figure out what to do and how to approach it, not one-shot a full application from a single-sentence prompt. If AI is a scaling vector, then you want to scale from a clear intention, not by multiplying errors.
Starting from long-form writing is and has always been my preferred approach. And it’s no different now with LLMs than it was translating information to humans. The same rules apply, but the correcting mechanism is much faster now. When engineers spend weeks building the wrong thing because the intention wasn’t clear, you lose time, money, and energy. When an AI agent misses the mark, you lose a lot less of those same resources. You can reset to default and try again. You can even create a Claude Team of AI agents to try different approaches in parallel. Now imagine if there were quantum computers working this way. Every possible permutation would be run and evaluated in a matter of seconds. No, we’re not there (yet), but thinking in that direction is an interesting thought experiment nonetheless.
The practice of Intent Architecture is where you, the human, architect the shape the information must take. Think of it like the container where context will live.
You’re intentionally slowing down to:
- Define the problem space.
- Set clear constraints to work in.
- Decide what success really means.
I gave a presentation to the sales team. After my presentation, the CRO told me it was the best sales training they’ve ever had.
How did I get there? Intent Architecture. By clearly establishing the right intention, I created the vessel for delivering value:
- I clarified the core problem the sales team was looking to solve: they needed to understand how to sell our API.
- I talked to one of our best salespeople, who has sold the API multiple times.
- I identified the gap to close so the sales team can move forward effectively.
I created a clear intention to later fill with context to orchestrate the final output.
Here you are The Architect, crafting plans in the form of intentions. You know what you’re solving, what limitations exist, and what impact you expect in the final outcome.
The Architect provides clarity of direction.
Without it, AI scales ambiguity.
Problem Framing
AI won’t tell you what problem to solve. That’s still the hardest and most high-leverage job for humans to work on. And it’s not only figuring out what problem to solve—it’s figuring out which part of the problem to solve, why now is the time to solve it, and why this problem is more important than other problems.
AI can help analyze problems, explore alternatives, and simulate outcomes—but it cannot choose which problem matters, when it matters, or why it should outrank others. Any LLM will attempt to make those decisions for you, but it’s not an answer you should take without layering in your own judgment and expertise.
Problem Framing is where you influence and shape the problem space. Use AI in the process, but don’t let it dictate the outcome.
While talking to one of our best salespeople, I realized the problem was a knowledge gap. The sales team didn’t know what the API was, what it could do, or what signals to listen for when talking to prospects. The problem was clear.
Principles
- Declare the problem. The data won’t name the problem for you. You must choose and commit to a specific tension worth resolving.
- Narrow the scope. Scope the problem to the smallest surface area. A well-framed problem is targeted.
- Justify the timing. Every problem competes with others. Knowing why this, why now, why not something else is key.
Constraint Definition
Diamonds are formed when carbon is subjected to extremely high pressure. Constraints are the pressure turning intent into value: the intention is the carbon; the constraints are the pressure. If you want diamonds, you need clear constraints.
That’s what Constraint Definition creates.
When I put together my presentation to the sales team, I considered the constraints:
- How much time I have to spend on creating the presentation.
- How much time I need to practice it.
- How much time I have to actually give the presentation during the meeting.
- How much information I should cover.
- The technical knowledge of the group.
Constraints breed creativity. If you build on an unknown timeline, there’s no hard edge to force trade-offs. Nothing is delivered on time without trade-offs. Setting constraints is a practical and helpful way to shape your intention in a right-sized container.
Principles
- Constrain to focus. Constraints collapse infinite possibility into meaningful direction. Pressure concentrates.
- Define the boundaries. Make it clear what’s out of scope. Don’t invite drift.
- Match pressure to purpose. The right constraint depends on the goal. Know what you’re trying to achieve.
Success Criteria
If you don’t know what you’re aiming for, you won’t know when you hit the target. When I presented to the sales team, my goal was to make sure they understood what the API is and how to discover aligned opportunities so they could effectively sell it.
Success Criteria is a simple list of expected outcomes that measure success.
They can be metrics, but they don’t have to be. I didn’t set quantitative metrics for my presentation; the criteria were entirely qualitative and subjective. Success is contingent upon future sales deals that have yet to be realized. And even then, it would be hard to draw a direct line from a deal back to the training I delivered. I’m providing tools, but how they’re used isn’t up to me.
Design success criteria to match expectations, making sure the outcomes reflect the impact you’re seeking. Receiving the feedback from the CRO was enough for me.
Principles
- Make “enough” explicit. Perfection is infinite and impossible. Define a clear threshold for when done is done.
- Tie output to outcome. Deliverables are not value. Clarify the change or impact that matters.
- Predefine failure. Call out what would invalidate the effort. Clarity sharpens when failure is visible.
Context Assembly
What does intelligence need to do this well?
This is the question of Context Assembly.
When I say “intelligence,” I’m referring to the combined intelligences at your disposal: the human and the artificial. So much of the conversation is about the pendulum swinging too hard in one direction. But, as with all things in life, you need balance. Too much of anything is bad for you. Lean on AI too heavily and it will short-circuit your ability to think critically. You can’t prompt insights, outsource your thinking, or expect AI alone to give you direction. Left to their own devices, sycophantic LLMs will affirm your every idea (you’re absolutely right!). Making good decisions means thinking critically, acknowledging constraints, and making trade-offs. You can’t sacrifice your own judgment, but you can’t ignore the incredible technology at your fingertips, either.
Intent Architecture is where you, the human and architect, craft a clear intention for the work. Do you use AI? Yes, but as a collaborative thought partner. You challenge assumptions, embrace uncertainty, and make your own choices about the direction.
Context Assembly is about curation. Given the container of intention you just created, what information do you need to move forward? What context is required? Too much information and you incite chaotic noise devoid of meaning. You lose the signal. The intention drew the shape, and now you have to color it in. You bring meaning, definition, and clarity to the environment. The good news is that assembling context for humans and for AI follows the same principles.
Here you are The Curator, selecting, filtering, and structuring information to shape execution. You work after intent is clear and before rapid execution begins.
The Curator provides clarity of meaning.
Without it, AI creates divergent outputs.
Signal Curation
I had a conversation with Claude to define a set of strategic themes. We needed a better lens to view potential bets. There’s always more to do than time to do it. And AI is further highlighting the criticality of knowing what to build. As execution gets cheaper, strategy gets more expensive. To have an effective conversation with any intelligence, human or artificial, it begins by selecting the right inputs.
Signal Curation is where the right information is selected to inform the work.
Choosing what actually matters requires taste, judgment, and intuition. There’s an art to selecting information to include. For my conversation, I provided our roadmap, our work in Linear (via MCP), and a few specific call transcripts. When I felt like it was missing context, I’d add more. But start small. You can always increase the amount of information when you notice gaps in knowledge or understanding. It’s much more difficult to redact information. It’s like a judge telling a jury to ignore something they just heard. They can try, but it’s already in their headspace.
Principles
- Prefer authoritative signals. Pull from primary sources and firsthand knowledge.
- Curate for the decision at hand. Only use context that directly applies.
- Exclude aggressively. Ignore anything that doesn’t materially affect the choice.
Context Structuring
Once you select the information you want to include, consider how it’s presented. LLMs are adept at synthesizing large quantities of information, but it’s still important to frame it with the highest signal-to-noise ratio you can.
Context Structuring is information architecture: shaping clear signals into a usable and understandable form. It’s like a Pinterest board of valuable context, structured and displayed in an easily digestible format.
The roadmap spreadsheet is structured data.
The Linear issues are structured data.
The call transcripts were left unedited on purpose, but I told it which areas to focus on.
One of the coolest parts of the process was using the Linear MCP in Claude to update issues in Linear directly. There’s also a Granola MCP for searching call transcripts. It’s getting easier to collect context, which shows up structured on arrival. Even better.
Principles
- Synthesize before you execute. Convert raw inputs into structured data.
- Surface relationships. Show how signals connect. Make relationships visible.
- Anchor in a durable artifact. Embed the information in a shared object.
Meaning Integrity
I had a call with a key customer. I shared some of the things we’re working on, what’s shipped recently, and what we’re considering. I do this regularly with key customers. I try to use the time to capture feedback. When they ask me about something I share, I turn it back to them:
- How would you expect that to work?
- What other tools are in your workflow?
- Is that a problem you’re experiencing today?
This conversation yielded valuable insights, so I immediately turned to Claude. I prompted it to pull out a few key discussion topics so they could be tracked in Linear. One of the topics was already being tracked, so the opportunity was to add context. The Work Registry instantiates the artifact as a container for context. The Linear issue shows the evolution of the data over time. If it goes stale, it withers and dies, like a plant without water, sunlight, or good soil to grow in. To keep it alive and thriving, you need to nurture and care for it. So I had Claude update the issue with context from the call, including the source of the information.
Meaning Integrity is the process of nurturing and evolving the context. You preserve meaning, keep context fresh, and prevent drift that dilutes the value of the original idea. You want evolution without degradation—growth in the direction of progress. Sometimes you’re just reaffirming what’s there. Because it’s not always about adding more context; it’s about preventing the corruption of it.
Principles
- Separate facts from interpretation. Show what happened versus what it means.
- Preserve original intent. Avoid context drift by retaining the original intention.
- Refresh stale context. Revisit and update context as reality changes.
Flow Conducting
How do these efforts move together without collapsing into noise?
This is the question of Flow Conducting.
If you have a clear intention enriched with clarity of context, execution follows. But in a world of agentic AI and humans working together, what does that process look like?
Every Monday, I send out a batch of emails to a subset of our users for interviews. I pick out a cohort based on who I want to learn from, what metric we need to move, or what opportunities I want to explore. Once I pull the list of emails, I manually send one to each user. I could use our SendGrid account to send them in bulk, but it was simpler to send the 20-30 emails manually. Until it wasn’t.
Picking out the users and talking to them is where my time should be spent. Sending the emails is not. So I spun up Claude Code and built an automated outreach system. It takes a CSV of emails, uses a branded HTML email template, sends each one through SendGrid, and logs every send in another CSV to avoid duplicates. Building this took less time than manually sending the emails. And I was able to repurpose it for other sends, like adding waitlist users to a private beta.
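The duplicate-avoidance log is the part worth sketching. Here is a minimal version under assumed file names and a single assumed `email` CSV header; the actual script Claude built will differ in its details.

```python
import csv
from pathlib import Path

def load_sent_log(log_path: Path) -> set[str]:
    """Return the set of addresses already emailed, per the send log CSV."""
    if not log_path.exists():
        return set()
    with log_path.open(newline="") as f:
        return {row["email"] for row in csv.DictReader(f)}

def select_recipients(recipients_path: Path, log_path: Path) -> list[str]:
    """Read the recipient CSV and drop anyone who already appears in the log."""
    already_sent = load_sent_log(log_path)
    with recipients_path.open(newline="") as f:
        emails = [row["email"].strip().lower() for row in csv.DictReader(f)]
    return [e for e in emails if e and e not in already_sent]

def record_send(log_path: Path, email: str) -> None:
    """Append one successful send to the log, creating it (with header) if needed."""
    new_file = not log_path.exists()
    with log_path.open("a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=["email"])
        if new_file:
            writer.writeheader()
        writer.writerow({"email": email})
```

Because the log is consulted before every run, the script is safe to rerun after a partial failure: only the addresses that were never logged get emailed.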
Consider work in three parts:
- Figuring out what to do.
- Doing it.
- Reviewing it.
Direction (D) + Execution (E) + Review (R)
The old model: D (10%) + E (80%) + R (10%)
The new model: D (40%) + E (20%) + R (40%)
It’s a redistribution of leverage. Execution is cheap, fast, and becoming a commodity.
Judgment is the new leverage.
Flow Conducting is how you strategically orchestrate intelligence, like conducting an orchestra to perform a beautiful symphony.
Here you are The Conductor. You don’t always play the instruments (pushing pixels, writing code), but you direct, guide, and layer them into a unique harmony. You know when to jump in, when to step back, and when to approach the work in a different way.
The Conductor provides intentional delivery.
Without it, AI produces “slop” and noise.
Deliberate Allocation
AI is accessible. Major players provide easy-to-use platforms and applications. There are endless options to choose from. Figuring out what to use is the new leverage.
I use ChatGPT and Codex personally because I’ve invested in organizing context. I use Claude, Claude Code, Claude Cowork, ChatGPT, and GitHub Copilot at work.
Deliberate Allocation is knowing what tools to pick and how to use them effectively. And your human intelligence is a tool to consider. I write every word of this newsletter myself because I cherish writing and refuse to sacrifice the wonderful benefits of choosing the right words to communicate ideas. It’s how I think, develop ideas, and unlock insights. I engage ChatGPT as a thinking partner because it knows about everything I’ve written. I can get quick comparisons to other concepts.
I deliberately chose to let AI handle the emails because that’s not where I’m needed.
And even though Claude created slides for my sales presentation, I only used them as a storyboard. I designed each slide myself, a deliberate choice: it was worth my time to make sure the slides communicated the narrative in the best way, while helping me absorb the story.
For each issue, I decide whether to:
- Write it myself (rare, small tasks).
- Give it to GitHub Copilot for independent work to open a PR for me to review.
- Start a Claude Code session and work locally in tight collaboration.
Do you see the hidden thread?
- Letting the human do it.
- Outsourcing it fully to AI.
- Working collaboratively together.
I often do these in parallel. I work on something while Copilot is opening a PR and Claude Code is researching. Choosing how to leverage resources is about allocation.
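One way to picture that routing is as a toy heuristic. The task attributes and thresholds below are hypothetical, not a real system; the point is only to make the three-way allocation above explicit.

```python
from dataclasses import dataclass

@dataclass
class Task:
    size: str              # "small" | "medium" | "large"
    well_specified: bool   # is the intent clear enough to hand off?
    needs_judgment: bool   # does it require taste or domain expertise?

def allocate(task: Task) -> str:
    """Route a task to a mode of work, per the three options above."""
    if task.size == "small" and not task.needs_judgment:
        return "human"        # faster to just do it yourself
    if task.well_specified and not task.needs_judgment:
        return "delegate"     # hand it to an agent; review the PR
    return "collaborate"      # work locally with the agent in tight loops
```

In practice the decision is intuition, not code, but writing it down this way makes it easier to notice when you are defaulting to one mode out of habit.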
Principles
- Assign by strength. Match the task to the intelligence best suited for the job.
- Separate exploration from refinement. Run modes of exploring and refining in separate threads. Don’t mix them.
- Keep ownership visible. Even if AI owns the execution, the human in the loop should own the direction.
Loop Discipline
One of the interesting paradoxes of AI collapsing execution time is that you often make too many changes. Scope balloons into an unmanageable mess. You follow the dangerous thread of “what about?” until you have thousands of lines of code. The previous constraint of time writing code manually disappears, making addition cheap. But that puts a strain on the downstream bottleneck of reviewing AI-generated code.
Loop Discipline is how you control the iteration cycle to avoid quality drift.
One of my favorite techniques is to use Plan Mode in Claude Code. In this mode, Claude will draft a plan without making modifications. It will often ask clarifying questions, and you can work iteratively through different implementation options. Using Claude Teams, you can even have multiple agents work in parallel and communicate with one another so you can execute with the best approach.
If I’m working on something with high unknowns, I start in Plan Mode. I can work with Claude to evaluate different strategies before committing to one. Once the work starts, I commit changes in small iterations to make rollbacks easier. Telling Claude to reset and try again is another helpful technique—one I often employed when working on major refactors in the past. There’s a lesson there: the fundamentals of work are the same, but we’re just leveraging them in new ways.
Principles
- Iterate with purpose. Solve a specific problem with each pass. Keep it isolated.
- Reset when drift appears. If outputs diverge from the intent, start again.
- Stop on the signal. When you’ve met the success criteria with a better version than what exists today, draw the line.
Coherent Convergence
When Claude Code opens the PR, it drafts the description of the changes. I’ll often trim this because AI is an overachiever and adds more information than necessary. Humans are going to review the work, so that matters. We also have an AI reviewer take a look and add comments. For us, Graphite has been better than GitHub Copilot for PR reviews: Copilot would hallucinate new comments on every commit, suggesting false-positive changes and inciting chaos in the review process. So we turned it off and now use Graphite. I review the changes myself and then request (human) reviews:
- Technical Review for the code.
- Design Review for the visuals.
- QA Review for the experience.
Different people handle each of the reviews, converging on the PR as the artifact.
Every intelligence gathers into a collective to make sure the work is coherent. This is the most critical part of ensuring quality and avoiding vibe-coded fragility. I diligently review every output. Just last week I watched Claude make a slew of unnecessary changes when I asked it to add one thing. The component already had the configuration option built in; it was a one-line change. But Claude changed four files and dozens of lines. It’s a good thing I understood the output and could direct it. My theory for why this happens: it’s faster for AI to write net-new code than to review and reuse what’s there. It will only reuse existing code when you tell it to, which requires a deep understanding of the code and the discipline to shepherd quality work.
Coherent Convergence is when you bring together all intelligences to create a unified and high-quality end product.
Principles
- Resolve contradictions explicitly. Conflicting outputs must be reconciled.
- Enforce a single frame. Make sure tone, structure, and assumptions align.
- Produce one source of truth. The final output should be in a single artifact.
The Throughline
The Work Registry is the system of record.
Intelligence Orchestration is how you deliver work using human and artificial intelligences in an intentional way.
Intent Architecture determines what must be true for the work to be worth doing. It’s how you define an intention, set constraints, and determine what success looks like.
Context Assembly adds fidelity in the form of context. Clarity emerges and the path forward becomes clear. Meaning permeates through your intention like a surge of electricity generating power.
Flow Conducting is an orchestration of intelligence working against the direction you established. You leverage human and machine in a symbiotic relationship to execute on a coherent solution with speed and quality.
As AI takes over execution, allocation, orchestration, and conducting become the new leverage. But there’s also an opportunity to integrate human intelligence in new ways. Because what’s the point of all this time saved if we can’t apply our unique skills, experience, and knowledge to activities that bring value and energy?
That’s where we go next: Human Calibration.
Enjoying this issue? Sign up to get weekly insights in your inbox: