Drew Barontini

Product Builder

Issue #86
16m read

Knowledge Proofs

AI is triggering a major shift. And we’re all wrestling with what that change means.

How does it affect my role?

What should I focus on?

Will my job be relevant?

AI is permeating every aspect of knowledge work—from what software is built, to how it’s built, to who’s building it.

The Claude-Codification of Work

There’s no throttle for what gets built other than the speed at which a prompt can be written. This is dangerously close to the speed of thought. Someone has an idea and they can generate any artifact. They share it, others give it to Claude to synthesize, and then coherence completely disappears. There is no line connecting the creator’s intent with the recipient’s understanding. It’s like a game of telephone, except the stakes are high enough to sink your company. How? You create a black hole of information, and the one thing that can’t escape this particular black hole is context.

I call this The Claude-Codification of Work:

High-fidelity artifacts built on low-to-no-fidelity thinking.

Good decisions rely on rich context, but when Claude-Codification takes over, the context is obfuscated by high-fidelity outputs. Instead of meeting to brainstorm or exchanging thoughts through long-form writing, you get a PDF (or a Markdown file, a visual, or an HTML page) from the person closest to the prompt. Sometimes it’s just a screenshot of “what Claude said.”

What’s the point of any of this if we’re just participating as hapless bystanders mindlessly parroting what a computer said?

Attention Allocation + Skill Routing

This touches on both Attention Allocation and Skill Routing. The messy Creative Work and Exploratory Work require high expertise. And the deterministic System Work requires Judgement Work to maintain coherence. Leverage AI, but as a tool chosen intentionally to actually provide leverage. Right now, AI isn’t creating space for me to do more of the work I enjoy. It’s just allowing me to do more: cover more surface area and increase the overall scope of my work. When other people use it without thinking, they expect me to reverse-engineer the output to understand the thinking and maintain coherence.

Evidence Boards

I started working on a new strategy. We have three projects in various stages that all ladder up to a bigger initiative. It’s a big lift. But it’s also potentially a seismic shift in the product and business. Leverage lives there. Because I love and believe in the messy, creative process—and because I couldn’t stand to review another Claude-Coded artifact standing in for real thinking—I put the strategy in a FigJam board. It has diagrams, sticky notes, links, screenshots, long-form writing, and anything else collected at this stage.

I’m showing my work.

I shared the link with the team. Someone messaged me an idea and I told them to “put it on the board!” They did. If you want to know the current status of the strategy, it’s there for everyone to see—open source thinking.

It’s like those big boards you see in detective shows. You know the ones: all the pictures of suspects, bits of evidence, maps, and lines connecting them. They’re often called an “evidence board.” It’s a shared mental model representing the latest thinking about the case. The case is the problem; the solution is the outcome. The suspects can only be found guilty when all the evidence is presented, explained, and understood by a jury. The prosecution can’t spit out an AI-generated Markdown file for the jury to review. They have to walk through the reasoning, explain the thinking, and bring the jury along to convince them.

Knowledge Proofs

A “mathematical proof” is a deductive argument that shows the assumptions logically guarantee the conclusion.

In knowledge work, I call these Knowledge Proofs: artifacts of integrated thinking.

The three pillars are:

  1. Externalized Thinking → think out loud
  2. Explicit Reasoning → show your work
  3. Verifiable Understanding → make it checkable

Externalized Thinking

Externalized Thinking (ET) requires intuition. You confront your own assumptions and biases—what’s called “metacognition”—to surface gaps in your understanding. You only know what you don’t know when you engage in rigorous analysis of your own thought.

There are three key stages of ET:

  1. Writing → turn thought into language
  2. Discussing → expose thinking to others
  3. Revising → update thinking with reflection

Writing

Writing is thinking. Translating loose thoughts into written words creates structured language to reason with. It’s the first stage of externalizing your thinking. What’s in your head feels and looks different when backed with language. Meaning evolves.

Start by writing. Get everything out of your head and onto paper. Write by hand, on your computer, or however you prefer. The friction in writing is good. Move fast enough to bring thinking to the surface, but slow enough to wrestle with the words.

Question everything you write.

Writing a prompt is a form of writing, but it’s transient. You write it, maybe reread it a few times, and submit. At that point, you’ve abandoned the original thought and moved into a collaboration with AI. The response will change your original intention, reshaping your thinking. You need to begin with writing so your own words sit there and stare back at you, like a reflection in a mirror. You write, put it away, and then come back to it later. You’ll notice new things when you return to it. Slow thinking is intentional.

Writing is Creative Work.

Discussing

Writing prepares you for discussion. You expose the words to air. They mix, mingle, and morph as they interact with particles of collaborative discussion. Whether talking to your team or AI, you’re engaging a collective intelligence to refine your thinking.

If I’m early with an idea, I’m careful about the timing of discussing it. Why? The idea is vulnerable and volatile. Discuss it too soon and it can lose its shape and dissolve. Once the frame and shape of the idea are forged through writing, it’s safe to bring into the open and discuss. The only caveat is when you have high trust with someone else. I’ll often bring early ideas to specific people before I’ve written about them.

This is especially true for AI. Bringing an idea to AI won’t reveal gaps in your thinking the way discussing it with another human will. LLMs default to sycophancy and agree with you to maintain a positive relationship. You can explicitly tell one to disagree or point out gaps in your thinking, but then it’s only doing that because you instructed it to. AI doesn’t have opinions; it takes positions. Be mindful.

I’m not saying AI isn’t useful when discussing your thinking. It is! But it should be used intentionally at this stage because every LLM nudges you to create an artifact, turning early thinking into something tangible too soon.

That’s the Claude-Codification problem.

Don’t shortcut the process. Discuss your thinking with other people and leverage AI when it makes sense. Divergent thinking is required for high-fidelity understanding.

Discussing is Exploratory Work.

Revising

Revising can be both a standalone stage and an iterative element embedded across the entire process. You write and revise; you discuss and revise; you continually update your thinking with new information. And that’s true across all the stages of writing, discussing, and revising. The stages can be both sequential and cyclical.

I strongly believe the mark of intelligence is the ability to change one’s mind. I often ask people, “What’s something you’ve completely changed your mind about?” It’s an insightful question in two ways:

  1. It shows you whether someone thinks about their own thinking (metacognition).
  2. It shows you whether someone is continually refining their own thinking.

Revise your thinking when you write.

Revise your thinking when you discuss.

Iterate, iterate, iterate.

Revising is Judgement Work.

Explicit Reasoning

Explicit Reasoning (ER) requires integration. You convert thinking into logic and reasoning steps to integrate your thinking with the surrounding environment. To get there, you need a deep understanding of the problem space—what the problem is, why it matters, and the core focus of the work.

I was in a meeting where a slide was shared. It was meant to illustrate, at a high level, the specific work to complete. It was a catch-all solution hoping to solve a problem.

Nobody knew the context driving the visual.

What was the thinking?

What was the reasoning?

Why this particular direction?

If you don’t know what problem you’re solving, you have no business considering solutions. You need explicit reasoning to focus energy because the problem space in complex systems is constantly evolving. Problems are moving targets and solutions are point-in-time interventions. We illustrate understanding of a topic not by stating the solution, but by showing the process that created the solution.

The Claude-Codification of Work amplifies coherence drift. The more high-fidelity artifacts are built on flimsy reasoning, the worse the downstream effects. Sitting in that meeting, most of us just assumed the slide we were looking at had gone through a rigorous process before creation. Sometimes it does and sometimes it doesn’t. There’s no real way to know. That’s why proofs matter.

There are three key stages of ER:

  1. Distilling → narrow the signal
  2. Framing → define the problem
  3. Tracing → show the path

Distilling

Externalized Thinking is divergent. You open the aperture, letting information flow freely to create a breadth of understanding. Once you move into Explicit Reasoning, you apply constraints and filter the information to converge on a clear pathway.

Diverge and then converge. And repeat.

Distilling matters because almost every problem starts too big. Even when it doesn’t begin that way, with enough due diligence, it balloons. Bringing humans into the mix results in an endless cacophony of “what if…” questions that expand the scope. If you want to understand and be clear about your reasoning, start by narrowing the problem area. Chisel away at the scope until you’re at the smallest meaningful problem.

Distilling is where you show your work in the form of constrained reasoning.

Framing

When making decisions, the person closest to the problem should decide. That’s the person closest to the problem, not the prompt. Anyone can ask Claude; only the right person can make the best decision.

When I shape work for an engineer, I focus on the problem more than anything else because the solution is malleable. And that malleability relies on the framing.

Framing is how you take the constrained problem and position it clearly.

Why you’re solving the problem.

Why you’re solving the problem now.

Why this problem instead of other problems.

A well-framed problem is powerful. Framing shows everyone the problem area as a clear and focused objective.

Tracing

I watched my wife and oldest son argue about showing his work in math. He knew the answer and just wanted to write it down. She wanted him to show his work so the teacher would understand how he arrived at that answer. He didn’t understand why.

Maybe it’s lost on a child, but showing your work matters. The entire corpus of peer-reviewed scientific knowledge is built on a foundation of showing your work. Every provable thing we’ve learned about life went through a strenuous process predicated on showing your work—not just for yourself, but so others can build on it.

Tracing is how you draw the line of reasoning that took you from your original thinking to the final conclusion. It’s your chain of thought. It’s how you reached your conclusion. And it matters because the Claude-Coded artifact masks all the thinking and reasoning that preceded it. If you’re lucky, the thinking exists but isn’t presented. If not, you’re looking at a shallow and empty representation.

The best way to trace your reasoning is with a log along the way. Write down what you’re trying to figure out, what you did, what you noticed, what you learned, and what you’ll try next.

This is the Reasoning Log, which we’ll talk about in The Practice. For now, let’s move on to what happens when reasoning is clear.

Verifiable Understanding

Verifiable Understanding (VU) requires iteration. People won’t immediately understand or trust your idea. You have to bring them along in the thinking, which is why Externalized Thinking and Explicit Reasoning are critical to the process of creating clarity.

If you think out loud and show your work, understanding is warmed up along the way. But it still takes focused intention to check your work and deliver alignment. When you’re working on a team, others need to trust and reuse your thinking. AI amplifies this need when Claude-Coded artifacts proliferate.

Coherence is the new limiting factor. The teams that succeed will be the ones that use AI without losing coherence, maintain quality, and leverage all available intelligence.

Understanding persists as AI scales output.

There are three key stages of VU:

  1. Testing → challenge the reasoning
  2. Modeling → create shared mental models
  3. Reusing → let others build on it

Testing

Staring at the slide in that meeting, I had no space to challenge the reasoning.

The meeting owner asked, “Any questions?”

That was the test. Does everyone understand the reasoning behind the work?

A meeting with a lot of people isn’t a safe space to challenge reasoning. Everything feels already figured out. You can’t just ask “how did you decide on this work?” when it’s presented as a high-fidelity artifact. If you thought out loud and showed your work along the way, though, this kind of challenge can work.

But you can’t expect immediate reactions when thinking and reasoning require time to percolate. Slow thinking produces sturdy results. Reactions are flimsy at best.

Testing for understanding is where you review and discuss the reasoning openly with others. It’s like discussing in Externalized Thinking, but later in the process. The work has a full shape. You know what you’re doing, but now you need others to join you.

Modeling

Modeling is for shared mental models.

What does the work look like?

How do you visualize it?

What’s the form?

The work can be written text, visuals, or a combination of the two. I’m visual, so I default to visual imagery and minimal framing. “Less is more” applies. A complex process illustrated in an elaborate flowchart looks great, but is it easy for others to understand?

Repetition helps. I’ll often build visual mental models and share them in meetings. I show them in Loom updates and within regular rhythms to formalize the mental model. But feedback around understanding is hard to get from those formats alone. You have to test the understanding by asking and seeing how the information propagates through the team.

AI can help. If you did all the thinking and reasoning up to this point, a Claude-Coded artifact can serve as a working prototype to test understanding of the reasoning.

Reusing

The true test of Verifiable Understanding is reuse. Others not only understand the thinking and the reasoning, they apply it and extend it in their work. Different teams use the visual to show an idea; the language is repurposed for another sub-strategy; the key principles in the reasoning help frame a new problem.

Reusing at this stage is like a meme: the idea becomes information that spreads easily because of how it’s presented. The most powerful and elegant ideas spread on their own, without intervention. Less is more.

The Practice

As much as I’ve been talking down the artifact generation polluting knowledge work, artifacts are an essential component of work. They just need to be created intentionally. Use AI, but don’t abandon uniquely human endeavors like cognition, thinking, and reasoning.

There are two templates you can use as key artifacts to think out loud, show your work, and create a shared understanding:

  1. Reasoning Log
  2. Knowledge Proof

As an aside, the Knowledge Proof is an excellent starting point for the Intent Brief.

Reasoning Log

The goal of the Reasoning Log is to capture thinking as it evolves.

The five parts:

  1. Objective: What am I trying to figure out?
  2. Actions: What did I do?
  3. Observations: What did I notice?
  4. Insights: What did I learn?
  5. Next Steps: What will I try next?

And here’s a simple product example. The scenario: low activation rate in onboarding.

1. Objective

Understand why new users aren’t completing onboarding.

2. Actions

3. Observations

4. Insights

5. Next Steps
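To make the structure concrete, here’s a minimal sketch of a Reasoning Log as a simple data structure. This is an illustration, not part of the template itself: the interface and every field value below are hypothetical, loosely extrapolated from the onboarding scenario above.

```typescript
// A minimal sketch of the Reasoning Log as a data structure.
// All field values are hypothetical, illustrating the onboarding scenario.

interface ReasoningLogEntry {
  objective: string;      // What am I trying to figure out?
  actions: string[];      // What did I do?
  observations: string[]; // What did I notice?
  insights: string[];     // What did I learn?
  nextSteps: string[];    // What will I try next?
}

const activationLog: ReasoningLogEntry = {
  objective: "Understand why new users aren't completing onboarding.",
  actions: ["Reviewed the funnel analytics", "Watched session recordings"],
  observations: ["Most drop-off clusters at the configuration step"],
  insights: ["Upfront configuration may be adding too much friction"],
  nextSteps: ["Prototype a lighter flow with guided defaults"],
};
```

The exact fields matter less than the habit: each entry captures thinking as it evolves, so anyone can replay how you got from objective to next step.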

Knowledge Proof

A Knowledge Proof is like a compressed research paper, but adapted for speed and decision-making instead of publication.

The goal is to turn thinking into something others can trust and act on.

The five parts:

  1. Claim: What are we asserting?
  2. Context: What situation or problem does this apply to?
  3. Assumptions: What must be true for this to hold?
  4. Reasoning: How did we get here? (steps, logic, evidence, trade-offs)
  5. Conclusion: What follows? What should be done?

Following the same scenario, here’s what an example could look like.

1. Claim

Reducing upfront friction in onboarding will increase activation rate.

2. Context

Activation rate is currently low, with a significant drop-off during onboarding. Improving activation is critical for downstream retention and conversion.

3. Assumptions

4. Reasoning

5. Conclusion

Simplify step 2 by reducing required inputs and adding guided defaults. Test a lighter onboarding flow with optional configuration deferred.
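And to pair with the Reasoning Log sketch above, here’s the same kind of hypothetical sketch for a Knowledge Proof. The claim, context, and conclusion mirror the example; the assumptions and reasoning entries are illustrative placeholders, since the real ones depend on your actual evidence.

```typescript
// A minimal sketch of a Knowledge Proof for the same scenario.
// Assumptions and reasoning entries are illustrative placeholders.

interface KnowledgeProof {
  claim: string;         // What are we asserting?
  context: string;       // What situation or problem does this apply to?
  assumptions: string[]; // What must be true for this to hold?
  reasoning: string[];   // How did we get here? (steps, logic, evidence, trade-offs)
  conclusion: string;    // What follows? What should be done?
}

const onboardingProof: KnowledgeProof = {
  claim: "Reducing upfront friction in onboarding will increase activation rate.",
  context:
    "Activation is low, with a significant drop-off during onboarding; " +
    "improving it is critical for downstream retention and conversion.",
  assumptions: ["Drop-off is driven by friction, not by missing motivation"],
  reasoning: ["The step with the most required inputs shows the largest drop-off"],
  conclusion:
    "Simplify step 2 with fewer required inputs and guided defaults, and " +
    "test a lighter flow with optional configuration deferred.",
};
```

Notice that the proof is the reasoning made checkable: a teammate can challenge any assumption or step directly instead of reverse-engineering a polished artifact.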

The Throughline

Knowledge Proofs flow between two phases:

Exploration + Validation

Exploration is where you go wide, engage divergent thinking, and follow the signals.

Validation is when you apply reasoning, make assumptions clear, and test your ideas.

The three pillars move you from thinking to reasoning to understanding. You leverage AI where it makes sense to expand possibilities without losing coherence. You avoid the scourge of The Claude-Codification of Work, where everyone on your team is just passing around artifacts that look good but don’t contain the depth of reasoning required to create clarity. A polished output built on shallow inputs is a house of cards; it looks cool, but falls apart easily. The polished artifact should only show up after the messy, divergent thinking is complete.

Knowledge Proofs are a new concept to bring thinking, reasoning, and understanding back to the foundation of knowledge work.

Connected Ideas

Knowledge Proofs lives in the Clarity Climate of the Claritorium and Quality Refinement of Equilio. And it connects to other ideas:

  1. Context Windows → gather context
  2. Artifact Mining → collect signals
  3. Collaborative Clarity → shared visibility
  4. Attention Allocation → human attention
  5. Project Engine (Intent Brief) → delivery intent
