Context Windows
To make better decisions, you need to increase your context window
A context window in a Large Language Model (LLM) refers to the maximum amount of text (measured in tokens) the model can consider when generating responses or processing input. It’s essentially the “working memory” used by tools like Claude, ChatGPT, Gemini, and others. These LLMs have continued to expand their context windows, improving the relevance, accuracy, and quality of their responses.
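To make the mechanic concrete, here’s a minimal sketch in Python (token counts are approximated with a simple whitespace split; real models use their own tokenizers and far larger budgets). Anything that doesn’t fit inside the window simply isn’t considered.

```python
# A minimal, illustrative sketch of a context window budget.
# Token counts are approximated with a whitespace split; real models
# use their own tokenizers and much larger windows.

def approx_tokens(text: str) -> int:
    """Rough token estimate; real tokenizers count differently."""
    return len(text.split())

def fit_to_context(messages: list[str], max_tokens: int) -> list[str]:
    """Keep the most recent messages that fit within the token budget."""
    kept: list[str] = []
    used = 0
    for message in reversed(messages):  # walk newest to oldest
        cost = approx_tokens(message)
        if used + cost > max_tokens:
            break  # older context falls outside the window and is never seen
        kept.append(message)
        used += cost
    return list(reversed(kept))  # restore chronological order

conversation = [
    "Original project brief: rebuild the onboarding flow.",
    "Decision: we chose option B because of cost.",
    "Latest question: should we revisit option A?",
]
print(fit_to_context(conversation, max_tokens=12))
```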
The same is true for human brains.
As with LLMs, higher-quality input creates higher-quality output. Understanding context is how you adjust the conditions necessary to make better decisions. It will help you, your team, and your company work smarter.
The clarity of the input determines the quality of the output.
When it comes to thinking about your own Context Windows, you need to make sure:
- You have all the context you need.
- The context is in the right place.
- Everyone understands it.
Signal Range
You want to collect as much context as you can without reducing the signal.
This is the Signal Range.
Too much context, and the noise will drown out the signal. Too little, and there’s no signal to work from. It’s a balance, and you have to feel your way through it.
How? By reviewing decisions after they’re made. What worked? What didn’t? Did you have enough context when you made the call? Let curiosity drive that review.
This is also a key component of developing taste and intuition. When you understand why something works, you begin to create more things that work. It’s a process of fine-tuning your model, which is how LLMs increase the fidelity of their output. Doing so heightens your sense for when you have the right amount of context, or when you need to go digging for more. Your Signal Range is how you intuit what’s needed.
Let curiosity be your fuel. The best way to capture more context is with questions. And the best question is the one that follows the previous answer. Pull the thread. There’s rich context and nuance lurking in places you may not expect. You only know when you intentionally seek out the signal. And this is how you avoid context blindness, which results in misalignment and bad decisions.
My favorite method for building context to find the signal is through framed problems.
A framed problem is one with a clear definition, where you understand its entire surface area:
- Which part of the problem you’re solving.
- Why it’s important to solve the problem.
- What solving the problem creates.
When you’re making decisions, you’re seeking resolution to a problem. The framed problem articulates that problem within the right Signal Range to solve it.
Continuity Thread
The most common phrase I hear from engineering teams goes something like:
I know we made this decision for a reason, but I can’t remember why.
This is soul-crushing to me. It’s the result of one (or more) of the following conditions:
- You didn’t write it down.
- You don’t know where it is.
- The information is outdated.
Not writing it down is a discipline problem.
The other two? They both stem from the same root cause: no centralized and up-to-date context. Context, when properly cared for, lives in one place and is maintained to avoid outdated information. But how can this be when there are so many information channels in the modern workplace?
We return to discipline.
You have to be diligent about capturing key points of context and storing them. And then you need a rhythm for reviewing and updating them to avoid the problem of entropy, which is when information becomes stale. You can capture context in multiple places, but you need a specific location for the most important context.
This is an issue with AI today. Making the most of AI tools (LLMs, specifically) requires an investment paid in context. I like Claude—and still use it sparingly—but I’ve paid for and invested in ChatGPT. All my conversations, questions, and context live there. The even bigger problem is that not everything lives there. I also use Granola for meeting notes, and Notion for everything else from a Personal Knowledge Management (PKM) perspective. And there’s a handful of tools for project communication, like Basecamp, Slack, Linear, Figma, email, etc. Unless you extract data from these tools into one context location, you create context drift and lose clarity in the signal.
My preferred method for combating this is with a Decision Log.
Log each decision you are working through or have completed in one place. Detail the problem, what options are available, and the recommended solution (the decision). Most importantly, detail the why behind the decision. This takes a short amount of time, but will dramatically increase the fidelity of context on your team. You can even do this personally. Making better decisions paired with understanding why the decisions worked (or didn’t) improves your taste. And cultivating taste in the age of AI is what keeps you positioned well for what’s ahead.
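If it helps to picture the shape of an entry, here’s one possible structure sketched in Python purely as illustration; the field names are mine, not a required format.

```python
# One possible shape for a Decision Log entry; the field names are
# illustrative, not a prescribed format.
from dataclasses import dataclass, field
from datetime import date

@dataclass
class DecisionLogEntry:
    problem: str            # the framed problem being resolved
    options: list[str]      # the alternatives that were considered
    decision: str           # the recommended solution
    why: str                # the reasoning behind the decision (the part teams forget)
    decided_on: date = field(default_factory=date.today)

decision_log: list[DecisionLogEntry] = [
    DecisionLogEntry(
        problem="Release dates keep slipping because of manual QA",
        options=["Hire more QA", "Automate the regression suite", "Cut scope"],
        decision="Automate the regression suite",
        why="Removes the recurring bottleneck instead of patching it each release",
    )
]

for entry in decision_log:
    print(f"{entry.decided_on}: {entry.decision} ({entry.why})")
```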
Model Transfer
When making a decision for yourself, the surface area of required context is low. You have one mental model to build: your own.
But when you’re working on a team, that changes. And it compounds dramatically with increases in team sizes—even small ones.
Each member of a team (all humans!) forms their own mental models—internal representations that help make sense of information. While this is helpful for our brains, it causes problems when mental models conflict on a team. What means one thing to one team member can mean something different to another. This is where confusion, frustration, and misalignment grow.
And as a leader, when you’re the only one holding the right context (the mental models) in your head, it creates an echo chamber. That’s dangerous because you lose the ability to refine your own mental model, and the lack of shared understanding deepens the disconnect.
This is where Model Transfer comes in.
It’s about building a shared representation of the context for the team. As you heighten your intuition and sensitivity to what good context looks like, you can then build a shared mental model everyone understands.
My favorite method is visual representations—intentionally designed visuals that create context in shared environments.
When I post a Weekly Update (Issue #11), I share the same visuals, which creates a continuous transfer of the model across the team. Repetition is key. You don’t want to overwhelm, but you need to look at and talk about information the same way. This goes for visuals and language alike, because language also serves as the raw material for mental models. Like visuals, language establishes meaning. Using language intentionally and consistently instills shared meaning on a team. When you talk about a specific project, make sure everyone calls it the same thing, writes it the same way, and knows exactly what it means.
When you’re meeting (virtually or in person), you create another opportunity to show a visual model of the context you want to share on the team. This is Collaborative Clarity (Issue #27), and it prevents the echo chambers that create team misalignment.
Context is the fuel for great decision-making
Context is a form of information. But it’s more than that. Context tells the story of information: not just what it is, but where it came from, and why it exists. That story is everything. It’s how you make better decisions; it’s how you create better teams; it’s how you do quality work.
It starts with capturing context.
Not too much or too little—just the right amount to understand the why. Then you work to centralize the context and distill the thinking without diluting it. And, finally, share the mental model beyond yourself to scale context and drive alignment, creating an engine of clarity for everyone.