Drew Barontini

Product Builder

Issue #59
16m read

Product Forge

I was interviewing a user recently. I reached out because they submitted feedback expressing frustration with issues in the product. Through the conversation, I uncovered insights that overlapped with product ideas I was already working through, so the conversation steered toward those overlapping ideas. The additional information increased the fidelity of the idea and my understanding of it, like shining a flashlight in a dark room.

On a different side of the feedback spectrum, I connected with sales and customer success to discuss a product issue with tracking usage data for B2B (business) customers.

Quick sidebar: Treating B2B and B2C (individuals) as entirely distinct is a false dichotomy. What is B2B if not a collection of B2C users? They just use the product individually as a collective, which opens up specific use cases, not a different product. Products should segment by types of users, but viewed through a holistic lens of the product in aggregate.

In this isolated example, we have competing priorities. There’s an individual customer experiencing a pain point in his workflow. And there are business customers experiencing a pain point with tracking how their team is using the product. The usual logic dictates focusing on the business because they pay more; losing them means a larger loss to the business and the bottom line. But we don’t know the scale of that individual user’s issue. Is losing them the smaller loss? In a one-to-one comparison, yes. But indirectly? Maybe not. There’s a high likelihood the one frustrated user is a vocal minority representing a quiet majority of dissatisfied customers who leave the product—or never try it in the first place because they hit the same issue. You don’t know for sure. Fixing this one customer’s problem could drastically shift the landscape of your product’s success.

When building products, there’s no shortage of what you can do. Feedback pours in every direction, forming an endless cacophony of ideas, problems, and opportunities. The challenge is determining what you should build, how to build it, and delivering it on time with excellence.

Enter The Map, The Bell, and The Hammer, a continuous methodology for transforming raw ideas into impactful outcomes—a way of sifting through noise, finding signals, and validating assumptions through iterative learning.

This idea is called the Product Forge:

  1. The Map captures raw data and constrains it to high-level product outcomes.
  2. The Bell prioritizes signals into bets, experiments, learnings, and future work.
  3. The Hammer executes work by isolating it and moving it through the pipeline.

(This idea lives in the 🦉 Clarity Codex of the Claritorium.)

The Map

In her book Continuous Discovery Habits, Teresa Torres uses “Opportunity Solution Trees” to focus discovery efforts against key business objectives. There are different flavors, but they have five key components:

  1. Outcomes
  2. Opportunities
  3. Solutions
  4. Assumptions
  5. Experiments

I use a compressed version of Opportunity Solution Trees I call the Outcome Map.

There are three parts:

  1. Motivations
  2. Assumptions
  3. Possibilities

The Map is all about exploration. And, yes, I love acronyms, so it spells MAP.

Outcomes

You start with an outcome. This can be an OKR or a key metric outcome like increasing retention. The constraint focuses your discovery efforts to limit the scope and group your findings. It also helps segment the customers you reach out to. If you’re looking to improve retention, you should look at both the users that leave and the users that stay. Finding what works is just as important as finding out what doesn’t. Oftentimes, it’s more important. Find the bright spots, figure out what’s working, and multiply it.

Motivations

The motivation is any problem, idea, issue, or pain point. Problems are opportunities. If you don’t frame it this way in product work, the massive weight of endless problems will suffocate you. You can’t and shouldn’t solve every one. Reframe problems as opportunities focused against a key outcome. The motivation is the driving force you need to investigate and understand.

Outcome: Increase retention.

Motivation: I don’t know how to get started.

Assumptions

Assumptions are the most critical part of the discovery process. The language of assumptions lurks in everyday conversations.

It’s okay to make assumptions based on intuition, but it’s not okay to voice an assumption disguised as a fact. It isn’t a fact. It’s something you think, but don’t know.

“How do you know that?” is the right rebuttal to assumptions. Always vet the source of information. Validate every assumption.

Why? Biases. Cognitive biases creep in unnoticed when you make assumptions, and product work is riddled with them.

The most dangerous element of cognitive biases is how easy they are to miss. You might not even realize when one is activated. So you have to diligently engage critical thinking and validate all of your assumptions. Building a guided onboarding experience to solve the problem of users not knowing how to get started is one such assumption. You must validate it.

Outcome: Increase retention.

Motivation: I don’t know how to get started.

Assumption: Users need help getting started with the product.

Even though we heard that users don’t know how to get started, it’s still an assumption they need help getting started. And assumptions must be validated.

Possibilities

Possibilities are where you shift from the problem and opportunity space into what you can do about it. A quick warning: Don’t leave the problem space until you deeply understand the problem and why it’s important to solve it. As John Dewey said, “a problem well-put is half-solved.”

The solution space is a great place for divergent thinking. And don’t just limit it to people branded “creative thinkers.” Everyone can think creatively, and everyone brings a unique perspective based on how they’re working in the product: strategists, designers, engineers, sales, marketing, customer success. The longer you work on a product, the less divergent your thinking becomes as you lose the novelty. Go wide before you go deep to develop diverse solutions.

Outcome: Increase retention.

Motivation: I don’t know how to get started.

Assumption: Users need help getting started with the product.

Possibility: Add guided onboarding.

You can come up with several possibilities, but we’ll use just one here for illustration.

The Bell

In the 1970s, technology and product development teams borrowed the concept of “roadmaps” from literal road maps in cartography to visually represent plans, milestones, and timelines for product and technology evolution. Motorola formalized the concept in 1987 with what they called their “technology-roadmap process.”

My contrarian view is that the term “roadmap” is not appropriate for how it’s used today. A map is about the possibilities and different directions you can go, constrained by the terrain. But a modern product roadmap is not about possibilities. It’s a list of commitments—the chosen route on the map, not the map itself. However, the memetic properties of “roadmap” created ubiquity in software development. While everyone has their own flavor, a roadmap often represents:

  1. Now: Things you’re working on right now.
  2. Next: Things you’re planning on working on after.
  3. Later: Things you’re considering working on in the future.

The metaphor resonates because people like maps. Why? They tell you where you can go and how you can get there. They’re predictable. And humans love predictability. Our brains are pattern-matching machines designed to (try to) predict future behavior. But if you’ve been in software long enough, you know it’s anything but predictable. It’s full of “unknown unknowns”—the unknowns you don’t even know exist yet. Such work is rife with complexity and variation, which destabilizes your ability to plan, predict, or know what the future holds.

This all points to a more dangerous myth in software development:

More planning = more predictability

It’s all guesswork until you do the work. Planning doesn’t change that.

The Bell is how you move from exploration to decision. You take a mass of data and use it to make a bet on what to do. When I say data, I’m referring to:

  1. Quantitative data like usage and financial metrics.
  2. Qualitative data like user interviews and stakeholder feedback.
  3. Intuition shaped by skills, experiences, and overall knowledge.

Intuition is a data point. Honed correctly, it can be the strongest one. Many of the best product decisions were made not because of metrics, but because of a strong internal intuition. I say this because intuition is a keystone component of The Bell.

So what exactly is The Bell? It’s an acronym:

  1. Bets
  2. Experiments
  3. Ledger
  4. Later

Because of the cultural stickiness of roadmaps, I still call it the Product Roadmap. But I think of it as the Launch List. Choose the name that aligns with your organization. You just need space to collect and document the signals you capture from the Outcome Map.

Now let’s explore The Bell components.

Bets

Bets define the Now and Next lists, which we covered when digging into the history of the roadmap. Calling them bets maps the right language to the context. When you decide to work on a project, you’re committing resources to it. To complete the work on time, you must be willing to let the team focus and make decisions in the process. It requires a bet. Stakeholders naturally use this language, and it’s an effective way to frame the work.

Priority Engine

In order to prioritize the bets, you need a decision-making framework. Well, you need a way to think about how to do one thing versus another. Let’s just call it a decision-making framework.

I call it the Priority Engine. There are five key parts that make up the formula:

  1. (A) Alignment Driver: How aligned the work is to business outcomes.
  2. (R) Revenue Driver: How much the work increases revenue.
  3. (Q) Quality Driver: How much the work improves the quality of the product.
  4. (V) Volume Driver: How frequently the work is requested.
  5. (I) Intuition Multiplier: How you feel about the criticality of the work.

Collectively, this gives the formula:

Score = ( A + R + Q + V ) * I

Give each of the five parameters a 1-5 rating and you get a solid scoring system. It’s not perfect—and you should never rely solely on weighted rankings in a spreadsheet—but the Intuition Multiplier is an effective way to elevate a bet worth pursuing despite what the other data says.
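The formula above is simple enough to sketch as a small function. A minimal sketch follows; the driver names and 1-5 ranges come from the list above, while the function name and the two example bets (and their ratings) are hypothetical.

```python
def priority_score(alignment, revenue, quality, volume, intuition):
    """Priority Engine: sum the four drivers, then multiply by intuition.

    Each parameter is a 1-5 rating, per the scoring system above.
    """
    for rating in (alignment, revenue, quality, volume, intuition):
        if not 1 <= rating <= 5:
            raise ValueError("ratings must be between 1 and 5")
    return (alignment + revenue + quality + volume) * intuition

# Hypothetical bets: a strong intuition rating can outrank stronger raw data.
guided_onboarding = priority_score(4, 3, 4, 2, 5)  # (4+3+4+2) * 5 = 65
reporting_export = priority_score(5, 4, 3, 4, 2)   # (5+4+3+4) * 2 = 32
```

Note how the multiplier works as described: the onboarding bet scores lower on the four drivers combined, but a high Intuition Multiplier pushes it ahead.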

Phases & Constraints

For any work in the Now list, there are three phases the work can be in:

  1. Discovery: Figuring out what the solution could look like.
  2. Design: Figuring out how to approach and build the work.
  3. Development: Building the work rapidly and iteratively.

It’s important to decide how much capacity your team has for Now work. You should clearly specify how much work can be in Discovery, Design, and Development. That way, when a stakeholder requests adding something, you have a clear response:

We already have two projects in discovery, which is at capacity. Should we deprioritize one of them to work on this new project? Or can it wait until X?

Stakeholders are additive entities. If you don’t surface the trade-offs when adding new things, you’ll always play from behind. Software is an infinite game in need of infinite strategies.
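One way to make those capacity limits concrete is a quick check before accepting new work into a phase. This is a minimal sketch: the three phase names come from the list above, but the specific limits and project names are hypothetical placeholders.

```python
# Hypothetical capacity limits per phase; tune these to your team.
CAPACITY = {"discovery": 2, "design": 2, "development": 3}

def can_start(phase, in_progress):
    """Return True if the given phase has room for one more project."""
    return len(in_progress.get(phase, [])) < CAPACITY[phase]

# Hypothetical Now list, grouped by phase.
now_list = {
    "discovery": ["search revamp", "usage reports"],
    "design": ["guided onboarding"],
}

can_start("discovery", now_list)  # False: discovery is at capacity
can_start("design", now_list)     # True: one slot remains
```

When `can_start` returns False, that is the moment to surface the trade-off to the stakeholder: deprioritize something, or wait.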

Experiments

Building software is complex, riddled with unknowns, and comes at a high cost. Software engineers ain’t cheap. Experiments de-risk the work. More specifically, we’re running an experiment to validate the stated assumption (or assumptions).

The key is to pick the experiment that gives you the fastest, cheapest validation with the least amount of engineering resources. Start small, learn, then iterate. You can even iteratively advance the experiments as you gradually de-risk each level.

Assumption: Users need help getting started with the product.

Experiment #1: Show a one-question survey to new users after their first session.

If enough users indicate trouble getting started, you can combine the qualitative data with quantitative usage data and decide on the next experiment. If the survey doesn’t give you enough confidence to validate your assumption and move forward with the stated solution, you can learn more in another experiment:

Experiment #2: Manually guide 5 users through their first session on a video call.

Same decision point here. Have enough confidence? Move forward. If not, go again:

Experiment #3: Send a series of short onboarding emails with tips on getting started.

Follow this process iteratively until you have enough confidence to move forward with your solution. You can even pivot (should even!) when a new or different solution emerges based on assumption testing.

Failure is never failure; it’s only learning.

A word of warning here: Don’t get paralyzed by indecision. And don’t spend endless hours trying to de-risk the work. Move forward when you have reasonable confidence. Small development efforts often outweigh long-running discovery efforts.

I once worked in a product org obsessed with customer interviews. That sounds good, right? It was, until I realized the discovery process dragged on for months in order to justify months- or years-long development projects. This process wasn’t really de-risking the work or saving money—not to mention how quickly technology moves. Instead, you can work on a smaller version of the project, release it, get feedback, and move on to the next part. It’s hard to stay adaptable when you commit to development work on lengthy timelines.

Ledger

As you complete projects and experiments, you need to document them. This is what the Ledger is for. You detail each project, when it started, when it was completed, and key learnings like how a metric moved as a result. It’s simple, but effective.
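The Ledger described above maps naturally to a small record type. A minimal sketch follows; the field names mirror the details listed above, while the class name and the example dates are hypothetical (the learnings echo the settings-video example later in this issue).

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class LedgerEntry:
    """One completed project or experiment in the Ledger."""
    project: str
    started: date
    completed: date
    learnings: list[str] = field(default_factory=list)

entry = LedgerEntry(
    project="Settings demo video",
    started=date(2024, 3, 4),       # hypothetical dates
    completed=date(2024, 3, 18),
    learnings=[
        "Viewers who finished the video used settings more",
        "Too few users watched the video",
    ],
)
```

A spreadsheet tab works just as well; the point is that each entry captures the project, its timeline, and what you learned, in one consistent shape.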

Later

I used to keep Later items (the project backlog, if you will) in the core Bets list. But then I realized it just clogs up the most active work. So I moved it to a dedicated space. I keep the Launch List as a spreadsheet, so this is the fourth tab.

With the work prioritized, it’s time to execute.

The Hammer

Projects in the Now list also live in whatever issue-tracking software you’re using. Linear is my current tool of choice. And this is not just for development work. Projects move into Linear even in the discovery phase. Adding issues is a great way to keep discovery and design work constrained.

Our design team was working on a redesign of a feature. After they explored enough concepts, they felt like they were spinning their wheels without forward momentum.

So I had an idea.

I wrote a pitch document to define clear scopes of work, spun up a Linear project, and let the design team use that to constrain the work. Creativity lives in constraints. I don’t like to engage with high-fidelity design until much later in the process to avoid waterfall. But this work was already in flight, so I decided to help support it this way.

It worked! Design focused on the scopes I defined and reached their definition of done.

When it comes to development projects—and general fixes and improvements—I use a consistent set of categories to track the work.

When you’re working on a project, you break the work down into independent scopes and individual issues that flow through the same pipeline. When you work on a single issue, it moves directly through the same pipeline, too.

Consistency matters. This is the Build System.

The Lifecycle

While the process from idea to impact may feel uni-directional, it’s actually a bi-directional, cyclical process. Delivered work informs future work. Learning is a reinforcement process.

First, a motivation emerges:

I don’t know how to use settings to improve the AI response.

We’re trying to increase retention as a key business outcome, so we map it under that outcome in the Outcome Map.

There are endless possibilities to solve this.

But these solutions are meaningless in isolation. Let’s add context:

While adding documentation could help, what if we recorded a Loom-style demo video to show how to use the feature? We could not only embed the video in the product, but also use it in documentation and customer outreach and training. There’s the leverage!

And bonus: it’s a lower lift because one product person can do it.

New solution: Record settings demo video.

All solutions carry assumptions, and we’re assuming that users will watch a video and increase their knowledge of settings to use them more consistently.

How do we validate that assumption?

This is a small enough effort that we can release the video and compare video metrics with the usage and application of settings. Our hypothesis is that users will watch the video, better understand settings, and use them, which will improve product usage and ladder up to better retention numbers.

Experiment: Release settings demo video and track video metrics and settings usage.

This is tracked in the Experiments tab of the Launch List. Then we create the issues in the build system and complete it, circling back to the Ledger to document the learnings.

People who watched the video all the way through did use settings more. But not enough people watched the video.

A new opportunity emerges:

I want to quickly learn the settings to use them more effectively.

And a new set of possibilities, assumptions, and experiments are born. Or, maybe we place a bet on a larger effort to redesign settings.

The Product Forge is a dynamic movement from sense to signal to system. It’s a flywheel of product growth, innovation, and impact.

The Map reveals where new value could be created in the Outcome Map.

The Bell determines what’s worth pursuing next in the Launch List.

The Hammer turns those decisions into shipped results in the Build System.

And then it begins again.
