Key Best Practices for Using Customer Feedback

Key best practices for using customer feedback, drawn from conversations with 14 product leaders


As Product Managers, we understand perfectly well the need to generate and use customer feedback. What often isn’t so clear is how to do this on a day-to-day basis, when we’re not as experienced or when we deal with “less than ideal” products and organizations.

This led me to reach out to 14 leading Product Managers and talk with them about how they use customer feedback in their own companies and teams. You can find the full audio recordings, along with transcripts and highlights in this resource. There’s a ton of useful information throughout those conversations. In this post, I wanted to share with you some of the key takeaways I got from them.

1. Feedback is only relevant relative to a goal and user context

Understand where it’s coming from

A piece of feedback usually comes to us in the form of “users are asking for X” or “customers Y and Z are telling us this”. By itself, that’s absolutely meaningless. The first step to figuring out whether something is relevant is to know where it’s coming from — and since we’re dealing with products and markets, this isn’t about knowing which specific users are giving the feedback, but about which segments they belong to. That will provide the necessary context for us to understand the motivation and problem they might be facing.

Just like a cake, there are many ways in which we can slice our customer and user base, and there isn’t one true way to do it. It all depends on what we need to do and on the stage and type of the product. Different Product Managers think of segmentation along the dimensions that are most effective for their particular goals. Most often, they group their users and customers along their:

Characteristics and Behaviors

Traditional market segmentation is typically done around observable characteristics and behaviors of customers and prospects. First, we have demographics — statistical characteristics of the population, such as age, gender, and income. Then, there’s also psychographics — which classifies people according to their attitudes, aspirations, and other psychological criteria. These kinds of segmentation, however, are mostly useful for Marketing purposes, not so much for PMs.

Finally, and although there are many potential issues around how these are defined and used, roles and personas are a staple of many teams’ workflows for designing new features, and thus are also frequently used to think about different segments of the user base.


Needs and Jobs-to-be-Done
Frameworks like Jobs-to-be-Done are extremely helpful in determining exactly what the product is supposed to be doing for its customers — that is, the needs it serves. The same product may be used by people with quite different needs and under a wide range of contexts. This means that a product’s suitability will not just depend on the person and her characteristics, it will actually depend on the product’s usage context and the goals for the task at hand.

By segmenting our user base in terms of the jobs they’re looking to get done, and not just their role or descriptive characteristics, we’ll have an essential piece of context that provides much more clarity in how to seek and interpret the feedback we get from them. A classic example illustrating this point is that customers don’t actually need a 1/8-inch drill bit; what they need is a 1/8-inch hole in their wall.

Relationship with the product (over time and over value)

Another dimension along which to segment customers is how they relate to the product — most commonly: their usage level, how long they’ve been users, the value they get from it, and what they pay (or have paid so far) for it. These dimensions are cross-cutting (and complementary) to other types of segmentation and can be very useful in understanding why people in what should be the “same group” are giving different answers.

Let’s go a bit more into each dimension and the sort of questions they answer:

  • Usage — Each role or needs-based segment will have some assumptions about the features that will be used and how frequently we expect them to be used. If the data shows different feature-use and frequency clusters, we can go into a lot of interesting questions with those specific users — Why are they using it more/less than expected? Are our assumptions about needs or role-based segmentation wrong? Are they getting what they need from the product?
  • Longevity — Where customers are in their relationship with the product is very important for classifying unsolicited feedback and knowing which kinds of questions to ask them. With new customers, we’re looking for product fit, usability feedback, indications of continued use in the future, and the motivations behind the purchase/usage decision. With older customers, we’re typically interested in satisfaction, power-user and early-testing feedback, and pain points that the product doesn’t solve.
  • Perceived value — A set of customers can have the same underlying need and motivation to use the product, but the value they get from it differs. Their particular pain points might be the same, but the intensity isn’t homogeneous. We’re looking to have a clear view of “What is the customer getting out of the product?” and “How important is that problem for them?”. By understanding where they fit within this gradient, we can get much more insight into their feedback.
  • Invested value — The amount of money customers have spent on the product, relative to other customers, is also telling of the kind of relationship they have with it, and a proxy for their satisfaction, perceived value, and importance. This of course varies widely and depends on each product’s characteristics; however, it is an easy metric to use as a guideline.

Uses and definition of different kinds of segmentation
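To make the dimensions above concrete, here’s a minimal sketch of how a team might record this context alongside each piece of feedback. All field names, thresholds, and the example customer are invented for illustration — they’re not from any of the interviewed teams.

```python
from dataclasses import dataclass

# Illustrative only: every field name and threshold here is an assumption,
# not something prescribed by the article or the interviewed PMs.
@dataclass
class CustomerContext:
    segment: str            # role- or needs-based segment
    monthly_sessions: int   # usage level
    months_active: int      # longevity
    perceived_value: int    # e.g. 1 (low) to 5 (high), from interviews/surveys
    lifetime_spend: float   # invested value

def longevity_bucket(ctx: CustomerContext, new_cutoff_months: int = 3) -> str:
    """New customers call for fit/usability questions; established ones for
    satisfaction, power-user feedback, and unsolved pain points."""
    return "new" if ctx.months_active < new_cutoff_months else "established"

alice = CustomerContext(segment="agency-admin", monthly_sessions=40,
                        months_active=1, perceived_value=4, lifetime_spend=99.0)
print(longevity_bucket(alice))  # new
```

Attaching context like this to every feedback item makes it possible to ask the questions listed above — for instance, why a cluster’s usage differs from what its segment led us to expect — instead of treating feedback as an undifferentiated stream.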

You need to know where you’re headed

Yet, having a good segmentation model and being aware of where the piece of feedback is coming from (and the context and motivation behind it) is not enough.

The only way to have a clear answer to “is this bit of feedback relevant?” is by considering both the user context and our current product and business goals. If our current goal is to expand our MRR by up-selling customers on paid plan A to plan A+, then feedback from users on the Free plan will not be as relevant. It might be if we were looking to increase retention or improve satisfaction, for instance.

It’s a two-level processing system:

  1. Do we know “Who” is giving us this feedback and why?
  2. Is this something that we want to focus on right now?

If the answer to the latter is No, then you can safely move on to whatever else might move your needle — there are never enough resources, so you might as well focus on what matters to your goals. When the answer is Yes, you can proceed with a clearer definition of how to evaluate success.

(…) While we get this continual stream of feedback, it’s very easy to see whether that’s relevant to what we’re working on right now. If it is, then we sometimes interleave that immediately, or if we think it coincides with what we think we should be doing anyway and it positively reinforces all that, or it’s something we completely missed and forgot about and it’s a no-brainer, then that’s really easy. The other stuff, the stuff that we’re not planning on working on, doesn’t match that broader strategy… We are aware of it. This is the back of our minds, but we don’t act upon that at all. — Tom Randle (Geckoboard)
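The two-level check above can be sketched as a simple filter. The segment names (`plan-A`, `free`) are hypothetical, borrowed from the MRR example earlier; a real implementation would plug into whatever segmentation model the team uses.

```python
# Hedged sketch of the two-level relevance check; segment names are invented.
def is_relevant(feedback, current_goal_segments):
    # Level 1: do we know who is giving us this feedback (their segment) and why?
    if feedback.get("segment") is None:
        return False  # without context, relevance can't be judged yet
    # Level 2: is this something we want to focus on right now?
    return feedback["segment"] in current_goal_segments

# Current goal: up-sell customers on paid plan A to plan A+.
goal_segments = {"plan-A"}
print(is_relevant({"segment": "plan-A", "text": "Need SSO"}, goal_segments))  # True
print(is_relevant({"segment": "free", "text": "Need SSO"}, goal_segments))    # False
```

Feedback that fails the second check isn’t deleted — as described above, it stays in the back of the mind, just not acted upon right now.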

2. Getting quality feedback is a cross-functional effort

Insert yourself into customer touch-points

Organizational silos exist for many reasons, but they particularly affect Product Managers, who drive the cross-functional process that defines and ships products. So it’s on us to break those barriers down.

One way to do it is to find ways to help other customer-facing teams (like Sales and Support) do their jobs. Meet with them, be available for questions, go through their concerns, and explain future plans or workarounds. This effort to reach out will signal to other teams that they should come to you with customer issues or questions of their own.

A further step is to actually be part of those teams sometimes. Join the support team and answer a few tickets yourself. Ask them to send you summaries of the top problems every week or so. Go on sales calls and listen. Understand what customers ask for and object to (and what salespeople are telling them). Later, you can debrief them on your plans and on how to focus their message so they sell what you have, or what you’re sure you’ll have (and not some random feature idea).

You’ve got to actually show up on the sales floor, or in the support departments, and just get to know the people and find out what’s going on, and help them out now and then with challenges that they’re having, because you’re the product guy who wanders through the department, you will get questions, and if you’re too busy to answer people’s questions or set up a meeting with them, then you’re not establishing a good relationship: a good, collaborative, casual relationship. If you’re like, “Sure, I’ve got five minutes. Let’s talk about that,” or if salespeople are asking, “How do we handle this competitor or that competitor,” go off and volunteer to do a little research and get back to them that afternoon with a little tidbit: “This is how I might position against that competitor. They seem to have this weakness or that weakness, compared to us.” Be willing to engage in these informal conversations. — Bruce McCarthy (Product Culture)

Get everyone to think like a PM

Since you can’t always be there for other teams, coach them so they give you data that’s closer to what you need. Help them think like a PM, so that in their interactions with customers they dig further to understand the problem, and they don’t come to you with solutions.

Feedback is most valued (and valuable) when shared

If your organization doesn’t see the value in customer feedback, find a way to get some, and show it to people in leadership positions. It’s amazing the impact on empathy and understanding that comes from this.

Also, if you set up a regular check-in with your cross-functional team to gather and share what you (and they) are learning from customers, you’ll be aligning and empowering everyone to understand the problems you’re facing, and contribute to the design of solutions.

We get everyone together and we get a little report from each team, and again, all this is done with the view of the strategy that we set, which we communicate regularly to the team. But it’s basically talking to each other, we just make sure that there’s always that open communication. We never want to get into the situation where sales are off doing one thing, customer success is doing another, and product’s trying to pull it all together. So we’ve been really conscious from the very beginning of making sure that we communicate really coherently through access to the central area with the releases and roadmap, and through regular meetings as well. — Hannah Chaplin (Pendo)

3. Think of customer feedback as a system

When talking about customer feedback, people usually think about a particular type of tool — it might be surveys, user tests, feature requests or others. But in reality, it’s a system of tools and techniques, combined.

The goal of customer feedback is to understand whether or not we’re hitting our goals and our customers’. It’s about getting customer input throughout the development cycle and getting a more complete picture of their needs.

Visually, you can think of it as poking holes through a curtain, trying to see what’s behind it. Each tool lets you uncover different areas, and that is why you need to use many of them.

Customer feedback is like poking holes through a curtain

The most immediate way to think about customer feedback methods is in terms of the type of data they produce—they can be either quantitative or qualitative.

Another way to think about them is how the feedback is triggered—are we “actively” probing for something or are we “passively” listening and monitoring what comes in? Combining these two dimensions produces a matrix like the one below.

Matrix of feedback methods: Passive/Active vs Quantitative/Qualitative

However, a much more interesting way of looking at this is to consider the purpose of the feedback method. In other words: what does it help us solve? Using that perspective, we can divide feedback methods into four major categories:

  1. Understanding — methods that let us understand what customers need, find valuable, and the reasons why things work or don’t work for them.
  2. Testing — methods that help us test and validate if a concrete idea, feature or value proposition matches our expectations or not.
  3. Monitoring — methods that work as “thermometers” to track over time whether a feature, a release, or the product in general is truly matching our expectations.
  4. Listening — open feedback channels for customers to reach us for support, questions, requests, or general feedback.

Let’s have a look at how these groupings line up with commonly used feedback methods:

Groupings of feedback methods
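As a rough sketch of such a grouping — the method lists below are common examples of each category, assumed for illustration, not necessarily the exact grouping from the original figure:

```python
# Common examples per purpose-based category; these lists are an assumption
# for illustration, not the article's exact figure.
FEEDBACK_METHODS = {
    "understanding": ["customer interviews", "field studies", "jobs-to-be-done research"],
    "testing": ["usability tests", "prototype walkthroughs", "A/B tests"],
    "monitoring": ["NPS surveys", "satisfaction surveys", "product analytics"],
    "listening": ["support tickets", "feature requests", "app-store reviews"],
}

def category_of(method):
    """Return the purpose-based category a given method belongs to."""
    for category, methods in FEEDBACK_METHODS.items():
        if method in methods:
            return category
    return "unknown"

print(category_of("support tickets"))  # listening
```

A lookup like this is mostly useful as a planning aid: it forces the question “what is this method for?” before a survey or interview round gets scheduled.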

Reviewing these categories, we can see how they correspond to the product development cycle, and also how they mostly match the Quantitative/Qualitative and Passive/Active matrix.

This classification shows the value that comes from each kind of customer feedback and provides a structured approach to putting these methods together.

An order to follow when using different methods

Something is better than nothing

The good news is that we don’t need to do everything to get valuable insights — at least not right away. We usually fret about whether or not we’re doing the right things and whether we’re doing them right. This is a great instinct to have as PMs — we should be thinking about how to improve our processes and work. On the other hand, this can also lead to inaction (“what if I send this survey to the wrong audience?”, “what if I’m not asking the right questions?”, “which tool should I use for this?”, and so on).

As long as we’re aware that whatever process we follow isn’t going to be perfect, then it’s much better to do something, than nothing at all. It’s very likely that we’ll be working off of imperfect inputs, but being conscious of it is key: this way, we’ll have something to question, research further, and test. At least we’re starting from something that came from our customers, rather than our own heads.

(…) product management advice needs to be put into context. So I’ll give an example here, like Net Promoter Score. It’s now being ripped on a lot as not being really the one score you need, it’s not great, it’s got a lot of problems, it wasn’t really based on valid research. All of those are probably true (…). But I think we need to recognize what it did for a lot of companies. We had a lot of companies that literally took no metrics, never asked customers anything, and for the first time actually did something. They asked, “Would you recommend this?” Now is that a great question, does it really make any sense? Maybe, maybe not. But the point is that companies finally did that for the first time. So I think when we talk about prioritizing customer feedback, let’s step back for a moment and look at where the company is. Because if you’re not doing anything, do something… — Nis Frome (Feedback Loop)