- The importance of observation and interviews to get immersed in a problem domain, which drives the process of finding new opportunities
- Qualitative research is good for generating insights and hypotheses, and quantitative research is good for testing them
- How to avoid leading questions in your interviews
- How to find problems that users may not be aware exist
- When to start thinking about quantitative feedback loops when you’re building the product
- How to filter through all the data points you receive to get to what’s really important
- Different ways of segmenting users and common mistakes
- How to involve the entire, cross-functional product team for greater alignment around the strategy and thus, better collaboration
This conversation is part of a series of interviews with experienced Product Managers on the topic of Customer Feedback. Listen and read on the site at your pace, or subscribe below to get a weekly email (for 7 weeks), containing selected interviews and highlights.
For those that may not know you, can you tell me a bit about yourself and your background in Product Management?
Sure, I started as a Software Engineer early in my career and then progressed into Product Management in approximately the year 2000. Since then I’ve been focusing on really solving the problem of product decisions that are not informed and how to inform those decisions so that companies are making smart ones, and doing that through Lean Startup methods, Design Thinking approaches and applying timeless Marketing principles as I like to say.
Great. So the topic of Customer Data and Feedback is a very big one. There are so many methods that we can use to drive insights from it, so I thought that the best way we could approach this was to maybe walk through the typical lifecycle of product development. As a first step into this, I was wondering, when you’re trying to find new opportunities and problems to solve for a given product, which are your Go-To tools to find new insights?
So I’d say it’s probably primarily observation and interviews. And so, being immersed in a domain and a potential set of problems that can happen in that domain, doing prospect interviews, just observing the real world around you, the processes that people go through in order to get jobs done and solve problems and noticing the challenges that they’re experiencing as they’re trying to get those jobs done.
A lot of times we can have initial ideas about what a product should be–an initial version of a product. But we want to test that against–or at least think about–what problem we’re trying to solve with that product, because sometimes it’s more of an idea driven by a solution we might have thought of, and so we want to make sure we are solving real problems. But then, in many cases, or at least in some cases, we may already be involved in a domain, we may already be managing a product within an industry and noticing some challenges that people in that industry are facing, and that may give rise to an idea for a new product.
Yeah, that’s the thing I wanted to dive into, which is, we’re trying to make sure we’re solving real problems, or maybe noticing challenges in the product domain we’re already working on… How can we be sure where the real problem is? I mean, how many subjects do you need before you know you’ve found a real problem? How do you know it isn’t something superficial, and realize you’re actually looking at an unserved need of that domain and customer base?
Ok, so I think here we get to the topic of qualitative vs quantitative research and methods… Meaning that on the qualitative side we’re talking about stories, anecdotes, observations that we’ve made outside of a strictly scientific context and those interviews, those observations that we make as we live our lives, as we are immersed in an industry, those are the qualitative insights that we get.
On the quantitative side… This is where there’s a tendency to think that the most important methods for determining your product decisions come out of the quantitative side. Those are the numerical, data-driven methods, much closer to a rigorous scientific method: we might do some surveys, we might do experiments, we look at the numerical results of those experiments, and we try to put in controls for those experiments, as scientists might do. Both of those are very important, but I personally feel that the quantitative side is sometimes overrated relative to the qualitative side, just because the rigor and the science on the quantitative side are tempting and appealing to people. It turns out that in many cases you’re trying too hard to be scientific, and it’s not necessarily worth the time, effort and investment. So you still want to do those quantitative efforts, for sure, but being obsessed with a perfectly rigorous scientific experiment doesn’t necessarily pay off.
On the qualitative side, you end up being able to make a lot of discoveries that you might not make on the quantitative side. The quantitative side is great for testing the kinds of hypotheses and insights that you derive from qualitative methods. But the qualitative side is great for generating those insights and possible discoveries that you never would have predicted or even thought of.
So when you’re trying to dig into… you’re talking to someone, looking to understand what their underlying problems are. What sort of questions do you ask? How can you make sure that you’re getting the right data and you’re not leading them toward some biased, preconceived idea that you’re coming from? And how do you make sure it stays observational, and doesn’t disrupt their worldview and opinions towards a given problem?
So that’s a great question, and it points to the fact that even on the qualitative side, the way you go about it matters. You can bias things so that you’re not getting the insights that you want; you might be getting misleading information… The way you generally avoid that… There are a few rules to follow:
First and foremost: don’t ask hypothetical questions. If you ask a question like “Would you buy this product if the price was such and such?”, that sort of question is generally going to give you a very unreliable answer. A lot of people will answer it in the affirmative, but in real life they simply wouldn’t do it, or wouldn’t do what they think they would do. And it’s not that they’re lying, necessarily, or trying to humor you; it’s sometimes that they really just don’t understand themselves…
Another thing to avoid when you’re conducting these interviews: you don’t want to ask open-ended questions about what they want. You do want to ask open-ended questions, but not about what they want. A lot of the time, folks are going to give you answers within a very narrow view of the possibilities. As Product Managers, and the folks that we work with, the UX designers, we’re the ones who are supposed to be the experts on how to solve problems. Typically the end user isn’t the expert on how to solve the problem, but on what problems they face. So you avoid asking what they want, and instead you ask them what they do, what challenges they face along the way, what problems they’re trying to solve and what jobs they’re trying to get done.
So let’s see, we have the hypothetical questions to avoid, we have not asking what they want but what problem they’re trying to solve and we also…
You were talking about what they do and what they’re trying to do. There’s a very hard problem to crack here, which is finding the things that they don’t realize are a problem, or the things they assume are just the natural way of doing things, which may be good opportunities for innovation or for building a product around. I mean, people often don’t realize the kind of problems they’re having because they assume that’s just the way life is. Is there a way to find out things that could be optimized, things that they don’t realize they need, and maybe dig into those unrealized problems?
Sure, lead them down that path of just describing what they do on a day-to-day basis, or within the context of what might be a problem-rich environment, or the business or life processes that they go through. As you lead them through describing what their actual experiences are, some of the potential problems begin to become more apparent. Sometimes you might be taking notes along the way, and it won’t be until after the interview, or until you’ve interviewed several people, that you notice a pattern and go “boy, that sounds like an unnecessary thing that they do in trying to accomplish their task, and it seems like a waste of time for them, or it seems unpleasant for them to do”. That’s where you uncover these insights, in many cases. Also, in many of those cases they don’t necessarily see those things as a problem themselves, and it gets a little bit tricky there. It’s generally easier to have a compelling product, and to communicate the value of the product, if the problem is obvious to the end user and the end customer. But in some cases it’s not obvious, and you kind of have to solve the problem and show them what the experience would be like if the problem were solved. Steve Jobs was famous for doing that. There’s a Steve Jobs quote, which I can’t recite off the top of my head, where he talks about how customers don’t know what they want, and he proceeds to say that the way to address that is to show it to them; show them what their new experience could be like, and at that point they wonder why they ever tolerated the previous experience.
An example I would give is, you know, email. As archaic as email may sound as a communication medium for those of us today, imagine going back to inter-office mail: the days where, if you wanted to communicate with other people in a large company, you would have it in written form, you would put it in an envelope, and then put it in a little internal mail slot, and then that would get delivered to people in the other offices in the company. At the time, if you asked people “is this a problem?”, you know, they might not have said that it was a big problem, because they didn’t see a better way. The idea of just typing something and it magically appearing in somebody’s inbox was too fanciful for them to even appreciate as a potential solution to compare their existing experience against.
Yeah, I guess there’s always this factor where, when we see Science Fiction movies, we project our current solutions into the future, and then you have films like Alien with, you know, just CRT screens with the green lights and that kind of stuff. So moving on to the next step: as we’re trying to build a solution for some insight that we’ve gathered, there are also ways that we can learn from customers while we build. Which are, once again, your go-to tools for this step of the process?
Sure, so typically what I do is continue these interviews even as the product is being developed, and even after it has been developed and we’re iterating on it. But at the same time, developing the product gives you great opportunities to put real potential solutions in the hands of those prospective customers or users and do what I mentioned just a second ago… which is show them what a new experience could be like. You can do that in several ways…
One way is to put real, working software or product in front of them. Watch what they do. Watch them actually using it on a real use case, as opposed to showing them some sort of demo and asking “Ooh, do you like this?”. Instead, find ways where they can actually use the product in the manner that you intend for them to use it, and observe whether and how they’re using it. You can set it up as a sort of test, where you expect them to do certain things… You expect it to be “easy”, as measured by how long it takes them and by how many missteps–how many deviations from what you expected–they make. But you also get qualitative insights when they do things that you had no idea they would do. They might click on buttons or interact with the user interface in ways that you completely did not expect, and you can get some real insights from that as well. That happens very frequently in product management and in UX… You have certain ideas about how people are going to experience your product and what they’re going to do, and then you find out you had no idea: there might be difficulties that they need to avoid, or that you need to help them avoid, and you don’t discover them until you’ve actually put the product in front of them.
Short of putting a real product in front of them, you can put a fake product in front of them. You can do things like give them slide decks (presentation slides) or things like that and to the extent possible make them sort of functional, maybe clickable in some way, where they’re actually navigating in some fidelity to what the intended experience would be if you fully developed a product, then you can get similar insights at a much lower cost, because you’re not having to do all of the development that you would if you were far beyond the slide deck.
Yeah, so there are studies saying that lo-fi and hi-fi prototypes give different results. Some people have found that with hi-fi prototypes, users cling to the idea that this is already a working product, or something that you’ve already invested a lot of time into, and the kinds of things they will say to you are different, in order to please you somehow, because, you know, you’ve put a lot of effort into it… Have you had that kind of experience? Is it something that you see? Do you tend to prefer lo-fi or hi-fi, for these reasons or for something else entirely?
I think that sort of thing is always a danger, and that’s why I’m always looking to create situations where I’m looking at what they do, instead of, you know, getting direct feedback on “do you like this or not?”. So that again goes back to concentrating on things that aren’t hypothetical, things that aren’t about what they say they like or don’t like. So, to the extent that you can, set up an experiment where you put the slide deck, or your wireframes, or even the hi-fidelity prototype in front of them, challenge them to get a job done, and then just watch them. Don’t ask them whether they like it or not; just watch them. You can get a lot more reliable insights that way. As for some of the other potential pitfalls you mentioned: you might not want to set the expectation that you have a finished product, and that can be managed just by setting expectations right, verbally. Saying: “this is not the final product. This is something that I’ve put together to better understand whether a solution like this, if it were developed, would help solve your problems and get your jobs done.”
Awesome. So, when you’re doing this kind of study and you’re trying to get this kind of insight, do you usually do this in person or do you use any kind of remote tool so you can reach out to customers that are maybe far away?
So, I’ve personally not used the remote tools, and I know that some folks have and are pleased with those kinds of user-testing tools, so I can’t really speak to how well they work from direct personal experience, but I know that some folks have gotten good insights. I would say that when you’re able to do things in person, you tend to get some of the body language that helps you understand a little bit more when a process is painful, when the usage of a product is painful. You might be able to get some of that same insight from doing a video call or something like that.
You mentioned that quantitative tools and that kind of data are useful for validating hypotheses and maybe trying to prove or disprove something that you think is valuable, or that you think is a solution to something. When in this process would you fit that in? Is it after you’ve already done Engineering work and it’s already shipped, or is it somewhat prior to that?
Sure, good question. I won’t say all throughout, but I would say that after you’ve done some initial qualitative work, once you’ve got the product built to some extent, you can start embedding usage analytics into the product. There are tools like Mixpanel, Kissmetrics and I believe even Intercom that might help with this, where you can start measuring what people are doing in your product without having to sit in front of them and watch them do it. Then you can run some experiments there, where you predict what they’re going to do and how often they’re going to do it, and that can help you get some insights when they’re not conforming to your expectations. With Mixpanel you’re able to set up what are called “event funnels”. So if you have a use case, for example, where you expect somebody to go through a sequence of steps to achieve a goal–as a use case essentially is–you can instrument the product to track that sequence of events and see where the drop-off is. People will start on the first event, some of them will make it to the second event, some of them will make it to the third event, and there’s typically a drop-off as you proceed through that sequence… Some people never make it to the end, never get their payoff and their job done. And that’s really important; it gives you insight into what you might improve, or at least which steps you might want to make easier or more obvious.
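The event-funnel idea described above can be sketched in a few lines of code. This is a hypothetical illustration only, not the Mixpanel API: the event names, users and helper function are all invented for the example.

```python
# Count how many users reach each step of an ordered funnel,
# given each user's event stream. Illustrative data only.

FUNNEL = ["signed_up", "created_project", "invited_teammate", "completed_goal"]

user_events = {
    "u1": ["signed_up", "created_project", "completed_goal"],
    "u2": ["signed_up"],
    "u3": ["signed_up", "created_project", "invited_teammate", "completed_goal"],
}

def funnel_counts(events_by_user, funnel):
    counts = [0] * len(funnel)
    for events in events_by_user.values():
        pos = 0  # index of the next funnel step this user still has to hit
        for event in events:
            if pos < len(funnel) and event == funnel[pos]:
                pos += 1
        for i in range(pos):  # user completed steps 0..pos-1
            counts[i] += 1
    return counts

for step, n in zip(FUNNEL, funnel_counts(user_events, FUNNEL)):
    print(f"{step}: {n} users")
```

With this toy data the counts drop from 3 to 2 to 1, which is exactly the kind of drop-off pattern the interview is describing: the step where the count falls sharply is the one worth making easier or more obvious.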
But you don’t actually have to build the product and instrument it with these usage analytics in order to begin deriving some quantitative insights. If you look at Lean Startup methods, they’ll do things like put a value proposition on a landing page or some other channel. So you might have social media channels; you might have Facebook promoted posts (and ones that aren’t even promoted); you might post things to Twitter or the other social media channels with your value proposition, which will generally be something that hints at the problem that you think needs to be solved. So if you’ve gotten some of these qualitative insights about what you think is an urgent, pervasive problem that needs to be solved, how about finding or publishing some resources online that solve that problem, that help users solve it for themselves? Then post to social media channels: “Hey, if you’re experiencing this problem, here’s a resource that may help you”, and make predictions in advance about how many people are going to click that link. The fun thing, again, is that it doesn’t have to be your own resource; it may be something that you found online that somebody else published, but it still serves the purpose, because you’re attracting people who find value in solving that problem, and that can help you test your hypothesis about whether the problem you think is urgent and pervasive really is as urgent and pervasive as you think.
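The "predict in advance, then compare" step above can be made concrete with a quick calculation. This is a rough sketch with made-up numbers, using a simple normal-approximation interval around the observed click-through rate to see whether the advance prediction is plausible:

```python
# Compare a predicted click-through rate with what was actually observed.
# All figures are invented for illustration.
import math

predicted_rate = 0.05  # advance prediction: 5% of viewers click the link
impressions = 2000     # people who saw the post
clicks = 60            # people who clicked through

observed = clicks / impressions
# Rough 95% interval around the observed rate (normal approximation)
se = math.sqrt(observed * (1 - observed) / impressions)
low, high = observed - 1.96 * se, observed + 1.96 * se

consistent = low <= predicted_rate <= high
print(f"observed {observed:.1%}, 95% CI [{low:.1%}, {high:.1%}], "
      f"prediction {'plausible' if consistent else 'contradicted'}")
```

Here the observed 3% rate sits well below the predicted 5%, so the prediction is contradicted: a signal that the problem may be less urgent or pervasive than hypothesized. The normal approximation is crude for small samples; the point is only the habit of writing the prediction down before the data arrives.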
Ok, so now you’ve built the product, you got to something and shipped a first version, and then you have this instrumentation used on a wide scale, and you also get customer support information, and maybe feature requests related to the thing that you shipped, and some ideas; all of these things are coming at you, and many of them are superficial, and once again, people aren’t the experts on the solution, or on what the best solution is for them, but everything is coming in after you’ve already put something out. How do you sift through all of these things? How do you find meaning and value in all this possibly noisy information?
Sure, so it’s hard, and there are a couple of ways to filter through all of that and simplify it.
First of all, you’re always asking why. You’re getting customer feedback that says “I can’t…”, there’s a bug in the product, or maybe you’re getting questions from customers, and all those sorts of things. You always want to know what they’re trying to accomplish in the context of experiencing those bugs and asking those questions. So you’re always digging into what they’re trying to accomplish and why they’re trying to accomplish it. Ultimately, though, there’s a filter that you always need to be applying, and that is your Unique Value Proposition.
So you’ve identified some set of problems that you’re intending to solve, and you’ve generally used those to drive a unique value proposition: “the value that we provide is X”, and X solves that set of problems. You always need to be making your product decisions in the context of that unique value proposition. Now, there are times when you may need to pivot, meaning you need to modify your unique value proposition and basically change the vision for your product. But short of that, for any feature request coming your way, any bug report coming your way, as you go through those and find out what the person is trying to accomplish… if what they’re trying to accomplish is not consistent with the unique value proposition, it may sow the seeds for a great new product idea, but it’s probably not one that you should be taking on. So that’s first and foremost the filter to apply as you’re getting bombarded with all of these ideas for how you can improve or modify your product: “Is it consistent? Does it support the unique value proposition?” And that’s on a scale, right? Some things support it more than others. So that’s your number one prioritization technique.
Other than that, you want to be quantifying how many potential customers and users would actually benefit from whatever improvement to the product you are pondering. You don’t necessarily do that by looking at the volume of requests or the volume of customer service complaints. That can be a guide, and it might give you some hypotheses: “Boy, this problem is much more worth solving than this other one.” But that’s probably something you’ll want to test, again, using qualitative methods, by going out there and interviewing folks about it, and also by running additional quantitative experiments where you can see how many of those people are out there, and not just the ones who self-selected and chose to report the problems.
I guess a very difficult question to ask ourselves is: if we find out there’s a problem that’s pervasive across our customer base but doesn’t fit our current value proposition, should we reevaluate the value proposition based on that? Or is it something like “no, we have this vision and we’re sure that what we’ve already got is enough, and we’re consciously leaving that part of the problem out”? So, this is probably more of a strategic kind of thing, but what kind of insight can you share about this very difficult choice: when to do it, when not to do it, when does it make sense?
Sure, I guess you can look at it opportunistically, or you could look at it in terms of falsifying your existing hypothesis. Your unique value proposition is a hypothesis, and it’s centered around various assumptions, including the pervasiveness of the problem. Just because another problem is more pervasive doesn’t necessarily mean it has falsified your hypothesis about your problem also being pervasive.
If you look at it opportunistically, you might jump over and think “Wow, this other problem is much more pervasive and much more urgent for the customer, so maybe I’ll pivot.” And that’s a possibility. You may just decide that your resources and investments are better suited to that other opportunity. Or you may decide that it’s another product idea, and the two aren’t necessarily mutually exclusive. But I would caution not to assume that your initial value proposition hypothesis is invalid just because unsolved problems are coming to light that aren’t consistent with, or aren’t in the realm of, your value proposition. That just means there are other opportunities available.
So, the short answer is: if your value proposition has been invalidated, you need to pivot. If you’re noticing there’s value you can deliver outside of your value proposition, then you need to look at it in terms of ROI, and decide whether to pursue two ideas at once or pivot and pursue that other opportunity.
Now, switching gears a bit and going back to something perhaps a little more tactical… We have all this data… a bunch of interviews, a bunch of metrics, customer requests, emails, just a lot of inputs. Do you have a system or process to organize and search through all of this? Is this something that you do when you’re looking for something specific, or do you go back to old information you may have and try to search through all of it?
I don’t know if I have a systematic approach to this. For interviews, those notes get captured and shared with the team; typically I’ll extract the insights at the bottom of the interview notes: “these are the major insights that I think I got from this interview”. The Lean Canvas–and I’ll just explain what it is really quickly–is a one-page depiction or documentation of the major strategy hypotheses for your product. It includes the unique value proposition, the main problem that you think you’ll solve, the solution, your channel for reaching customers, what your customer segments are, what your revenue and costs are, and the existing alternatives that users have for trying to solve the problems that you think you can help them solve.
A lot of the insights that you derive from interviews, and from quantitative methods as well, can be captured in the Lean Canvas, as it is a living document. So as you continue to run experiments, observe the usage of your product and interview customers, you’re making adjustments to the Lean Canvas. And I’d say it’s not a systematic process in terms of taking disparate sources of information and combining all of them in some systematic way… I’d love to hear of somebody who’s doing that, but I haven’t done it. I’m not sure you want to get too mechanical about that sort of thing anyway, but these types of activities–the experimentation, the usage analytics, the interviews and the observation–are things that should be happening almost constantly. And getting customer feedback from customer support and the like, those are things that happen in an ongoing fashion. You should build that into your weekly activities, making sure that you’re not ignoring different sources of information.
Now, something that usually has a big effect on the sort of insights you get from customer feedback and data is the topic of user and customer segmentation. Depending on the lifecycle point a user is at, or the kind of relationship they’ve had with the product, the feedback someone gives you is more or less relevant to the problem you’re trying to figure out. Which techniques and criteria do you use to segment your users and actually know what you’re looking at when things come your way?
So I’ll give an example… There was a company I used to work for that was in the automobile marketplace business. We sold web-based automobile marketplaces to credit unions so that they could offer them as a service to their customers who were buying and selling vehicles and wanted to do so conveniently online. The interviews and observations that we conducted (mostly interviews) focused on people who had recently bought or sold vehicles, or had attempted to. They might have done so by going to dealerships in person, or they might have done so online. By interviewing a couple of dozen of these potential users, we were able to start segmenting.
Whenever you’re looking at problems that you might want to solve, those are usually associated with certain types of users: users that fall into either a role, like a job title or something like that, or, in this case, a psychographic profile. Psychographic as opposed to Demographic: demographics are things like age, gender and so on; psychographics are more about how people are psychologically, how they operate, what their preferences are.
We were able to find that there were actually two primary personas on the user side, and we called them Jamyn and Lisa. A Persona is sort of a user profile: it’s a set of characteristics of a user, and you try to turn that profile into an actual person that represents all the people who might share those attributes. You start to give them really specific characteristics, like “her name is Lisa, she has blonde hair, she’s 34 years of age” or whatever, to kind of humanize them.
So, to get back to Jamyn and Lisa… Jamyn, we found, loved to do research, and whatever challenges came up in buying himself a vehicle, Jamyn overcame them by doing research–in some cases taking extensive time to figure out what he had to do. In the context of buying vehicles, one of the chief problems that still exists is the title transfer process, depending on which state you live in, in the US anyway. Jamyn was quite willing to go through all of that time and effort to figure out how to do the title transfer and complete every step of the transaction.
Lisa, on the other hand, did not want to mess with any of that. Lisa had no patience for the idea of researching online how to do a title transfer–that was just not a useful expenditure of her time–and in many cases she ended up going to the dealer and making her car transactions in person, with a dealer who knows the process and can guide her through it: “Do you need insurance?”, “Sign this sequence of paperwork in order to complete the transaction”, and all of those things. She knew she wasn’t going to get a very good deal by going to the dealer this way, but she was willing to pay the extra expense in order not to have to go through all the research that Jamyn was quite willing to do.
So what we saw was an opportunity with Lisa, who was experiencing this challenge of difficulty navigating the buying or selling process, and paying a lot to be able to bypass those challenges. We saw there was an opportunity: “we might be able to give her an online solution that guides her through that entire process”. So we were able to see that with the segmentation, which was basically around which problems she faced vs Jamyn. And we made a very conscious choice: “we’re gonna target Lisa” and “we’re gonna target the problem that she faces”, which is navigating the buying and selling process. So when you’re making a choice about prioritizing one possible new product feature vs another, often the value proposition, and the set of problems that you’ve chosen to solve, coincide with the segmentation that you’ve done of the different types of users.
And actually, I would add that sometimes one of the biggest mistakes with segmentation is to NOT look at it in terms of psychographics, and to look at it just in terms of job title and demographics. The psychographics tell you: these are the people that experience the problem we’re trying to solve. Demographics won’t always tell you that. One thing you can do, though, is start to correlate the psychographics to the demographics. In other words: “I’ve identified that Lisa, who faces this set of problems and has these characteristics, is my target audience; now let’s see who she is in terms of age. Is she really a woman, or does it maybe make no difference whether she’s male or female?” You correlate to the demographic characteristics, and from that point on, that helps you target demographic audiences through your channels. So, on Facebook you define your Custom Audiences or Target Audiences for your promoted posts in terms of the set of demographic characteristics that you know correlate back to the psychographic characteristics.
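The correlation step described above can be as simple as a cross-tabulation of interviewees. Here is a toy sketch; the personas, field names and data are invented for illustration:

```python
# Cross-tabulate persona (assigned from qualitative research)
# against a demographic attribute, to see whether a persona
# lines up with a targetable demographic band. Toy data only.
from collections import Counter

interviewees = [
    {"persona": "Lisa",  "age_band": "30-44"},
    {"persona": "Lisa",  "age_band": "30-44"},
    {"persona": "Lisa",  "age_band": "45-59"},
    {"persona": "Jamyn", "age_band": "18-29"},
    {"persona": "Jamyn", "age_band": "30-44"},
]

crosstab = Counter((p["persona"], p["age_band"]) for p in interviewees)
for (persona, band), n in sorted(crosstab.items()):
    print(f"{persona} / {band}: {n}")
```

If one persona clusters heavily in a single band, that band becomes a usable proxy for ad targeting; if the counts are spread evenly, the demographic attribute doesn’t correlate and shouldn’t be used as the targeting handle. With real data you would want far more than a couple of dozen rows before trusting the pattern.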
So, another thing I see a lot of PMs complaining about and having problems with is working with customer-facing teams, either Customer Support or Sales, where they interface with the customer and bring back pieces of data, either through some sort of filtered view like "customers are really complaining about this issue X" or "I need to close this deal and I want feature Z, and that's what the customer said to me". That's what you get as a Product Manager, and that's one side of it. But there's also the other side: how do you inform those teams of what's going on, so they're able to actually manage expectations and interface with customers in a way that is aligned with your goals and strategy? So, I wanted to ask you for your insight into proper or better ways to interface with these teams, and how to create processes that are healthy and that bring about the sort of environment that we want to work in.
Sure. So I think one of the keys to a successful prioritization effort for your product is gaining some level of consensus around the product strategy. And the only way you can really do that is to involve everybody on the product team. When I say "product team" I don't mean just developers, I mean everybody who makes decisions related to the product: on the marketing side, on the customer support side, and on the sales side as well. So you need to be involving those folks early in the development of the product strategy. That means, in very specific terms, going through some of your product strategy exercises, like building the Lean Canvas, collaboratively. To come up with the unique value proposition, I also use a tool I developed called the Competitive Mindshare Map. With Competitive Mindshare Maps you lay out the competition and competitive products, you show what their strengths are, and then you put your product – or sometimes your product-to-be – in the mix and show what your strength is: the territory in the mind of the potential customer that you can occupy and hold relative to the competition, while ceding the other territory to the competition. That's the sort of exercise where you want to involve salespeople, developers, and anybody on the product team who wants to participate and have the context of how we came up with this product vision and this product strategy.
When you do that, it helps you going forward, because when you're having that conversation with the salesperson about the next big deal, or they're saying that a lot of people are wondering why a particular feature isn't in the product, you can guide that conversation by going back to the product strategy. Is this something that relates to our unique value proposition, or not? If it's not, you can have a conversation about whether you need to pivot, but that becomes the basis of the conversation, not "you need to implement this because of this next big deal." It's "wait a second, do we still buy into the product strategy, or has that changed? Or does that need to change?" That becomes the new conversation, as opposed to whether we need to implement this particular feature.
That helps rally everyone behind the things that need to be done on the tactical side to deliver the value proposition. You won't always have consensus. Sometimes people are just gonna say that you actually should change the unique value proposition – and that's a good conversation to have – but it's a different conversation. It shifts the focus away from "should we implement this particular feature or not."
So our time is running out, and to wrap it up, I'd like to ask what your top recommendations would be for Product Managers who are trying to use and make sense of customer data and feedback to make decisions. What would you recommend PMs think about and frame their activities and daily processes around?
Perfect. So, first, the unique value proposition. Make sure that you have come up with an informed unique value proposition, and that you match every decision you make against that unique value proposition and use it to guide your decisions.
Second, when you're looking at feature ideas and things like that, go back to the use case, go back to the jobs-to-be-done. So it's not "do you want to implement this feature?"; it's, number one, "is this feature consistent with the unique value proposition?", and number two, "what job is somebody trying to get done where they would use this feature?" What is the importance of getting that job done? What is the importance of that use case? How do we need to modify the use case in order to optimize how that person achieves the goal of that use case? So, taking it back from features over to use cases that are used to solve a problem in order to get a job done.
Awesome. This was a really insightful interview, and I wanted to thank you for your time…
Well, thank you for this opportunity to share my thoughts. It's a very interesting and very important topic. It seems like something that should have been figured out a long time ago, but people have not figured it out.