Conversation with Nis Frome

Conversation on customer feedback with Nis Frome, Cofounder & VP of Product at Feedback Loop

Key Points

  • It’s more important to get some kind of feedback (even if imperfect) than to do nothing
  • It’s OK to iteratively improve your research and feedback process — it doesn’t need to be perfect
  • Having the research for an idea/problem performed by someone other than the person who had it is a good way to minimize bias
  • Using “proxy customers” for research and testing concepts in the B2B space
  • How to think about the type of experiment you need, depending on the stage you’re at
  • How to prioritize among a group of validated opportunities you’ve identified
  • Combining qualitative and quantitative inputs to have a greater understanding of the value in a feature or idea
  • How to try to tease out the noise from all the inputs you get
  • Different ways of segmenting your users, depending on the type of solution your product offers

Transcript

Daniel:

Nis, For those that may not know you, can you tell me a little bit about yourself and your background in product management?

Nis:

Sure, yeah. So interestingly, I started my career in digital marketing consulting and actually got into product out of necessity, because I wanted to build solutions for digital marketers. So really, my background’s in marketing. Building product at my last company didn’t work out so well, but I’m currently working on Alpha UX. And this background has really shaped a lot of the way I think about product. Content marketing was really my sweet spot, and I think about the intersection of content and product a lot. It’s something I’ve read about, something I think about, and it’ll shape the way I answer a lot of these questions: the intersection between content and product.

Daniel:

Awesome. So can you tell me a bit more about Alpha UX and your role there?

Nis:

Sure, yeah. So Alpha UX is actually a solution for Fortune 500 product teams, enabling them to generate user insights on demand. It’s a real struggle, I think, in a large organization. It can be very frustrating when you have the resources to do something and, for whatever reason, those resources don’t align in a way that actually enables you to do what you know is the best practice. So what we’ve looked at, and again, this is that intersection of content and product: What is the best practice? How should you be generating user insights? How should you be using them? And then, how can we enable that with the product? So we happen to have a solution that combines the capabilities that an enterprise with resources needs to be able to generate user insights.

Daniel:

Right. So yeah, that’s a perfect fit for our conversation, because I guess a lot of people feel a bit lost around all the things they can use to gather insights, and how they can process them and make sense of all of this. So maybe starting with the basics: as you’re trying to find out what’s important to your users and customers, what kind of questions do you ask and what are you looking for?

Nis:

Yeah, no, that’s a really good question. I think everything I say, just generally like product management advice, needs to be put into context. So I’ll give an example here: Net Promoter Score. It’s being ripped on a lot now as not really being the one score you need: it’s not great, it’s got a lot of problems, it wasn’t really based on valid research. All of those statements are probably true, but I think we need to recognize what it did for a lot of companies. We had a lot of companies that literally tracked no metrics, never asked customers anything, and for the first time they actually did something. They asked, “Would you recommend this?” Now, is that a great question, does it really make any sense? Maybe, maybe not. But the point is that companies finally did that for the first time. So when we talk about prioritizing customer feedback, let’s step back for a moment and look at where the company is. Because if you’re not doing anything, do something…

😀

Get out of the building, fine. And if all you’re doing is getting out of the building once a month, okay, now we need to ask: what does a more robust process look like? So in terms of starting, one of the great things that we’ve done, and that I really recommend, is just a happy hour. Just a happy hour where we talk. I’ll do a happy hour with product managers, I’ll also do a happy hour for marketers. I’ll do happy hours in a lot of industries that I’m interested in. It’s a really great way to learn the challenges that people are going through: understanding what the challenges are, and what solutions they’ve recently come across that they’re interested in. I find you’ll get a really honest perspective on things, but obviously you also need to see people as they work, and getting that sort of feedback and insight can be more challenging. That’s where building prototypes and running experiments is definitely needed; you can’t only sit in that early-stage customer discovery. And then in terms of how we think about user feedback, which is interesting, I don’t think we’re unique in this regard, but the way we think about it is: what is the end-to-end use case? If you ask someone for feedback, they’ll always give you feedback. People love talking.

😀

And people love ranting, and they’ll always rant. And that’s fine, but we always think of: what’s the feature that enables your customer to get the promotion, to build the product, to build the product differently, whatever it is? Let’s separate the feature from what they are now actually able to do. If the feedback they’re giving doesn’t actually enable them to do anything else, maybe it just makes how they do something easier. That’s important to know, because it will increase efficiency, but I care more about the end result: the objective you can now reach that you couldn’t reach before, or the objective you can now reach better. That’s the way we try to frame it, and we try to prioritize around that.

Daniel:

Awesome. So you touched on a few important issues there. I wanna separate the things around prototypes and interviews. As you’re doing interviews, it’s very easy to lead a user toward some preconceived idea that you already have; the way you put questions forward leads them on. Are there any recommendations, or any way that you go about avoiding this problem?

Nis:

Yeah, so there’s actually… I have a few thoughts here, and one of them I’ll save for a little later, when cross-functional teams and getting access to users in the first place come up. But there’s something interesting. Not trying to plug Alpha UX, but a lot of companies said to us, “User insights seem like they should be a competitive advantage that our company possesses. Why would we outsource that to you?” And we didn’t really have a great answer to that until, on our podcast, “This is Product Management,” we interviewed Jeremiah Zinn, who’s the head of product at Bark & Co. Really smart guy; when he talks, you understand there’s insight coming out of his mouth. He said something really interesting: they created an incubation lab at Bark & Co where anyone could pitch their ideas or try to run with them, but the experimentation was separated from the champion of the idea, right? There’s an independent group that will run the experiment, and there’s someone who owns the idea, and those are two different people. That was a really good answer for us. He said the reason they do that is to remove bias: it’s difficult to have a horse in the race if you don’t have a horse. Right?

Daniel:

Yeah.

Nis:

So that was really interesting, and that’s one of the things: whoever has the idea, separate that person from the person running the experiment. The whole capability of running the experiment is separate from the person who has the idea. Now, you still run into the issue of who’s in charge of evaluating the experiment results when you get them; I think you always have these challenges and you need to find a robust process, but separating the capability from the owner is a really good way to avoid biases. Obviously the researcher may still have their own biases, but they’re not as strongly in favor of the feature or product; they don’t really have a stake.

Daniel:

Yeah, that’s really interesting. I guess you’re trying to work on a prototype and you’re trying to design what’s the experiment going to be like. It’s very difficult to avoid these kind of traps and the fact that you’re separating can help you out on that. But can you walk me through a little bit how you try to design experiments and how you might try to get some insights at this early stage? How does an experiment work, what kind of things are you trying to get at this stage? Is it quantitative? Is it qualitative? What kind of things are you trying to get out of it?

Nis:

Sure, so I’ll talk about what we do, which is slightly different than what we offer as a solution.

Daniel:

Sure.

Nis:

It’s a separate discussion. So what we do… Interestingly, when you’re in the B2B space, experimentation is very nuanced, and one of the major challenges that I guess gets overlooked a lot is this: when you show something to a user who’s potentially paying you hundreds of thousands or millions of dollars a year for access to your solution, and they like it but no one else does, you can’t just say, “Well, no one else wanted that.” That experiment has business ramifications, right? So I was gonna save this for later when talking about cross-functional teams, but I’ll bring it up now. There’s a concept we use that we call a proxy customer: we’ll actually set up experiments with people who are not our customers but who match the demographic or the persona. There could be a lot of reasons they’re not our customers in the enterprise field; in particular, you can test with an end user whose organization may not buy a solution but who, as an end user, might.

So this is something we do: a lot of our experiments are not with the exact customer but with a proxy customer, at least for new features and new products. If we’re testing some granular user experience, then we’ll test it live on users. But if it’s, “Should we build this feature or not?” we’ll go to proxy customers. And in that regard we have a lot more freedom because, not that I want to tick them off, but it doesn’t really hurt our business to test anything in front of them. Right, so that’s where we have free rein, and we’ll do qualitative and we’ll do quantitative. Again, and I’ll keep coming back to this, we think about what end use case it enables. What could this person now do with this new feature set? If they love the feature set but it doesn’t really allow them to do anything, it doesn’t really matter to me, because if it enables them to do something, that ties back to revenue, that ties back to monetary value. And if it doesn’t enable them to do anything, then I don’t wanna compete based on preferences of design or this or that. There needs to be a use case there, a budget, a line item that goes toward what this is enabling.

Daniel:

Yeah, I guess experiments can be great to validate an idea for a feature or something that you already have some sort of structure around. What kind of things do you use to try and find unmet needs and try and find those end-to-end use cases that you’re maybe not even thinking about?

Nis:

Sure, yes. So in terms of the experiment, and to follow up on your last question about how we actually run one, there are really two types of experiments. You can design prototypes and software and get feedback on them; that’s fine, that’s I think a typical iteration, and that’s something we do a lot. How we do that: we’ll wireframe, we’ll get some feedback on the wireframe, we’ll put together prototype designs, we’ll make them interactive through InVision, we’ll send people to them, we’ll have them fill out a survey at the end. We’ll do an in-person interview as they use the prototypes; we’ll mix qualitative and quantitative. What’s interesting, though, is what we call… Well, we don’t really call it anything, but you could call it a “service as a software,” and how that works is we actually enable the solution without any software.

So let’s say… In our case it’s pretty particular, so take Uber, for example. Say Uber wanted to see if anyone wanted UberPool before they came out with it. What they could have done was set it up totally manually: “We’ll test with 100 people in a new city, we’ll put out some ad that says ‘text this number and it’ll set up a carpool,’ and it’ll be this much money.” They could test that, see if people actually use it, enable it, iterate on it, and then build the software for it. So that’s actually one of the primary ways we do experiments: we try to enable the end case manually, enable the product manager to do something. Often the way we do that, by the way, is with content. We’ll write an article on how to do, say, split testing on prototypes in-house: “You need this, you need this, use these six tools, spend this much money, it should take this much time.” They’ll do it, and inevitably our reader will come back and say, “That was great, but it took me three weeks and $8,000. How do I do it cheaper and faster?” It’s like, ah, we now have a business.

Daniel:

Sure. 😀

Nis:

That’s one of the ways we think about validating a feature set.

Daniel:

I guess one of the tricky parts, and I see a lot of people with this kind of issue, is: “Okay, I think I found a valuable opportunity, and I think I found one or a few customers that need this, but how can I make sure that it applies to a broader customer base, or even to prospects?” Is that something you’re able to do? And when you’re not, what can you do? Any thoughts on this?

Nis:

Yeah. It’s definitely… I don’t know of a silver bullet here. It’s a huge problem, I think: gaining that early traction and great validation and then realizing it’s just not a scalable opportunity. We’ve discovered in certain cases that some use case we enable might help an end user but doesn’t necessarily help the organization, and if you wanna sell to the organization, you’re running into kind of two different problems there. When you’re an early-stage company, I think the reality is that it’s the CEO’s decision how to make those calls, and there’s an element of vision, an element of chance, an element of luck. I don’t know of a great way to validate that some initial opportunity is definitely true across an industry or has a sizable market. That’s a business decision, and not purely a product management decision. It doesn’t mean there aren’t things you can do to test it; it’s just that they’re not necessarily easy or quick.

Daniel:

That’s a tough one. 😀 Say you’re at a point where you have a set of validated opportunities, and for whatever reason these are the things that are moving forward. How can you prioritize among them? How can you tie that back into the product strategy, and how can you compare them, when they’re often targeting very different kinds of objectives and aren’t really comparable?

Nis:

Yeah, I think you need a dual alignment at all times. You have the vision of why the company was created; that’s your vision of the world five to 20 years in the future, what that looks like, and that’s where you wanna get to. I think it’s really important to have that. Oftentimes your vision is wrong, but it’s still important to have it, because that’s the bet your company is making. If you’re trying to build a scalable business, you need to have that bet, and you’re gonna be wrong. We see companies like Quirky: whether they failed on execution or on vision, whatever it is, the point is that they were wrong at some point there, but the only chance you have to succeed is the chance to be wrong there. You have to align with the vision, and then the second element is you have to align with, like I was saying before, the end-to-end use case, the line item. Ideally the greatest line item on the budget aligns with building a solution that also works with your vision; that’s the best alignment: we build the highest-revenue opportunity and also get to realize our vision. That’s unlikely to be the case in the steps up to your vision. I would bias toward vision first and revenue second: first prioritize all the solutions that are in line with your vision, and then prioritize by the revenue opportunity of those solutions.

Daniel:

Awesome. I guess there’s always this kind of issue that can come up with this: maybe I found something that seems to fit the market but isn’t actually fitting with our strategy at all, not even slightly misaligned but completely misaligned with what you’re trying to do. Is that something that then feeds back into how you view the vision, or is it something that you consciously put aside and say, “Well, yes, this is not something that we’re gonna go for right now”?

Nis:

That’s the great question. There are two types of pivots: a pivot in execution to realize your vision, and a pivot in vision. A pivot in execution is what we just talked about; it still aligns with your vision, but it’s a different way there. If it’s a pivot in what the actual business is, it’s like starting from scratch. Your question really is, “Do we wanna start a new business?” Not, “Do we want this business to go in that direction?” 'Cause really you’re starting a new business. That’s a question that involves a lot, and you see it all the time. Groupon went through that; what they started as was completely different than what they ended up as, and I’m sure there are numerous companies that went through that. It’s really a decision about whether that’s the market you wanna be in, and again, I would say that’s almost not purely a product management decision.

Daniel:

Yeah. What kind of signal do you need in order to say, “Yeah, this is definitely something that I should be listening to,” or not? Because it’s very tough when you’re presented with something that is completely to the side of what you’re doing. You already have customers, you know that there’s growth opportunity there, but there’s a new thing that might come up, and… When you look at products like Intercom, they have this segmented platform with very different use cases, and you can pick and choose, or you can go for the entire thing.

Nis:

Sure.

Daniel:

It’s very tempting to go that route, but it is also very confusing in terms of what the product actually is, right?

Nis:

Sure, so, okay, that seems like a slightly different question. I wasn’t understanding that your initial use case still had customers. I thought your initial use case en route to your vision was not good, but you discovered a different use case, one that’s not en route to your vision, that is good.

Daniel:

Yeah.

Nis:

If that’s the question, you’re starting a different business.

If the question is, you already have a use case… So one of the ways we think about products is, we think about capabilities, and we think about apps. You enable an end-to-end use case, and by doing so, you have a capability in-house; maybe that’s generating user insights. For Uber, that’s logistics, right? Logistics is the capability; the end-to-end use case might be getting someone a personal driver, it might be getting them food delivery, it might be getting them any sort of delivery. Puppies is something they recently started: they bring puppies to your office. I don’t quite understand that one, but if you think about it, that’s just an app on top of the capability. So it really depends on your company, whether you are a capability. For Intercom, I would say they’re very much a capability; their capability is two-way communication via your website.

Now, that capability can enable multiple use cases, or apps, on top of it, in which case, yeah, go for it. Once you’ve got the capability nailed down, I would keep spinning up apps on top of it, figure out whether each app’s use case makes sense, and iterate that way. Not all businesses work that way; plenty of businesses don’t have some kind of capability ecosystem that they could build apps on top of. In that case, you need to be very careful when splintering the product direction, and I would really urge you, if that’s the case, to stay focused. You’ll always come across different opportunities. One of the hardest things as an entrepreneur, and as a product manager in a lot of similar ways, is saying no. 😀 A large portion of your job is saying no. Not just figuring out the way to go, but figuring out all the ways not to go, 'cause there are infinitely more ways not to go than there are to go.

How do you figure out when it’s not worth going in? It’s similar to entrepreneurship as a whole: if you wake up every morning still with the same thought, hey, maybe it’s worth pursuing. And if, after a week or two, you’re kinda over it, well then, that’s a good gut check.

😀

Daniel:

Great. So, yeah. As you move forward and start working on something new, actually try to build the feature, how do you get customer input into what you’re building? How do you try to see if what you’re building is the right thing, if it really fits their needs? What kind of methods, tools, and tips work for you?

Nis:

Yeah, so, I definitely still go back to being able to enable it manually, being able to enable the solution. I think if you start with, “How can we do this without a single line of code?” your initial thought is, “Well, that’s not possible at all,” right? But in time, you realize, “If we really get crafty, we could probably figure out how to enable this end-to-end for the client, without spending that much money or time.” And it’s not a waste; it’s a matter of saving resources. There was a good analogy that Josh Wexler, who’s now the Head of Product at Yieldmo, gave when he was on our podcast as well.

He said something like, “Obviously, with pen and paper you can’t do nearly as much artistically as with any basic tool invented since 1999, right? But at the end of the day, with pen and paper, if someone gives you feedback that they don’t like it, you crumple it up and you throw it out. The second you put something on your computer, a wireframe, Balsamiq, whatever it is, and someone says they don’t like it, you start dragging things differently, you start doing something else; you’re not really gonna throw out the whole thing, right? You’re not gonna hit delete. But pen and paper, you’ll just crumple up.” So why I think doing something manually is so important is that you don’t have this tangible infrastructure that you built, where you’re like, “I really don’t wanna scrap that.” Because if you’re doing it manually, there is nothing to scrap. Your next iteration of it is simply done differently. So that’s always the best way to do it: do it manually.

It could involve code. “Manually” could mean spinning up a WordPress site, it could mean building a fake interface. I’m not saying, “Don’t use electricity.” I’m just saying you shouldn’t need an engineering team and a spec; once you start spec’ing, you’re not doing it manually. So that’s what I would say: always try to enable it manually, iterate in real time without leaving behind an infrastructure, and see how much you can learn from doing that. An interesting example I see is magazines. One of the ways they iterate on cover designs is that in an upcoming issue, they’ll include a slip for customers to order a future issue, and they’ll A/B test the covers of that future issue on the slip; whichever one performs better, they’ll use. That’s such a great way to test without really investing anything. So we try to think about that: if you start from that point and you can’t figure out a lightweight test, okay, then you can move into prototype designs.
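The magazine slip test described here is, in effect, a tiny two-variant experiment. As a minimal sketch of the tally, with made-up numbers (the cover names, counts, and slip totals are hypothetical, not from the interview):

```python
# Illustrative only: tally order-slip returns for each candidate cover
# and pick the one with the higher conversion rate. Numbers are invented.

def conversion_rate(orders, slips_sent):
    """Fraction of distributed slips that came back as orders."""
    return orders / slips_sent

def best_cover(results):
    """results maps cover name -> (orders, slips_sent); returns the winner."""
    return max(results, key=lambda c: conversion_rate(*results[c]))

results = {
    "cover_a": (120, 5000),  # 2.4% conversion (hypothetical)
    "cover_b": (155, 5000),  # 3.1% conversion (hypothetical)
}
print(best_cover(results))  # cover_b
```

With close counts you would want a larger sample or a significance test before committing, but the point of the anecdote stands: the experiment costs a printed slip, not an engineering team.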

And if that’s the case, we’ll start with… We’ll test high-level value propositions, usually with one-to-two-page apps. One of the big things we’ve done, if we’re getting into prototype designs, is to test the onboarding flow before anything else. That’s also a really interesting way to start with the solution rather than the other way around. We’ll build like a four- or five-page onboarding experience for a website, or an app, or a feature, whatever it is. We’ll have the user go through that and then get their feedback based on it, because that’s their first interaction with what the product would be anyway, so we can learn a lot from how we would onboard them onto a supposed solution.

Daniel:

What does that feedback look like, is it a survey, is it an open-ended question?

Nis:

Sure, sure, yeah. The way we do it is… There are three ways to do it, right? You can do qualitative, quantitative, and then what we’ll call a blind, or comparison, method.

You can do qualitative… Here’s the way we think about it: basically, we ask people questions before the prototype, put them through the prototype, measure their behavior while using it, and then ask them questions at the end. And we’ll try to mix qualitative and quantitative. I think there are a few good strategies here. If you really don’t know anything about the supposed solution or what the feedback is gonna be, always start with qualitative. It’ll at least give you some context. Have a deep interview, maybe 10 to 30 minutes, really trying to understand the problem a little bit more, and then move to quantitative. Once you understand the problem, theorize the solution, test the solution, and quantitatively see if it is the solution. On the other hand, if you do have good context and you do think you understand the problem set, start with quantitative and use qualitative to understand why.

You understand the problem, prototype a solution, get quantitative feedback on whether or not that is the solution, and then, to put that into context, use a qualitative method to understand why. The one I called blind, the comparison, is running qualitative and quantitative separately and then comparing results. That’s a really interesting one. All of these, of course, are staggered, not all planned out at the same time. Anyone who lays out a whole plan like, “We’re gonna do these 10,” I don’t think that’s really a great way to do it. First of all, it should be continuous. It should be ad hoc and continuous, always iterating and substantiating through iteration; that’s the way we do it. It’s not like, “Hey, this is qual. Hey, this is quant.” There’s always a good mix, and we’ll go back and forth between the two, seamlessly.

Daniel:

And at this stage, when you don’t have usage metrics yet, what does quantitative look like? Is it scores, is it…

Nis:

We use a combination of metrics we call a perception of value. There are two planes here, right? Two coordinates… The word is escaping me. But basically, you have perception of value, which is how well something is perceived, and then you have simulation of value, which is how true the enablement of the value is. For example, if you show someone a napkin sketch and they like it a lot, the perception of value would be very high, but your simulation of value would be very low; they haven’t actually gotten any of the value of it. When I talk about enabling something manually: how much of the realization of the value have they gotten, right? We go along those two metrics. So the perception of value is a combination of metrics. Those might be, “How easy to use was this? Could you foresee yourself using this on a weekly basis? Would you recommend this to a friend? Would you pay $10, or whatever dollar amount seems reasonable?”

I know the dollar test is pretty big: “Would you pay a dollar to use this?” We combine those metrics into a perception of value, and then the qualitative stuff is like, “Why did or didn’t you like this? Is there a solution you’re already using for this? How could we make this better? What were the challenges?” Etcetera, etcetera. And then the other quantitative aspect is actually what they do in the app. Did they click the button or not? Did they finish this workflow? Did they complete this objective? That’s more of an ad hoc basis. So there are three elements we look at: the qualitative feedback, the quantitative feedback, and the behavioral data.

Daniel:

Great.

Nis:

It’s very robust, I would say.

Daniel:

Yeah, sounds like it. 😀 So the product is now out, you’re servicing it, it’s up and running, and you start getting a bunch of unsolicited feedback. You get feature requests, you get ideas coming from your customers and internally, and all of these can be hard to make sense of. People may be asking for one thing but actually need something different. How would you follow up with these kinds of things, and how can you try to understand what’s behind them?

Nis:

I’m gonna come back to the podcast here, because Steli Efti, who’s a really good sales mind, had a really, really good talk. He talks about how product management is kind of a capability, or an expertise, or a knowledge set. The product manager isn’t the only one who does product management; it’s the whole organization. And I can validate what he talked about with the recent report we did, which found that the second most common channel for user feedback is sales, the first being surveys. This unsolicited feedback isn’t just something that sometimes happens; it’s one of the few primary sources of customer feedback.

So the first thing you can do as an organization, and this is what Steli was saying, is empower and inform yourself, your marketing, your account management, and your customer support people on some product management best practices: how not to ask leading questions, how to better understand the use case you’re enabling and the real problem, and how to really dig deeper into what the customer says. That’s the first thing you need to do, because if you’re playing broken telephone with people who don’t really understand the fundamentals of product management, you’re all but guaranteed to make mistakes. So first, make sure that your sources of this feedback are well-qualified, capable, and empowered to give you this feedback in a useful manner.

The second thing, and I mentioned this earlier, is that you don’t wanna make any promises to users; that’s the first mistake a lot of people make. Your gut reaction is, “Oh, that’s a great idea.” That’s not necessarily the best reaction to have. Better is, “Oh, that’s interesting. Why is it…?” and digging deeper. A lot of times, customers feel the need to make recommendations, they just give feedback, and you realize they have some other issue, which may or may not be dependent on your product, that they think it’s your responsibility to resolve. And it’s not your responsibility to solve. Sometimes it is, and you need to figure that out, and the best way to figure that out is to empower the person who has the direct communication with the customer. Having said that, now let’s say you do get feedback. I’d treat that feedback as any other feedback you’re getting in any type of experiment. If you can validate that the feedback leads to a validated opportunity, then it’s great; someone gave you some free insight, and that’s the best type of insight you can get, in my book. But always make sure that you’re filtering that feedback into a usable format. I would say the biggest challenge a lot of product teams have is that they get this unfiltered… not garbage data, but really, really hamstrung data.

Daniel:

Yeah, I think there are multiple levels of problems here. You have the lack of context around the data that's coming at you, and you have the lack of searchability or history to know: when did it happen, and is this a pattern that's been building over time, or is it something that just happened once and is no longer happening? There's a lot of context around these kinds of things that isn't coming through to the product team, right?

Nis:

I know a lot of product managers who unfortunately… Maybe it's not their capability, or they haven't talked to sales and account management about how product management works, but salespeople will literally come to them and say, "Hey, someone mentioned this feature," and they'll just be brushed off. The product manager will literally just brush them off. They won't remember it, they won't save it, they'll completely ignore it. So definitely, I think the keyword you said there is patterns. Once you've ensured that the feedback you get is contextualized, and that sales and marketing and whoever else are empowered to get user feedback and make sense of it, then the next thing you're looking for is patterns. Are you finding patterns? Is this a common use case? Can you tie it back to an opportunity? That's what you're looking for. But yeah, you need to get the data in a clean format first.

Daniel:

Yeah, now moving on to a different subject. What kind of metrics translate to you if something is working or not? What sort of thing tells you that a feature is or isn’t succeeding?

Nis:

Yeah, so, this is a recurring theme and something we'll talk about as well. Obviously analytics: quantitative analytics, usage and engagement and all that stuff. I'm not saying we don't look at it… Dashboards and all that are great. But I have a rule, and we use this for our marketing and for our product: I never want to discover something from my dashboard. My dashboard should confirm something I know, but if I'm learning something from my dashboard, I'm not doing my job. So what do I mean by that?

We'll get to product in a second, but before that, take the podcast. If someone asked me, "What is the best episode you guys have done?" I don't want to look at the dashboard to know. I should know, because I've talked to so many of our listeners that I know what everyone is raving about. And I do; I know the three or four episodes that people just loved. So that's the way we think about it with product too. What does it look like when you're building something that's awesome? Well, people wanna do case studies and they wanna give testimonials. They want more of the product, they want to talk to you more. They wanna come to your events. There's so many signals… I mean, we're in a human business, and I think one of the mistakes we make when doing research, not all the time, but a lot in this field, is that we treat it almost like academic research, the same way you might test on molecules. Things that can't speak back to you and explain why they're acting the way they do.

We're working with people. Your customers are people; just talk to them. And understand: they're all unique and they're all the same. For the most part they all have very similar concerns. One of the things I love about cop shows is that whenever there's a crime, they always at least put together a story. "Oh, what was the motive? Maybe he was going here and then he went there. And then, for whatever reason, they got into a fight." They always put a story together. You should always be able to do that in product management. You're not selling to blades of grass, you're selling to humans. There should be a reason you can understand for why they do and do not want something.

Daniel:

😀 Totally agree with that. How do you put together both the quantitative and the qualitative data after you've shipped? You're talking to people, you've got your analytics; how can you combine them in a way that makes sense? What kind of questions does the quantitative answer for you?

Nis:

Yeah. Now that's really good. You do get cognitive dissonance there a lot. People say they want one thing, you ship it, they don't use it. They say they don't want something, but you ship it and they do use it. I don't know if I necessarily have a framework for that. I would say we probably handle those on an ad hoc basis; I don't really have a way to abstract what we look at there. I think again we go back to the use cases it enables. We're in the business of enabling our customers to be better product managers, right? That's the business we're in. If we feel like something we're building that's being used doesn't do that, we'll probably second-guess whether or not we should be building it. And if we're building something that should do that, but they're not using it, then I think we'd probably fight a little harder there to figure out why. We dig deeper qualitatively, and maybe we realize that even though they know it would be better, they're not incentivized by their organization to do it. That's a very, very common problem in B2B: the best practice doesn't necessarily have the incentive behind it.

Sometimes what we'll do there is actually go above the user. We'll go to the organization, talk to them about best practice, and try to get buy-in from the organization. That's actually very common… Especially in product management, the best practices and the discipline often do not align with how the organization operates. So we'll go above the product management layer. Maybe it's the CIO, maybe it's the CFO. That's been an interesting one: the way a company's budget is structured often prevents experimentation and iteration. We'll fight harder when we qualitatively know something should work but quantitatively can't make it happen, than when something is popular but we can't figure out why it's popular at all. I'll fight much less hard for that.

Daniel:

Now, you already touched on this a bit, but what other things do you ignore? What's noise to you? What do you consciously not look at?

Nis:

Yeah. I think there's a huge push for design, and design feedback, and user experience. And all those things are super important. Design is important; you can differentiate your entire product based on just simplicity and design, and those are all great. But at the end of the day, again, if you're in B2B in particular, it needs to come back to the use case. It needs to enable them to do something. Anything that's just, "Hey, it would be nicer if this button didn't look like this, or was a nicer color." Or even, "It's annoying to fill out this three-page form." You know what, the fact that it's annoying and you're still doing it is so important to me. That tells me that this use case is more important than the user experience. And I can always get to the user experience. I can always get to the design. These things are really important, but, at least from my point of view, I would never necessarily want to be competing based on design.

If that's the only thing I have over my competitor, a couple of style sheets that you import… I don't want my business disappearing 'cause my competitor learned CSS, right? So it's not noise, but it's secondary to the use case. It's secondary to what I enable them to do. And if the only way to enable something is through design, then maybe that's the case. But if all I have left to work on is design, then I think I have larger issues.

Daniel:

I was interested in how you can segment your users properly. I see a lot of people looking at things without the filter of, "What kind of user is giving me this piece of feedback?" It's a big part of the context that's missing. Sometimes it's not even just the persona we're talking about; it's maybe the point in the life cycle that they're at. Are they trialing, are they in their third month, or their 22nd month of using the product? That kind of thing.

Nis:

Yeah, those are… Segmentation is absolutely necessary. That comes even before anything else I've said: the need to put all of that into context. Earlier I talked about how the organization needs to put their user feedback method into context; we also need to put into context who the user is. There are a lot of different ways you can break it down. I can speak to B2B products, that's where most of my experience is.

Onboarding. You know, there's two different types of software, right? You have opinionated software and non-opinionated software. What I mean by that is you have software that enforces a methodology, and I call that opinionated; it has an opinion about what the methodology should be. And you have software that does not enforce a methodology; that's un-opinionated software, right? Opinionated software has a much longer onboarding process, because you're actually introducing a new workflow or methodology to the customer, which they've either bought into implicitly or explicitly, or maybe they haven't even bought into it yet, right? They just have to use this product for whatever reason. There your segmentation is often: how familiar are they with this product? And the feedback you get from them will be so different depending on whether they bought into the methodology or not. A great example: HubSpot enforces an inbound methodology. Salesforce, not necessarily. There's a lot of different use cases you can serve in Salesforce. You know, it enforces customer relationship management, but that's not really a methodology at this point. That's just kind of… That's everything, right? So you need to build your own workflows within there.

But Pipedrive and some of these more niche CRMs do enforce more of a methodology, and there's only so much you can do with those tools. So the way they would segment is based off of how bought in or, you know, how evangelized the customer is. In our case, we segment along those lines. We kinda have this… We're still trying to figure it out, but a three-month onboarding process, to the point where the way a customer who's three months in uses our product versus a customer who's just onboarded is totally different. The feedback is totally different. Often a lot of the concerns they had on day one are just gone by month three. They may have other concerns, of course. But their whole perception of things is totally different.

The other way we segment is, again, based on the use case we're enabling. You know, if we're working with an innovation lab, their clients are actually other departments in the organization, right? That's the innovation lab: other departments in the organization submit projects to the innovation lab, and the innovation lab provides a capability to those clients, right? Or we're dealing directly with a client who's a product team doing it themselves. Those use cases are different and, therefore, the feature sets are often different. We're always talking user insights, but the format they need those user insights in, the cadence they need them at; everything there is different. So we segment along those lines. And I think airlines are one of the most common examples. Is it your business customer? Is it your leisure customer, right? They're both on the flight and they're both going from New York to Los Angeles, but they're very unlikely to have the same feedback. They have very, very different concerns, who's paying for it, etcetera, etcetera.

Those are the two most common ways, at least in my world, that we segment: based on how long they've been using the product and how bought in to the "opinionation" of the product they are, along with the use case they're trying to solve.

Daniel:

Great. So we tend to talk a lot about positives. Can you tell me a bit about your bad experiences on some feedback methods and channels, things that you shouldn’t be doing when it comes to gathering and using that customer feedback and data?

Nis:

Yeah. Silicon Valley has a phrase, to paraphrase it: whatever you optimize for will go up and to the right. So the whole concept of 'fail fast'… If you wanna fail fast, trust me, a Fortune 500, or any highly paid, intelligent person, will figure out how to fail fast. But 'fail fast' was a methodology; that wasn't really the goal. The goal was to learn. So one of the things, and this is what I talked about earlier, is: what are you solving for? Do you want to build great products? In which case, you need to learn about your users. Or do you just want to iterate? 'Cause you can iterate all day. You can make a million iterations if that's what you wanna do. You'll just go nowhere.

So people think in terms of failing or planning. And those aren't the only two options. There's an element of learning that doesn't involve failing. One of the mistakes I've made many times is thinking, "I don't wanna fail here, so let me go plan." And you can get stuck in that really quickly, like, "This feature looks really cool. I could envision people using this feature with another feature. Let me spec that out real quick." But I could envision this whole zigzag of use cases that probably doesn't exist at all, right? Creativity is far overrated in product management. Envisioning people using it is great in your head. And we get stuck between these two things: "Okay, well, I don't wanna fail, so let me plan." But there's really this huge area in the middle where you can learn, where it doesn't involve failure and it doesn't involve planning. If you optimize to fail, you will fail; if you optimize to plan, you will plan. But if you optimize to learn, you'll realize there's a lot between those two.

Daniel:

Yeah, that’s a quote right there. We talked about, already about working with other teams. Any other tips and things that you would like to mention on how you set up processes and work with customer-facing teams and make sure that the flow of information goes both ways and also it’s productive?

Nis:

Yeah. Yeah, yeah, definitely. I'm definitely not the first to say it, but a product manager leads by influence, not by authority. I think it's true of every position, really. Even if you're the CEO, you should lead that way. If you wanna build a product culture… You would like to think it comes from the C-suite, but oftentimes it comes from the individual product manager. And that's fine. I don't think there's a problem with that. In fact, if anything, it creates a good challenge that a product manager should have to deal with. You need to really evangelize why the company, and not just the company but each individual stakeholder, does better when you optimize for learning and experimentation; why that benefits each individual person. For the finance team, why that de-risks what they're doing. For the account management team, why they can be empowered to learn and be a great resource to the product. You empower these people. It's really the most important thing a product manager can do: making sure the context and the culture are there that enable them to get user insights and to experiment. Because you'll never get in touch with the user if your account manager doesn't even get why you exist.

So all these things are predicated on the notion that you have, and are continuously building, influence in the organization. It's not just taking an interest in what engineering does. I think one of the biggest problems, and so many companies do this, is they treat development like royalty, and then it creates this whole unevenness across the rest of the organization. Everyone needs to be empowered and everyone needs to be aligned; what makes the organization successful should make the individual successful. And even if the organization doesn't do that, the product manager's job is to make it so that the organization does. So even if the account manager doesn't have any incentive to upsell customers, and in some organizations that's true, the product manager should make them feel that they can upsell, and that they can upsell by understanding product management and providing feedback to the product manager, who builds a better product that gets more upsells. Even if the account manager isn't getting a bonus, they feel awesome because they did this. So really trying to elevate everyone's game. That's kind of the role, and that's a really important part of working cross-functionally.

Daniel:

That's really interesting because, I mean, I don't know how many emails I've gotten from readers saying that their biggest frustration is that the organization doesn't get what they're trying to do. And actually, I feel that sometimes that kinda goes both ways; they also don't get the other parts of the organization. So that's pretty much what you were already saying.

Nis:

That’s very kind. 😀

Daniel:

So, what kind of strategy would you suggest for someone that is trying to both empower other teams and empower themselves around them? Specifically, how can you as a product manager say, “Hey, this is why I exist and this is why you should care.”

Nis:

Sure, yeah. The biggest piece of advice I'd give is to do what most people don't do. People come in and they wanna make an impact, and one of the surest ways to make everyone believe you don't care about them or about the organization is coming in with the notion that everything is being done wrong and you're gonna fix it. 'Cause that's a sure-fire way to piss everyone off, and whether you're right or wrong, you're wrong. So this concept of easy wins: just create easy wins. Easy things you know you will succeed at, that will validate you and will validate the business. Maybe it's, like I said, a happy hour. You create a happy hour, right? Who doesn't want beer? You can pay for it out of your own pocket if you have to, and get two customers or proxy customers there, this concept of people who aren't your customers but provide valuable feedback, and have them chat. Invite a couple people from sales, a couple people from marketing, account management, maybe someone higher up. Bring them all there, grab some beers, everyone gets drunk, they talk. I guarantee the next day the sales team will be like, "Wow, we actually learned a lot from that." And the account management team will be like, "That was a really useful experience." Easy win, right? Easy win. You validated the need to talk to customers; you validated that talking to customers benefits sales, can benefit marketing, can add new perspective and insight to different teams. Great, now go for another easy win. And after three or four, you'll already be the cool guy at the office. Cool guy or girl, I should say. And then you'll be so much more empowered to do other things, and that's leading by influence rather than leading by authority.

Daniel:

So we’re almost out of time. So to wrap this up and kind of both summarize and include some things that you would like to add, what would your top recommendations be for product managers who are trying to use and make sense of customer data and trying to make decisions out of it?

Nis:

Sure, yeah. So the absolute first thing is not to be dogmatic, and what I mean by that is: always hear other perspectives on things and always try to be more robust than you are dogmatic. Apply different principles. I think I started off this talk by talking about NPS and the problems with NPS. I'm like, "Yes, it's problematic." The first thing you always need to do is contextualize. Where's our organization and what does the next step look like? Where are our customers and how can I put their feedback into context? The feedback I get from the sales team: does the sales team understand product management? Can I help them better understand product management? Can I better understand sales? Account management… So it's not being dogmatic about context, making sure that you've put everything into context. And then the second element is not being dogmatic about process.

Maybe there's a best way to do things, but there could be three great ways to do things, and you can potentially do all three without much additional cost. You could do qualitative and quantitative and try to find patterns. You don't have to just run one test and then make a decision. You could run one test, iterate, and run another test, and then try to find patterns. You don't have to disregard some ridiculous piece of feedback; you can make a note of it. If you get the feedback again, that's interesting. Get it again: "Okay, let's investigate this." So being robust and putting everything into context. Those are the two biggest recommendations I would have.

Daniel:

Thank you. Yeah, Nis, so this is it. I'm really, really thankful for your time.

Nis:

Of course. Yeah, no. The pleasure is all mine. I'm really happy to participate. This is also always a learning experience for me.

Daniel:

Awesome. Great, so thanks a lot, Nis.