Conversation with Shaun Russell

Conversation on customer feedback with Shaun Russell, product management coach; previously Senior PM at OUTFITTERY and at Lyst (where he was at the time of this interview).


Key Points

  • Creating a vision and goals for an existing, established product that previously lacked them
  • How to understand what’s valuable for users in an existing product
  • How to approach the feedback process for enterprise products with smaller user bases
  • Processes for working with Sales and Support teams to create the most collaborative and aligned environment
  • Tying feedback items to goals and KPIs, to understand their relevance
  • Continued tracking of users’ feelings towards the product through NPS
  • Considering both Customer and User segmentation in B2B / Enterprise products
  • Aligning internal and customer-facing product teams with customers’ success
  • The importance of having a healthy organizational culture so that stakeholders are able to discuss anything freely

Transcript

Daniel:

Can you tell me a little bit about yourself and your background in product management?

Shaun:

Sure, yeah, so I first came to product management having worked in a marketing role in a digital marketing agency. I was originally there as an analyst, and got quite interested in the opportunities to kind of scale the types of things that I was doing, or do more interesting things like integrate weather patterns somewhere in the world into whether you bid for an advert or what you put in the ad copy. So I very quickly became interested in kind of interesting ways of solving those problems, and that led me to doing product management within that company. So I worked on that technology.

From there I went to Adthena, and was kind of… I think the seventh or eighth person that joined Adthena, and established Product as a thing that existed there. So, there was this tool that had been kind of hacked together through a variety of means, but without a product manager to oversee it and have the right kind of process around it. So that was something I worked on; it’s a business-to-business enterprise software tool. And then most recently I joined Lyst, which is a fashion retailer that gathers together fashion listings from designers and retailers all across the world into one kind of unified shopping experience. Whereas at Adthena I was product manager for the product as a whole, here at Lyst I work on more of an internal product. So we have a department we call Acquisition Tech, focusing on things which will help us grow and scale our activity as a company.

Daniel:

Awesome. Okay, so, let me go into a couple of things that you mentioned there. At Adthena, you were working in a not-yet-established product practice, so how did you try to work out the vision and the goals for a product that already existed? Because that’s also a really important part of how you then look at feedback; we’ll get into that later on.

Shaun:

Yeah, yeah. So the first step in that was to take this product that existed, so basically, to go to the interface that our users were experiencing, and to take an inventory of all the different use cases it seemed to deliver, speak to users to find out what they were using different features for, and speak internally with staff about the same thing. Look at the data that was available; in this case, there was fairly sparse data, because it hadn’t been set up very well for tracking, but nonetheless you could see what was being used and where. And essentially build out from that a picture of, “Here are all the different things that it’s doing.” And then it’s kind of like a brainstorming activity, where you generate all these potential ways to tackle the problem and throw the Post-its up on the wall. The next stage is about taking those notes and categorizing them, putting them into themes; that was very much the next step. So it was understanding what all these different capabilities grouped into, and then you try and move towards a vision from there, but of course, there’s the whole step in between of saying, “Which of these things are actually valuable?”

Daniel:

Right, exactly. So how was the conversation with your users then in order to find out what was truly valuable for them and what wasn’t?

Shaun:

So, it was a data product, so the key question was, “Is this a report that you just look at because it’s kinda pretty colors and interesting to play around with, or does it have actionable value for your business and in your day-to-day work?”

Daniel:

Right.

Shaun:

That could kind of take two forms, so what I was looking for was either an indication that the thing they did with it had value to the business they were employed by, or that it had some kind of personal value to them. So as an analyst, if it saved their time, made their life easier; looking for those two kinds of categories of things. And in my experience of working at Adthena, we managed to rule out probably a good half of the different reports that were being used, on the basis that there weren’t cohesive stories of what they were being used for or why.

Daniel:

Right, that was my next question, which is how do you go from, “Okay, here’s what this does, here’s what’s valuable,” to the next iteration, which is more polished, more reduced, more focused. So, how do you work out that last bit?

Shaun:

So we had two phases. One was, in a very deliberate way, removing things from the user interface. We didn’t even apologize about it. We were asking forgiveness, not permission, so we just cut a load of stuff out, and most people didn’t notice, because they weren’t things that were valuable to them, or maybe some of them saw that it was kind of easier to use, a bit clearer. But the biggest step came about six months later, because the first major project I took on as product manager at Adthena was to redesign the user interface. Things which clearly weren’t of value at present were very easy to cull from the existing interface, but there were further things we wanted to get rid of that would have been more contentious to do. So creating a new user interface, which was nicer to use and something that people would be happy to move across to, became an opportunity to then get rid of some other features that they might otherwise have kicked up a fuss about.

Daniel:

Right. Now the second question I had on establishing the product practice over there was, how did you structure your feedback process? You mentioned that you didn’t have the right set of data and you tried to work that into your product process, so what kind of structure and setup did you follow in order to have actionable, usable feedback?

Shaun:

So, yeah, it’s kind of a funny, it was a funny product to work on in that regard, because if you are targeting enterprise companies you have these individual customers that are high worth, but you don’t necessarily have a lot of them. So in reality, we had a customer base of, probably when I started, like 60 to 70 companies, of which many are actually agencies that run activity for quite a few companies so they have the same people working on them. And some of those 60 odd were not so engaged, so what you end up with is actually a small number of people, and the fastest way to gather feedback doesn’t need to be systematic at that scale. You just schedule meetings and you chase them up by email, and you get them in the room, and you go to their agency offices and you just talk to them. So it’s kind of a strength of… Like it’s easy to get through the customer base because there aren’t very many of them. 😀 And also at that point, partly 'cause I was very fresh to product management and also partly because I think that… I’m not sure it would’ve been efficient to add structure at that point, but my attitude was just, “I need to get in conversations, I need to get into people’s heads and talk about it.” And perhaps later on, when you have a clear idea of where you are going or you have more customers in terms of number, then it makes more sense to have a very structured way of approaching those kind of interactions.

Daniel:

Right. I’ll dig into that topic, because the fact that you have few customers means that quantitative methods are probably not the best fit for you, so how would you structure your interviews? What kind of questions would you ask?

Shaun:

So, in terms of the product as it existed at the time, the questions would be: what things are you using and what are you using them for? So, asking what kinds of behaviors they were doing with the actual features themselves. So with reports you’re expecting them to perhaps download the data or schedule reports, and then beyond that you’re thinking that they’re either gonna use that information to change what they do in their campaigns, or perhaps to inform their overall performance, or as part of their monthly or quarterly meetings. So, the main goal of the interview, in terms of how they use the product, was to drill down to: what is that use case, where are they taking our product from and to, and whether they reach for it at all.

Daniel:

Yeah, so did you ever have an issue reaching customers at Adthena because they were siloed by maybe sales or account managers, or was it something that you didn’t face at all?

Shaun:

No. If anything, it would be the opposite way around. So, if sales or account management were in close contact with them, then it would be really easy to get in front of them and actually you can… I ended up playing a kind of role of, the product manager’s someone you bring along to kind of get the customer engaged because they know they’re speaking to someone who might be able to give them what they need. Whereas sales is kind of, it’s like they can only answer questions to a certain point. So, the only people that would be hard to locate or speak to are the same people that sales or account management would have been struggling with.

Daniel:

Which were probably customers that weren’t so easily accessible?

Shaun:

Yes. So maybe people that you wouldn’t want to speak to. 😀 So that’s not… Obviously that’s not an ideal situation but actually in a lot of those cases, we were quite… And because of the people that were in that company at that time, there was a strong trait of banging doors down, so if you didn’t get an answer from someone you just kept on trying, kept on trying. And if you’re coming with a message of, “We see you’re not happy, we’d like to learn more, we’d like to help you”, then most people have time for that.

Daniel:

Right. Okay, so that relationship with those customers was ongoing, it wasn’t a one-off sale, it wasn’t a service product, so that means that you would have a constant point of contact, right?

Shaun:

Yeah. They were very much serviced by the sales team or account management team, yeah.

Daniel:

Okay. Even when you’re not actively going out, sales and support and account managers are reaching out to customers every day, or…

Shaun:

Yeah.

Daniel:

Do you have any process that you would have between those teams and yourselves, in order to try to get good feedback from customers?

Shaun:

Yes. So there was… We ran daily support catch-ups, so on the most practical level, if people came with a query and said, “This thing doesn’t work,” or, “It doesn’t do what I expect it to,” then I was generally quite aware of what was going on. We also shared… So, we used Zendesk and Intercom, so through that I just had visibility of a lot of the conversation that was going on. We also had, often weekly, although as we grew a bit it became more like once every two or three weeks, an all-hands product meeting where I’d give an update on what’s going on with product. As a part of that, a member of our account management team would share some of the feedback they’d seen in that past week or so, and so that became another point of interface. And the other thing is that we were in the same room. 😀 So not everyone has that luxury. We were talking all the time and it was very much known that we all really cared about the end customer and we wanted to know details, as a kind of curiosity that we shared. I think that’s part of the… When I talk about trying to establish product as a thing in an organization, part of it is encouraging people to have that curiosity, and to talk openly about it, and being approachable.

Daniel:

Right. That leads me to another issue, which is… Okay… So they’re having that conversation, and you’re all having that conversation. But usually, the kind of input they get is mostly superficial, in the sense of feature requests, or maybe something that the customer directly imagines out of their own experience with the product. So in those cases, how would that work out? Did sales, and support, and account managers bring in a little bit of the product manager spirit? Or would you do that directly? Or something in between?

Shaun:

Yeah, I would… So there’s probably a degree to which… I can’t give you an accurate answer to this, 'cause I can’t tell you the number of conversations that happened which could’ve involved me but didn’t. But certainly, it felt like when it came to the point where it was that discussion, they’d bring me in. Usually, it would still involve them, as opposed to a handover, so it’s kind of in between, right. It’s still their meeting and it’s still their client, but I’m there to answer the more product-type questions, and to ask the more product-type questions as well. Yep.

Daniel:

Awesome. Let’s maybe try to now go back into more of a tactical view of day-to-day work and walk me through your process of how you deal with unsolicited customer feedback.

Shaun:

So, we’d have… We actually used ProdPad at first and then Trello, essentially as places for the feedback to go. So anyone on the team could log a new card on the Trello board, it would go… They could do it straight from Slack actually, they’d just type in a command and it would send it over. And then I would go through items that came into this. It was kind of like a triage process, like which things does it belong with? Is it a new theme that we haven’t spoken about before? Does it match up with a new feature we’re building or whatever it is, and I’d kind of move things into the right categories.
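As an illustration of the kind of Slack-to-Trello relay Shaun describes, here is a minimal sketch in Python. It assumes a Flask endpoint registered as a Slack slash command and placeholder Trello credentials (TRELLO_KEY, TRELLO_TOKEN, FEEDBACK_LIST_ID); the names and route are invented for the example, not taken from the interview.

```python
# Minimal sketch: a Slack slash command that files feedback as a Trello card for triage.
import os

import requests
from flask import Flask, request

app = Flask(__name__)

# Placeholder credentials; the interview does not describe these specifics.
TRELLO_KEY = os.environ["TRELLO_KEY"]
TRELLO_TOKEN = os.environ["TRELLO_TOKEN"]
FEEDBACK_LIST_ID = os.environ["FEEDBACK_LIST_ID"]  # the Trello list acting as the feedback inbox


@app.route("/slack/feedback", methods=["POST"])
def log_feedback():
    """Handles e.g. `/feedback Customer X wants scheduled exports` typed in Slack."""
    text = request.form.get("text", "").strip()
    reporter = request.form.get("user_name", "unknown")

    # Create a card in the triage list; the PM categorizes it into themes later.
    resp = requests.post(
        "https://api.trello.com/1/cards",
        params={
            "key": TRELLO_KEY,
            "token": TRELLO_TOKEN,
            "idList": FEEDBACK_LIST_ID,
            "name": text[:120] or "Untitled feedback",
            "desc": f"Logged from Slack by {reporter}:\n\n{text}",
        },
        timeout=10,
    )
    resp.raise_for_status()
    return "Feedback logged for triage, thanks!", 200
```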

In terms of then understanding what was important and what was relevant, the way I kind of think about it is that any piece of feedback should, if it’s meaningful, relate to some kind of metric. That doesn’t mean that you’re necessarily good at measuring that metric or have a good handle on it, but you should have a picture of the conceptual link to why it matters for the business, where it links in. So, the decision of where to put my work and the business’ work is largely led by that picture of: what metrics do things tie into downstream, and then what do they bubble up to? So for us, retention and acquisition were two really key focuses, and pretty much any feature or any piece of feedback seemed to really connect with one of those two. So that very much guides your focus, like which ones seem like they have the strongest connection. And if possible, if you do have quantitative data which demonstrates it, then that’s even better. Yeah.

Daniel:

Right. Qualitative data for you, is it time-boxed to the current moment? Or is it something where you go back into the history of the product and try to use it? Is it something that is time-sensitive to you, or isn’t it?

Shaun:

It’s time-sensitive, but it doesn’t expire.

Daniel:

Right.

Shaun:

So you have to understand the context in which that feedback was given, and that’s not just about the product changing, but also about the account management changing, and the branding of the business changing, and all those kinds of things. But having a view of the story of the product, where you’ve come from, where you’re getting to, is a really important part of then interpreting that feedback. And “the closer it is to the present day, the more valuable it probably is” is a good place to start.

Daniel:

Right. That leads me to another thing, which is the quantitative look at how the product is doing. And you can have both: a macro view of things, are you moving the needle towards the metrics that you’re trying to target? But it also comes down to individual features that you’ve recently shaped, maybe.

Shaun:

Yeah, yeah.

Daniel:

So let’s start off with the micro view of that. How do you measure success, how do you track the success of the product in a way that makes you confident that you’re doing the right thing?

Shaun:

It was my view that taking the macro approach, and it depends where you draw the line between macro and micro, wasn’t that useful for Adthena. As a product manager it wasn’t very useful; ultimately retention and acquisition were the two things that we were really closely concerned about improving and building, but there are just way too many external factors to make them meaningful as a needle for the product to follow. So I very much cared about them, but I didn’t use them to track product performance. I would look more at micro things; cohort behavior was a very strong one.

So, returning patterns, particularly within the first month from onboarding. We had a picture that onboarding would take a month and would require certain behaviors, so we’d look at that being matched, and then, throughout the year, on a monthly basis, whether users returned. So that’s still very distinct from retention, which is ultimately about someone who signs on the dotted line once every 12 months. But we’d see that alongside other behaviors. Things like… So one of our main reports was a market traffic report, and we’d be looking for the proportion of users that either downloaded or scheduled the data, so did one of the key success behaviours that you’d expect from that page. And you kind of collect those things up in aggregate, and in theory, if you move the needle on those metrics, it should push you towards better retention and also, sometimes, better acquisition.
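To make the cohort and success-behaviour idea concrete, here is a small sketch in plain Python of the kind of calculation being described: monthly return rates for an onboarding cohort, and the share of report viewers who went on to download or schedule data. The event fields and function names are illustrative assumptions, not Adthena’s actual tracking.

```python
def monthly_return_rate(cohort_users, active_by_month):
    """For one onboarding cohort, the fraction of its users seen again in each month.

    cohort_users: set of user ids that onboarded in the cohort month.
    active_by_month: dict mapping "YYYY-MM" -> set of user ids active that month.
    """
    if not cohort_users:
        return {}
    return {
        month: len(active & cohort_users) / len(cohort_users)
        for month, active in sorted(active_by_month.items())
    }


def report_success_rate(events):
    """Share of users who viewed the market traffic report and then downloaded or scheduled it."""
    viewers = {e["user_id"] for e in events if e["action"] == "view_market_traffic_report"}
    succeeded = {
        e["user_id"]
        for e in events
        if e["action"] in ("download", "schedule") and e["user_id"] in viewers
    }
    return len(succeeded) / len(viewers) if viewers else 0.0


# Example: a January cohort of three users, two of whom came back in February.
print(monthly_return_rate({"u1", "u2", "u3"}, {"2018-02": {"u1", "u2"}}))  # {'2018-02': 0.666...}
```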

Daniel:

Right. So how do you figure out which are those metrics? Is it like a hypothesis that you establish at first and then you confirm if that micro metric might fit into the global metric or is it something else that you try to follow?

Shaun:

Yeah. Given the level of data available, like referring to the challenge earlier of being B2B enterprise, it means that you can’t verify in a statistical way, or at least very rarely could you verify in a statistical way what success behaviors were. It would come down more to intuition, but absolutely, I found it was essential to be quite clear about which ones were the most important. You need to have a good… Kind of a clear picture of what the logic is behind something being important and then the quantification that you do have should back it up, but it won’t demonstrate it on its own.

Daniel:

Right. So at any of these points do you use any kind of survey mechanism? NPS or something like that?

Shaun:

Yes, so, we used NPS; we do that at Lyst, and we did at Adthena as well. And we had lots of other kinds of informal surveys which would’ve gone out around particular features or things that people had been trialling; we would send a survey out which is kind of “rate the following things on a certain scale, give responses.” But NPS was probably the one that was sent to everyone at a certain frequency.

Daniel:

Right. Walk me through your process of using NPS. Was it something that was scheduled, periodic, or was it something that was contextual to a new release or something like that?

Shaun:

It was independent of any particular feature. It relied on a user having not been entirely fresh and then what we were trying to do was survey them at a random point within their first six months and then continue to do it once every six months approximately.

Daniel:

How would you then use that input to figure out if something needed to be adjusted or not? I mean, out of that number, what was your next action?

Shaun:

Sure. You have the ones where they specifically leave written feedback, so on the most basic level that’s just extra feedback that becomes part of your everyday discussion and gets logged. But the bigger picture is that you would look for either bits of feedback where further questioning seemed worthwhile, or scores that seemed unexplained or unjustified. So you’re really interested in the people that gave tens or nines and haven’t told you why, or any of the detractors; by default you want to kind of follow up and ask questions.
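The mechanics behind this are simple enough to sketch. Below is a small, assumed example of computing the NPS score (percentage of promoters, scores of 9 or 10, minus percentage of detractors, scores of 0 to 6) and flagging the responses Shaun mentions wanting to follow up on: detractors, plus high scorers who left no comment. The response structure is hypothetical.

```python
def nps_score(scores):
    """Net Promoter Score: % promoters (9-10) minus % detractors (0-6), as a whole number."""
    if not scores:
        return 0
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return round(100 * (promoters - detractors) / len(scores))


def follow_ups(responses):
    """Responses worth chasing: all detractors, plus 9s and 10s with no written comment."""
    flagged = []
    for r in responses:  # each r is assumed to look like {"user": ..., "score": int, "comment": str}
        if r["score"] <= 6:
            flagged.append((r["user"], "detractor"))
        elif r["score"] >= 9 and not r.get("comment", "").strip():
            flagged.append((r["user"], "unexplained promoter"))
    return flagged


# Example: two promoters, two detractors, one passive -> NPS of 0.
print(nps_score([10, 9, 8, 6, 3]))  # 0
```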

Daniel:

Did you map it to any other metric that you were seeing on the product? I mean, trying to find inconsistencies between the NPS score and their behavior on the product?

Shaun:

Sure, yeah. I’d have loved to have been able to do that and to do that meaningfully. I’d have a lot more optimism for doing that at Lyst, just given the volume of users and volume of activities going on.

Daniel:

Right.

Shaun:

Even so, I think you’d have to be very careful about how you do it. There’s always the problem of… Well, if you have, say, 20 different ways of using the product and you have this NPS score, then one of those ways is just bound to correlate if you look at enough of them.

😀

So, I’m not sure I’ve heard that approach being used in a successful way. I tend to look at it more chronologically and across key… I guess, yeah, so chronologically. So as a business, when your product is evolving, you wanna see that increase or change, but also for users: at what point are they in using your product? Have they been around for a year or two? That will tell you more about referral and retention, versus if they’re in their first month or so, that will tell you more about the experience of getting to know the product and figuring out how to use it. I think that’s how I tend to approach NPS. To some degree, at Adthena we would do that, because you’d know the customer name that comes up, so immediately you’re going, “Oh, they’re onboarding” or “They’re a customer we’ve had for a long time.” But you’re not doing that formally, you just kind of know the people well enough by that point that it’s automatic.

Daniel:

Okay, so let’s talk about segmentation a little bit. In which ways do you segment your users to help you out as you analyze customer feedback and what techniques do you use in order to provide that extra context to the feedback that you’re getting in?

Shaun:

Sure. Something worth mentioning quickly before I move on to the precise question is that… Yeah, so in B2B software, what we had was we segmented our customers and we segmented our users. You have agency customers and you have direct customers, and then you have the serious enterprises and then the “sort of enterprises,” 😀 I guess is how you’d put it. So there is that kind of divide and you do get variations in behavior amongst those. But the biggest, most valuable way of segmenting was actually at a user level, as opposed to the customer level. And that was looking at basically two types. You have decision makers, who tend to be more senior, more experienced. They would use the product less frequently. They would look more for ways to have scheduled reporting or dashboards, automation of some sort. And ultimately, it’s them who will make or break the continuing use of the product, because it’s down to them whether the contract is signed for the next period or not.

And then we’d have users who we’d call analysts, I guess, as a category. They would use Adthena far more frequently. They would become very skilled users. They’d find ways to use the product in more novel ways, or break it, or do interesting things with it. And if you keep them happy then that bubbles up, but they are less directly related to whether the product… Whether the customer is gonna keep the product and pay again for another year. So they’re a longer-term investment, both in the sense that one day they might become decision makers, but also because the decision maker will look to them and see their usage as evidence that it’s worth it, or will simply benefit from their usage.
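As a rough illustration of that split, here is a sketch of the kind of heuristic a team could use to tag users as decision makers or analysts from usage data. The thresholds and field names are invented for the example; Shaun does not describe doing this programmatically.

```python
def classify_user(sessions_per_month, uses_scheduled_reports, signs_contract):
    """Very rough segmentation heuristic; the thresholds are arbitrary placeholders."""
    if signs_contract or (sessions_per_month < 4 and uses_scheduled_reports):
        # Infrequent, automation-oriented, or holds the budget: treat as a decision maker.
        return "decision_maker"
    # Frequent, hands-on usage: treat as an analyst / power user.
    return "analyst"


print(classify_user(sessions_per_month=30, uses_scheduled_reports=False, signs_contract=False))  # analyst
print(classify_user(sessions_per_month=2, uses_scheduled_reports=True, signs_contract=False))    # decision_maker
```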

Daniel:

Alright, was it hard to balance between users that were non-decision makers and users that were decision makers? What kind of feedback would you prefer and which would you listen to?

Shaun:

I think both are really important. I see that the analysts are closer to the reality of the product. Their feedback will tend to be more in touch with what the product actually does, so they’re closest to the reality of the use cases, but yet at the same time they might want to use the product in all sorts of ways that ultimately a decision maker won’t care about. So you can create value, but not value that will bubble up. That’s the risk with them. The feedback conversations are often very productive, but you’ve gotta be wary about whether their use case will actually be meaningful to the decision maker.

And then the challenge with the decision maker is more that they are very detached, and the things they’ll ask for often don’t even have a use case; they’ll want something just because they have a gut feeling this product should have it. And they’re not using it every day, so they don’t… It’s not… They can miss things along that process, so you have to be very careful about what they ask for, in the sense that you need to validate that there is actually a common-sense use case underneath. They have that reverse pattern, and I would say that it would be the wrong decision to focus on one over the other. You wanna be talking to both of your clients, and their pictures should be meaningfully coherent, or incoherent in a way that’s interesting or challenges you to change what your product does.

Daniel:

Let’s try to go a little bit into your current role, which is an internal product. Can you describe, more or less, what the product does and what you’re trying to do there?

Shaun:

Our team’s called the Acquisition Tech Team. It relates to another internal team, which is the Acquisition Team. What Acquisition do is they essentially bring traffic and users onto Lyst. So it’s their job to bring users on, and it’s our job to build products and technologies that will help them scale that up and ultimately do it at a pace which our competitors couldn’t without the same kind of internal tooling or investment.

Daniel:

Right. Within that context, where do your requirements come from? Are your customers only internal teams or do you have any kind of exposure to what’s going on outside and then try to decide what tech you should be building?

Shaun:

So, for something to be… For something to matter to the Acquisition Team, it would have to affect users in some way. So that might mean that it will affect what users see when they’re on Google Search and they see an advert of some sort, or it might be it will affect what users will see on site and that will therefore encourage Google to rank the page in a different way. But essentially there is always a user involved. So we see our internal stakeholders as the primary stakeholders but the users are always there. And in fact, part of the purpose of our team, in my opinion, is to establish an understanding within the company that SEO and paid search, and all these marketing channels are the user experience as much as the site is.

Daniel:

Right.

Shaun:

So we stay very closely engaged with that conversation and in fact, we’re kind of an advocate for it.

Daniel:

Right. So let’s go into user and customer feedback in this sense. So I’m guessing that you’re talking to both the outside world and the inside world in some way, right?

Shaun:

Yes. In terms of the outside world, we have some really great teams whose primary users are actual users. 😀 So they run user testing, they run NPS, they do workshops, they do all these kinds of things. So there are plenty of ways for me to gather feedback from people. And also, like any of our product managers here, I’m encouraged to go along to those user testing workshops. I can nominate things to look at as well, if there was a particular feature I wanted to see feedback on. So, I don’t have to work very hard to be exposed to all that stuff. We also get internal reporting where all the NPS comes through. It’s great. It’s all available. It’s really easy. And for me, because they’re kind of a secondary stakeholder, it’s just a matter of exposing myself to as much of that as possible to make sure that I don’t miss a way the product is being used or being thought of.

Daniel:

Right. So, how about your primary stakeholders? Because if there were few on an enterprise product, now on an internal tool, there are probably even less, right? So how does that work?

Shaun:

So I guess there’s two ways I talk about that. One is just informal. When you have like two or three main people that you need to interact with and work with, then it’s about fostering the kind of relationship you have within any company. So having a proper way of working with colleagues, striking up friendships, really caring about what matters to them commercially and being in those conversations; for me, I try and speak to my main stakeholders pretty much every day. It’s just an ongoing conversation, which is really important. And it’s quite nice when you have so few that you can just really focus on that.

Daniel:

Yeah.

Shaun:

So that’s fun, that’s all good. And then there’s the more formal side, which for me really comes back to adopting things like Scrum. By creating process around the way your team works, that gives you a framework with which to have different types of communications with your stakeholders. So, over the course of a week, we will take in stories to refine. I’ll have tickets submitted to me, which I’ll try to dig into to understand from the stakeholders what the problem is that’s being raised by the ticket. Then I’ll meet with the team to reach a solution which we can plan and estimate resources for, and then with the stakeholders to prioritize that stream, and then we load the sprint. So, there’s a real cadence to those interactions. And it creates an expectation of when they can come to me with different things. So then, not only in theory can it be the case that I go to them when it’s appropriate, and they can do the same on an ad hoc basis, but they also know, “Oh, if I tell Shaun right now about this story, that it’s really important,” then that’ll get in before the refinement session, which means that we have it lined up for the next sprint.

Daniel:

So you touched on something there which is kind of a hard thing, which is understanding the problem from the stakeholder. Do you have any kind of particular approach to that issue?

Shaun:

So… Yes. So, Lyst is remarkably good at difficult conversations. And there’s very much a culture of, if two people disagree, or if two people don’t understand each other, they will just talk about it, and they will talk and talk until they’ve actually figured out what the other one’s saying. 😀 Actually, a good way to explain this is that on one of the walls in our office, we have the words, “Best Idea Wins” which is a statement about not being about the ego of the people debating an idea but ultimately about reaching agreement and understanding on the best way forward. We see it as a cultural value that any two people can duke it out or discuss something until they have a shared understanding of what’s going on. That, for me, creates an environment where I feel very comfortable with stakeholders to just have the conversations. It’s part of my job to make time for that as well so if it takes longer to get to the point where I understand their commercial needs and for them to get to the point where they understand our engineering or more technological needs, then we will make that time and we’ll have that conversation.

Daniel:

Awesome.

Shaun:

The people who get hired are people that buy into that, and they also tend to be quite smart. So it’s rare that it’s difficult, but even the difficult conversations get there in the end.

Daniel:

Great. That’s really good. Maybe we can go back to the issue of users in customer-facing products. At the point where you spot an opportunity while talking to customers, how do you try to understand if it applies to the broader customer base or if it’s just restricted to a few people?

Shaun:

The most basic way for Adthena was simply to ask the customers. Again, with a small number of customers you can cover ground quite quickly. If there were challenges with that, then we’d also use mechanisms like automated messaging in Intercom, for example. You just use different channels to get to people, or we’d directly prompt the Sales Team or the Account Management Team with questions to ask people during the different parts of the process.

Daniel:

Right. As you move forward and you’re building the product what kind of feedback are you gathering along the way?

Shaun:

This starts when you get to the point where you have prototypes or rough designs of what you’re building. We’d be taking that to the customers, putting it in front of them, walking them through it, conducting user testing with the wireframes, that kind of thing. Getting feedback at that stage. Then as you start to actually assemble something which can be delivered, the approach that we took was to have a beta environment, where you’d set certain things to be accessible for certain kinds of users, and then you’d encourage them to go in and you’d use Intercom to ask them questions at certain points. You’d schedule meetings with them a week later or whatever to catch up on their usage of it and what they found good, what they found bad, and you just encourage an open conversation throughout that process.
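The beta environment part is easy to picture as a simple per-customer gate. The sketch below assumes an in-memory flag store keyed by feature and customer; the names are illustrative and not Adthena’s actual implementation.

```python
# Hypothetical per-customer beta gating; a real system would persist and manage these flags.
BETA_FEATURES = {
    "new_market_report": {"customer_123", "customer_456"},  # customers invited to the beta
}


def feature_enabled(feature, customer_id):
    """True if this customer has been granted access to the given beta feature."""
    return customer_id in BETA_FEATURES.get(feature, set())


def render_market_report(customer_id):
    if feature_enabled("new_market_report", customer_id):
        # Beta users see the new version; this is also where you'd trigger an
        # in-app prompt (e.g. via Intercom) asking for feedback at key points.
        return "beta market traffic report"
    return "current market traffic report"


print(render_market_report("customer_123"))  # beta market traffic report
print(render_market_report("customer_999"))  # current market traffic report
```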

Daniel:

Another thing that I wanted to ask you about was did you ever set up a group of special customers that you would solicit feedback from on a regular basis?

Shaun:

Yes. It wasn’t as formal as that, but we had customers that we considered super users that would often hear about things a lot earlier. They would often just be more openly solicited for feedback. People that just had a really clear grasp of what the product did and what it could do. Some of them would even regularly be sending things themselves anyway. They wouldn’t even need that kind of prompting. Again, because it’s a small number of customers you don’t have to so formally divide them out, but absolutely there was a very clear set of users who we had that kind of engagement with.

Daniel:

Another thing that might happen there is, even with a small number of customers, is that you might have the silent majority of people that don’t reach out, don’t say anything. Do you have any kind of tactic to try to bring them in and have their opinion also throughout the entire process?

Shaun:

It’s pretty hard to control. If you’re interacting with a business and trying to get the quiet users within that business to speak up for their things, then simply in terms of account management etiquette, you have a main contact that you speak to and that you organize meetings through. But a major channel that we made use of was Intercom, as that obviously is direct communication with specific users, and you can do that without seeming like you’re choosing one person over another. It doesn’t really have that sense of favoritism, because it’s kind of automated. It’s a weird gap in between.

Daniel:

Right.

Shaun:

And often, the more introverted users feel more like responding in that kind of context. Then you’ve set a primer for the conversation that needs to be had the next time they’re in the office, and in the meeting you can ask them directly: “Oh, you have this piece of feedback, talk me through that.”

Daniel:

Cool.

Shaun:

So that was a good route towards solving or at least helping that kind of challenge.

Daniel:

Okay, so now that we’re running out of time, I wanted to ask you about maybe some bad experiences and things that you have to deal with when using customer feedback. What kind of things would you recommend to avoid, to not do, and stay away from?

Shaun:

So, I think the most important thing is to get… So I kinda spoke about this earlier with segmenting users: when you have a feature request, you need to get to the need that’s underneath it. And equally, when you’re given a need, you need to validate whether it actually relates to the retention or success of the product on some metric or not. Those are really key things, and a lot of the things that went wrong were missing part of that chain.

In terms of particular bad experiences, I think that something that’s just really challenging, especially when you’re doing an enterprise product is that you end up interacting with a lot of people that want things, or say they want things, and offer a lot of money to do those things, and you have to stay really strong and really question it very deliberately. Because often, they don’t actually have the intention to back it up with paying for the product or they don’t even have a basis for what that requirement is. So I think it’s identifying who those people are, and being very careful in those conversations.

Similarly, having a sales team that’s independent from the product team can be difficult, because they will be soliciting feedback and sharing it, and sometimes guiding the conversation in a way that’s in their own interest. That’s just a general… I think that’s the danger of a good salesperson: they will find ways to ultimately sell more of the product, and that can involve being selective in what feedback they give or how they give it. And another thing was that we experimented with different combinations of people in the room with the customer. Sometimes it’s appropriate to have engineers in there, sometimes it’s appropriate to have the CEO, but as a rule of thumb, the most honest conversations you’ll have don’t have the people that built the thing, or the people that own the company, in the room.

😀

So some of my bad experiences, I’d frame more as good experiences of figuring that out, and then having more productive conversations with fewer of the influences or emotionally attached people around.

Daniel:

Perfect. Okay, so I guess that’s it. It was really fun talking to you, and I hope it was also fun for you.

Shaun:

Yeah, it was nice. It was good.

Daniel:

Thank you very much for your time today.

Shaun:

Absolutely.