Conversation with Boaz Katz

About Boaz

Boaz Katz is Co-Founder & Chief Product Strategist at Bizzabo.

 

Key Points

  • The importance of going through all of the feedback you get
  • How to understand the relevance of qualitative customer feedback
  • Looking at Support and Sales as important (but separate) channels for feedback
  • How to decide which requests should be built or not
  • How customer interaction is critical at Bizzabo when spec’ing, building and releasing new features
  • Using customer advisory councils for easy, ongoing validation
  • The importance of having hard, quantitative KPIs towards which success can be measured
  • Using customers’ success goals as the segmentation mechanism for the product

This conversation is part of a series of interviews with experienced Product Managers on the topic of Customer Feedback. Listen and read on the site at your pace, or subscribe below to get a weekly email (for 7 weeks), containing selected interviews and highlights.


Audio

Transcript

Daniel:

Can you tell me a little bit about yourself and your background in product management?

Boaz:

Yeah, sure. I started in the Israeli Army, actually, where I worked with Boeing and Sikorsky on helicopter cockpits and interfaces. I was there for a few years, and then I moved to a company called Elbit, here in Israel, where I was part of a small R&D team (not R&D as in start-up development, more like an innovation team) where I worked mainly with mathematicians on all kinds of complex algorithms for man-machine interfaces. That was quite a nice experience for about two to three years, and then I co-founded Bizzabo, where I took the role of Chief Product Officer, and that's what I've been doing for the past four-and-a-half years.

Daniel:

Great. Can you tell me a little bit more about Bizzabo and what you do day-to-day there?

Boaz:

Yeah, sure. Bizzabo provides an event success platform. It's a SaaS company where we provide professional event organizers a full stack of tools. We help them manage all the digital aspects of their events: a website builder, registration, ticket sales, event marketing, contact management. It's full-cycle. We're actually moving them from Excel spreadsheets into full software, a full service to help them manage everything they need: a lot of reports, insights, everything to run the event.

This is what we are doing. We started, actually, with a networking application to solve the problem of networking at events, kind of like a B2C product. As we evolved, we moved more into a B2B company, where the native apps for the events are part of the full solution: a networking community, a messaging interface, and the ability to set your own schedule. We shifted from B2C to B2B, and we launched it about a year-and-a-half ago. We found product-market fit after a long time. It's a good feeling, and the company is in the growth phase right now.

Daniel:

Great. Okay, so being in a growth phase, and also being in a B2B market, probably means that you get a lot of customer feedback coming your way. What kinds of customer feedback are important to you, and how do they play into your work as a product leader?

Boaz:

This is actually the fun part when you hit product-market fit, and even before that, when you have users that really use your product: you get the real feedback, and you should embrace that, right? We get tons of feedback, really tons of feedback, and we really try to be very, very approachable for our customers.

You have several ways to approach us, from a simple email, to an Intercom message in the interface, to phone calls, and what else… Twitter, Facebook, of course, everything. We monitor everything, and we try to be reachable in every channel that is convenient for our customers.

We get tons of feedback. We try to make it transparent company-wide so everyone is able to see the feedback: all the different channels flow into a support channel inside Slack, so everyone can see literally every support call that comes in.

I read every support email. Literally, every support email and every Intercom message inside the platform, and I get them daily. I literally go over every piece of feedback that I can, and of course, a lot of feedback goes through our Customer Success and Sales people on calls, where it's a more challenging way to get the feedback.

Daniel:

You go through everything, and do you have any particular mechanism to organize this, to keep a record of this, or is it more like in the moment that you’re looking at it that you decide whether it’s not relevant? Do you have any process in place for that kind of stuff?

Boaz:

Sure. All the inbound is just to get a sentiment, to get the feeling. Okay, it's more like soft signals. It's very weird: suddenly in a certain week you get a lot of support calls about login issues, so it pops up fast. "Hey, this week we got a lot of login issues. Did we break something?"

It's more like a feeling for this channel, but we definitely have processes to make it more quantitative. Whoever is on Support tags every support call with different tags: Is it product feedback? Is it a bug? Is it feature-related? We tag every support call, and once every two weeks we get all kinds of graphs about which part of the product gets a lot of support calls, or whether we suddenly have a lot of bugs from the last release.
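The tag-and-aggregate process Boaz describes can be sketched in a few lines. This is only an illustrative sketch, not Bizzabo's actual tooling; the tag names, sample tickets, and the `tag_counts` helper are all hypothetical.

```python
from collections import Counter
from datetime import date, timedelta

# Hypothetical tagged support tickets; tag names are illustrative only.
tickets = [
    {"date": date(2024, 5, 6), "tags": ["bug", "registration"]},
    {"date": date(2024, 5, 7), "tags": ["product-feedback", "website-builder"]},
    {"date": date(2024, 5, 8), "tags": ["bug", "registration"]},
    {"date": date(2024, 4, 1), "tags": ["feature-request", "ticketing"]},  # outside window
]

def tag_counts(tickets, end, window_days=14):
    """Count tags on tickets that fall inside the biweekly reporting window."""
    start = end - timedelta(days=window_days)
    counts = Counter()
    for t in tickets:
        if start <= t["date"] <= end:
            counts.update(t["tags"])
    return counts

counts = tag_counts(tickets, end=date(2024, 5, 10))
# "registration" appears twice in the window, which is the kind of spike
# the biweekly graphs would surface.
```

A Counter like this is what would feed the "which part of the product gets a lot of support calls" graphs every two weeks.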

We also have a mechanism, more of a process really, to try and collect the feedback from Customer Success. Actually, we divided it. Let me go back.

We divided the feedback into three groups, probably. Maybe it will be four, but let me think about it. Support is like… support: people that are having issues. This is in the same group as people reporting through our Customer Success channels. Every account has its own Customer Success manager, and Support and Customer Success are on the same channel. This can be product feedback, bugs (bugs are more for Support), things that are not working, things that are hard, and this comes from the actual user that uses our platform.

The feedback from Sales is managed separately. I really believe it's a different kind of feedback: why you are losing deals, what will help you close more deals. This is another channel that you need to balance, and the channels affect different parts of the customer life cycle. For example, we don't have a Salesforce integration yet. We are currently working on it, and we are losing customers due to the lack of a Salesforce integration in the platform.

This is a good example, and we can decide if we want to build it to bring more business into Bizzabo. Or we can decide, "No, this is a clear-cut decision. We are not building it. I don't care if we lose hundreds of thousands over Salesforce." This is easy.

Sales is one channel, and Customer Success and Support is another channel. One affects bringing new revenue into the company, and the other makes our customers happier, and it's a balance. Right now, we really care more about existing customers. We want our existing customers to be super-happy. That balance changes over time, between bringing more revenue and making the customers happy. Each time it's different.

Daniel:

That was naturally my next question: you're deciding whether or not to do a Salesforce integration, things that are coming from Sales. What's your thought process, or your structure, for deciding whether or not to pursue that? Does that come from your strategy? How often is that set, and how do you decide on the spot whether you're going to do it or not?

Boaz:

Building new stuff is also divided into several aspects. Big new features or value propositions are usually aligned to our strategic plan, where we plan a quarter or two ahead, and we set it according to the company strategy. The roadmap is built upon feedback: we ask our salespeople, we follow all the lost revenue we had, and we ask which features would bring the most revenue into the company. We try to segment the lost deals according to customer size and when we heard the request. We want to build for a lot of potential customers and for the customers that bring the most revenue to the company.

This is the kind of segmentation we do, and we build it into the product roadmap. If suddenly we're hearing a lot about Salesforce integration, and it's only planned for two quarters ahead, we don't want to miss the opportunity, so we can change the product roadmap if we think we made a mistake in the prioritization or the market has changed. It doesn't happen a lot. We need a good reason to do it, so we seldom do.

For small requests, we have a process we call "Product Perfect Day," where we fit in, on the fly, features or requests that take less than a day. Sometimes it's small tweaks, and we do it every second Sunday. (We work on Sundays in Israel.) Every second Sunday, the whole company stops the roadmap features, and we build all the small enhancements, requests, and UI changes that are sales enablers, quick wins, or "product perfect" things: bugs, design changes, things that on their own we can't prioritize high enough. This way, methodically, we can get them in every two weeks. It's a balance of small things and big things, with the roadmap following the company strategy, where we want to go, and so on.

Daniel:

Right. Okay, so we’ve been talking a little bit about customer feedback coming in. What kind of feedback do you actively go out and seek?

Boaz:

For every part of the service that we're building, the product manager, who is the owner, builds interactive mocks, and… Let me start before that.

For every feature, when we start, there's the market research and what we want to build and everything, and you must talk directly with at least three to five, even seven, customers, okay? You can't come to a kick-off or a grooming meeting without having talked to existing or potential customers. We don't care which, but you must get feedback, understand the market, hear them, and know what they want, okay? This is the grooming or brainstorming phase.

When we start to build it, we literally have three to five iterations with customers on the feature design or spec, whatever you want to call it, and we change it from call to call, okay? You build your mocks, you show them over Skype or a meeting or whatever, you get the feedback, you let them play with it, you change it, you iterate on design only. It's fast, it's cheap. Then we're able to release, with good validity, the feature that our customers need, because customers have already seen it, commented on it, and know what to expect.

Of course, we also release gradually: first to customers that were part of the building process, who know the feature, then to some more customers, about 20% or 50%, getting the feedback, doing another iteration of fixes if we want to, and then doing the big release with all the marketing-related stuff and how-to's and FAQs and so on.

Definitely, I think that in the past eight or ten months, we haven't released a single feature without it being reviewed by customers first.

Daniel:

Right. Let me unpack this a little bit. You were saying that you must have at least some customer and prospect interviews as you start to build a feature. How does that interview happen? Over which channel? What kind of structure do you try to follow?

Boaz:

Okay. Most of the interviews happen over video chat, probably Zoom, which is what we use. It depends on the phase. Let's say you're doing grooming or brainstorming with the customers: you come with a set of questions, and you literally try to be as quiet as you can, okay? You're not allowed to steer the conversation; you're literally in collecting mode. Collecting, collecting, and listening during the brainstorming.

This also happens in company meetings: when you start a brainstorming or grooming session for features inside the company, the product manager is not allowed to talk. You just listen and ask questions. "Hey, we're going to build a Salesforce integration. What do you think we need to build?" That's all I'm allowed to say. You only write it down, collect the feedback, and then go do the homework, trying to consolidate it into a feature or a set of specs or something. Sometimes it's very hard, but we're getting very good at it, you know? The power of silence.

😀

Daniel:

Right. How often does this happen? Is this a continuous process, or is this something that happens at some stage where you’re planning a bunch of features at the same time?

Boaz:

No, it's continuous. Well, you need to respect their schedule, because our customers are working at speed before their event. In the month before the event, you can't talk to them. You're not allowed to, because they are laser-focused on the event, and it's a super-crazy time, so we know when to schedule with each customer. You really want them in the right mindset, where they're relaxed after the event, they've finished everything, and it was a huge success.

We really, really need to know when to approach whom, and this is where we use our Customer Success managers to make sure we can. "Hey, I want to talk with this customer. Can I? Is it a good time?" And they can tell you yes or no; this is their call. They're literally managing it. This is crazy in our industry: the stress level of the customers peaks before the event. Everyone goes crazy two to three weeks before it, so we need them to settle down, and then we get clean feedback. It's a different person, you know?

Daniel:

Right.

Boaz:

It's definitely ongoing. Every week we try to set something up, and even if we don't have anything concrete that we're building or need feedback on, we just do a general… we call it a BFF: "Hey, let's be friends and talk and get feedback." Right now we're in the process of building a… what do you call it? It's like a Bizzabo… not advocates, but kind of ambassadors: a defined group of customers that are design partners and early-access customers. They'll get different kinds of benefits for being part of it, which gets them more committed, and it lets us say loud and clear, "Hey, thank you. You are part of something special," and brings the relationship even closer.

Daniel:

Right, so this will be your ongoing customer council that you can turn to, right?

Boaz:

Definitely.

Daniel:

Okay. You were saying that you do a gradual launch, and that even the build and launch phases are gradual as you learn. What kind of thing tells you that you've succeeded? How do you know that you've actually shipped the right thing, and that what you shipped is useful both to the customer and to your original goal as a company?

Boaz:

There are two answers to that. The fun one is when you get the "Wow! This is amazing!" from the customers. This is the first one, the soft one, the qualitative one, and it's the fun part.

The quantitative one applies to each feature we ship. Before we start to build it, before we even try to spec it, we set the KPIs for it. Let's say I'm building an anti-duplication mechanism to avoid duplicate contacts, like you have in Gmail. This is something we're building, so we want to be able to eliminate at least 85% of the duplications we find. We really try to… and this is a hard process. It takes time to find and define the right KPIs.
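A KPI like "eliminate at least 85% of the duplicates we find" only works if you can actually measure it. Here is a minimal sketch of how such a measurement could look; the `normalize` and `dedup_rate` helpers, the matching rule (lowercase email), and the sample data are all assumptions for illustration, not Bizzabo's implementation.

```python
def normalize(email):
    """Naive contact key: lowercase, trimmed email address."""
    return email.strip().lower()

def dedup_rate(contacts_before, contacts_after):
    """Fraction of duplicate entries the dedup mechanism eliminated."""
    dups_before = len(contacts_before) - len({normalize(c) for c in contacts_before})
    dups_after = len(contacts_after) - len({normalize(c) for c in contacts_after})
    if dups_before == 0:
        return 1.0  # nothing to deduplicate counts as full success
    return (dups_before - dups_after) / dups_before

# "Ann@x.com"/"ann@x.com " and "bob@y.com"/"BOB@y.com" are duplicate pairs.
before = ["Ann@x.com", "ann@x.com ", "bob@y.com", "BOB@y.com", "carol@z.com"]
after = ["ann@x.com", "bob@y.com", "carol@z.com"]

rate = dedup_rate(before, after)
kpi_met = rate >= 0.85  # the success threshold agreed before speccing
```

The point is that the threshold (85%) and the measurement are both fixed before the spec is written, so "did we hit the goal?" has a yes/no answer after release.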

Let's take Salesforce, for example, which is a very complex integration. We want to be able to provide, for 95% of the customers that need Salesforce, the integration they need: to be flexible enough that they can actually connect. We don't want a customer where we sell them the Salesforce integration and then they can't use it because they have some kind of crazy database schema in Salesforce. The success isn't based on the feature; it's based on the connection.

For each feature, we define one or two KPIs. That's it. We define them before we start to spec, because this is the success criteria. Then we define a time frame for the goal, like, "30 days after the full release to our customers, we want to hit this goal." Then after 30 days, we go and get the data.

Usually we find out new stuff that we didn't think about. You don't get really crystallized answers, but you get a good amount of understanding of whether you beat the goal or not, and then we need to decide: "Okay, are we making any changes to hit the goal? Yes or no?" This is usually a hard question. Do we want to come back and iterate, or, "Okay, we know we didn't hit the goal, but it's good enough for now. Let's do a small change, or let's check it again in another 60 days," or something like that. "Okay, let's wait a bit for more feedback, and then we'll do the second phase."

For Salesforce, for example, we said that if we don't have to touch or rebuild part of the integration in the upcoming two quarters, we succeeded. We know we'll need to change it, reiterate, and build in more capability, but we want to do that in Q4. We don't want to get into it now. If we release it and discover within about a quarter that we need to build another two to three weeks of features, we made a mistake in the initial delivery.

It's really quantitative data, but defined very well, and we really try to avoid soft KPIs. "Reducing the amount of support tickets on this feature" is not acceptable; it's not a good KPI. We try to avoid those. We want to deliver value. We don't want to do something you can't measure, because if Customer Success tells you, "Hey, I helped him five times," that's a gut feeling, and you can't, or at least we currently can't, really measure it precisely. Did we eliminate the support calls, or did we not?

Daniel:

Have you ever managed to take features out based on what you learn, or is it more of an adjustment that you make in the future?

Boaz:

After we release them, or all features, or …?

Daniel:

Exactly, like a feature that you’ve realized, “Yeah, this is not hitting the mark, and we cannot … maybe we don’t want to support this anymore,” or something like this?

Boaz:

Yeah, we try, actually. Usually it's things where we made a mistake when we decided to develop them, or things we can't support anymore because, I don't know, the structure of the customer base has changed, or we're opening a new market and the feature causes problems there, or sometimes it's a feature that confuses our organizers. We definitely try to eliminate them.

It's always hard to eliminate features, because when you have a lot of customers, someone is using it, and you need to make sure you're not hurting them. Usually it's a gradual fade-out: we stop opening the feature to new accounts, and for organizers that are currently using it, we keep it live, but for their next event they usually won't be able to use it.

Daniel:

Right.

Boaz:

We’re trying to gradually fade out features.

Daniel:

Yeah, that maps to something I wanted to ask. You just mentioned that you try not to talk to customers before their big events. Does that mean they still get access to new features as the event approaches, or do you also try to freeze their accounts for a while?

Boaz:

I haven't thought about it. It's an interesting one. Right now they get the new features. We really love to ship, okay, so we ship as fast as we can. No, currently we don't deal with that or hold back features for customers that… This is an interesting one. I need to write it down.

😀

Daniel:

Okay, so this issue is related to something that is really important which is segmentation. In what way do you segment your customers and your users in a way that affects customer feedback and your analysis of it?

Boaz:

Let me think about it for a second. We try to tie product success to our customers' success. We really try to align the interests. Customer success is defined by a successful event: if the customer ran a successful event, the customer is happy, so we make the product successful by making the organizer's event successful.

Each customer has a different definition of success for their events. For example, some conferences define success only by revenue, right? That's obvious. Some of them are free, so they define success as attendance: how many people come. Some define it as engagement; if you're doing a user conference or an in-company conference, corporate events, it's the amount of engagement of the people attending.

Some of them define it by sponsor value: "If my sponsor is getting the value, this is a success, and I don't care about anything else." We try to segment each of the events. We ask the customer, "How do you define success for this event? If it's registrations, tell us how many registrations you want to hit." We try to help them, and then the feedback is weighted according to the type of event, not the type of customer, because customers can have different types of events. It's kind of complex.

This is one segmentation: you try to understand what kind of event it is, what kind of conference the organizer is running. The other one, of course, is customer value. Whether customers are worth a couple of thousand dollars or a couple of tens of thousands of dollars, we segment there too. It seems obvious that the customers who bring in more money get more weight for their requirements, and we're in the process of going up-market all the time.

Daniel:

Okay, so that's a conscious decision on your part. Sometimes people try to balance the big customers against the smaller customers, because the goal of the product is to advance on all fronts. Are you consciously trying to move up-market, and is that why you listen more to that end?

Boaz:

Yes, right now, because the product has evolved a lot, and we understand that the value is much higher at the higher end of the market. We actually started with the long-tail customers, sole event organizers doing small events, then moved up to small organizations, and right now we're addressing mid-enterprise customers. This is why we're doing the Salesforce integration; Salesforce integration is definitely for bigger companies, not for the sole event organizer.

We prioritize this according to the company strategy, and the strategy is to go up-market, so sometimes we cut the customers at the low end. The product is no longer a fit for someone doing, say, monthly meet-ups of 50 people; that's not the ideal customer profile we want to keep in the system anymore, and he feels it. We are literally ready for him to churn out in order to get the bigger fish.

Daniel:

Okay. I wanted to move on to another part, which we already talked about a little bit: your interaction with customer-facing teams, meaning Sales, Customer Success, and Support. There's usually an important tension there, which is that the input they get from customers somehow gets filtered before it reaches the product team. How do you set up this cross-functional interaction in a way that is productive for you?

Boaz:

It's definitely hard. It's also hard for the customer, I think. We try to get the customer to really know their Account Manager. The Account Manager should be their point of contact, and they usually should only talk to one person in the company. We think it's more personal and more convenient, so usually we route all the communication through the Account Manager.

How do we hear the feedback? We record all the calls, so if an account manager tells us, "Hey, I heard da-da-da-da-da," they send us the recording, and we can actually listen to it. They do very good work documenting the important calls. A lot of the time we join them for periodic calls about feedback. We do it proactively; these are the BFF calls.

The product manager can and should sit in on these kinds of calls, and as I told you, we aim for transparency on everything: the Slack channels, getting all the unfiltered data. This is the Customer Success side.

From Sales, we usually… just a second. Also, for Customer Success we do a weekly meeting, Product and Customer Success, where we just sit and talk about this week's issues and what they've heard and so on. It's a bit informal, but it's structured to happen once a week.

For Sales, we get all the lost-deal reasons inside Salesforce, so it's easy to filter them by deal size.

Daniel:

I'm sorry, I wanted to focus on the other side of the conversation, which is that someone usually has to ask a few more follow-up questions to really understand the problem the customer is having. Are Customer Success teams doing that directly? Is the Product team involved in that process? Is the Sales team empowered to ask the product-manager-type questions to customers, or is it something that PMs do directly?

Boaz:

Okay, perfect. It's a bit of everything, okay, and it really depends on the balance. If it's something that turns into product feedback or a product need, the Customer Success manager will set up a call where Customer Success, the Product person, and the customer sit together. We also guide Sales to ask for data if we need it. So if they hear from someone who's interested in the Salesforce integration, and we're currently working on it, they'll usually send us a note: "Hey, I just talked to a customer about the Salesforce integration. I asked him the questions you needed me to ask." And if it's a customer we can reach, even during the sales cycle, we can get on a call with him and talk about his needs, and we tell him, "Hey, we're building it right now. We want to make sure it will fit your needs when we build it." That's in his interest, so we do a direct call between Product and the lead, meeting with the customer.

Sometimes we just need general data collection, so we'll send a survey or an email, or just tell our Customer Success team, "Hey, please ask each customer that you talk with which email marketing tool they're using." We want to know which integration we need to build first, so we just put up a simple Google Form, and they ask each customer, until we have a pool of hundreds of answers, usually, and then you know quite easily which one you need to build.

Daniel:

Okay. You were talking about doing surveys and that kind of thing. Do you have room for quantitative methods during your discovery and build or delivery stages, or is it mostly qualitative feedback that you use during that part of the process?

Boaz:

I think it's more qualitative right now, but we use quantitative data to decide what to build. When we're building, it's more qualitative, but deciding what to build is more quantitative. If we decide to build integrations, we decide which integration quantitatively, okay? Say we want to build a Mailchimp integration: when we're building it, we talk with five, six, seven event organizers to understand how they use Mailchimp, to understand which integration we need to build.

From there, it's the PM's decision: "Okay, I understand there are tons of ways to build it. This is where we're going first. It will answer part of the customers' needs, not all of them, but my gut feeling is that it's the best first step."

We know that we'll need to do a second phase, or we'll release it and then get tons of feedback from current customers saying, "Hey, but I'm using it like this," so we try to build a base, get feedback, and then build the right thing on top of it.

Daniel:

All right. As we're heading toward the end of this interview, I wanted to ask you about some bad experiences you may have had when dealing with customer feedback. What kinds of things do you recommend people avoid doing?

Boaz:

Bad feedback is always coming, and what we usually try to do is learn from it. In really special, unique cases… Let's say you sign a big customer, and after two weeks he's freaking out and he churns. Usually what we do is leverage it cross-company: we build a dedicated team to learn from the process. They collect all the communication points surrounding the interaction, from the lead, to the sales call, to the onboarding session, to his flow in the system, trying to map everything to all the support tickets.

Usually there's a lot of interaction between all the people involved, so they collect everything into a very nice case study. They do a feedback call with the angry customer, and we really believe the customers appreciate it. We tell them, "Hey, we fucked up. Literally, we fucked up. We sold you something that doesn't fit. We made a mistake. We don't want to do it again. Help us learn." Usually they are very happy to help and provide feedback, and they appreciate it.

Then it goes into a cross-company presentation with tasks and to-dos that get into the war plan, where we decide what we need to fix. Sometimes we find that we promised them something we don't have, or the product doesn't work as they need, or they're not the ideal customer, or, I don't know, there are thousands of things. I think you really need to embrace that feedback. This is the most significant learning curve a company can have, and you need to take it in and understand it, not brush it off with, "Okay, he's just a crazy guy. We don't want to talk to him anymore."

Daniel:

Right. As a wrap-up, what would your top recommendations be for product managers that are trying to use customer feedback throughout their development cycle in an effective way?

Boaz:

Number one is to talk with customers. This is obvious, but when we're interviewing product managers here, if they don't ask me, "Hey, are we going to talk with customers?" they're not going to get hired. This is one of the first questions every product manager needs to ask when interviewing with a company: "Can I get a direct connection or a direct channel to customers?" If the answer is no, the company is not heading… usually, the company is not heading in a good direction.

You definitely need to talk to customers, and I think the other side of it is to listen. It's not only to talk; it's to listen, to be quiet, to hear what they have to say, to hear what the company has to say, to understand everything they state, right? To understand all the feedback, all the needs, and not assume that you already understand everything.

With one exception: usually you know where you want to take the product. Customers usually can't envision the future. Even inside the company, not everyone can envision the future. This is part of the art of being a product manager. You need to balance, and you need to be able to take some moon shots, and you need to be ready to fail, right, because it's only gut feeling. If you fail, stand behind it. Tell everyone, "Hey, I thought it would be X, but it's going to be Y. I was mistaken. No problem. Let's kill it. Let's fix it," or whatever.

It's this kind of three-part process: doing the research and understanding what you need to build, where, when, and why.

Daniel:

Perfect. Okay, so thank you very much for your time today. It was really fun talking to you. I hope it was also for you.

Boaz:

Yeah, definitely. It's always fun to think about all the things you do daily; you don't usually get to deep-dive into the why and how.

 
