Conversation with Ellen Chisa

Conversation on customer feedback with Ellen Chisa, Founder in Residence at Boldstart. Previously co-founder of the Dark programming language and VP of Product at Lola (her role at the time of this conversation).


Key Points

  • Leveraging people close to you in your research
  • Getting people from your target user audience to participate in research that takes a non-trivial amount of their time
  • Asking unbiased, non-leading questions of users
  • Balancing qualitative and quantitative metrics, and knowing where to focus
  • Usage and other important metrics to track
  • Usability tests before launching
  • The importance of looking at Customer Support data and getting customer-facing teams to think more like PMs
  • Segmentation based on behaviors
  • Getting around not having access to customers

Transcript

Daniel:

Can you tell me a little bit about yourself and your background in product management?

Ellen:

Hi, I’m Ellen Chisa, and my background is that I actually started my career in product management at Microsoft. They called the title Program Management there, but it’s basically the same thing. And I went straight into that from engineering school, and I just always really enjoyed the part of the project that was kind of figuring out what’s actually useful for someone to use rather than what’s cool to build technically. So product management felt like a natural fit. And then after that, when I left Microsoft, I wanted to go somewhere a little bit smaller where I could have more impact.

So I transitioned to Kickstarter when there were about 50 people there. And I was the product manager for the Backer Experience, which was helping to figure out why people were backing projects, what made people wanna support creativity. And then another really important part was the company felt everyone was inherently creative, so it’s how do we get people who backed projects to be starting their own projects and doing their own work? And then, when I left Kickstarter, I went to Harvard Business School for a year to learn a little more about the money-making side of products, because I knew a lot about the consumer side and the technology side. And now, I’m the Vice President of Product at Lola Travel. My role there is I help coordinate the overall product strategy with our executive team and founders, and then I also still do a lot of the day-to-day product work myself with our design and product team.

Daniel:

Awesome. So in broad terms, what kind of customer feedback do you see can… How important is it to your work as a PM?

Ellen:

It’s actually… I think it’s the most important thing. So I did a project with a friend at HBS where we tried to go through and build five products in five days or five startups in five days which was really fun. But we had this fundamental tension where I believed you couldn’t make a good product without going out and doing the user research and spending the time doing the qualitative interviews to understand what the actual user problem was. And she was willing to come up with a hypothesis, and then just test it and see how it went. So it was kind of this tension of, “Do you have to start with the users or don’t you?” And I always err on the side of, “You should definitely start with the users.”

Daniel:

Right. And how do you find out about new things and new problem areas that are important to your users and customers? I mean from an existing product standpoint.

Ellen:

Yeah, I guess it’s a little easier for me right now because we’re so early on, but since I’m working on travel right now, that’s a good place to start. And with travel it’s nice because you can talk about it in any conversation. I can come in and I can talk to you in this interview, and ask you about the last trip you took or the next trip you took, and kind of start to hear about your journey through that process. And people really like telling you about the things that were really awesome about their trips. And people kind of, I think, like the mutual complaining about the parts that don’t work so well, like you’re in the middle seat in the back row of the airplane and the TV is broken and everything’s awful. And so you can learn a lot pretty quickly just by getting a conversation started. At Kickstarter, it was a little bit different, but at least people had heard of it. And I would see kind of their misconceptions if they’d heard of it but not been involved with the platform before. Or if they had backed something, I could talk through their experience with having done that.

Daniel:

Right. And how often would you do this? Daily? Weekly?

Ellen:

Probably daily. I really like it and I enjoy talking about what I work on, so it tends to come up and I end up talking about travel a lot. At least weekly, but I would say probably daily. And then I try to do it more formally, periodically, too. So when we launched the first version of our beta product this time around, I had 10 people in. And it’s not quite usability, because we weren’t at the point where we were optimizing anything or making sure people understood the flow. It was really much more like, “How do people react to this product? How does this fit with how they plan travel now? What do they think about it?” And so we had 10 people in that first week and had them go through the entire process of booking a trip.

Daniel:

Right. That’s interesting. So let me break that down. I mean in terms of interviews, single interviews, what kind of questions do you ask and how do you make sure that you stay out of the process and don’t lead users towards some preconceived idea?

Ellen:

Yeah, I definitely… It was interesting being at HBS last year because they were much more of a school of thought that you should go in and say, “How much would you pay for this product?” Or they would go in and say, “Would you pay for this product?” Or “Would you use this to fix your problem?” And I am very much… I never want to tell the user exactly what’s going on until the end. So, I would rather start with a broad question. So, “Tell me about your last trip or tell me about your next trip,” and then kind of dive into the details there and figure out what stands out to them, and what’s interesting about it. One of the things I’ve definitely come up with is that a big part of people’s trips is they like telling other people about them after, so they won’t just tell me about their trip, they’ll also tell me about telling other people about their trip, which has been interesting to see. And if I had just started asking questions about planning, I never would’ve necessarily got this perspective on the trip retrospective or how people think about sharing what they did.

Daniel:

Right. So on the other side, you were talking about this new beta launch and you had 10 people coming in and giving you some sort of feedback. What can you share about that process?

Ellen:

Yeah, so what we did was we had a bunch of early users who were all friends and family, and I asked… Obviously, it’s not a completely unbiased sample set, but we’re so early on that it’s interesting to see how anyone uses the product and who resonates with it. So I asked our employees, if they had any friends or family who they knew had to plan a trip, to get them to commit to coming in and planning the trip with us while our product and design team and some other members of our engineering team watched. And so what we did was we had a contract. It’s a lightweight contract, but it just explains the process, which is, “We’re not gonna help you as you go, we want to see how you react to the software.”

Ellen:

We’re not trying to be mean; it’s just so they’re a little bit clearer that when they ask us a question like, “Can your software do this?,” and we’re like, “I don’t know. Maybe,” we’re not being rude, we’re just trying to see what the full extent is. And so what we did is we would bring them in, and we would put them in a room, and we would just let them start planning their trip using the app. And we would see kind of how they reacted. And one of the things we learned really early on was that since it was a communication tool, they always needed to know the status of the travel agent they were talking to. And they felt very concerned if they’d sent a message and didn’t hear anything immediately, like, “Is the travel agent still listening? Where did they go?” And we learned that when you have a communication platform, people really need to constantly know the status. It’s a good usability principle in general, but we didn’t realize how much it was going to play into that specific conversational dynamic.

Daniel:

Right. And so, you mentioned that there were friends and family. Do you have any experience around bringing people from outside inner-circles that may be harder to convince to spend some time doing some tests and that kind of thing?

Ellen:

Yeah. I’ve definitely done it other ways, too. So I did one research project on people who had graduated from college and were in their first year of work. And I knew some who were my friends, but I felt like that wasn’t necessarily a representative sample. I went to a tiny engineering school, so everyone I knew had been through this exact same environment, had a job, and was super happy. It was a very bad sample set. And so what we did is we actually kind of created a screener, put it on Craigslist, and screened for some specific attributes, like the types of jobs people were in and how long ago they’d graduated from college, to make sure we had variety. And the way we incentivized those people was actually with money, so we paid them I think $200. And the only thing they were obligated to do was come in and do an hour-long interview with us individually, and keep a journal for a week about a few specific things. And then they came in at the end and did a focus group, which let us see kind of how they thought about the problem over a longer time, what they said to us individually, and how their opinion changed when they were sitting in a room with other people.

Daniel:

Right. Awesome. So say you find out about a new opportunity, a new problem area, for a small segment of your customer base. How do you go about trying to test if it applies to a broader base of your users or even to prospects?

Ellen:

I’m trying to think of a time that this has happened. I think it probably would have been most likely to happen at Microsoft. Okay. This is one I like. 😀 When I worked at Microsoft, I worked on Office Mobile, and one of the things they’d done for mobile was that Windows Phone had come up with this concept called “Panoramas” or “Hubs,” where you would have a set of applications that were related all together in kind of one larger application, and you swiped through multiple screens within the same application. It’s a pretty common UI paradigm now, but at the time it was new and people hadn’t seen it before. It was the one where they had the giant words that stretched across the top to indicate that you needed to swipe to the next screen. And when I started working on Windows Phone… This isn’t great. This is customer research on myself.

I’d switched from Android. And I was trying to cook dinner one night with my boyfriend, and we wanted to make a shopping list. And I pulled out my phone, and I went to “N” in the applications list. And there was nothing that said, “Notes.” And I was like, “Who makes a phone that doesn’t have a notes application? Why does Microsoft do this? Of course, no one uses this phone. It’s missing all this key functionality.” And of course I said this out loud, and Tom, my boyfriend, just stopped and looked at me, and said, “Ellen, you work on the Notes application. It’s in the Office hub.” And so I figured out that I wasn’t using this application at all because it didn’t match my mental model of where it should be.

And then, I started testing that with other people who had a Windows Phone. I would just get them to do it. I would say, “Hey, can you write this down for me?” And I would see what they would do, and a lot of them weren’t ever going to the OneNote application. They were sending emails to themselves. They were sending texts. They were doing lots of other things that didn’t match with the note-taking behavior, in a way that I didn’t see for people who had phones on other platforms. And at that point, I started thinking, “Hey, is this really important internally?” And then I went and talked some with the OneNote team. They also wanted to put their app at the top level, rather than being part of the Office hub, because they had some UI navigation paradigms that would’ve done better in a stand-alone app. But they didn’t have a compelling reason from the user side that was convincing the Windows Phone team, and kind of the insight of, “No really, no one knows there’s a note-taking app on this phone,” helped to move that along.

Daniel:

That’s interesting. Would that happen for users that were coming from different mobile platforms, or even for users that were completely new to smartphones? Because in that case, I would say, is it something that is natural for anyone that hasn’t ever dealt with a smartphone, or is it something that might happen if you’re switching?

Ellen:

That’s a great question. I never checked that because this was in 2010, 2011, so by that point, a lot of people had smartphones, and particularly people who were around me in Seattle definitely did. I would be really interested to see what that answer is.

Daniel:

Yeah. Sure. I’m just curious around that. So let’s move on. Okay. One of the major issues that we have to deal with is we maybe find out there are a few things that we can work on, and how can we try to size the opportunity for each and try to decide where we’re going to bet our work on? Are we doing A, B, and C? Are we doing A and D? So what kind of process do you follow around this?

Ellen:

Yeah. So I think of it in terms of there’s four possible inputs into the process, so there’s the qualitative user feedback that we just talked about a lot, which I rely on the most heavily. Then there’s the quantitative feedback of the metrics and how people are actually using your product and what you’ve been seeing with data, in terms of trends. And then, I think of the third as being the vision of the company and the founders, and I think the first two can sometimes only get you so far. When you think about pretty major companies, they have to make these leaps that a user’s not necessarily gonna know exactly what they want or that they want this great new thing.

And there’s someone out there who has this vision for where we’re eventually going to get to. And I think sometimes you invest in things like that, and I call that the “founder vision” part of the product. I think a lot of early-stage companies, in particular, rely far too heavily on that. It’s not a cop-out for not doing the first two really well. I think it’s just something that you can take into account, especially as you’re further along. And then, the fourth one that I think a lot of product managers don’t look at, but I’ve found makes teams a lot more successful, is what the team actually wants to work on and what the team is interested in. And so, it’s one thing to go through a formal user discovery process with just product and design, but when you think about it, everyone who works on a product tends to have some things they care about, either because of how they use the product or because of other people, and this works better at smaller companies.

For instance, one of our iOS engineers built the functionality where you can take a photo of your credit card instead of having to type the number into our application. That wasn’t something big enough for me to put on our roadmap, especially at an early stage, but he’d done it before and he knew how to do it. He knew he could do it with basically no additional time, since he was already building the manual entry form. So he just went for it. And that’s the sort of thing that I think you can take from your team, even if it’s not necessarily something you’ve pointed them to.

So I guess to go back to the question, I kind of try to balance those four things and figure out which of them makes the most sense and which we’ve been relying on most heavily recently. And so right now, I’ve been spending the most time definitely on that qualitative user feedback and seeing how people are using our product. We have a pretty substantial data team, but we have one engineer in particular now who’s looking more at how we can start seeing how many people are booking trips per day, how long it’s taking us to get those trips booked, and how many things they’re booking. And I wanna start looking more at that data to see, “Okay. We know this is working well for some customers, but is there some numerical way that also correlates with what type of customers those are, and is that an area we want to explore more?” So, I’m gonna rely more on that quantitative analysis for the next set of things.
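As an aside, here is a minimal sketch of what that kind of quantitative pass could look like, assuming a hypothetical log of booking events. The column names and numbers are invented for illustration, not Lola’s actual data or pipeline:

```python
import pandas as pd

# Hypothetical booking log: one row per booked trip.
bookings = pd.DataFrame({
    "user_id":      ["a", "b", "a", "c", "b"],
    "requested_at": pd.to_datetime([
        "2016-05-01 09:00", "2016-05-01 11:30", "2016-05-02 10:15",
        "2016-05-02 14:00", "2016-05-03 08:45",
    ]),
    "booked_at":    pd.to_datetime([
        "2016-05-01 10:10", "2016-05-01 13:05", "2016-05-02 10:55",
        "2016-05-03 09:30", "2016-05-03 09:20",
    ]),
    "items_booked": [2, 1, 3, 1, 2],   # e.g. flights + hotels per trip
})

# How many trips are booked per day?
trips_per_day = bookings.set_index("booked_at").resample("D").size()

# How long does it take to get a trip booked, from first request to booking?
bookings["time_to_book"] = bookings["booked_at"] - bookings["requested_at"]
median_time_to_book = bookings["time_to_book"].median()

# How many things (flights, hotels, ...) are booked per trip, on average?
items_per_trip = bookings["items_booked"].mean()

print(trips_per_day, median_time_to_book, items_per_trip, sep="\n")
```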

Daniel:

Alright. Beyond usage metrics, what other metrics do you try to track? Does NPS make any sense to you? Does some other survey mechanism or are you just focused at the moment or in the past on usage and how that reflects on the revenue and other kinds of metrics for the business?

Ellen:

I think I tend to focus… I’ve never been a huge fan of NPS. I think it’s a tool that you can use, but I don’t think it’s always the best tool. I definitely think a lot about usage, and then the other one I’ve always found to be… So usage, but in particular repeat usage. At Kickstarter I cared a lot about the repeat backer, so someone who’d backed a project, understood the platform, and then decided to back another project. At Lola, that’s more someone who’s booked travel with us and books travel with us again. The average person does not book travel very frequently, so that’s a little bit… not harder to track, exactly. We know when it happens, but it’s hard to know when it’s going to happen, because you might only book a trip every six months, every three months, or even every month if you’re a frequent traveler. So we don’t necessarily know immediately if we’ve stuck with you or not.

And then, the other one I really care about is referrals, I think, especially at the beginning of a product lifecycle. It’s much better if you have someone who wants other people on the product. So a woman I met, she works at Quora, got into our private beta when I reached out to friends, and she booked two flights yesterday, and then a hotel. It’s great when people are able to book multiple parts of their trip. And then she reached out to me and said, “Oh, can I get my boyfriend and my other friend who travels all the time into this? This is great. I really want to be able to use that tool.” And that’s neat for me because my users are helping to qualify other users who would be people who would really like this product.
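To make the repeat-usage idea above concrete, here is a small sketch with made-up booking history. Under this simple (assumed) definition, the repeat-booker rate is just the share of users with more than one booking so far:

```python
import pandas as pd

# Hypothetical booking history: one row per (user, booking date).
bookings = pd.DataFrame({
    "user_id":   ["a", "a", "b", "c", "c", "c", "d"],
    "booked_at": pd.to_datetime([
        "2016-01-10", "2016-04-02", "2016-02-20",
        "2016-01-05", "2016-03-15", "2016-06-01", "2016-05-11",
    ]),
})

bookings_per_user = bookings.groupby("user_id").size()

# A "repeat" user is anyone with more than one booking so far. Because trips are
# infrequent, a single-booking user may simply not have needed to travel again yet.
repeat_rate = (bookings_per_user > 1).mean()
print(f"repeat-booker rate so far: {repeat_rate:.0%}")
```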

Daniel:

Right. You mentioned a couple of metrics there that are very product-specific. It plays to the mechanics of your product. How did you find out about those? Is it something that you learned through qualitative research and you say this is the pattern of usage that we would like to see and then, figure out a metric that can reflect that? Or, is it something that you discover, somehow, in some other way?

Ellen:

I think you can do it one of two ways. I think, as someone who really likes qualitative research, I tend towards matching the pattern and figuring out how I can numerically map that pattern. I think the other way you can do it is figure out what thing moves your financials and come up with a metric that correlates more closely with whether you make money or not. The thing I don’t like about metrics is that, especially as your company gets larger or if you tie people’s personal assessments to metrics, people start to try to game the metric, and I never want that. I want people doing what’s best for the business, not trying to game a metric to make it look a certain way, and I think you have to be conscious of that when you’re deciding how to use them.

Daniel:

Cool. Yeah. Okay. So, we already talked a little bit about this but as you’re building the product, what kind of input do you seek from your customers? Is it usability? Do you show prototypes at some stage? What kind of process do you try to follow as you’re trying to build it to make sure that it’s built right?

Ellen:

Yeah. We definitely do usability. We definitely do the qualitative testing. We haven’t done as much with prototypes lately, but I think it makes the most sense to build a prototype if it’s something that’ll be a substantial investment in time that you can’t figure out from something that’s less work. If there’s not that much engineering work, it doesn’t really matter if you do it or not. So I don’t think we’ve gotten to the point where there’s something where I feel a prototype would save us a large amount of time. We have done some internal prototyping, especially for motion studies. Putting motion into apps is often very complicated, but one of our UX designers actually does quite nice motion design, and he’s done some experiments on brand feel and other things, like “this is how our motion should feel,” and in that case he prototypes, and that’s been really helpful.

Daniel:

Cool. Have you done remote usability testing? I’ve heard mixed opinions on that, because you don’t get the visual feedback from people, or the live comments you’d get when someone is co-located with you.

Ellen:

I haven’t done it yet. I definitely try to get sessions recorded so other people can watch them after the fact. I have nothing against doing remote usability testing. I think it’s just because I’ve done in-person usability testing that that feels more achievable to me, but I really should spend more time researching it. Someone recently recommended a tool called dscout to me, which is a remote usability testing tool that allows you to track a person over time, so it’s more like the journaling method from analog usability research, and I’m really interested in that, so I’m gonna explore it soon.

Daniel:

Okay. Interesting. Let me move on to when the product is running. It’s being used and it’s producing a lot of quantitative and qualitative data. How do you decide where to focus and where to look at?

Ellen:

I think it’s what’s gonna move the needle the most. So, what’s going to make the biggest difference for users. And I think, there’s a bunch of different angles you can go with it. I think that’s the big vision part. Like, once something’s working, you can either do the user interviews and figure out what users want next, or you can say, “This is our vision as a company. This is where we’re going to go next.” Or, you can look at things that are broken. I really hate rough edges in products. One of my favorite things, I did at Kickstarter, was I reordered the navigation items in the top navigation menu. Because it was like, you clicked into one and then the navigation menu on the sub-page didn’t match the global navigation. And it drove me insane.

😀

So, fixing things that are inconsistent, or buggy, or just like, well, the details didn’t get hammered out in the crunch for launch. Or quantitatively, if you think you can optimize something. And I think it kinda depends on what stage you’re at with your product. If you think about some of the big, well-established companies. Of course it makes sense for TripAdvisor, Amazon, or Facebook to be optimizing something, because they have enough users to be able to do that pretty quickly. In my private Beta, optimization is not even on the radar. It’s not something anywhere near where we are right now. We could make a much bigger difference by adding new features or talking to users than we can by being like, “Let’s optimize our little checkout flow.” It just doesn’t make sense for us right now.

Daniel:

Right. When you were on Kickstarter, there were already a lot of users, right? It’s a huge platform. And what do you do with unsolicited qualitative feedback that might come in?

Ellen:

I think one of the things I’ve learned over time is that often, unsolicited qualitative feedback is not what the product manager’s thinking about at all. Like, yeah. At Kickstarter, a lot of it was people who really just needed advice for their project and wanted to talk to our community team. And they might request something as feedback, but, at the end of the day they didn’t… They would ask for FAQs and we had so many FAQs. If you go to the Kickstarter website, there’s so much content about what a project is about. In my mind, there’s probably almost too much. There’s a ton. Because we really want to reassure people. But at the end of the day, people would ask for things like that not because they really needed another FAQ, it’s just like, they wanted to talk to a human.

Daniel:

Right.

Ellen:

And that was what they needed to do. And so, I think I found that over time. I’m usually in this world of, “Oh, this is going to be the future,” or “Oh, here’s this nitty gritty product problem.” Like, this little navigation that is inconsistent. And no user spends all their time thinking about how their navigation is inconsistent unless they’re a product manager.

I just went on vacation with my family, and we went scuba diving. And my dad got these brand new scuba diving computers from this company Aeris. And they’re new. And to me, they feel like beta software. Because I’m like, “Well, you have this apps menu, and you have these settings menu, and this is why it’s inconsistent, and this is why your buttons should be different. And this is why you need to have the back button be the same every time.” And I’m thinking about all these super detailed things about the UI of this dive computer, and no one else is. Everyone else in my family’s like, “Oh, there’s a screen. This is way better than the old dive computer.” And I’m like, freaking out about all this information architecture. So I think I’ve learned that a lot of times, the place to go when you’re trying to get feedback on what you’re working on that’s like, next level, isn’t necessarily the qualitative feedback that’s coming in unprompted, that’s feature requests. And those are kind of a completely different signal that I analyze in a different way.

Daniel:

Right. And how does that work for you? How do you analyze that signal?

Ellen:

I really think product managers should do Customer Service. I think you should definitely learn how. I think a lot of onboarding for PMs should be figuring out how the Customer Service team is trained, and doing that, to understand how people are actually experiencing the product. And so then, I think the important thing to do is to start kind of bucketizing it and deciding what you can address and what you can’t. So for me, the concept of, “Oh, people at Kickstarter want to talk to a human.” There are things we could do to make that more technologically enabled, but that’s not really what the company’s about. And we have this whole staff of humans who are ready to meet in person, and talk on the phone, and work with all these people, and you can just write to them.

Ellen:

They’re already pretty accessible. So that might not necessarily be the thing that makes the most sense. That would be a huge change. So that’s a bucket where I’m like, “Okay, this would be a big investment.” And they did end up coming up with a product that kind of works like this. It was called Campus. And it was a way kind of for creators to talk to each other about their projects and get resources together, so they could talk to a human who wasn’t necessarily one of the small staff we had. And that eventually happened. But since that wasn’t necessarily the project that was in my domain, as someone who’s working on the backer experience, I would set that chunk aside. And then kind of go in and be like, “What do people ask for on the discovery side?” And hands down, what people wanted on the discovery side, was they wanted the ability to say, “See art projects in Seattle.” Or like, “Food projects in Nashville.” And they wanted this ability to find what they specifically were looking for in their area. And we didn’t have that. Because you could look at all food or all Nashville. But not food in Nashville.

Ellen:

And when people said that, I understood what… They didn’t necessarily only want the site to have food in Nashville. They wanted this ability to look for things in a more granular level. And we got a similar set of feedback from creators, where they wanted to be able to look at projects like theirs. To see like, what went well and what went badly, and how they could learn from them. And so that was part of what led to us building the tools for advanced discovery, where people can actually go through and do all of that now to a much more detailed level. Where they can find projects in Nashville for food that raised less than $5,000 and reached their goal, for instance.

Daniel:

Right. So one of the things I see happening is that Customer Support, Customer Service, usually needs an ongoing conversation to go from the feature request to understanding what the underlying need actually is. And you talked about how you found out about the ability they wanted in the product, which is a representation of the need. And since the product manager cannot always be doing Customer Service and looking through that, do you get Customer Service to be more PM-like? Or, in some way?

Ellen:

I’m a huge fan of getting teams to be more PM-like. So what I did recently is, we’re building this internal tool. We have travel agents in-house. So in addition to building the private beta of the iPhone app that our users use, we also have this entire web console for our travel agents. And so they were filing bugs, but they’re trained as travel agents, they’re trained as customer service people. They’re not trained in the rigors of bug triage. So I sat down with one of our travel agents… They’re very smart. They do a lot of service, so it’s a similar personality type to the customer service people that I’ve worked with. So I sat down with one of them and I kind of just walked him through how I would report bugs, how I thought about bugs, and how repro works, the reproducibility of a bug. And I was like, “Okay, if you find a bug, try to do all these things, and then test it against this dev build, and then test against this, and then file the report.”

And that ended up saving a bunch of engineering time, not just because he’s able to do that with the bugs, but because by doing it he’s slowly taught the other travel agents how to do it as well. And they’ve started picking up those good habits. I’m not saying that everyone’s an expert bug reporter now, but we’re definitely moving in the right direction, and that’s been really productive.

Daniel:

All right. So when customer requests come in, they usually require some sort of answer. Do you have a system for that? I mean, do you follow up with all of the customer requests that might come in? Is it something that the PM does? Is it something that the customer service team does? Or how does that work?

Ellen:

I think it depends on the customer and how it gets addressed. In companies I’ve worked at that had a customer service channel and a customer service function, the feedback was handled by customer service and they responded. And then I would either go through and look at the tickets they’d tagged that way, or they would tell me about it and give me a summary, as we worked together for longer and they started figuring out what I was interested in and why. And actually, one of the customer service people at Kickstarter started pulling data for me on the things he thought I would find interesting, and he ended up moving into a product role after I left there, which is neat to see.

And then, the other side is… especially when you’re early on, like where we are now, a lot of our beta users are people who know me, so they’ll just email me personally, and I follow up with all of those people.

Daniel:

Right. Okay. So, another thing is all this qualitative data that comes in. Do you have a personal system to organize it and search through it? Do you go back in time and look at things from the past, or is it something that you kind of discard over time?

Ellen:

I have a pretty good memory, so I think what ends up happening is I start to build a system in my mind based on what all the feedback is. So basically at this point, my roadmap… “Roadmap” is a pretty rough word for it; I just have this Google doc. It’s all of the things I might like to build someday. And then I also have more rigorous roadmaps for the short term, for what’s coming up soon. But I have the “in five years, here’s all the random stuff we’ll have in our product that will be cool” doc. And I kind of start to build a system of what direction we’re going in, and then when we start thinking about building a specific thing, I think back and I’m like, “Okay, who wrote about this, or what did they say about this?” And then I’ll go back and dig through either the emails or the tickets about that.

Daniel:

All right. Okay, so let’s move on to a different subject, which is: how do you try to segment your users and your customers in a way that’s relevant to the feedback that you’re getting?

Ellen:

Yeah, one of the things I feel pretty strongly about is I’m not a huge fan of demographic segmentation. I just don’t think that’s necessarily… I think it’s sloppy, almost. I think it’s an easy way to claim, “Oh, millennials, I’ll do this,” or, “Oh, old people, I’ll do this.” And I just don’t think it makes sense. So I try to segment much more based on behaviors. For instance, one of the things I think about as we’re working on a mobile product, and part of this is because there’s been a shift in the landscape where people are planning and booking more and more travel on mobile, but our interface right now is primarily chat-based. And I think of that much more as people who prefer chat as their communication mechanism, because at this point basically everyone can text; if you look at text penetration in the market, it’s like 97%.

But there’s a huge difference between my dad, who texts me when I text him to say, “Hey! My plane landed,” and he says, “Okay,” and most of our communication is not over text message, as compared to my fiancée and I, where we just text all day. That’s our primary communication method, and by the time we get home we both know what happened all day because we’ve been texting about it.

Daniel:

Right.

Ellen:

Which is really different from when I go home and visit my parents, like once every couple of months; I actually get in the car and then I do the full hour-long update on the last two months. So it’s a very different experience. One of the things I’m looking at is how people respond to this experience if they’re the type of people who prefer to text, or if they’re the type of people who would prefer to wait and have that face-to-face meeting, or who would prefer to wait and talk on the phone and bulk their communication together.
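A rough sketch of how behavior-based segmentation like this might be computed, using hypothetical chat-event data and arbitrary thresholds rather than any real model: instead of bucketing users by age or other demographics, bucket them by how much they actually use the chat channel.

```python
import pandas as pd

# Hypothetical in-app chat events: one row per message a user sent.
messages = pd.DataFrame({
    "user_id": ["a"] * 40 + ["b"] * 3 + ["c"] * 12,
    "sent_at": pd.date_range("2016-05-01", periods=55, freq="D"),
})

msgs_per_user = messages.groupby("user_id").size()

# Segment by observed chat behavior, not demographics (thresholds are made up).
def chat_segment(n_messages: int) -> str:
    if n_messages >= 30:
        return "chat-native"       # texts all day; chat is the primary channel
    if n_messages >= 10:
        return "chat-comfortable"  # uses chat, but batches communication
    return "chat-reluctant"        # prefers phone or face-to-face updates

segments = msgs_per_user.map(chat_segment)
print(segments.value_counts())
```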

Daniel:

Right. So how do you find out about those behaviors? I mean, for something like Kickstarter, there are already different jobs: people that are backing projects, people that are promoting their own projects… the creators. How would you try to find out if there are subsegments within that, and chart the different behaviors that might exist in a user base that’s somehow already segmented?

Ellen:

Yeah, you can. It’s actually… It’s really cool. So on Kickstarter there are all the different project categories, and you can see different behaviors emerge in different categories. So one that people talk about a lot is this concept of a stretch goal, which is: you met your original funding goal on Kickstarter, but if you raise this much more money, you’ll get something else.

If you look at it, and I haven’t worked there in a couple of years so I have no idea what the data says… I don’t have any data on it, but if I qualitatively go to the website and look around, my gut instinct says that stretch goals are far more successful in games projects, because these are people who already think about games and the game mechanics of motivating people. And stretch goals are kind of like a game, like your friends kind of support this thing, whereas a stretch goal on a poetry project doesn’t have that same tie-in that really resonates with that category. Another thing you see is that film projects tend to have really good Kickstarter videos, and I would guess that if you had an independent film scorer rate the videos for film projects compared to other categories of projects across the board, film would be higher, because those people are filmmakers and they care a lot about film. So you kind of have this natural segmentation that falls out based on having those different creative categories.
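If you did have the data, a gut check like that could be a simple category-level comparison. The sketch below uses invented numbers purely to illustrate the shape of the analysis, not Kickstarter data:

```python
import pandas as pd

# Hypothetical project data (made-up values, purely illustrative).
projects = pd.DataFrame({
    "category":         ["games", "games", "games", "poetry", "poetry", "film"],
    "had_stretch_goal": [True,    True,    False,   True,     False,    True],
    "hit_stretch_goal": [True,    False,   False,   False,    False,    True],
})

# Of projects that offered a stretch goal, how often was it reached, by category?
with_goal = projects[projects["had_stretch_goal"]]
stretch_success_by_category = with_goal.groupby("category")["hit_stretch_goal"].mean()
print(stretch_success_by_category)
```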

Daniel:

Alright. There’s this issue… I don’t know what you can share around this but a lot of PMs seem to have actual difficulties in reaching customers due to organizational reasons or whatever. I mean, it’s usually a cultural setup or a functional setup that doesn’t let them have direct access to customers. Have you had any experience with this and how could they try to overcome this situation?

Ellen:

I don’t wanna work at those companies. I find it very stressful. That was definitely the case at Microsoft. The nice thing about working at a smaller company is you can use your own judgment. So like when I worked at Microsoft, and all of my friends did, there were certain things people couldn’t talk about even to their parents or their friends. Whereas now, working at Kickstarter or working at Lola, it’s not really a big deal. Like if I wanna show one of my friends our app in a bar, no one’s secretly following me around trying to find out what I’m working on. It’s not like I’ve got the next iPhone. It’s not a big deal. And I found that really nice because it makes it much easier to do user research. But if you’re in a situation where you can’t, you kinda try to do “guerrilla user research,” which I think was the thing with the OneNote application, where I’d ask people to take notes and see what they did. I wasn’t telling them what I was doing.

It wasn’t like, “I think we have this information problem.” It was kind of more of a, “Let’s see what people do and see how they respond.” But it can be really challenging. I think… Man, I think you might get more bang for your buck in those scenarios by actually trying to change the culture, trying to figure out why people are against doing research, and seeing if you can do something small-scale. Or even if it’s not… It depends. So, if it’s an organization that does not culturally support user research, but would not fire you for doing it, I would try to run a really small study and then show people what you learned, where it’s like, “I had these 10 people in. These are the things we took away. This is why we should act in this way.” Of course, there are large companies where if I had just brought 10 people into Microsoft and showed them beta software, I would’ve been gone, I think, immediately.

😀

And in that scenario, I think you have to figure out how you can take advantage of anything that is available. So at Microsoft we had really rigorous usability studies, but they were kind of at the end of the cycle, so they weren’t for that upfront customer development. But then I started trying to get a feel in those usability studies for what we should do for the next cycle. And I mean, it wasn’t ideal because I didn’t get to ask them a ton of questions, but it was better than nothing.

Daniel:

Right. Cool. It may be that you’re trying to target customers, and you actually have access to them, but they’re maybe too busy or not interested in helping you out in some way. What can you do to bring them over to your side?

Ellen:

I think… Well, I guess the last ditch resort is you incentivize them financially.

Daniel:

Right.

Ellen:

I think the other one is you incentivize them with some other benefit, like, “Oh, in addition to coming and talking to us about the product, you’ll get maybe something in the product that’s special.” People are motivated by early access to things, which I really like. I mean, I like early access to things. I’m totally motivated by that. I’ll take all the surveys in the world for early products. So that’s one. Or, if they’re people you know, you can talk to them about why it’s important. But I think it’s really about finding something that makes them excited to be the first person trying it, so they’re excited to work on it with you. If they’re not that type of person, I don’t know how much luck you’re gonna have trying to persuade them to be part of the process without incentivizing them another way.

Daniel:

Okay, so I’m getting to the end of my questions here. So, would you mind sharing some bad experiences that you’ve had trying to get some customer feedback and things to avoid that you would say to other PMs?

Ellen:

Try not to just be looking to confirm your own biases, and I think that’s always hard. It’s really easy when you wanna build a feature to go out and look for all the people who support building that feature, but that’s not necessarily the right thing to do. You wanna look at your overall user base, not just try to support something that you believe in. And if you really just wanna build a feature because you believe in it, try to figure out why. Is it because it’s your personal use case, or is it because you actually think it would be good for users? At which point, why do you think it would be good for users? And kind of try to disentangle that. I think it’s really tempting, especially the more layers of management you have, or if you have a founder you have to work really hard to convince, to start to put together these arguments for why things are a good idea, and the argument becomes more of what you’re doing than the actual product idea. And I think that’s really dangerous.

Daniel:

Right. Okay, so as a wrap up, any other comments and recommendations that you would have for people that are trying to make sense of customer feedback and trying to make better decisions for their product?

Ellen:

Yeah, I just really can’t overstate how important the long-term qualitative interviews are. And I would also say getting a set of people who’ve been using your products since the beginning and know how it’s changed, and know how their behavior with it has changed. So it’s good to get fresh users to do research with, but it’s also good to have some people that you trust, who’ve been using the product for a while, who you can vet things with as well.

Daniel:

Cool. Well, yes, that’s it. Thank you very much for your time today.

Ellen:

Thanks for having me.