One of our main goals as Product Managers is to focus on delivering value to our users. But what exactly do we mean by Value? Here’s a definition I like (from Wikipedia):
Value in marketing, also known as customer-perceived value, is the difference between a prospective customer’s evaluation of the benefits and costs of one product when compared with others. Value may also be expressed as a straightforward relationship between perceived benefits and perceived costs: Value = Benefits / Cost.
The customers get benefits and assume costs. Value is thus subjective (i.e., a function of consumers’ estimation) and relational (i.e., both benefits and cost must be positive values).
Within the world of Software Products, we can translate Benefits and Costs to tangible things. Benefits are the features we provide — or better yet, what the user is enabled to do. Costs are what users pay to get to those benefits: the literal money ($) and the time spent interacting with the software.
Throughout their entire relationship with the product, our customers are always doing some sort of cost-benefit analysis in their heads. From the first time they land on our website to when they become regular users, they are actively (or passively) deciding if the software’s benefits are worth its costs. That means that it’s our job as PMs to provide the best Benefits / Cost ratio we can. Using this measure, low scores can guide our decision making: they tell us where it’s worth rethinking what we have, and even where to cut things from the product.
Feature Growth ≠ Value Growth
Working on a new feature means having a good understanding of the user’s needs, defining what the feature should provide, and focusing on how best to deliver it through a great UX. According to our definition, this should lead to added value for the user.
For individual features and early-stage products, this is usually the case. But as a product grows, there’s a natural tendency to accumulate baggage. Creating new features or changing existing ones is an additive process and decisions are often taken without considering their impact on the overall product.
Also, as we grow the product, it naturally accrues additional complexity (cost) for the user, whether through ill-conceived features or simply by having more “buttons to push”. Andrew Chen calls it “Product Design debt” and Jared Spool uses the term “Experience Rot”.
Thus, the value of existing features may actually get diluted as we add new ones. The product’s overall value may be growing as we’re adding new features, but we’re also increasing its costs to the user. Customer value doesn’t grow at the same rate as Benefits.
Being conscious of this trade-off may be the most important takeaway from this article, but wouldn’t it be nice if we could somehow measure it?
Let’s say we get a fixed amount of “benefit units” to score our product. Say, 100 points. How do we distribute them among our features? We could use our product’s overall value propositions and key objectives and score each feature accordingly. This is based on a common prioritization game.
A little-known photo sharing service has this value proposition:
Capture and Share the World’s Moments
Instagram is a fast, beautiful and fun way to share your life with friends and family.
Take a picture or video, choose a filter to transform its look and feel, then post to Instagram — it’s that easy. You can even share to Facebook, Twitter, Tumblr and more. It’s a new way to see the world.
So our scoring could work out to something like this:
- The actual benefit scale you use is not important, as long as it’s limited: you don’t get “infinite points” to distribute across all your features. The whole idea is to force you to think about the relative importance of each feature versus all the others.
- I prefer not to include “trivial” features (e.g. signup, login, forgot my password) in this type of analysis. They’re pretty standard by now and are required for most products. Still, there may be cases where it’s valuable to include them.
- The scoring should follow the same criteria across all features, whether it’s based on KPIs, business objectives, or any other product measure. For any of this (and what follows) to make sense, the criteria must be consistent.
- The scoring should be done by multiple stakeholders (senior management, marketing, support, etc.), and then averaged. This can smooth out some of the subjectivity in evaluating features. I’ll be exploring other, possibly more objective, scoring techniques and will post updates here.
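The scoring exercise above can be sketched in a few lines of code. This is a minimal illustration, assuming a hypothetical set of features and made-up stakeholder scores (not real Instagram data); the only real mechanics are the fixed 100-point budget per stakeholder and the averaging step:

```python
FEATURES = ["take photo", "apply filter", "post", "share externally", "search"]

# Each stakeholder distributes the same limited budget (100 points) across
# the features. The names and numbers here are purely illustrative.
stakeholder_scores = {
    "product":   {"take photo": 30, "apply filter": 25, "post": 25, "share externally": 10, "search": 10},
    "marketing": {"take photo": 25, "apply filter": 30, "post": 20, "share externally": 15, "search": 10},
    "support":   {"take photo": 35, "apply filter": 20, "post": 25, "share externally": 10, "search": 10},
}

def averaged_benefits(scores: dict) -> dict:
    """Average each feature's score across stakeholders to smooth out subjectivity."""
    return {
        feature: sum(s[feature] for s in scores.values()) / len(scores)
        for feature in FEATURES
    }

benefits = averaged_benefits(stakeholder_scores)
```

Because every stakeholder spends the same fixed budget, the averaged scores still sum to 100, so the relative weights stay comparable no matter how many people you ask.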
Going through this exercise is useful in itself, as it forces us to realize how well each feature is aligned with the product’s goals. Doing this retrospectively on an existing product may also lead to interesting conclusions. Anyway, this is just part of the framework. Let’s move on to the Costs side of the equation.
As I said before, Costs are what the user pays to get to the benefits of the product. This may be money, time or any other trade-off. At this level of analysis, it’s best to focus on just the interaction cost, but the model should apply to other cost metrics.
Interaction cost is defined by Nielsen like this:
The interaction cost is the sum of efforts — mental and physical — that the users must deploy in interacting with a site in order to reach their goals.
Any step that the user has to perform for a certain task adds to the interaction cost. Our goal is to minimize it, targeting unnecessary reading, scrolling, clicking, tapping, etc. Information Aesthetics, building on Nielsen’s seven stages of action, summarizes the types of costs we have to consider. They’re discussing interaction costs in Information Visualization, but the list applies (almost) perfectly to any kind of software:
- Cost of Decision: how does the user get to their goal? How clear is it?
- Cost of System Power: there are many options; which one should the user choose?
- Cost of Multiple Input Modes: the confusion that arises from inconsistent and/or multiple interactions (single tap, long/short swipe, button, etc.)
- Cost of Motions: how easy is it to reach and hit a target? See Fitts’s Law
- Cost of Visual Clutter: how much does the user have to scan around to find the content they want?
- Cost of State/View Changes: as the user performs their tasks, the software goes through multiple states; each one has to be scanned to understand where they are
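Taken literally, Nielsen’s definition says the interaction cost of a task is the sum of the per-step efforts it requires. As a toy sketch (the step names and effort weights below are entirely hypothetical), an itemized breakdown might look like this:

```python
# Hypothetical per-step effort weights for one task, each step tagged with
# the type of cost it represents. These numbers are made up for illustration.
task_steps = [
    ("decide which screen to open",  2.0),  # cost of decision
    ("scan a cluttered feed",        3.5),  # cost of visual clutter
    ("tap the camera button",        0.5),  # cost of motions
    ("re-orient after a view change", 1.0), # cost of state/view changes
]

# Interaction cost of the task = sum of the efforts of its steps.
interaction_cost = sum(effort for _, effort in task_steps)
```

As the next paragraph argues, itemizing every step like this quickly becomes impractical, which is why timing the whole task is a useful shortcut.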
Measuring all of these individually would be very tedious for any non-trivial product. Luckily, there’s a shortcut. All of these costs are reflected in how much time the user takes to perform a task. If we measure the average time users take interacting with a given feature, we get our measurement of cost. If you want to get fancy, you could also measure this by user personas or other cohorts.
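The timing shortcut boils down to averaging observed task durations per feature. A minimal sketch, assuming hypothetical timing samples (the feature names and seconds below are invented for illustration):

```python
from statistics import mean

# Seconds each observed user took to complete a task, keyed by feature.
# These samples are made up; in practice they'd come from analytics or
# usability sessions, possibly segmented by persona or cohort.
timings = {
    "take photo":   [4.2, 5.1, 3.8],
    "apply filter": [6.0, 7.5, 6.9],
    "search":       [12.3, 15.0, 13.1],
}

# Average interaction time per feature = our cost measurement.
costs = {feature: mean(samples) for feature, samples in timings.items()}
```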
Continuing with the example above, here are some time measurements I did on myself:
You may note that the first four features are actually part of the same thing: posting a photo taken at that moment. I separated them to better quantify each part of the interaction, both in terms of benefits (previous section) and in terms of costs, because it helps in the final part of the model, as we’ll see in a bit. In any case, this is just a very simple example, and your use cases will dictate how you go about this.
Getting to the Customer Value
So we’ve got Benefit and Cost scores for our set of features. How do we calculate the customer value? Well, we just calculate the Benefits / Cost ratio, which gives us the customer value for each:
In green, we have the “best performing” features with regard to value, and in red, the worst ones. Through this method we may conclude that Instagram optimizes customer value for photo-taking but not so much for searching. Again, this depends on how you score the benefits, but it is a pretty fair conclusion.
This scorecard can help us see where we’re under-delivering customer value and guide our decision-making, which is very useful.
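The scorecard step is just the division and a sort. A small sketch, using illustrative benefit points and timing averages (not real measurements):

```python
# Benefit points (from the scoring exercise) and average interaction
# times in seconds (from the timing measurements). Numbers are invented.
benefits = {"take photo": 30, "apply filter": 25, "search": 10}
costs = {"take photo": 4.4, "apply filter": 6.8, "search": 13.5}

# Customer value per feature = benefits / cost.
value = {feature: benefits[feature] / costs[feature] for feature in benefits}

# Rank features from best to worst customer value to build the scorecard.
ranked = sorted(value, key=value.get, reverse=True)
```

With these numbers, searching lands at the bottom of the ranking, mirroring the article’s conclusion about where Instagram under-delivers value.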
But we can take this one step further. If we think about how each feature connects to others, we can find “value paths” and apply the same sort of analysis to whole areas of our products.
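One plausible way to model a “value path” (this is my reading of the idea, not a prescribed formula) is to treat a sequence of connected features as a single flow: sum their benefits, sum their costs, and take the ratio. The feature names and numbers below are illustrative:

```python
# Illustrative benefit points and average interaction times (seconds).
benefits = {"take photo": 30, "apply filter": 25, "post": 20}
costs = {"take photo": 4.4, "apply filter": 6.8, "post": 3.0}

def path_value(path, benefits, costs):
    """Customer value of a whole flow, e.g. take photo -> filter -> post,
    computed as total benefits over total cost along the path."""
    return sum(benefits[f] for f in path) / sum(costs[f] for f in path)

# The posting flow from the Instagram example, treated as one value path.
posting_flow = ["take photo", "apply filter", "post"]
```

Comparing `path_value` across whole flows lets you apply the same green/red analysis to entire areas of the product rather than individual features.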
This technique is still not battle-tested, but I’ve had good preliminary results. I’d love to hear about your experience applying it to your products. Just get in touch with me or leave a comment below.