You’ve been there. A customer asks for something they consider an easy request, and it’s not in the current product. It might actually be easy, or it might be quite hard – you don’t know yet (though you have a sneaking suspicion one way or the other).
You could say “no, not ever”, or “not yet”, or “absolutely – we’ll do it for you” – there are lots of ways to solve the request side of this equation. Those solutions, however, are intimately linked to the way you go about developing your product features.
Committing to building a feature – whether it’s something you intended to build anyway or a brand new request that fits your product strategy – requires you to define a Minimum Viable Feature. That definition should contain a statement of the problem you’re trying to solve – specifically the Job to Be Done – who the feature serves, and the potential impact the feature creates. It also has to be built in the context of the existing technical capability and business direction of the product.
A Minimum Viable Feature is not just the lowest common denominator of the thing the customer wants you to do and the way you want to do it. It is a carefully considered construction that delivers the job the customer wants to accomplish while laying the groundwork for how similar customers might also want to use that capability in the future. If you put your Future You hat on, you might say that the best feature design helps anticipate and address the future challenges you’ll have while not making people wait until you get there to get 80% of the benefit.
Let’s say you were building an app that let customers tell you about a home improvement problem, and you wanted to get as much detail as possible from them so you could accurately estimate the issue. The simplest solution? Ask them to tell you about the scope of the problem, and perhaps take a picture of their leaky sink. The most complicated solution? Take a video of the sink and automatically diagnose the problem. The Minimum Viable Feature version might be a highly targeted survey that walks the customer through the most common trouble spots for that type of home improvement and then instructs them how to take the most helpful video or picture, so you get the maximum input for their effort.
Your version of the Minimum Viable Feature will differ – but the key is to deliver enough functionality and fidelity to the job the customer wants done while building a path to the future of this feature. The more often you do this and the more specific you are about the customer, the benefit, and the way you’ll know if you’ve succeeded or failed, the closer you’ll get to that ideal.
This essay is written as part of the Startup Edition project – check out the other essays here.
It would be awesome if your first iteration of a minimum viable product (MVP) perfectly addressed your target market segment, delivered great value to your customers, and never needed to change again. That’s not what happens, though. Your first MVP iteration is the beginning of a build-measure-change cycle. When done right, you’ll deliver the product your customer wants to use for the job they want to get done. So how do you find that customer, understand what they want, and deliver that product to them faster?
Finding your customer is the first task in making your MVP less wrong. If you’re baking cupcakes, who buys them? If you’re making software, what is the general profile of the person who needs what you’re offering? And what problem are you solving for that customer? A good problem statement for baked goods might be: “I’m delivering a donut for an underserved market: people with specific allergy needs who like a breakfast snack once a week.”
Now that you’ve made a statement that matches what you think your customer might want, ask them what they actually want. This can take many forms, from informally surveying friends to more formal methods like online surveys, usability studies, and tests. You need to be able to answer the question: what does your customer want? You may find the answer differs from what you assumed your target customer wants. So ask “do you ever eat donuts?” and also “what sort of donuts would you like to eat?”
You can uncover a more nuanced version of this question by asking what your customer needs. Often this need shows up as a pain or discomfort the customer wants to avoid. In our baked-goods example, a customer allergic to nuts might have very strong physical symptoms after eating a product containing nuts – in fact, the wrong choice could be life-threatening for some. Consider how strong that statement is: what does your customer need? Customers display needs differently than wants, so watch what they actually do in a given situation rather than just asking them how they feel. Then, after you observe the need in action, ask them how they would feel if that feature, attribute, or product were taken away. (Would they pay to keep it?)
If you can find a customer, ask them what they want, and uncover some of their needs, congratulations! You’re well on your way to developing your plan for an MVP. So why can you deliver this benefit better than anyone else? A suggestion: you won’t be able to deliver every benefit better than anyone else in the world. So focus on a small (a really small) thing that you can do better than anyone. And soon you’ll understand whether you picked the right small thing to focus on and whether your customer cares that you’re solving their problem.
You should also ask yourself – why is right now the time to deliver your solution? Try to answer the question: what triggers my customer to buy my product to relieve their pain? If you can deliver that benefit at the right time, for the right customer, better than anyone else, you’re getting closer. And if you have managed to avoid “boiling the ocean” by focusing on a small thing that you can measure, test, and learn from, you’ll have an even better chance of making your MVP less wrong. At some point you also need to know whether the combination of the customer’s pain and your solution matches the set of things you can do at a reasonable cost.
How can you make your MVP better? Ask valuable questions of your prospective customers. Acknowledge their needs and wants, and respond by demonstrating that you’ve heard them and delivered something you believe addresses them. And build with the idea that you will measure specific outcomes, learn from the actual behavior of your customers, and then change the MVP to run new experiments that get you closer to being less wrong, quickly.
This essay is written as part of the Startup Edition project – check out the other essays here.
Let’s say you’re starting a new company, feature, or product. You have an idea that you want to test with some beta customers. You’ve done some initial customer development to determine your Minimum Viable Product, and you have some ideas and data about the kind of customer who might use your idea or give you feedback on it. So what would you do today to “get out of the building” (in best Steve Blank Lean Startup style)? One way is to set up a beta program, where you combine your ideas, your preliminary feedback, and some actual people to see what will happen. (The feedback will likely be both sweet and a bit tart.)
What are the goals of a beta program?
At face value, the goals of a beta program seem simple: find some potential customers, ask them to use the product, identify bugs, and get feedback on what’s working and what’s not working in a defined period of time. “Customers” probably look the same as the initial persona you identified during your customer development phase, and since you’re looking for a directional indication at this point, you don’t have to get them completely right (the beta program is an extension of your customer development efforts). “Using the product” and “identifying bugs” work best if you define a few tight scripts at first, so people understand your vision of what they should be doing and not just what it looks like in your prototype. And “getting feedback” means finding out the most important thing to improve or fix at that point in your development process.
The Perils of Talking to Customers You Know
Finding potential customers is the first step, and also not the easiest. When you start, you’re likely to ask people you know – which is great because they will be more receptive and accommodating of problems, and not so great because they’re biased to give you good feedback – so you need a mix of people you know and people you don’t. One way to solve this problem is to ask the people you know to recommend 2–5 people they know who will provide practical feedback and who don’t know you all that well. Once your group reaches about 30 people, you can be more confident that the feedback is at least directionally meaningful.
Practical tip: manage the list of customers in a Google Spreadsheet, recording their name, email, Twitter handle, external ID in your system, “customer type” (this could be ‘experienced, noob, mainstream’ or another taxonomy), the date they joined the beta or the identifier for their cohort group, a comment field, and a “last contacted” date. This lets you model the list of beta customers by cohort, gives you a way to communicate with them, and provides a data structure you can use to pivot their feedback by customer type, external ID, beta cohort, and date range.
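If the spreadsheet starts to strain, the same structure translates directly into code. Here’s a minimal sketch in Python with illustrative field names and a hypothetical pivot_feedback helper – not a prescribed schema, just one way to make the record and the pivot-by-type idea concrete.

```python
# A minimal sketch of the beta-customer list described above, kept in code
# instead of a spreadsheet. Field names and the pivot_feedback helper are
# illustrative assumptions, not a prescribed schema.
from collections import Counter
from dataclasses import dataclass
from datetime import date
from typing import Optional


@dataclass
class BetaCustomer:
    name: str
    email: str
    twitter_handle: str
    external_id: str            # the customer's ID in your own system
    customer_type: str          # e.g. "experienced", "noob", "mainstream"
    cohort: str                 # beta cohort identifier or join date
    comment: str = ""
    last_contacted: Optional[date] = None


def pivot_feedback(customers, feedback):
    """Count feedback items by (customer_type, cohort).

    `feedback` is a list of (external_id, message) tuples; joining back to
    the customer record is what lets you slice comments by type and cohort.
    """
    by_id = {c.external_id: c for c in customers}
    counts = Counter()
    for external_id, _message in feedback:
        customer = by_id.get(external_id)
        if customer is not None:
            counts[(customer.customer_type, customer.cohort)] += 1
    return counts


# Example usage
customers = [
    BetaCustomer("Ana", "ana@example.com", "@ana", "u-001", "experienced", "2013-06"),
    BetaCustomer("Raj", "raj@example.com", "@raj", "u-002", "noob", "2013-06"),
]
feedback = [("u-001", "Love the survey flow"), ("u-002", "Upload button is confusing")]
print(pivot_feedback(customers, feedback))
```

Keeping the external ID as the join key is the important design choice: it’s what connects a piece of feedback back to the customer type and cohort that produced it.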
Using the Product is Not Following The Instructions
It’s tempting to think that customers (any customers) will “read the manual” and follow your instructions to a “T”. Well, when was the last time you read the manual? When assembling your beta group, include people who are likely to follow your instructions, people who are not very imaginative and just want to “hire your product” to do a job for them, and people who want to break your product or will think of unusual ways to interact with it. And if you want really great bug descriptions, make it really easy for customers to provide inline feedback, and prompt them each time for the key details you need to understand what’s going on.
Practical tip: provide instructions for your scenario using multiple learning styles, including listing items in an ordered list (“Step 1, Step 2, Step 3”), asking an open-ended question (“What’s the one thing you’d like us to improve?”), and communicating in other media – for example, asking them to record a Skype conversation or a screencast, or setting up a group Google+ Hangout to discuss the “how do you do it” aspect of your product.
The “90-9-1” rule of participation inequality suggests that a few of your beta participants are going to give you a LOT of information, and the feedback you receive will be overweighted toward the one or five or ten people who feel really passionate in your first group of 100 or so beta participants. So how can you take what they’re telling you and make it more meaningful? First, identify the functional bugs they point out and fix those: these are potential blockers for any new customer. If you’re not going to fix an issue, log it and let the reporter know you’re placing it in a lower-priority queue (and then, once every month or so, prune that queue aggressively to remove the “not-urgent, not-important” items). Second, use a Kanban board or a similar technique to organize the volume and priority of the work (Asana, Trello, Do, Jira, Pivotal, and others are all good for this task). And third, keep sending your beta testers short surveys (1–3 minutes) frequently to see whether what you think you’re doing is actually working for them.
Practical tip: use bug tracking software to manage this process. Don’t reinvent the wheel: if you don’t like Jira, Bugzilla, or another dedicated tool, you can always find a bug tracking template elsewhere. And make sure you record the type of customer you’re solving the problem for, the severity of the issue (does it stop them from using the product, is it a major deficiency, or is it just a nice-to-have), and the priority (get it done now, get it done soon, or log it to see if it becomes more important later).
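For that customer-type / severity / priority taxonomy, a tiny data structure keeps the triage consistent no matter which tracker you end up using. This is a sketch with assumed enum values and sort order, not any particular bug tracker’s fields.

```python
# A small illustration of the triage fields mentioned above: customer type,
# severity, and priority. The enum values and sort order are assumptions for
# this sketch; a real bug tracker's own fields would replace them.
from dataclasses import dataclass
from enum import IntEnum


class Severity(IntEnum):
    BLOCKER = 3        # stops the customer from using the product
    MAJOR = 2          # a major deficiency
    NICE_TO_HAVE = 1   # cosmetic or minor


class Priority(IntEnum):
    NOW = 3            # get it done now
    SOON = 2           # get it done soon
    LOG = 1            # log it and revisit later


@dataclass
class BugReport:
    title: str
    customer_type: str   # who you're solving the problem for
    severity: Severity
    priority: Priority


def triage(bugs):
    """Order bugs so the most severe, most urgent items surface first."""
    return sorted(bugs, key=lambda b: (b.severity, b.priority), reverse=True)


# Example usage
bugs = [
    BugReport("Photo upload fails on iOS", "mainstream", Severity.BLOCKER, Priority.NOW),
    BugReport("Typo in survey step 3", "experienced", Severity.NICE_TO_HAVE, Priority.LOG),
]
for bug in triage(bugs):
    print(bug.severity.name, bug.priority.name, bug.title)
```

Even if you never run this code, writing the severity and priority scales down once means every reported issue gets judged against the same yardstick.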
What’s realistic to expect from your results?
It’s reasonable to expect you’ll get feedback, and that some of your customers will like (or even love) what you’re working on for them. It’s also likely that some will dislike, or even hate, the thing you think is cool. To get the most mileage out of this feedback, use a dedicated email alias or distribution list (e.g. beta-feedback@) to share it with as wide a group as possible within your company. And view the feedback in the context of the type of customer providing it. When positive feedback clusters among multiple people who match the same persona and use the same area of your product, that’s a good sign you’re heading in the right direction. Then use that feedback as a data input for decisions: what kind of company do you want to be? Which customers are you serving, and will you serve them better by improving this feature or fixing this bug? If the answer is yes, it’s time to invest time and money in making that change.