With the popularization of lean startups, minimum viable products (MVPs) have recently entered the business and software lexicon. Who can argue with building no more than you actually need?
Many people seem to interpret MVP as the first iteration of their product. Once they build that version, they can add more features, and users of the product will be even happier than before. Businesspeople sometimes talk about needing to build an MVP so they can launch and raise more funding.
If you are building out half of a product as your first stab, you might as well just call it version one or iteration zero or something like that. No sense in polluting the MVP term.
In this article, I will argue that most so-called “MVPs” are not really MVPs because they are not focused on the process of learning, and are, as a result, wasteful. I think there is a lot of value in not trying to build too much; that low-hanging fruit likely accounts for the proliferation of the term. But much of the value of an MVP lies in testing the risky assumptions every startup has.
Definition of minimum viable product
Well, what is a minimum viable product, anyway?
A Minimum Viable Product has just those features that allow the product to be deployed, and no more. The product is typically deployed to a subset of possible customers, such as early adopters that are thought to be more forgiving, more likely to give feedback, and able to grasp a product vision from an early prototype or marketing information. It is a strategy targeted at avoiding building products that customers do not want, that seeks to maximize the information learned about the customer per dollar spent. “The minimum viable product is that version of a new product which allows a team to collect the maximum amount of validated learning about customers with the least effort.” The definition’s use of the words maximum and minimum means it is decidedly not formulaic. It requires judgment to figure out, for any given context, what MVP makes sense.
An MVP is not a minimal product; it is a strategy and process directed toward making and selling a product to customers. It is an iterative process of idea generation, prototyping, presentation, data collection, analysis, and learning. One seeks to minimize the total time spent on an iteration. The process is iterated until a desirable product-market fit is obtained, or until the product is deemed to be non-viable.
Wikipedia on MVPs, all emphasis mine
The reason landing pages are so popular as a form of MVP is not that they are the easiest thing to build. Oftentimes they are very easy to build, but that is not the whole reason. The reason is that they often give good bang for the buck (or time spent, ROI, etc.) for your current assumptions. With a landing page, you can test whether people understand the idea you have, collect metrics on the best ways to attract users, and learn whether anyone at all will sign up.
Yes, at certain points, your MVP might actually be a landing page with a value proposition and a way of learning from it. It might be going to a bus stop and convincing people to get in your car to test a new carpool web app idea. Sometimes it’s a super-limited version of your product, meant to test a set of assumptions. It could be a paper prototype that you show to earlyvangelists to talk about your value proposition. It might be you just pretending to be a magical algorithm that solves your customers’ supposed needs.
You should start with the riskiest assumptions that you can test and try to make them fail. Here is why you should start at the bottom of the risk validation pyramid.
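To make that concrete, here is a minimal sketch in Python, with entirely hypothetical assumptions, probabilities, and costs, of how you might order assumptions so the riskiest, cheapest-to-test ones get attacked first:

```python
# Hypothetical assumptions for the carpool web app idea, each with a
# guessed probability of being wrong and a rough cost (in days) to test.
assumptions = [
    ("Commuters will ride with strangers", 0.7, 2),
    ("People will install yet another app", 0.5, 5),
    ("Drivers will accept in-app payment", 0.3, 10),
]

# Attack the assumptions with the highest risk per day of testing first.
for claim, p_wrong, cost_days in sorted(
    assumptions, key=lambda a: a[1] / a[2], reverse=True
):
    print(f"{claim}: risk per day = {p_wrong / cost_days:.2f}")
```

The point is not this particular scoring (any crude ranking will do); it is that the ordering is made explicit before any building starts.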
What do you want to learn?
Here are my concerns when the term MVP is used loosely:
- there is little emphasis on what assumptions the MVP seeks to [in]validate,
- there are no clear success or failure criteria, and
- as a result, an easier way to learn might be overlooked.
Here’s how Eric Ries frames this anti-pattern:
Most entrepreneurs approach a question like [“how many customers will sign up for a free trial given what we believe is enough information?”] by building the product and then checking to see how customers react to it. I consider this to be exactly backward because it can lead to a lot of waste. First, if it turns out that we’re building something nobody wants, the whole exercise will be an avoidable expense of time and money. If customers won’t sign up for the free trial, they’ll never get to experience the amazing features that await them. Even if they do sign up, there are many other opportunities for waste. For example, how many features do we really need to include to appeal to early adopters? Every extra feature is a form of waste, and if we delay the test for these extra features, it comes with a tremendous potential cost in terms of learning and cycle time. The lesson of the MVP is that any additional work beyond what was required to start learning is waste, no matter how important it might have seemed at the time.
Eric Ries, The Lean Startup, pages 96-97
Let’s pretend you have an idea for a software product. You think through all of the different features, consider what people would most like, and select what seems to be the most valuable, easiest-to-build, and most coherent subset of those features to build in a month. Then you build those features. You launch the product, and no one seems to be interested. What do you do?
If you create something and don’t have a good way of learning from what you are doing, your options boil down to:
- Retry: Change the product in some way and try again. Maybe it was that non-essential feature that you left out of the last release.
- Travel: Pivoting (another often imprecisely used term) is moving in a slightly different direction with one foot grounded in learning. Traveling is heading in some direction with the product or feature without having validated your hypothesis.
- Fail: Quit without having learned much. Try another idea.
(I originally thought of this in terms of abort, retry, fail, but since the failure of that error message centered on the confusing nature of its words, I decided to make these a bit clearer instead.)
All of these outcomes are undesirable due to the amount of waste involved (some sum of human energy, money, and time spent without much learning). Again, this probably stems from not testing some risky hypotheses at a small scale.
Poorly defined expectations lead to fuzziness at the time you most need clarity. When an experiment is done, you should have a clear sense of “is this the outcome that I wanted to see or not?” If the answer is a clear no, you can think about what you might need to do to get a different outcome. If the answer is yes, or better than you expected, then you can continue with confidence. If you don’t say up front what customer actions you expect, you’re left with lukewarm results that anyone can interpret in any way.
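As a sketch of what saying it up front might look like, here is a hypothetical experiment record in Python; the names, metric, and threshold are all invented for illustration. The success bar is declared before any data comes in, so the yes/no answer afterward is unambiguous:

```python
from dataclasses import dataclass

@dataclass
class Experiment:
    """A hypothesis paired with a pass/fail bar set before the test runs."""
    hypothesis: str           # the risky assumption being tested
    metric: str               # what we will measure
    success_threshold: float  # declared up front, not after seeing the data

    def evaluate(self, observed: float) -> bool:
        # A clear yes/no: did the observed value clear the pre-set bar?
        return observed >= self.success_threshold

# Hypothetical landing-page test for the carpool idea above.
exp = Experiment(
    hypothesis="Commuters will trade an email address for early access",
    metric="visitor-to-signup conversion rate",
    success_threshold=0.05,  # 5%, chosen before launch
)

print(exp.evaluate(12 / 400))  # 12 signups from 400 visitors (3%) -> False
```

A failing result here is not lukewarm: 3% against a pre-declared 5% bar is a clear no, and you know it is time to rethink rather than rationalize.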
The overhead of learning
MVP, despite the name, is not about creating minimal products. If your goal is simply to scratch a clear itch or build something for a quick flip, you really don’t need the MVP. In fact, MVP is quite annoying, because it imposes extra overhead. We have to manage to learn something from our first product iteration. In a lot of cases, this requires a lot of energy invested in talking to customers or metrics and analytics.
Eric Ries on MVPs
I like this quote because it introduces the idea that thinking about what we want to learn is critical when we build. The build-measure-learn (BML) loop is how things play out in time. However, we should first focus on what we want to learn, then on how we are going to measure it, and let that dictate how we build what we are going to build. The BML loop should be thought through in reverse to ensure that the experiment results in learning. The quicker we can get through that cycle, the faster our startup moves. Without learning, we aren’t really going through the cycle, and as such, are cutting out the feedback portion of the feedback loop.
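One way to picture thinking the loop through in reverse, again with purely hypothetical content, is to fill in the plan learn-first and only then execute it in the forward build-measure-learn order:

```python
# Plan in reverse: the learning goal determines the metric, and the
# metric determines the smallest thing worth building.
plan = {
    "learn": "Do commuters value guaranteed seats over lower prices?",
    "measure": "click-through rate on two landing-page variants",
    "build": "a landing page with an A/B test, not the booking engine",
}

# Execute forward: build -> measure -> learn.
for step in ("build", "measure", "learn"):
    print(f"{step}: {plan[step]}")
```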
The key questions
So here are my new questions for MVPs. If someone says they intend to “build an MVP” (the build part itself might be a tell), I am going to ask:
- What are you trying to learn with this particular MVP?
- What data are you collecting about your experiment?
- What determines the success or failure of the experiment?