The Definitive, Final, Real, True Definition of an MVP
For the past several months, we've been helping some of our clients through the process of defining and designing MVPs for their software projects. If you're unfamiliar with the term, MVP stands for Minimum Viable Product.
Not long ago, I was putting together an MVP planning workshop for a client, and I searched around to see what kinds of things other people were doing that might be helpful.
As I did, I noticed a pattern:
There were several different definitions of what constituted an MVP
Many people who offered a definition of MVP had a strong opinion that everyone else's definition was wrong
You could see it in blogs. You could see it in Twitter wars. You could see it in witty memes. An MVP is a login screen. An MVP is a few features of every part of an application. An MVP is a fully built single portion of an application. An MVP is an ordering page for your application that hasn't even been started yet. "That's not an MVP. This other thing is."
This landscape has got to be confusing for anyone considering trying to get an MVP out the door for the first time. Any given Google search will yield dozens of people telling you to do different things and assuring you that everyone else is steering you wrong.
What accounts for this? Where are these different ideas coming from and, more interestingly, why is everyone so sure that their definition is the only one that works?
The reason is that the definition of an MVP is relative to the question you're trying to answer.
An MVP is the smallest thing you can produce that will give you the data you need to answer a question or verify a hypothesis.
Since different organizations are answering different questions or testing different hypotheses, the scope and contents of their MVPs will be different, and other scopes and contents would be totally wrong for them.
So, what does this look like, practically speaking?
MVPs Have Little Reason to Exist Apart from a Question/Hypothesis
The whole point of an MVP is to get answers with the smallest level of investment you can make. It's a way to control risk. You don't want to spend a year building a product only to discover that nobody wants it. You don't want to create new software for your business only to find out that it makes everyone less productive.
It would be nice to learn all these things after a very small investment of time and money, and that's the situation the MVP is designed to address.
Before you consider an MVP, you need to decide what you're trying to find out. If you don't have a question or a hypothesis, you don't need an MVP. In fact, you'll probably struggle mightily trying to define your MVP because you have no goal for it.
MVPs are not best practices. They're not "the way you do Lean business." They're definitely not something you do just because you read about it in a book or a coach told you to do it. They are a tool to help you gain knowledge in the most efficient, least risky way.
So, the first step to defining your MVP is defining what you're trying to figure out.
Will people buy this product?
Will this product make us faster?
Can I use this product to do my job?
Those are all different questions and all suggest different MVP scopes and content.
Your MVP is Defined By Your Question, Not an Objective Definition That Applies to Everything
Let's say you're trying to figure out if people will buy your product. What's the smallest thing you can produce to get a reasonable answer to that question?
Well, "the whole product" isn't the answer. Is it a very light version of the product? A Phase One version?
It might be that. But if you think about it, you can probably get a decent answer to your question without building any of your product at all.
You could make mockups of screens and see if potential clients are interested. You could set up a Kickstarter page and see how much funding you get. You could create an ordering page in your online catalog and backorder everything and see who buys before you even build anything. You could send out an email survey.
Any of those things might be sufficient to give you the information you need, and you didn't need to build a single feature. Note that all those things I mentioned are products. You produced them. They just aren't THE product that you're wanting to sell. All those things I mentioned could be MVPs specific to the question: will people buy the product I'm planning to build?
Let's say you're trying to figure out if the new software product you're designing for a line of business will help those employees be more efficient. What's the smallest thing you could build to learn that?
Well, one thing you could do is take their simplest business scenario and build only what you need to satisfy that scenario, then see if it makes them faster (or whatever benefit you're hoping to get). If it does, great! You know you can keep heading in that direction. If it doesn't, then you know you need to pivot. Maybe tweak the performance or make the screens more usable. You could even just drop the project altogether until you understand the needs better and use those resources somewhere else.
But as you can see, "The features needed to accomplish our simplest operation" and "a Kickstarter page" are very different products. Yet both of them are MVPs because, you guessed it, they are the smallest thing you could produce to answer the given question or test the given hypothesis.
Neither one is right or wrong in the abstract; each is right in the context of the right question and wrong in the context of the wrong one. "A Kickstarter page" will not tell me if my product will make the Sales department more efficient, and "the features needed to accomplish our simplest operation" is way too much to figure out if someone is interested in buying our product.
One of our clients recently asked for some help putting an MVP together around a product they were creating to replace software one of their groups was using to produce a complicated packet of contracts. The software was slow, clunky, and difficult to use, and the idea was that they could write new software that would help them get more contracts out the door.
As we began to map out features, it became very clear that at least some people in the room were thinking of a different question. They were thinking, "What would I need in this software to be able to switch over to it to do my job?" Well, that's a pretty big list of features. Dozens and dozens, as it turned out.
So, we had the feature list for that question, but that wasn't the real question, was it? The real question was:
Will this software help us produce contracts more quickly?
I asked the people in the room, "If you had two teams, and they were having a race to generate your simplest contract, what would they need to be able to do in order for us to conduct that race?"
Well, that required much less. As we got to talking, we realized we could conduct that experiment just by generating the first portion of the contract packet. Do you know how many features they concluded they needed to answer the question: Will this software help us generate the first contract portion more quickly? Out of all those dozens of features? Four.
There's no question that four features out of dozens are not enough for them to switch over to the new software. But it turned out that four features were enough to figure out whether the new software would be faster than the old software.
We could deliver those four features (which involved figuring out architecture, some of the big technical risks and hurdles the old software didn't address, getting a new UI designed, etc.), get it in front of users, and see if it made them faster. If it did, we could keep going down that track. If it didn't, we'd know we needed to change fundamental things about the software before we'd built four dozen other features. We might even have learned that there was nothing we could do to speed up the operation.
All for the investment of four features. Four is less time and money than four dozen.
But That's Not Enough
You're right. Ultimately, in the example above, the desire was to replace the entire software package, not just produce the first portion of the contracts.
So, after our MVP has verified what we need to know, then we move on to our next MVP, which is most likely another section of contracts. Then our next. Then our next. We keep asking the next most important and next most appropriate question, and our next MVP will be identified based on the smallest thing we need to produce to answer that question.
With the client I mentioned above, we put little dots on the features that would mark our next couple of MVPs.
Here's the kicker. At some point, we will have enough. We will have answered our questions and verified our hypotheses. And, as often happens, we may find that we ended up building less to replace that software than anyone expected at the beginning.
Such is the power of the MVP.