Just like Agile, the so-called Minimum Viable Product (MVP) has become a buzzword. And that’s always dangerous, because it dilutes important concepts into a myriad of simplistic, and usually wrong, interpretations.
I’ve learned that many organizations still mistake MVPs for limited versions of the products or services they want to launch. Huge blunder. It’s a fatal trap set by taking the term “Product” in MVP too literally. And I’m guilty of having fallen into it in the past too. The term is misleading, and I suspect Eric Ries regrets not popularizing it differently. As he later acknowledged:
“Some caveats right off the bat. MVP, despite the name, is not about creating minimal products.”
Indeed, “Minimum” and “Viable” mean different things to different people. That’s why I prefer Henrik Kniberg’s term: Earliest Testable Product. It makes the intention of testing explicit, and “earliest” is less ambiguous than “minimum”.
But regardless of the terminology you decide to use,
MVPs are high-fidelity experiments that are optimized for learning, not for scaling or yielding short-term results.
MVPs are one way to “validate the direction you’ve taken and to confirm with strong evidence that your idea is very likely to work.” That strong evidence generates valuable insights, which are the foundation for making informed, critical business decisions.
In their book, Testing Business Ideas, Alex Osterwalder and David Bland explain that experiments are “means to reduce the risk and uncertainty of your business idea”. And the reality is that it takes a series of experiments (not just one or two…) to generate the evidence teams need to succeed.
We start with low-fidelity experiments to discover whether our general direction is right, test basic assumptions, and get our first insights on desirability and viability. We then evolve — always building upon the evidence from previous experiments — into validating our direction with higher-fidelity experiments. That’s where MVPs come into play. The main difference between MVPs and other experiments, like clickable prototypes, is that MVPs actually deliver value to the customer — when designed well. It’s a more expensive and time-consuming approach, but the bottom line is that we are still testing — not executing. This is what many established organizations struggle to understand, and I believe they struggle because they haven’t reflected enough on the origin of the MVP.
The origin of the MVP
Throughout decades of trial and error, the startup movement has created new frameworks for managing risk, boosting productivity, and actively searching for new ways to grow — while keeping the lights on until the next funding round or sudden death. Unlike teams in established, traditional organizations, startup teams must embrace fast experimentation because they don’t have the resources to spend three years on a project that goes nowhere. This limitation creates a sense of urgency and forces teams to identify quick and efficient ways to learn how they can create value for their customers and capture value for their company — with the lowest possible effort. As a result, the principle of validated learning and the build-measure-learn feedback loop were born, together with the famous MVP.
In contrast to this reality, project teams or feature teams in conventional organizations are usually given a large budget upfront to implement a full innovation project. This model is called entitlement funding, the opposite of metered funding. The intentions are good, but by bringing together a group of mercenaries to implement a project, and by giving them all the funding they need upfront, it’s much more difficult to generate the spirit and focus that are so important in entrepreneurship and intrapreneurship.

The project has a finish date, even though we know the products and services developed under it don’t. When the project is “closed”, the output is shipped, the real outcomes remain unknown, and some of the project team members move on to execute some other project. It’s not a team creating and owning something together; it’s a group of individuals executing requirements. Most of the time, some of these people don’t feel accountable for the outcomes and, even worse, never even get to see them unfold. This is of course a very different model from an empowered, long-lived product team working together to make their vision a reality.

And this is where the clash happens: the systems and culture of the organization as a whole don’t necessarily support startup methodologies applied to product development, such as the MVP. In these environments, chances are those MVPs will be treated as limited versions of products-to-be-launched. That translates into a lot of time and money invested in design and delivery — even if it’s just “the first version”. In the end, after a long time, everyone will have high expectations. They are not validating anything anymore; they are executing. It’s been so long and millions have already been spent, so they sadly fall into a sunk cost trap — which means it’ll be very difficult to pivot or kill their idea in pursuit of a better one. Persevering becomes the only viable option. And when this happens, the organization has failed to understand the whole point of MVPs.
People like the concept of “building to learn” — it sounds nice. They like the idea of building products incrementally and iteratively — it sounds logical. They read about MVPs and want to try them. But in practice, the old Tayloristic mindset kicks in, and the existing processes in these organizations won’t set their teams up for success either. The documentation they have to write in order to build a quick MVP is identical to what they’ve always done for their big-bang product launches. The level of detail in design and pre-planning activities hasn’t changed. There’s probably a Gantt chart somewhere, and a hockey-stick revenue projection estimated at the point when people know the least (before any real-world experimentation has started). Yet, here’s the punch line: everyone will still call it an “MVP” — because it’s the “first version” and “doesn’t have all the features yet”. It’s MVP theatre.
Summary
One of Clayton Christensen’s takeaways from his extensive research on innovation is that your processes — how you’re supposed to do things — shape your culture. And the reality is that embracing experimentation requires a strong product culture and processes designed to foster trial and error, at high speed. It requires a focus on learning and an understanding from teams that value propositions and business models need to be tested methodically — regardless of how great they look on paper and how many customers said “I love this idea!”. It requires teams to be ruthless about the confidence levels of the evidence they generate, in order to gradually and responsibly reduce the risk of failure. It requires leadership to embrace uncertainty and to move from a few big bets to many small ones — continuously leading their teams with strategic context and empowering them to figure out how to get there. It requires the organization as a whole to create the environment these teams need to successfully apply these techniques and go beyond them.