Stay ambitious with the help of experimentation
As startups grow, they tend to be less ambitious in their product development efforts. This is because mature startups have a lot more to lose when things go wrong or when they waste time building features that don’t turn out to be technically feasible or valuable to customers. I’ve heard from many CEOs frustrated with the attitude of caution that seemingly replaces the urgency of early-stage product development. Fortunately, it is possible to manage the risk of being wrong in a way that allows you to be more ambitious in your product development aspirations.
Even mature startups with large development teams are resource-poor. A startup with many development teams can juggle much more work than a lone technical co-founder at an early-stage startup, but there is still a limit. For example, a department with four teams can probably only deliver four initiatives at a time, so even mature startups must reject more product ideas than they accept. Teams need to prioritise.
A consequence of this need to ruthlessly prioritise work is that the cost of a failed initiative is high. Initiatives fail when, after substantial development effort, teams discover insurmountable technical problems, or when, after delivery, they discover that nobody wants what they built. If your team delivers only a few major features a year, it can be incredibly disappointing when one or more of those initiatives fail. When product managers and engineers experience failures like this, they start prioritising initiatives that are less likely to fail, because they'd rather deliver something that works than something that doesn't. The problem with this approach is that when you prioritise low-risk work, you deprioritise anything ambitious.
Why ambitious change is required
Great startups remake the world in their image. While iterative software development can compound into massive change and value creation over time, not all iterative improvements are equal. Just because a team consistently delivers new iterations doesn't mean those iterations add up to something massively valuable. Teams that iterate towards mediocrity may deliver a lot of change, but not all change leads to positive outcomes.
When you have very few users, and limited confidence in the thesis behind your startup, there is an urgent need to build the product to find product-market fit. So, in the early days of startup building, the stakes of product development are existential. Ambitious change is required to overcome existential risk.
This existential risk does not go away for many startups when they achieve product-market fit. Startups must balance the need for mature software development practices with the reality that even early-stage companies are at risk of disruption. This is especially the case in immature fields where many startups aim to dominate through radically different approaches1. A startup willing and able to tackle ambitious problems is well-positioned to win in a competitive and evolving market.
Understand your assumptions
The plan for any project is full of assumptions, and every product development risk is an assumption yet to be tested. If you truly know something to be true, there is no risk in acting upon that knowledge. But if an initiative can only succeed when you're right about something you're unsure of, your chances of success are constrained. This is why the best way to de-risk ambitious initiatives is to separate high-risk assumptions from low-risk assumptions and validate the most important ones in advance.
When a team sets out to build a new feature, at a minimum, they make the following assumptions:
- This feature solves a problem, and therefore customers will want it (i.e., this is a desirable problem to solve; this problem is worth solving).
- It is possible to build this feature and solve this problem (i.e., it is feasible to solve this problem; our plan will work).
The more dubious the above assumptions are, the more risk there is in an initiative. I recommend that product leaders enumerate every assumption that makes them believe a solution is desirable and feasible.
Let’s consider a team that wants to build a tool that can automatically order stock for retailers.
This team wants to build this feature because they assume that their customers waste a lot of time replenishing stock for their stores. Many teams don’t find out whether customers want a feature until they release it — this is a costly way to test an assumption.
Regarding technical feasibility, this team makes several assumptions:
- It is possible to build a one-size-fits-all algorithm for stock replenishment.
- It is possible to programmatically place orders for new stock with wholesalers.
- Our product can access the data required to determine whether new stock should be ordered.
If some of these assumptions prove invalid, the project could fail. Many teams don’t find out whether it is possible to solve a problem until part-way through the development process.
Use experimentation to manage risk
It is wise for teams to test their assumptions before they commit to an initiative. If you can identify the assumptions important to your initiative, you can test them.
I recommend that teams score each of the assumptions undergirding their initiative on two dimensions:
- What is the chance of being wrong? How confident are you that you are right about this assumption?
- What is the cost of being wrong? What are the consequences of being wrong? Will the project fail if you are wrong about this assumption, or will you be able to find another way?
If you can score each assumption this way, you can easily identify which assumptions to test before starting an initiative. For each assumption:
- If you have low confidence, and the cost of being wrong is high, you should test this assumption before you start work on your initiative.
  - For example, if you have made a big assumption about technical feasibility that you have low confidence in, you should build a proof-of-concept of that component of your solution first2.
  - If you have made a big assumption about desirability, you can interview your customers to determine whether they care about the problem you want to solve3.
- If you are confident, but the cost of being wrong is high, it might be worth further investigation.
- If you have low confidence, but the cost of being wrong is low, you probably don’t need to worry about this assumption.
- If you are confident, and the cost of being wrong is low, you probably don’t need to worry about this assumption.
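To make the decision rule above concrete, here is a minimal sketch in Python. The assumption texts, the low/high labels, and the suggested next steps are illustrative only; a real team might score on a numeric scale or simply use a spreadsheet.

```python
from dataclasses import dataclass


@dataclass
class Assumption:
    description: str
    confidence: str            # "low" or "high": how sure are we this is true?
    cost_of_being_wrong: str   # "low" or "high": impact on the initiative if it is false


def recommended_action(a: Assumption) -> str:
    """Map the confidence/cost combination to a next step."""
    if a.confidence == "low" and a.cost_of_being_wrong == "high":
        return "test before starting (proof-of-concept, customer interviews)"
    if a.confidence == "high" and a.cost_of_being_wrong == "high":
        return "possibly worth further investigation"
    return "no action needed"


# Illustrative assumptions for the stock-replenishment example
assumptions = [
    Assumption("Customers waste a lot of time replenishing stock", "low", "high"),
    Assumption("We can place orders with wholesalers programmatically", "low", "high"),
    Assumption("Our UI framework can render the new screens", "high", "low"),
]

for a in assumptions:
    print(f"{a.description}: {recommended_action(a)}")
```

The tooling matters far less than the habit: make the two scores explicit for every assumption before work on the initiative begins.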
Let’s return to our team that wants to build a tool that automatically orders stock for retailers.
This team wants to build this feature because they assume that their customers waste a lot of time replenishing stock for their stores. The first thing they should do is talk to their customers to see if they care about this problem. If possible, it would be great to quantify the problem. For example, is it possible to measure how much time a retailer would save if this problem were automated away?
Regarding technical feasibility, this team makes a number of assumptions:
- It is possible to build a one-size-fits-all algorithm for stock replenishment.
  - This assumption can be tested with some experimentation. Task a developer to build a proof-of-concept algorithm for stock replenishment.
- It is possible to programmatically place orders for new stock with wholesalers.
  - This could also be tested through a proof-of-concept or some research into the various APIs that are available.
- Our product can access the data required to determine whether new stock should be ordered.
  - A developer could validate this assumption with some simple data analysis. How many customers have populated the data required to replenish stock?
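As a rough illustration of that last data check, a developer might run something like the following. The database, table, and column names here are hypothetical; the real product's schema will differ, but the shape of the analysis is the same: count how many customers already have the fields a replenishment algorithm would need.

```python
import sqlite3

# Hypothetical schema: a `products` table with the fields a replenishment
# algorithm would need (current stock level, reorder threshold, wholesaler link).
REQUIRED_FIELDS = ["stock_level", "reorder_threshold", "wholesaler_id"]


def replenishment_data_coverage(db_path: str) -> float:
    """Share of customers with at least one product record where every required field is populated."""
    conn = sqlite3.connect(db_path)
    total = conn.execute("SELECT COUNT(DISTINCT customer_id) FROM products").fetchone()[0]
    populated = " AND ".join(f"{field} IS NOT NULL" for field in REQUIRED_FIELDS)
    covered = conn.execute(
        f"SELECT COUNT(DISTINCT customer_id) FROM products WHERE {populated}"
    ).fetchone()[0]
    conn.close()
    return covered / total if total else 0.0


# Usage: print(f"{replenishment_data_coverage('app.db'):.0%} of customers have the data we need")
```

Even a quick query like this can reveal whether the feature would only work for a small fraction of the customer base, which is exactly the kind of thing you want to know before committing a team to the initiative.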
Most teams find that the bulk of the implementation work for an initiative is low-risk, and that a large, ambitious, risky initiative can be validated through just a few small experiments.
Footnotes
For a while, Sketch looked like it was going to be the company to finally disrupt Adobe. Then Figma launched and proved the superiority of multiplayer cloud-native solutions. Just because Sketch had achieved product-market fit, was still growing, and was not yet the incumbent, did not mean it was safe from disruption. ↩︎
Many teams I have worked with use spikes for these investigations. ↩︎
At one startup, we launched a brand and a website for a new product before we had built anything. We even ran a small marketing campaign to estimate how easy it would be to market this new product. ↩︎