Why confidence should be factored into ROI estimates
Comparing the value of doing something to the cost of doing it is the most common calculus for Return on Investment (ROI), employed by many of those tasked with prioritising work. The goal is to determine the margin of value for various endeavours and prioritise them accordingly. The calculation usually looks something like this:
Return on Investment = Benefit ÷ Cost
Benefit is the expected return (e.g., a new feature will bring $200,000 of new revenue) and Cost is the amount that must be invested to achieve the expected return (e.g., 15 hours of work at $100 per hour is $1,500).
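For clarity, here is a minimal sketch of the basic calculation in Python, using the figures from the example above:

```python
def roi(benefit: float, cost: float) -> float:
    """Basic return on investment: expected benefit divided by cost."""
    return benefit / cost

# $200,000 of expected new revenue against 15 hours of work at $100 per hour ($1,500).
print(roi(benefit=200_000, cost=15 * 100))  # ~133.3
```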
While this seems like a pretty reasonable and logical way to calculate the value of doing something, it comes with the dangerous assumption that we actually know how much value we’ll get from the initiative and how much it will cost. Assumptions around both expected value and effort required are usually wrong, often dramatically. The outputs of algorithms like this can lead you astray if the inputs are incorrect, making prioritisation based on this method an exercise in futility. This is why teams should factor confidence into their ROI estimations, recalculate and reflect on ROI estimates after work has been completed, and assemble around long-term areas of focus where multiple hypotheses can be tested.
A simple way to improve the ROI calculation is to include confidence as a factor. You can do this by polling the team on how confident they are in both their cost and benefit estimations. This new algorithm might look something like this:
Return on Investment = ((Benefit × Confidence in the Solution) ÷ Cost) × Confidence in the Estimated Effort
This new equation is tempered by the confidence of the team, who might vote on confidence at the same time as they estimate the size of the work involved. We penalise the expected benefit by our confidence that the solution will actually deliver it (e.g., if we are 50% sure our solution will bring us the expected $200,000 of new revenue, we lower our expectations to $100,000). We also penalise the entire ROI by our confidence in our estimate of effort. There is no single right way of doing this, and you might want to tweak the algorithm to penalise the expected ROI in different ways.
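As a sketch, the confidence-adjusted version might look like this in Python (the 80% effort-confidence figure is illustrative, not taken from the example above):

```python
def confidence_adjusted_roi(
    benefit: float,
    cost: float,
    solution_confidence: float,  # 0.0-1.0: confidence the solution delivers the benefit
    effort_confidence: float,    # 0.0-1.0: confidence in the effort estimate
) -> float:
    """ROI tempered by the team's confidence, per the equation above."""
    return ((benefit * solution_confidence) / cost) * effort_confidence

# 50% confidence in the solution halves the expected $200,000 benefit;
# 80% confidence in the effort estimate then penalises the whole ROI.
print(confidence_adjusted_roi(200_000, 1_500, solution_confidence=0.5, effort_confidence=0.8))  # ~53.3
```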
When confidence is factored into ROI estimates, initiatives with a higher level of confidence are more likely to be prioritised over initiatives where the solution might need more exploration and the expected return needs to be better scrutinised. Highly effective teams will bring forward research and investigation (e.g., spikes) to improve their confidence in future work before committing to risky initiatives packed with untested assumptions. Sometimes, teams might still decide to tackle work that they have low-confidence estimates for. Confidence factors are still useful in these scenarios because they give teams a tool for managing stakeholder expectations (i.e., we know we're taking a big risk here, but we think it is worth it).
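To show how this plays out in prioritisation, here is a small sketch that ranks a backlog by confidence-adjusted ROI. The initiatives and figures are hypothetical:

```python
# Hypothetical initiatives; the names and figures are illustrative only.
initiatives = [
    {"name": "New feature", "benefit": 200_000, "cost": 1_500, "solution_conf": 0.5, "effort_conf": 0.8},
    {"name": "Checkout improvement", "benefit": 50_000, "cost": 400, "solution_conf": 0.9, "effort_conf": 0.9},
    {"name": "Speculative rewrite", "benefit": 500_000, "cost": 4_000, "solution_conf": 0.2, "effort_conf": 0.4},
]

def adjusted_roi(initiative: dict) -> float:
    """Confidence-adjusted ROI, per the equation above."""
    i = initiative
    return ((i["benefit"] * i["solution_conf"]) / i["cost"]) * i["effort_conf"]

# Highest confidence-adjusted ROI first: well-understood work floats to the top
# even when a riskier initiative promises a bigger raw benefit.
for initiative in sorted(initiatives, key=adjusted_roi, reverse=True):
    print(f'{initiative["name"]}: {adjusted_roi(initiative):.1f}')
```

Run as written, the speculative rewrite drops to the bottom of the list despite its large headline benefit, because its low confidence erodes most of its expected return.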
Reflection is another valuable tool for teams looking to estimate ROI. Even seasoned teams find it difficult to estimate ROI in advance: predicting the future is hard, and our best tool for improving our ability to do so is the retrospective. This is why it can be valuable for teams to re-estimate ROI after an initiative has been completed and the benefit and effort inputs are known. Regularly reflecting on whether a solution worked, and on what made our assumptions around value and cost accurate or inaccurate, will make future estimates more reliable, even if only by better tuning our confidence in them.
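A retrospective on ROI can be as simple as comparing estimates with actuals once the work is done. A minimal sketch, with hypothetical outcome figures:

```python
# Estimates made before the work started (from the example above) versus
# hypothetical actuals recorded after the initiative finished.
estimated_benefit, estimated_cost = 200_000, 1_500
actual_benefit, actual_cost = 120_000, 2_400

print(f"estimated ROI: {estimated_benefit / estimated_cost:.1f}")  # ~133.3
print(f"actual ROI:    {actual_benefit / actual_cost:.1f}")        # 50.0

# Accuracy ratios like these can feed into the confidence factors the team
# uses for its next round of estimates.
print(f"benefit estimate accuracy: {actual_benefit / estimated_benefit:.0%}")  # 60%
print(f"cost estimate accuracy:    {estimated_cost / actual_cost:.0%}")        # 62%
```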
After a few cycles of estimating both value and effort, and reflecting on actual outcomes, many teams come to a common conclusion: big problems are rarely solvable through a single chunk of effort. This is why long-lived teams that are consistently focused on owning areas of improvement for the long term are usually the most successful. This is an alternative to bouncing between unrelated projects that attempt to solve diverse problems, which may only ever provide a degree of success, and it is one major reason why hyper-focused products tend to win. Many solutions to one big problem trump many solutions to many problems, because the cost of failure is lower and the compounding value of effort invested is more easily realised.