Prioritization is at the core of product management. It's easier said than done in most organizations, where different opinions exist about what is important and why. As a product manager looking for help with prioritization, it won't take long before you run into scoring models such as RICE.
I have used RICE several times now, and was left feeling dissatisfied every time. After my most recent disappointment, I wanted to better understand why. I found three excellent articles by Jens-Fabian Goetzmann (JFG) and Saeed Khan (SK) that are critical of weighted scoring models such as RICE. In this article, I will build upon their work to provide insight into why RICE doesn't deliver:
How Do You Prioritize What To Build? You don’t need a product prioritization framework, but a pyramid, by Jens-Fabian Goetzmann (link)
The Problem with Prioritization Frameworks: Why prioritization frameworks have little value for empowered product teams, by Jens-Fabian Goetzmann (link)
Why You Should Avoid Prioritization Frameworks: Prioritize based on objectives and strategy, not spreadsheets and formulas, by Saeed Khan (link)
The Source of RICE
RICE is part of a longer history of (weighted) scoring models. Weighted scoring models rank ideas by a combined numerical score computed from benefit and cost categories as its inputs. According to ProductPlan, "It is helpful for product teams looking for objective prioritization techniques that factor in multiple layers of data." (emphasis mine)
RICE is very similar to ICE (Impact, Confidence, Ease), popularized by Sean Ellis, co-author of the book Hacking Growth (source). Another Sean in the growth domain co-developed RICE: Sean McBride, a PM at Intercom, who wrote this article about it. McBride and his colleagues on the Growth team at Intercom developed RICE because they “...wanted to come up with a more structured method of comparing many different project ideas in terms of their potential impact on a single goal (conversions)” (source). Allegedly, Intercom has since stopped using RICE.
So, what’s the pitch for RICE? "RICE will help you make better-informed decisions about what to work on first and defend those decisions to others." (Intercom). And there it is: what I call the siren call of prioritization frameworks such as RICE. They have an air of objectivity about them that promises to help the PM defend their decisions. Decisions that are fundamentally messy in nature become numbers-based.
However, without careful consideration of when and how to use RICE, you are bound to be disappointed - and here's why.
The Problems Surrounding Prioritization Frameworks
The frameworks assume a singular prioritization process where everything is compared to everything else, all at once. Furthermore, they promote a mindset of prioritizing features. However, other things should be prioritized before features.
Pre-Prioritization: think about prioritization in layers
RICE is an exercise where everything is compared all at once. This quickly becomes overwhelming.
JFG invites you to see the prioritization process in discrete layers, including vision, strategy, and business goals. JFG visualized this in a pyramid where you see Vision & Strategy on top and features (Solutions) at the bottom. Rather than assuming a singular prioritization exercise where you compare everything all at once, JFG promotes step-wise prioritization, where each step is fundamental to the next.
Prioritization starts with vision and strategy and, from there, moves to business goals, problem areas, and finally solutions at the base of the pyramid. Each layer is a guidepost that informs the next. By the time you get down to features, you’ve already done a huge part of prioritization, and some features that you otherwise would have scored will no longer be relevant. You end up with a shorter list of things to consider.
This step-wise prioritization also provides a frame of reference for when new opportunities arise. It helps you to focus on the right layer (Problems / opportunities) and reprioritize there, without having to pay attention to the higher-level layers. Instead of having to reconsider everything at once, a new opportunity no longer puts everything into question. Again, the product manager and stakeholders have a smaller flood of things to consider, which helps reduce overwhelm.
When you find yourself prioritizing a long list of features, let that raise a red flag; this usually indicates a lack of strategy and understanding of real customer and market problems. Then stop that feature prioritization exercise, and work on those layers first. Do not cram lots of diverse features into a prioritization framework before you have prioritized the higher-level layers of the pyramid.
"Any features that are important shouldn’t need some multi-factor calculation to prioritize." - Jens-Fabian Goetzmann
RICE doesn't define Impact
The ‘I’ in RICE stands for Impact. But what is impact? RICE defines neither what the impact is nor who would be impacted. RICE is "...a structured method of comparing many different project ideas in terms of their potential impact on a single goal...” (emphasis mine) (Sean McBride quoted in Roadmunk). In Sean’s case, they defined impact as ‘conversions’.
The problem with ‘impact’ is that not all impacts are equal. This method works well when you compare apples to apples but fails when you compare apples to oranges. A feature driving retention, as an example, may have a different impact from another feature that increases conversion. Furthermore, RICE doesn’t predefine for you whether the impact is business impact or customer impact (Vivek Kumar).
Before feeding anything into RICE, you need to prioritize one goal and define who is expected to be positively impacted by the idea.
Guesstimates provide false precision
Prioritization frameworks all have qualitative assessments as inputs. These go into an equation. The inputs are subjective “guesstimates” that you turn into a number. You plug those numbers into a spreadsheet and calculate a total score. With RICE, you guesstimate Reach, Impact, Confidence, and Effort. This may look analytical, but deceptively so; it provides false precision.
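For reference, the RICE calculation multiplies the three benefit guesstimates and divides by effort. A minimal sketch in Python; the input values below are illustrative guesstimates, not real data:

```python
def rice_score(reach: float, impact: float,
               confidence: float, effort: float) -> float:
    """RICE score: (Reach x Impact x Confidence) / Effort."""
    return reach * impact * confidence / effort

# Illustrative guesstimates: 500 users reached per quarter,
# "high" impact (2), 80% confidence, 4 person-weeks of effort.
print(rice_score(reach=500, impact=2, confidence=0.8, effort=4))  # 200.0
```

The output is a single unitless number, which is exactly what makes it feel comparable across ideas even when the underlying inputs are not.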
In the words of SK: there is no margin of error (MoE) incorporated in the frameworks. And if you were to include a MoE on each value, you would end up with a large total margin of error on the resulting score. This is because when you multiply factors that each carry a MoE, the relative errors add up (to a first approximation). E.g. a +/- 10% MoE on each of the four factors gives roughly +/- 40% on the resulting score.
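To make the compounding concrete, here is a quick sketch using the same illustrative guesstimates as above, with a +/- 10% MoE on each of the four inputs. Note that the linear "sum of errors" rule is an approximation; the exact best case, where every benefit factor runs 10% high and effort runs 10% low, is even worse:

```python
def rice(reach, impact, confidence, effort):
    return reach * impact * confidence / effort

base = rice(500, 2, 0.8, 4)  # 200.0

# Best case: benefit factors 10% high, effort 10% low.
high = rice(500 * 1.1, 2 * 1.1, 0.8 * 1.1, 4 * 0.9)
# Worst case: benefit factors 10% low, effort 10% high.
low = rice(500 * 0.9, 2 * 0.9, 0.8 * 0.9, 4 * 1.1)

print(round((high / base - 1) * 100))  # 48  (i.e. +48%)
print(round((low / base - 1) * 100))   # -34 (i.e. -34%)
```

A score of 200 could plausibly be anywhere between roughly 133 and 296, which is the spread you quietly ignore when comparing two scores in a spreadsheet.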
Comparing features whose scores each contain huge margins of error makes them impossible to use in meaningful ways. At least I did not want to use them, because that would mean ignoring the high levels of uncertainty. False precision severely reduces my confidence in the usefulness of the resulting prioritization. In other words, it made me uncomfortable. Disappointed, I closed my RICE spreadsheet.
One can improve the estimates, but this only goes so far. For example, as a PM who wasn’t a developer before, I know I can improve the guesstimate for Effort by asking my tech lead to join me in guesstimating; their input will be better than mine. However, there are real limits: in software development there is work you foresee and work you can't foresee, and the latter category is the majority. This severely limits your ability to guesstimate Effort when work hasn’t started yet.
In RICE, the ‘C’ stands for Confidence, which lets you input a guesstimate for your confidence in your other guesstimates. However, overconfidence bias also applies to guesstimating confidence, and therefore this additional score does not help much.
The frameworks are numbers-based and therefore have an analytical feeling to them. However, the source of every number is a subjective guesstimate. When the frameworks do not incorporate uncertainty, and most don't, the numbers provide false precision. Left unchecked, this makes it more likely that you fall prey to overconfidence bias.
One helpful use case of prioritization frameworks
According to SK, prioritization frameworks have one helpful use case: they can be used as a communication framework. For example, in discovery exercises with customers. Or to communicate to stakeholders why something isn't prioritized (JFG). The framework is a tool to structure the conversation. It helps you understand why customers believe something is very important - or not so much. Similarly, it can help you explain to others why something isn’t important.
Using RICE in discovery with customers improves on one aspect of how prioritization frameworks typically get used: stakeholders and the product manager putting guesstimates into a spreadsheet. Here, the customer provides the input, which is a more solid source of data. SK warns that the customer statements should not be used to prioritize any work. Rather than translating their subjective statements into numbers, use them for rough, broad-strokes insight into "higher" vs. "lower" value and impact.
Conclusion
When you find yourself cramming a large number of diverse features into a prioritization framework like RICE, stop and realize that there's work to do in a higher layer of the prioritization pyramid. Stop prioritizing features and work on those higher layers first.
Then, when you get down to the Solutions layer, define the goal for Impact and who is going to be impacted. You can then use RICE as a conversation facilitator with customers, to get their rough value indications. Think: "high" versus "low". Don't use RICE to prioritize any work. Due to uncertainty, there will always be false precision in the numbers.
The decision on what should be the top priority remains a subjective one; I believe this is one of those fundamentally messy aspects of product management. However, you should not need RICE to prioritize features. In the words of JFG: "Any features that are important shouldn’t need some multi-factor calculation to prioritize." Their importance reveals itself through work done in the higher layers of the pyramid.