The Consensus Method Part 1: Having Better Product Discussions

Editor’s note: the following was written by a guest blogger. If you have product management/tech industry experience, and would like to contribute to the blog, please contact [email protected]

“What to do first?” – this question bothers every tech company each time it plans upcoming projects. Every product management course or textbook includes some set of best practices for handling it. The nature of such practices, though, is that they are subjective and opinion-based. Some of them use assessments of impact and effort to create an order among projects; however, they tend to stay at a pretty high level. Breaking down the effort vs impact opinions into detailed assessments can make your discussions and decisions much more productive.

A company I worked for struggled with exactly that. Opinions varied about the actual gain from each project and about which constraints should be considered.

This lack of clarity made the company’s productivity take a hit.

In this 2-part article, I want to share the method that my colleagues and I developed to help this company achieve a 34% improvement in their KPIs by making their discussions more productive and cohesive. It’s one that I have found helpful throughout my career, including in my current role as VP Product and Operations at Rivr by Simplaex.

This first article explains i) what differentiates our method from others, ii) how we clearly described the product we were improving, and iii) how we created a powerful tool for problem indicators. The latter served as the first step in transitioning our discussions to an efficient process.

The second article will go over i) how we implemented a similar tool to measure projects’/initiatives’ impact, ii) how we included all types of efforts/constraints in our prioritisation, and iii) how these enabled productive discussions and decision making.

Do we really need another prioritisation system?

Most articles on the subject are intuitive and usually based on an assessment of effort vs impact, maximising impact within a limit on effort. The issue is that each subjective assessment relies on compressing the thoughts and opinions of whoever happens to be responsible for executing the process. Different team members or stakeholders have different opinions, and even a single person can find multiple conflicting arguments for different evaluations of impact and effort. The result is a variety of possible priority lists, and possible frustration and disagreements.

The method we’ve built is also based on the impact vs effort concept, but instead of producing scenarios where it’s “me against my colleagues’ opinions”, it’s “me and my colleagues against the numbers” (hence the name – The Consensus Method). It does so by:

  • defining more precisely the product being addressed;
  • presenting the holistic effect each project has – exploring the “impact” more deeply;
  • recognising that sometimes there are many types of constraints – exploring the “effort” more accurately;
  • keeping the impact and effort assessments separate, and not inferring a single “score”.

The last point is key here. This method does not produce a single ordered list of projects. Rather, it produces a more detailed assessment of the impacts and the efforts of each project and, more importantly, fertile ground for concise discussions and for reaching consensus quickly.

Throughout the article I’ll use the example of a product that is a repetitive process – a user conversion process. This was the product the company was trying to improve, and the one to which we applied this method.

If you choose to implement this method, I strongly recommend doing so in an editable spreadsheet (build your own, or use this template). You can then play around with the different levers that will pop up during your thought process, which will help you better understand your own and your team’s assumptions and their effect on the overall result.

Defining the product and its stages

We started by breaking our product down into a linear sequence of independent stages. For a user to convert, we can say that he/she needs to pass the following stages:

  • “marketing” – discover our service;
  • “visit” – reach our platform;
  • “engage” – seek details about the value we offer and reach the “subscribe” section;
  • “subscribe” – provide personal details and consent.

To best describe our product’s stages, we thought of each stage as an independent gate in the process – where we could clearly define when a user reached it and when he/she passed it.

For each stage we defined a weight representing the potential we thought it had for improving the ultimate goal – the conversion rate. For example, we had metrics showing that many users didn’t pass the “visit” stage and that a similar drop rate occurred in the “subscribe” stage. However, the company’s experience clearly suggested that there was more potential for reducing the drop rate in the earlier stage. Accordingly, we gave a greater weight to “visit” than to “subscribe”.

[Chart: the funnel stages with their assigned weights]

Note that without weights the list of stages can be misleading – you could break the product/process into more stages, and thus change the relative importance of each.
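If it helps to see this concretely, here is a minimal sketch of the stage/weight definition in code. The stage names are the ones listed above; the weight values are hypothetical, not the company’s actual assessments.

```python
# Funnel stages with their weights. A weight reflects how much potential we
# believe a stage has for improving the ultimate goal (conversion rate).
# The numbers below are hypothetical, for illustration only.
STAGE_WEIGHTS = {
    "marketing": 2,   # discover our service
    "visit": 4,       # reach our platform
    "engage": 3,      # explore the value offered and reach the "subscribe" section
    "subscribe": 1,   # provide personal details and consent
}

# Only the relative weights matter: each stage's share of the total.
total_weight = sum(STAGE_WEIGHTS.values())
relative_weights = {stage: w / total_weight for stage, w in STAGE_WEIGHTS.items()}
print(relative_weights)  # {'marketing': 0.2, 'visit': 0.4, 'engage': 0.3, 'subscribe': 0.1}
```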

What are the core problems the product suffers from?

After describing the user conversion process, we moved on to map the problems and pains this product had. I won’t go into how we ran this mapping process; suffice it to say that a clear discussion around the issues that impeded our KPIs, while setting aside the existing list of projects, helped us make sure we were focusing on the things that mattered most to our success.

We mapped a long list of problems that we suspected, or had evidence, were hindering user conversion. Some examples: our homepage suffered from high latency on some devices, and visually impaired users were dropping off at a much higher rate.

Confronting the problems up front revealed issues that had previously gone unidentified. For example, data indicated that users who were shown unsuitable ads valued the platform less than other users, and hence had less chance of converting to subscribers. None of the existing projects seemed to address this.


The effect of each problem on each stage

We aimed to build a tool that would give us common ground for criteria and terminology. To that end, we asked the different teams to assign a severity number to each problem at each stage – “how bad is it if the problem occurs at this stage?”.

We used a 0-10 scale reflecting the effect a problem would have on the ratio of users passing a stage. For example, our high latency problem had a great (negative) effect on the “visit” stage (users would close the tab while only half the page was loaded), but once a user had reached the “subscribe” section the latency did not pose a big issue.


We found it useful to focus on a single funnel stage at a time, go over all the listed problems, and assign them severity scores for that stage. That created a feedback loop: with one stage in mind at a time, we compared our assessments across the different problems, which kept the fact that our assessments were still subjective in check. When there were major disagreements on the severity values, we either took the average or kept the different assessments and showed their effect on the overall result.

To converge this table of numbers into a score for each problem we used the following formula: score = 1 − ∏ over all stages of (1 − x/10)^(w/Σw), where x is the problem’s severity at a stage and w is the stage’s weight. Each severity score is translated into a chance of passing the stage; the stage’s relative weight is used to steer this chance’s overall influence; and the total score is the chance of not passing the whole funnel (assuming linearity of the stages). You can find a simple implementation of this in this template.
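For illustration, here is a minimal Python sketch of that formula (the linked template is a spreadsheet; the problem names, severity values, and weights below are hypothetical):

```python
# score = 1 - product over stages of (1 - severity/10) ** (weight / sum(weights))
# A severity becomes a chance of passing the stage, the stage's relative weight
# steers its influence, and the score is the chance of NOT passing the funnel.

STAGE_WEIGHTS = {"marketing": 2, "visit": 4, "engage": 3, "subscribe": 1}

# Severity (0-10) of each problem at each stage -- hypothetical values.
SEVERITIES = {
    "high latency on some devices": {"marketing": 0, "visit": 8, "engage": 4, "subscribe": 1},
    "poor accessibility":           {"marketing": 0, "visit": 2, "engage": 6, "subscribe": 7},
    "unsuitable ads":               {"marketing": 3, "visit": 1, "engage": 7, "subscribe": 2},
}

def problem_score(severities: dict, weights: dict) -> float:
    """Chance that the problem stops a user somewhere along the funnel."""
    total_weight = sum(weights.values())
    pass_chance = 1.0
    for stage, weight in weights.items():
        severity = severities.get(stage, 0)
        # Chance of passing this stage, tempered by the stage's relative weight.
        pass_chance *= (1 - severity / 10) ** (weight / total_weight)
    return 1 - pass_chance

for problem, severity_row in SEVERITIES.items():
    print(f"{problem}: {problem_score(severity_row, STAGE_WEIGHTS):.2f}")
```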

At this point we had a powerful tool that served as a guide to the problems we should address in any decision making. We had a manageable set of problem indicators, all derived from quantitative data and from decisions the team understood and agreed with.

This scorecard provided an understanding of the problems’ different tiers and, what’s more, it laid bare the assumptions and beliefs embedded in the components of the table.

We could now have a concrete discussion about each of the problems and test different subjective assessments and their effect on the score. We could play around with the table and ask questions like the following (a small code sketch of this kind of check follows the list):

  • Does the high latency problem really have such a negative impact on the “visit” stage? As it turned out, it did not matter much – lowering the severity barely changed the score.
  • Does the high potential (weight) we attribute to the “engage” stage dictate the results, or are they somewhat invariant to it? Here, too, the debate could be set aside – using equal weights kept the scores similar.
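Continuing the sketch above (same hypothetical problem_score, SEVERITIES and STAGE_WEIGHTS), a sensitivity check of this kind could look like:

```python
# Question 1: does a lower "visit" severity for the latency problem change much?
baseline = problem_score(SEVERITIES["high latency on some devices"], STAGE_WEIGHTS)
tweaked = dict(SEVERITIES["high latency on some devices"], visit=5)
print(f"latency score: {baseline:.2f} -> {problem_score(tweaked, STAGE_WEIGHTS):.2f}")

# Question 2: do the results depend on the weight given to "engage"?
equal_weights = {stage: 1 for stage in STAGE_WEIGHTS}
for problem, severity_row in SEVERITIES.items():
    original = problem_score(severity_row, STAGE_WEIGHTS)
    flattened = problem_score(severity_row, equal_weights)
    print(f"{problem}: weighted {original:.2f} vs equal weights {flattened:.2f}")
```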

The result was much smoother and more effective discussions. We had a common language and could speak in shared terms. It was us against the numbers.

Coming in part two

In the second part of the article we will see how to uncover more assumptions and use them to measure potential projects’ impact. And although a unified impact score is nice, it should be weighed against an effort/constraint measurement, which can range from developers’ time to users’ adaptation requirements. Read on in part two to learn how we did both.

About the Author

Eran Udassin

Eran Udassin is a passionate product manager currently leading the R&D team for Rivr as VP Product and Operations. A creative thinker and apt problem solver, Eran enjoys sharing knowledge and diving into new strategic and technological subject areas.

