The Consensus Method Part 2: Building An Actionable Project Plan

Editor’s note: the following was written by a guest blogger. If you would like to contribute to the blog, please review the Product Blog contribution guidelines and contact [email protected]

Read: The Consensus Method Part 1: Having Better Product Discussions

“We had a common language and could speak in shared terms. It was us against the numbers” – this was the key takeaway of Part 1. There we showed how to break a product down into linear stages, use those stages to spark a clear dialogue about the biggest problems you face, and how this leads to much smoother and more effective discussions.

In this second part we will explain how to prioritise projects efficiently. To complement what we learned in the first part, we add two further elements to The Consensus Method:

  • how to measure potential projects’ impact (using similar tools to those we used to measure problems’ scores);
  • how to combine different constraints assessments, such as developers’ time and users’ adaptation requirements.

As a reminder – we used this method to improve a company’s user conversion process. With it we were able to create an actionable plan that increased the conversion rate by 34%!

Funnel stages: Marketing → Visit → Engage → Subscribe


From problems to actionable projects

We mapped a long list of projects that the team wanted to take on, or that stakeholders had requested. Some of these projects were already in a backlog list when we started this endeavour, and some were the fruit of discussions we had regarding the list of problems. Some examples:

  • Caching internal queries was in the backlog for quite some time. This would speed up our backend’s response time and directly tackle the high latency problem.
  • Our discussion had surfaced the issue of unfit ads creating a higher bounce rate. The team had proposed to change the advertisement partners’ integrations, tackling the ads problem directly, and also potentially improving the latency.
  • To address our website not suiting visually impaired users, a new graphical version to assist with legibility was suggested.

Projects that were proposed but had no relation to any of the listed problems were set aside for the time being. These would be addressed as part of innovation and exploration, rather than project prioritisation.

The climax of Part 1 was a table that displayed our assumptions and beliefs about the problems we had concluded were impeding our product. We used severity numbers to depict the effect each problem had on each of our product’s stages, in terms of the ratio of users passing through the funnel.

Besides facilitating our discussions, the table also provided a score, and hence an ordered list, of the different problems. On our path to efficient project prioritisation we wanted to create a similar table focused on projects – a scoreboard depicting our assumptions about projects, from which a scored list could be derived.

We assigned a severity number to each combination of project, stage and relevant problem – “if we completed the project, how bad would the problem still be?”. For example, changing the ad-partners’ integrations was thought to have great potential for negating the effect unfit ads had on the “visit” stage (bad ads created a high bounce rate), but only a small impact on the “engage” stage (users who reached deeper pages on our platform were less affected).

Using the same scale we used for scoring problems, we gained quantitative judgments of each project’s effect on each problem and stage.

With a few steps we created each project’s impact score:

  1. We used the formula introduced in Part 1 to combine the severity numbers into revised problem scores;
  2. we set each project’s effect on a relevant problem as the difference in that problem’s score (with vs. without the project);
  3. and we took the project’s total impact as the sum of its effects across the different problems.

You can find a simple implementation of this in this template.
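The steps above can be sketched in code. Note that the scoring formula from Part 1 is not reproduced here, so `problem_score` below uses a plain sum of per-stage severities as a stand-in, and all severity numbers are hypothetical examples, not the team’s actual assessments:

```python
def problem_score(severities):
    """Combine a problem's per-stage severity numbers into one score.

    Placeholder: a plain sum stands in for the Part 1 formula.
    """
    return sum(severities.values())

def project_impact(baseline, with_project):
    """Total impact = sum over problems of (score without - score with the project)."""
    impact = 0
    for problem, severities in baseline.items():
        before = problem_score(severities)
        after = problem_score(with_project.get(problem, severities))
        impact += before - after
    return impact

# Hypothetical severities per problem and funnel stage:
baseline = {
    "unfit ads": {"visit": 8, "engage": 5},
    "high latency": {"visit": 6, "engage": 6},
}
# Re-assessed severities assuming the ad-partner integration project is done:
with_ads_project = {
    "unfit ads": {"visit": 2, "engage": 4},     # large effect on "visit"
    "high latency": {"visit": 5, "engage": 5},  # small side benefit
}
print(project_impact(baseline, with_ads_project))  # 9
```

Swapping the real Part 1 formula into `problem_score` is all that is needed to reproduce the actual scoreboard.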

The projects’ table now served as a scoreboard calculated from joint assessments and opinions. We had a manageable collection of levers, representing and derived from the collective knowledge of the team and stakeholders. Consensual prioritisation was within our reach.


Each subjective concern, or change of opinion, could be approached using the above table, by measuring the resulting change in the impact scores.

The result was smoother and more effective discussions. We had a common language and could speak in shared terms. It was (again) us against the numbers.

Constraints and resources

So far we have harnessed collective knowledge and experience to assess which problems the team should address, and on top of that we’ve built a way to compute impact scores for our potential projects. The missing piece for prioritisation is answering the question – “how much of all this can we actually do?”. In other words: assessing the effort.

The main worry we had was that more than one resource had to be taken into account. Developers’ time is often regarded as the sole (or main) constraint a tech company should worry about allocating. However, in discussions with the team and different stakeholders, we found that other constraints and resources were presented as reasons for promoting or demoting certain projects.

For example – “how much does a project disrupt existing users?” or “how much alignment and support are required from other teams or departments within our organisation, over whom we lack direct control?”. It was not obvious how to compare developers’ time to the churn and lower engagement resulting from users not adapting to interface changes.

[If this complexity is not one that troubles you, feel free to jump to the next section]

The way we addressed this was to surface the “unwritten rules” that led people to see certain projects as overall cost-heavy or cost-light. We worked with a simple “shirt-size” metric (S, M, L) to express the total effort any project would require.

When someone voiced an opinion about a project being easy or hard, we tried to generalise the reasoning and reach a consensus on the logic behind it, expressed as a shirt size. From there, we reconciled the overall effort of each project.

To simplify the example I’ll explain using only two constraints – developers’ time and user adaptation. We broke each constraint into shirt sizes, and had to agree on the total effort for the nine possible combinations (two constraints, each with three possible efforts).

We asked questions like – “If user adaptation is assessed as L, should the whole project be considered L?”. If the consensus was yes, we could simplify the discussion around the constraints, leaving us with only six combinations to agree on.

After a few simplifying questions we were able to reach a consensus for the projects’ effort in a single metric that embodied most, if not all, concerns.
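A minimal sketch of such a consensus table for the two example constraints (developers’ time and user adaptation). The combined efforts below are illustrative assumptions, not the team’s actual agreed values; they encode the simplifying rule that an L in user adaptation makes the whole project L:

```python
# Agreed total effort for each (developers' time, user adaptation) combination.
# Assumed values: user adaptation at "L" dominates, per the example question above.
TOTAL_EFFORT = {
    ("S", "S"): "S", ("S", "M"): "M", ("S", "L"): "L",
    ("M", "S"): "M", ("M", "M"): "M", ("M", "L"): "L",
    ("L", "S"): "L", ("L", "M"): "L", ("L", "L"): "L",
}

def total_effort(dev_time, user_adaptation):
    """Look up the consensus total effort for a project."""
    return TOTAL_EFFORT[(dev_time, user_adaptation)]

# Disagreement check: if the team can't decide whether development time
# is M or L for a project with L user adaptation, does it matter?
print(total_effort("M", "L"), total_effort("L", "L"))  # L L -> no consequence
```

This is also how discrepancies are dismissed later: two contested assessments that map to the same total effort need no further debate.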

Developer's Time vs User Adaptation

Next, we assessed our proposed projects’ effort. Given the above consensus, we gave each project a shirt-size assessment for each constraint. Many of the disagreements over these assessments were dismissed using the agreed total-effort table. For instance: if the team could not agree on the development time required for a certain project, we could use the table to check the consequence of the discrepancy – often it was non-existent.

Impact-Effort chart

Hurray! We have achieved impact-effort assessments for our projects. Putting them on a chart is the next logical step.

Impact v Effort

Using the impact and effort tools, together with the visual presentation, gave the company’s discussions a much more constructive nature. Everyone who wanted to voice an opinion (team members and company stakeholders) could express their viewpoint through the different assessments and see whether it produced a meaningful change in the chart. Understanding why some projects landed in more preferable areas of the chart created an efficient exchange of views and ideas.
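Placing projects on the chart can be sketched as a simple region lookup. The region names, the impact threshold, and the example projects’ scores below are all hypothetical illustrations, not values from the article:

```python
EFFORT_ORDER = {"S": 1, "M": 2, "L": 3}

def chart_region(impact, effort, impact_threshold=5):
    """Classify a project by where it lands on the impact-effort chart.

    The threshold and region names are assumptions for illustration.
    """
    high_impact = impact >= impact_threshold
    low_effort = EFFORT_ORDER[effort] == 1
    if high_impact and low_effort:
        return "quick win"
    if high_impact:
        return "big bet"
    if low_effort:
        return "nice-to-have"
    return "shelve"

# Hypothetical (impact score, total effort) pairs for the example projects:
projects = {
    "cache internal queries": (9, "S"),
    "new ad-partner integrations": (7, "L"),
    "legibility-focused version": (3, "M"),
}
for name, (impact, effort) in projects.items():
    print(f"{name}: {chart_region(impact, effort)}")
```

The point of the chart is not the labels themselves but making it visible how a changed assessment moves a project between regions.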

We decided not to collapse the impact-effort scores into a single value. With The Consensus Method, the final chart – and, more importantly, the agreement on the many concerns and assessments leading to it – proved enough to reach project prioritisation.

The team agreed on which projects should be shelved for the time being and which should get high priority. The order of actual implementation was not a source of much debate, and the product steamed ahead.

The company created short and mid-term plans and was able to move the needle for its product using projects that shined through people’s shared knowledge. Most importantly, following the agreed plan helped the company increase its conversion rate by 34%!


Even when data is abundant, subjective opinions will always be part of planning and prioritisation, and these can turn into a dialogue of the deaf. Why not move to a more constructive discussion?

The Consensus Method offers a path to harnessing the shared experience and knowledge of the people who best know the ins and outs of your product. The method offers a way to take all the data, subjective assessments and opinions, and channel them into productive and concise discussions.

Have you tried out The Consensus Method? Have you used competing methods that I can learn from? Feel free to share your experience with me directly at [email protected]

Meet the Author

Eran U

Eran Udassin is a passionate product manager currently leading the R&D team for Rivr as VP Product and Operations. A creative thinker and apt problem solver, Eran enjoys sharing knowledge and diving into new strategic and technological subject areas.
