
The Kano Model: Prioritizing Features That Delight


Carlos Gonzalez de Villaumbrosia

Founder & CEO at Product School

December 28, 2025 - 19 min read

Updated: December 9, 2025 - 19 min read

Not all features are created equal. You could spend months building what you think customers want, only to launch and hear crickets. Meanwhile, a competitor adds one small feature, and users lose their minds with excitement.

When it comes time to prioritize, some features are ones customers simply expect, some they appreciate, and some make them fall in love with your product.

This is exactly what the Kano Model reveals. In this piece, you’ll see how the Kano Model helps you stop guessing which features to build next and start making strategic decisions that turn satisfied customers into devoted fans.

Product Prioritization Micro-Certification (PPC)™️

This free course introduces three essential frameworks and shows you how to apply them to real-world scenarios.

Enroll for free

What Is the Kano Model?


The Kano model is a framework for understanding how different product features impact customer satisfaction, so teams can prioritize what to build based on what will truly matter to users. 

In practice, Kano analysis helps product teams see:

  • Which features are basic expectations

  • Which drive satisfaction in a linear way 

  • Which actually delight people when they show up in a product

When people talk about the Kano method or Kano model analysis, they are usually talking about the same thing: a structured way to classify features based on how customers feel when those features are present or absent, then using that insight to make smarter product roadmap decisions.

Origin and core concept of the Kano model of customer satisfaction

The Kano model was created in the 1980s by Professor Noriaki Kano. He studied why some product improvements created loyal, happy customers while others barely moved the needle. 

His key observation was simple but powerful: customer satisfaction is not linear. 

Doubling a feature’s “performance” does not automatically double satisfaction. Some features just need to exist at a minimum level, others scale satisfaction as their quality or performance improves, and a few create disproportionate delight. To make this usable for product teams, Kano grouped customer expectations into categories like:

  • Basic or must-be attributes that users take for granted

  • Performance attributes where “more is better.”

  • Attractive or delightful attributes that pleasantly surprise users

  • Indifferent and reverse attributes that don’t help or can even hurt if overdone

This is the foundation of Kano analysis. Not all features are equal in the way they drive satisfaction, so they shouldn’t be treated as equal on the outcome-based roadmap.

How is the Kano Model used in Agile?

In Agile organizations, the Kano Model is used to categorize backlog items into must-haves, performance features, and delighters so you can decide what absolutely needs to make it into near-term sprints versus what can wait. 

It helps product managers and squads focus iterations on fixing basic expectations first, then improving core performance, and only then sprinkling in delight features that differentiate the product.

Why Kano analysis matters for product teams

If you’re working in product management, you already juggle multiple prioritization lenses. Think revenue, cost, risk, AI product strategy, technical constraints, and so on. The Kano method doesn’t replace those lenses. It adds a missing one: how a feature affects user satisfaction emotionally, not just functionally.

Used well, the Kano model analysis helps you:

  • Avoid shipping features that consume engineering time but leave users indifferent

  • Protect fundamentals that nobody will thank you for, but everyone will complain about if they break

  • Intentionally invest in a small number of delight features that change how users talk about your product

For product leadership, it is also a neat way to align stakeholders. Instead of arguing feature by feature, you can say “this is a must-have,” “this is a performance driver,” or “this is a delighter we’ll only do if we have capacity,” and attach that label to survey data rather than gut feel.

Another nuance that smart teams care about is that the attributes move. What is a delighter today often becomes a performance expectation tomorrow and a basic requirement later on. Think of mobile banking, in-app chat support, or dark mode. This dynamic aspect is built into the original Kano model and is one of the reasons it still feels relevant in modern product work. 

We’ll discuss this in more detail below.

Kano model analysis example in practice

At a high level, the Kano method is a structured way of asking customers how they feel about potential or existing features, then turning those answers into categories you can use for product prioritization.

The basic loop looks like this:

  • Pick a set of features or ideas to evaluate.

  • For each feature, ask customers two questions:

    • A “functional” question: how they feel if the product has that feature

    • A “dysfunctional” question: how they feel if the product does not have that feature

  • Have customers answer each question on a five-point scale such as “like,” “expect,” “neutral,” “can tolerate,” “dislike.”

  • Combine each pair of answers into a category using a standard evaluation table (for example, “I expect it” if present and “I dislike it” if absent usually means it’s a must-have), as sketched below.

  • Aggregate results across respondents to see which category dominates for each feature.
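To make that classification step concrete, here is a minimal sketch in Python. It assumes the classic five-point answer labels used above and a simplified version of the standard Kano evaluation table, so the exact mapping your team uses may differ slightly by source.

```python
# A simplified Kano evaluation table (illustrative, not the only accepted version):
# (functional_answer, dysfunctional_answer) -> category.
EVALUATION_TABLE = {
    ("like", "dislike"): "performance",
    ("like", "can tolerate"): "attractive",
    ("like", "neutral"): "attractive",
    ("like", "expect"): "attractive",
    ("expect", "dislike"): "must-be",
    ("neutral", "dislike"): "must-be",
    ("can tolerate", "dislike"): "must-be",
    ("dislike", "like"): "reverse",
    ("like", "like"): "questionable",
    ("dislike", "dislike"): "questionable",
}

def classify(functional: str, dysfunctional: str) -> str:
    """Map one respondent's functional/dysfunctional answer pair to a Kano category."""
    # Any pair not listed above is treated as indifferent in this simplified sketch.
    return EVALUATION_TABLE.get((functional, dysfunctional), "indifferent")

# Example: "I expect it" if present and "I dislike it" if absent -> must-be.
print(classify("expect", "dislike"))  # must-be
```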

From there, Kano model analysis usually feeds into your existing prioritization flow:

  • Must-have features become non-negotiables that need to reach a reliable baseline

  • Performance features become levers that can be tuned when the goal is to win on quality or speed

  • Attractive features become strategic “wow” moments sprinkled where they support product positioning

Some teams also compute simple satisfaction and dissatisfaction indices for each feature. This quantifies how much satisfaction is gained if a feature is implemented versus how much dissatisfaction is avoided. 

That gives product organizations a more numerical way to stack-rank options while staying rooted in user sentiment.

When to use the Kano model

The Kano model is most useful when you’re making bigger product direction calls, not micro-optimizations. 

In practice, teams get the most value from Kano early in product development or at key inflection points (prioritizing an AI MVP, reshaping a mature product, or preparing a major roadmap reset). It’s best done when you need to understand which features are non-negotiable, which steadily drive satisfaction, and which are worth betting on as true delighters.

It’s especially helpful when you:

  • Have a long backlog and limited engineering capacity, and want to avoid investing in “nice ideas” that won’t move satisfaction or NPS.

  • Want to tie roadmap decisions to customer sentiment rather than internal opinions, using lightweight surveys instead of heavy research programs.

  • Are evaluating new product opportunities or competitive gaps and need to distinguish table stakes from differentiators.

  • Work in Agile and want a customer-centric lens to feed into backlog refinement and sprint planning alongside RICE, MoSCoW, or WSJF.

By contrast, Kano is usually overkill for tiny UI tweaks or low-stakes decisions; in those cases, faster scoring frameworks are enough, and you can reserve Kano analysis for the moments where understanding emotional impact would genuinely change what you choose to ship.

Kano Model Categories (With Simple Examples)

Once product teams understand the basic idea behind the Kano model, the next step is to get comfortable with its categories. This is where Kano analysis gets practical. You stop treating every feature the same and start asking a better question: “If we ship this, what kind of satisfaction are we actually buying?”

At a high level, the Kano method splits features into a few groups:

  • Must-be (basic expectations)

  • One-dimensional (performance)

  • Attractive (delighters)

  • Indifferent and reverse (things that don’t help, or even hurt)

Let’s walk through each category with examples teams can recognize from real products.

1. Must-be (basic expectations) in Kano prioritization

Must-be features are the ones users rarely mention in interviews, but complain about instantly when they break. Think about:

  • A banking app that doesn’t lose the customer’s money

  • A SaaS product that loads reliably and saves work

  • A signup flow that doesn’t leak passwords or crash halfway through

When a must-be feature works, nobody throws a party. Satisfaction doesn’t really go up as investment in it increases. But if the baseline is missed, dissatisfaction spikes quickly.

From a Kano model analysis perspective, these are non-negotiables:

  • Teams don’t compete on them; they simply have to meet the bar.

  • No one expects thank-you emails for them, only the absence of angry ones.

  • Cutting corners here is almost always a bad trade, even if it feels efficient in the short term.

In roadmapping terms, must-be features are the foundation. You prioritize them not because they excite anyone, but because any crack at this level undermines everything else.

2. One-dimensional (performance) features

Performance features are where “more is better” in a fairly linear way. You feel these directly in key metrics and user feedback.

Common examples:

  • Page load time in a consumer app

  • Report generation speed in product analytics tools

  • Search accuracy and relevance

  • Storage limits or API rate limits in a developer product

If you invest in these, satisfaction usually rises proportionally. Faster, more accurate, more generous, more flexible: users feel it.

In the Kano model prioritization, these are often your competitive levers:

  • They translate nicely into product OKRs: faster by X%, more accurate by Y%.

  • You can benchmark against competitors and market expectations.

  • Stakeholders understand them intuitively and will happily argue about them in roadmap meetings.

The risk is that performance features can turn into a treadmill. You keep tuning them without ever stepping back to ask, “Is this still where the next unit of satisfaction comes from, or are we just polishing for marginal gains?”

3. Attractive (delighters) in the Kano diagram

Attractive features are the fun ones. Users don’t expect them, and they rarely ask for them explicitly. But when they appear, they create an outsized sense of delight.

Think of:

  • A SaaS tool that auto-imports data from a competitor in one click

  • A product that onboards with realistic demo data, so it feels useful on day one

  • A mobile app that works perfectly offline when teams didn’t even know they needed it

If these are absent, customers are not angry. But when they are present and well executed, satisfaction jumps sharply.

In Kano model analysis, delighters are strategic:

  • They’re powerful differentiators in crowded markets.

  • They give marketing and sales something memorable to talk about.

  • They shape the stories users tell other people about the product.

The nuance here is that attractive features are expensive if you treat them like decoration. For product management, the goal is not to add “cool stuff” everywhere, but to place a small number of delighters where they reinforce your core value proposition.

4. Indifferent and reverse attributes in the Kano chart

Indifferent attributes are features users simply don’t care about. Reverse attributes go a step further because they actually reduce satisfaction when you add them.

Indifferent looks like:

  • A long list of theme options in a B2B admin panel when users just want a clear default

  • A highly configurable dashboard for a segment that barely logs in

Reverse can look like:

  • Aggressive animations that slow down power users

  • Auto-playing videos in a product built for quiet office environments

  • Extra “security” prompts that feel like friction without real benefit

In Kano analysis, these are the quiet killers of focus and velocity:

  • Indifferent features waste engineering effort without moving satisfaction.

  • Reverse features actively undermine the experience for core users.

They’re also a reminder that “more” is not always better. The Kano method helps you say no not only to weak ideas but to ideas that may be good in isolation and still wrong for your audience or context.

How Kano categories shift over time

This is where the Kano framework gets interesting for product leaders. Categories are not static, and great teams plan for that movement.

What used to be a delighter can become a performance expectation, then a must-be:

  • Mobile banking used to be a wow moment. Now it’s table stakes.

  • Two-factor authentication started as a delighter, moved into performance territory, and in many industries is now a basic requirement.

  • Dark mode was once a pleasant surprise; today, in many tools, it’s almost assumed.

That means a one-off Kano survey is useful, but not enough. The real power of the Kano method comes when you:

  • Revisit key features as your market and user base mature

  • Watch how expectations shift across segments (new users vs. power users, SMB vs. enterprise)

  • Combine Kano insights with behavioral data to see whether “delight” on a survey translates into product adoption and user retention

In other words, Kano analysis is less about pinning features into boxes forever and more about tracking how customer expectations evolve and making sure your roadmap evolves with them.

How to Run a Kano Survey

Running a Kano survey sounds more complicated than it actually is. The nice part today is that AI tools can take a lot of the heavy lifting out of the process: from drafting feature statements to generating question wording and even cleaning up responses. 

Here’s a quick how-to for AI-native teams (or those looking to become one) tackling the Kano model with the use of generative AI tools.

Decide what you want Kano to tell you

Before any questions are written, it’s important to be clear on what decision this Kano model analysis should help inform.

Is the goal to prioritize a long backlog of ideas, decide what belongs in an MVP, or re-think a mature product’s roadmap? The answer changes which features are included and which users are asked.

This is a good moment for product managers to use ChatGPT or other generative tools. They can:

  • Paste an outcome-based roadmap or backlog into an AI tool and ask it to group ideas into themes

  • Ask it to surface “hidden” features implied by support tickets or user feedback pasted in

  • Get suggestions for which segments or personas should be targeted first

The team still decides what goes into the survey, but the process no longer starts from a blank page.

Write clear feature statements (with AI as your co-writer)

The biggest silent killer of a Kano survey is unclear feature wording. If users don’t fully understand what you mean, the categories you get back are noisy.

Here is a simple rule. Each feature statement should describe one clear outcome, in language your users would actually use. Not “AI-driven contextual journey orchestration,” but “The product automatically suggests the next best step based on what you did before.”

This is where AI tools can be extremely useful:

  • Draft rough feature ideas in shorthand, then ask AI to rewrite them in user-friendly language for a specific persona.

  • Paste real user quotes and ask AI to turn them into feature statements that mirror the customer’s own words.

  • Request multiple variations of the same feature statement and select the one that feels clearest for the team.

Once a set of feature statements is ready, run them back through AI and ask: “Where might users misunderstand this? What assumptions are being made?” AI will often point out jargon, ambiguity, or missing context that product teams are too close to see.

Ask functional and dysfunctional questions

The heart of the Kano method is the question pair you ask for each feature:

  • A functional question: “How would you feel if the product had this feature?”

  • A dysfunctional question: “How would you feel if the product did not have this feature?”

AI can help in a few ways here. It can generate several versions of the functional and dysfunctional questions so they feel natural in the product’s tone of voice.

AI can also assist with adapting the answer scale. For some audiences, the classic phrasing feels stiff, so a more conversational version that still maps clearly to the five options can work better—AI can draft those variants, and the team can sanity-check them against the standard Kano evaluation table.

The key is to keep the structure of Kano questions intact, while letting AI handle much of the microcopy work and localization.

Collect, classify, and learn fast

Once the following items are ready, the survey is sent to target users and responses are collected:

  • A list of clear feature statements

  • Functional and dysfunctional questions for each

  • A five-point answer scale that makes sense to the target audience.

On the analysis side, the classic Kano model analysis process still applies: combine each pair of answers, classify them into categories (must-be, performance, attractive, indifferent, reverse), and see which category dominates for each feature.

Here again, AI can speed things up without turning the process into a black box:

  • Teams can feed it a sample of response pairs and ask it to classify them using the evaluation table, then double-check the mapping once before running it on the full dataset.

  • AI can also summarize patterns by segment—for example, which features are must-be for power users but only performance drivers for new users.

  • If the survey includes open text fields (“Why did you choose this answer?”), AI can cluster those comments into themes and surface language that can be used in future feature statements.

Scoring, Indices, and Mapping of the Kano Model

Once you have your Kano survey responses, the next job is to turn them into something your product roadmap can use. This is where scoring, indices, and simple visuals come in, and, once again, AI can quietly cut a lot of manual work.

Turn responses into Kano categories

Each feature has two answers per respondent (functional and dysfunctional). You combine those two using the standard Kano evaluation table to get a category: must-be, performance, attractive, indifferent, or reverse.

In practice, you:

  • Map each pair of answers to a category using the table

  • Count how many times each category appears per feature

  • Assign the feature the category with the highest count

AI tools can help by doing the mechanical mapping for you. You feed it the evaluation table and a sample of response pairs, check that it classifies them correctly, then let it run over the full dataset. This saves you from building and debugging custom formulas for every new Kano analysis.
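As a rough illustration of that counting step, here is a minimal sketch assuming each respondent’s answer pair has already been mapped to a category; the feature names and responses are hypothetical.

```python
from collections import Counter

# Hypothetical input: one classified category per respondent for each feature,
# produced by an evaluation-table mapping like the one sketched earlier.
classified_responses = {
    "offline mode": ["attractive", "attractive", "indifferent", "performance"],
    "autosave": ["must-be", "must-be", "performance", "must-be"],
}

def dominant_category(categories: list[str]) -> str:
    """Assign a feature the category that appears most often across respondents."""
    return Counter(categories).most_common(1)[0][0]

for feature, categories in classified_responses.items():
    print(f"{feature}: {dominant_category(categories)} ({dict(Counter(categories))})")
```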

Use indices to rank features

Categories are helpful, but they still leave product teams with a long list of “performance” or “attractive” features. That’s where satisfaction and dissatisfaction indices come in.

For each feature, teams calculate:

  • A satisfaction index: how much satisfaction is gained if the feature is implemented

  • A dissatisfaction index: how much dissatisfaction is avoided by implementing it

Both are based on simple ratios of category counts (for example, attractive plus performance over all responses). The exact formulas vary slightly by source, but the idea is always the same: put a number on the emotional upside and downside of a feature so they can be compared more objectively.
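As one illustrative formulation (often attributed to Berger and colleagues), here is a short sketch of the two indices; treat the exact formulas as an assumption to check against whichever source your team follows.

```python
def kano_indices(counts: dict[str, int]) -> tuple[float, float]:
    """Compute satisfaction and dissatisfaction indices from per-feature category counts.

    One common formulation:
      satisfaction    = (attractive + performance) / (attractive + performance + must-be + indifferent)
      dissatisfaction = -(performance + must-be)   / (attractive + performance + must-be + indifferent)
    """
    a = counts.get("attractive", 0)
    p = counts.get("performance", 0)
    m = counts.get("must-be", 0)
    i = counts.get("indifferent", 0)
    total = a + p + m + i
    if total == 0:
        return 0.0, 0.0
    return (a + p) / total, -(p + m) / total

# Hypothetical counts for one feature across all respondents.
print(kano_indices({"attractive": 12, "performance": 6, "must-be": 2, "indifferent": 5}))
# -> roughly (0.72, -0.32): high upside if built, moderate downside if skipped.
```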

AI is useful here for two things: generating the actual formulas for a spreadsheet or BI tool, and checking the math. Aggregated counts can be pasted into an AI tool, asked to compute indices, and the results cross-checked against internal calculations.

Map it visually for your stakeholders

Finally, you want a view that makes sense in one slide.

A common approach is to put satisfaction on one axis and dissatisfaction on the other, then plot each feature as a point. Teams can instantly see:

  • Which features are highly satisfying and highly dissatisfying (strong candidates)

  • Which are only mildly impactful

  • Which look exciting on paper but don’t really move the needle
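Here is a minimal sketch of that two-axis view using matplotlib; the feature names and index values are hypothetical, and your own Kano chart may label or orient the axes differently.

```python
import matplotlib.pyplot as plt

# Hypothetical (satisfaction, dissatisfaction) indices per feature,
# computed as in the previous sketch.
features = {
    "offline mode": (0.72, -0.32),
    "autosave": (0.35, -0.85),
    "theme options": (0.20, -0.10),
}

fig, ax = plt.subplots()
for name, (satisfaction, dissatisfaction) in features.items():
    # Plot the absolute dissatisfaction so "up and to the right" means high impact.
    ax.scatter(satisfaction, abs(dissatisfaction))
    ax.annotate(name, (satisfaction, abs(dissatisfaction)),
                textcoords="offset points", xytext=(5, 5))

ax.set_xlabel("Satisfaction gained if implemented")
ax.set_ylabel("Dissatisfaction avoided if implemented")
ax.set_title("Kano indices per feature")
plt.show()
```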

Here, AI can help teams experiment with different visualizations. It can suggest what kind of Kano chart best matches the data, how to label axes in a way executives will understand, or even generate sample code for the product analytics environment.

From Kano Survey to Roadmap

Once the Kano results are ready, the goal is not to stare at categories in a spreadsheet but to turn them into practical roadmap decisions. 

Start by protecting the basics. Anything that came out as a must-be and is still shaky goes into your “fix and stabilize” bucket. Then look at your performance features and decide where improving speed, reliability, or power will meaningfully move user retention, NPS, or expansion. 

Finally, pick a small number of attractive features that genuinely fit your product positioning and user needs, rather than sprinkling “nice-to-haves” everywhere. AI can help here by clustering features around outcomes, suggesting themes like “onboarding friction” or “workflow automation” so you are not prioritizing one feature at a time, but groups of work that tell a coherent story.

Product Roadmap Template

Download our easy-to-use template to help you create your Product Roadmap.

Get the Template

Kano analysis should also sit alongside other prioritization methods 

Once teams know which features are must-be, performance, or attractive, they can still run them through other prioritization frameworks like RICE, MoSCoW, or a simple cost–benefit logic to factor in effort, risk, and revenue impact. 
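As a rough sketch of how the two lenses can sit side by side, here is a hypothetical backlog scored with the standard RICE formula while keeping each item’s Kano category visible; all names, fields, and numbers are illustrative only.

```python
from dataclasses import dataclass

@dataclass
class BacklogItem:
    name: str
    kano_category: str   # e.g. "must-be", "performance", "attractive"
    reach: float         # users affected per quarter (illustrative unit)
    impact: float        # e.g. 0.25 / 0.5 / 1 / 2 / 3
    confidence: float    # 0.0 to 1.0
    effort: float        # person-months

    def rice_score(self) -> float:
        # Standard RICE formula: (Reach x Impact x Confidence) / Effort.
        return (self.reach * self.impact * self.confidence) / self.effort

backlog = [
    BacklogItem("Stabilize autosave", "must-be", reach=5000, impact=2, confidence=0.9, effort=3),
    BacklogItem("Offline mode", "attractive", reach=1200, impact=3, confidence=0.6, effort=8),
]

# Sort by RICE, but keep the Kano category visible so must-be work is not crowded out.
for item in sorted(backlog, key=lambda i: i.rice_score(), reverse=True):
    print(f"{item.name:20s} {item.kano_category:12s} RICE = {item.rice_score():.1f}")
```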


AI is useful for stress-testing assumptions. Product teams can ask it to propose alternative RICE scores based on different inputs, flag hidden dependencies in the backlog, or surface similar features competitors have shipped and how they positioned them. The team keeps full control over the final call while gaining a faster, richer conversation.

The last step is closing the loop. After features ship, Kano-driven bets are validated with real usage data, support tickets, and qualitative feedback.

  • Did that “attractive” feature actually get used? 

  • Did stabilizing a must-be feature reduce churn or support volume? 

AI can help you mine product analytics and user comments for patterns much faster than manual review, but the discipline is the same. Update your understanding of what counts as basic, performance, or delightful as the market evolves, and feed those learnings into the next round of Kano model analysis.

The Future of Kano Analysis: Where AI Meets Customer Delight

The Kano model was revolutionary in the 1980s because it showed that customer satisfaction is emotional, not linear. But in today’s product landscape, AI is what makes Kano analysis powerful again. That’s why AI product managers are getting ahead.

Frank te Pas, Head of Product at Perplexity, said it best on The Product Podcast:

Every company should at least be an AI company. That's what turns your product into a future-proof product. AI becomes smarter. You want to be able to easily swap out parts of the process with it.

When the empathy of the Kano method is combined with the intelligence of AI, the roadmap becomes a reflection of how users’ needs are changing in real time.

Kano analysis provides the map, AI provides the compass, and together they help product teams navigate the messy, moving landscape of customer expectations.

Product Experimentation Micro-Certification (PEC)™️

The Product Experimentation Micro-Certification (PEC)™️ introduces you to the essentials of designing and running high-quality experiments.

Enroll now



Kano Model FAQs

What are the three customer wants in the Kano Model?

The three classic “customer wants” in the Kano Model are must-be (basic expectations that must exist), one-dimensional or performance (features where more is better), and attractive or delighters (unexpected features that create a disproportionate jump in satisfaction). 

Together, they describe whether a feature will simply prevent frustration, steadily increase satisfaction, or generate genuine excitement when users encounter it.


Why is it called the Kano Model?

It is called the Kano Model because it was developed in the 1980s by Professor Noriaki Kano, a Japanese researcher in the field of quality management and customer satisfaction. The framework is named after him in recognition of his work explaining why some product improvements dramatically increase satisfaction while others barely make a difference.
