Prioritization of work items following
the Kano method


Since 2017 the idea behind IBM Watson® Knowledge Catalog has been to combine all of IBM's relevant data governance offerings within one service. In 2021 we started to add one of the most important capabilities: data quality. Our product team had identified several priorities for 2022, but we had no idea where to start or what was most important for our customers. So together with our project managers we created a list of potential add-ons and decided to run a prioritization study with legacy customers, customers of the latest version of Watson® Knowledge Catalog, and prospects.

FEB. 2022 | 5 MIN. READ

AUTHOR: Robin Auer, User Research Lead, IBM Data and AI


Kano studies are feature prioritization surveys that provide quantitative and qualitative data to determine where efforts should be focused. The result of such a study is a recommended prioritization for the short, mid, and long term. We started with a list of 19 capabilities, each shown in the survey with a title, a description, and a screenshot from the tool to make clear what we were referring to when, for example, talking about the “ability to assign DQ dimensions to rule definitions and rules”.

All features were then sorted into the following six categories based on the answers and answer combinations:

  • Must-have: Simply expected by our users. If we don’t have them, the product will be considered incomplete or simply a pain to use.
  • Performer: The more we provide of this function, the more satisfied our users become.
  • Attractive: Unexpected features which cause a positive reaction.
  • Indifferent: Presence or absence doesn’t make a real difference.
  • Reverse: Users are clearly not interested in the feature and perhaps actually want the opposite.
  • Questionable: Conflicting responses.

The grouping was based on the participants' answers to a pair of questions about each feature, supplemented by an importance rating and an open comment field:

  • Functional question: How would you feel about having this feature?
  • Dysfunctional question: How would you feel about not having this feature?
  • Importance rating on a scale from 1 to 5 (most important).
  • An open response for comments, concerns, etc.

The evaluation table (shown below) combines the functional and dysfunctional answers in its rows and columns to get to one of the Kano categories mentioned above. Every answer pair leads to one of the categories.
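The table itself isn't reproduced here, but the mapping can be illustrated in code. The sketch below encodes the commonly published Kano evaluation table as a lookup, assuming the usual five answer options (the exact wording and cell assignments in our survey may differ):

```python
# Kano evaluation table as a lookup: (functional, dysfunctional) -> category.
# Answer options and cell assignments follow the commonly published table;
# a given survey's labels may differ.
ANSWERS = ["like", "expect", "neutral", "live with", "dislike"]

EVALUATION = {
    # functional answer: categories for dysfunctional answers in ANSWERS order
    "like":      ["Questionable", "Attractive", "Attractive", "Attractive", "Performer"],
    "expect":    ["Reverse", "Indifferent", "Indifferent", "Indifferent", "Must-have"],
    "neutral":   ["Reverse", "Indifferent", "Indifferent", "Indifferent", "Must-have"],
    "live with": ["Reverse", "Indifferent", "Indifferent", "Indifferent", "Must-have"],
    "dislike":   ["Reverse", "Reverse", "Reverse", "Reverse", "Questionable"],
}

def categorize(functional: str, dysfunctional: str) -> str:
    """Map one respondent's answer pair to a Kano category."""
    return EVALUATION[functional][ANSWERS.index(dysfunctional)]

print(categorize("like", "dislike"))    # -> Performer
print(categorize("expect", "dislike"))  # -> Must-have
print(categorize("like", "expect"))     # -> Attractive
```

For example, a respondent who would like having a feature and would dislike not having it lands in the Performer cell; one who merely expects it but would dislike its absence lands in Must-have.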


Kano results can be interpreted in different ways through:

  1. Continuous analysis
  2. Discrete analysis
  3. Importance score
  4. Qualitative data analysis

We decided to gather the importance rating as well as qualitative feedback for each of the features, and we used the continuous analysis method to analyze the survey responses. This means each answer option is translated to a numerical value on a satisfaction-potential scale from -2 to 4: the bigger the number, the more the answer reflects how much the customer wants the feature. The scores lead to the categorization in the graph. To finally prioritize the functions, we sorted all relevant ones into a graph with four quadrants. These quadrants refer to the evaluation table presented before and behave as follows for prioritization:

  • Indifferent functions make no difference and should be deprioritized.
  • Must-haves are expected by users and should all be included.
  • The more performers we include, the better.
  • Attractives are unexpected delights, so we should include some of them in our next steps to differentiate ourselves from competitors.
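The continuous scoring described above can be sketched as follows. The per-answer values here are the commonly used mappings onto the -2 to 4 scale, and the quadrant boundary at the axis midpoint is an assumption; the article doesn't list its exact values:

```python
from statistics import mean

# Continuous Kano analysis: translate each answer to a number on the
# satisfaction-potential scale (-2 .. 4), then average per feature.
# These per-answer values are the commonly used mappings, not necessarily
# the exact ones used in the study.
FUNCTIONAL_SCORE = {"like": 4, "expect": 2, "neutral": 0, "live with": -1, "dislike": -2}
DYSFUNCTIONAL_SCORE = {"like": -2, "expect": -1, "neutral": 0, "live with": 2, "dislike": 4}

def score_feature(responses):
    """Average one feature's answer pairs into an (x, y) point.

    responses: list of (functional_answer, dysfunctional_answer) tuples.
    x = mean dysfunctional score, y = mean functional score.
    """
    x = mean(DYSFUNCTIONAL_SCORE[d] for _, d in responses)
    y = mean(FUNCTIONAL_SCORE[f] for f, _ in responses)
    return x, y

def quadrant(x, y, boundary=1.0):
    """Assign a quadrant; the boundary at the axis midpoint is an assumption."""
    if y >= boundary:
        return "Performer" if x >= boundary else "Attractive"
    return "Must-have" if x >= boundary else "Indifferent"

# Example: three respondents who (mostly) like having the feature
# and would dislike its absence.
pairs = [("like", "dislike"), ("like", "dislike"), ("expect", "dislike")]
x, y = score_feature(pairs)
print(quadrant(x, y))  # -> Performer
```

Each feature thus becomes a single point in the four-quadrant graph, with the quadrant determining its recommended priority.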

Key Insights

We ended up with no must-haves. Most of the capabilities were ranked as either performers or attractives. 11 out of 19 features were determined to be performers: the more of these we prioritize, the more users will enjoy and benefit from them. 7 out of 19 features were considered attractive: including these would delight users with capabilities they don't yet expect, and they should be prioritized after must-haves and performers. 1 out of 19 features was identified as indifferent, so it will have a low impact and should not be prioritized over others.


Based on the satisfaction and importance ratings, we identified the top three features in each group, flagged the indifferent capability as one we should not focus on, and finally recommended prioritizing the most important performers plus a few attractive ones. We used a prioritization pyramid (shown below) to explain this to our stakeholders. The graphic divides the capabilities into the functional “must-haves”, the reliable and usable capabilities that are “performers”, and, last but not least, the delightful capabilities that are “attractive”.

All Participants (N=33)


Through the participation of IBM employees, we were also able to compare the internal IBM view with that of our customers. Out of 19 functionalities, there were 5 where the IBM view differed strongly from the customers' view. Such analysis helps us reveal where our internal assumptions are wrong.

IBM employees (N=5)



Because of this research activity, we were able to make the final call on how to structure our 2022 roadmap for migrating capabilities from our legacy offering to the latest version of IBM Watson® Knowledge Catalog. Not only was this very important in making the right decision based on user needs, it also created a lot of trust between user researchers and project managers. We were able to help make the right decisions in a complicated situation. Such decisions are often bets, and user input makes those bets more comfortable.


After the research study, we took a closer look at individual functionalities and started talking to customers again. We wanted to understand what could be improved during migration. After all, we don't want to repeat old mistakes.