Since 2017, the idea behind IBM Watson® Knowledge Catalog has been to combine all of IBM's relevant data governance offerings within one service. In 2021 we started to add one of the most important capabilities: data quality. Our product team had identified several priorities for 2022, but we had no idea where to start or what mattered most to our customers. So, together with our project managers, we created a list of potential add-ons and decided to run a prioritization study with legacy customers, customers of the latest version of Watson® Knowledge Catalog, and prospects.
FEB. 2022 | 5 MIN. READ
AUTHOR: Robin Auer, User Research Lead, IBM Data and AI
Kano studies are feature prioritization surveys that provide quantitative and qualitative data for deciding where to focus effort. The result of such a study is a recommended prioritization for the short, mid, and long term. We started with a list of 19 capabilities, each shown in the survey with a title, a description, and a screenshot from the tool to make clear what we are referring to when we talk about, for example, the "ability to assign DQ dimensions to rule definitions and rules".
All features were then categorized into the following six categories based on the answers and their combinations:
The grouping was based on the participants' answers to a pair of questions about each feature:
The evaluation table (shown below) combines the functional and dysfunctional answers in its rows and columns to arrive at one of the Kano categories mentioned above; every answer pair maps to exactly one category.
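As a minimal sketch, the evaluation table can be expressed as a simple lookup. The answer labels and cell values below follow the standard Kano evaluation table, not necessarily the exact wording used in our survey:

```python
# Standard Kano evaluation table (a sketch; survey wording may differ).
# Rows = answer to the functional question, columns = answer to the
# dysfunctional question.
FUNCTIONAL = ["Like", "Expect", "Neutral", "Live with", "Dislike"]

# A = Attractive, P = Performance, M = Must-have,
# I = Indifferent, R = Reverse, Q = Questionable
TABLE = [
    # Like  Expect  Neutral  Live with  Dislike   (dysfunctional answer)
    ["Q",   "A",    "A",     "A",       "P"],  # functional: Like
    ["R",   "I",    "I",     "I",       "M"],  # functional: Expect
    ["R",   "I",    "I",     "I",       "M"],  # functional: Neutral
    ["R",   "I",    "I",     "I",       "M"],  # functional: Live with
    ["R",   "R",    "R",     "R",       "Q"],  # functional: Dislike
]

CATEGORY = {"A": "Attractive", "P": "Performance", "M": "Must-have",
            "I": "Indifferent", "R": "Reverse", "Q": "Questionable"}

def kano_category(functional: str, dysfunctional: str) -> str:
    """Map one respondent's answer pair to a Kano category."""
    row = FUNCTIONAL.index(functional)
    col = FUNCTIONAL.index(dysfunctional)
    return CATEGORY[TABLE[row][col]]
```

For example, a respondent who would like having a feature and would dislike its absence lands in the Performance category.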
Kano results can be interpreted in different ways through:
We decided to gather an importance rating as well as qualitative feedback for each of the features. Finally, we used the continuous analysis method to analyze the survey responses. This means each answer option is translated to a numerical value on a satisfaction-potential scale from -2 to 4; the bigger the number, the more the answer reflects how much the customer wants the feature. These scores drive the categorization in the graph: to prioritize the functions, all relevant ones are sorted into a graph with four quadrants. These quadrants refer to the evaluation table presented before and behave as follows for the prioritization:
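The continuous scoring and quadrant assignment described above can be sketched as follows. The -2 to 4 scale values follow the standard continuous Kano method; the threshold used to split the chart into quadrants is an assumption for illustration:

```python
from statistics import mean

# Satisfaction-potential scores (-2 to 4), per the continuous
# analysis method; answer labels are illustrative.
FUNCTIONAL_SCORE = {"Like": 4, "Expect": 2, "Neutral": 0,
                    "Live with": -1, "Dislike": -2}
DYSFUNCTIONAL_SCORE = {"Like": -2, "Expect": -1, "Neutral": 0,
                       "Live with": 2, "Dislike": 4}

def continuous_scores(answers):
    """answers: list of (functional, dysfunctional) answer pairs for one
    feature. Returns the feature's mean position on the quadrant chart."""
    func = mean(FUNCTIONAL_SCORE[f] for f, _ in answers)
    dysf = mean(DYSFUNCTIONAL_SCORE[d] for _, d in answers)
    return func, dysf

def quadrant(func, dysf, threshold=1.0):
    """Assign a quadrant; the midpoint threshold here is an assumption."""
    if func >= threshold:
        return "Performance" if dysf >= threshold else "Attractive"
    return "Must-have" if dysf >= threshold else "Indifferent"
```

A feature whose respondents mostly answer "Like" to the functional question and "Dislike" to the dysfunctional one would land in the Performance quadrant.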
We ended up with no must-haves. Most of the capabilities were ranked as either Performance or Attractive. 11 out of 19 features were determined to be Performers: the more of these we prioritize, the more users will enjoy and benefit from them. 7 out of 19 features were considered Attractive: including these would delight users with capabilities they don't yet expect. These features are to be prioritized after must-haves and Performers. 1 out of 19 features was identified as Indifferent, so it will have a low impact and should not be prioritized over others.
Based on the satisfaction and importance ratings, we identified the top three features for both groups. We flagged the Indifferent capability as one we should not focus on. Finally, we recommended prioritizing the most important Performers and a few Attractive ones. We used a prioritization pyramid (shown below) to explain this to our stakeholders. The graphic divides the capabilities into the functional must-haves, the reliable and usable capabilities that make up Performance, and, last but not least, the delightful capabilities that are Attractive.
Because IBM employees also participated, we were able to compare the internal view with that of our customers. For 5 of the 19 functionalities, the IBM view differed significantly from our customers'. Such analysis helps us reveal and correct our own misconceptions.
Thanks to this research activity, we were able to make the final call on how to structure our 2022 roadmap for migrating capabilities from our legacy offering to the latest version of IBM Watson® Knowledge Catalog. Not only was this very important for making the right decision based on user needs, it also created a lot of trust between user researchers and project managers. We were able to help make the right decisions in a complicated situation; such decisions are often bets that user input makes more comfortable.
After the research study, we took a closer look at individual functionalities and started talking to customers again. We wanted to understand what could be improved during migration; of course, we don't want to repeat old mistakes.