Antipattern: Would you recommend it?

Team Coffea was excited to launch a new feature to get feedback from their product’s end users: a little box asking “Would you recommend our product to a friend or colleague?” with a rating from 0 to 10. They were concerned that only a few customers were actively using the product, so, armed with their NPS (Net Promoter Score) metric, they expected to understand how to improve the product and make it truly profitable. After 300 answers, they discovered their NPS was 25. They were happy, since 25 is considered a reasonably strong score. But then they asked themselves, “So what now?” They didn’t know what to do with the metric.

Net Promoter Score was created by Frederick F. Reichheld in 2003 with the purpose of replacing complex customer satisfaction surveys with a single question, so companies could actually put consumer survey results to use and focus employees on the task of stimulating growth. The base question is “How likely is it that you would recommend [company X] to a friend or colleague?”, answered on a scale from 0 to 10.
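
For reference, the score itself is derived from those 0 to 10 answers: the percentage of promoters (9 or 10) minus the percentage of detractors (0 through 6). A minimal sketch in Python, with a hypothetical list of ratings, could look like this:

    def nps(scores):
        """Standard NPS: percentage of promoters (9-10) minus percentage of detractors (0-6)."""
        promoters = sum(1 for s in scores if s >= 9)
        detractors = sum(1 for s in scores if s <= 6)
        return round(100 * (promoters - detractors) / len(scores))

    # Hypothetical example: 4 promoters and 2 detractors out of 8 answers gives an NPS of 25.
    # nps([10, 9, 9, 8, 7, 6, 3, 10]) -> 25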

NPS is a quantitative score, based on recommendation. The problem is that we don’t learn anything from NPS, so we can’t take direct action from it. Of course, we can reflect on how to make our product more worth recommending, but we don’t need the score for that. Moreover, if our customers really do recommend the product, we don’t need to measure NPS. We’ll see it in practice: people talking about it in glowing terms, and our customer base growing. So publishing an NPS is in reality just a vanity metric.

There’s another problem – NPS doesn’t explain what a score of 4 means versus a score of 8. The scores are subjective. A 7 doesn’t really mean “neutral” – it is simply a standalone number. A 9 for me can be a 5 for you, for the same level of recommendation. Yes, we can still add an explanation for each score or range. Furthermore, the question is a supposition built on “would.” It is like asking potential customers if they would buy a car. They might say “of course!” However, when it is time to really buy it – to hand cold, hard cash across the counter – they think twice and don’t buy it.

The same applies to Customer Satisfaction Score (CSAT) and Customer Effort Score (CES). Even though they are more specific to the value proposition being offered, through a feature or a service, they are still not directly actionable metrics.

Pattern: Customer Impact Score

Customer Impact Score (CIS) is a short, focused survey built around measuring, learning, and acting. We can learn a lot from it, as it brings insights about what customers need from the product or service offered. It is directly actionable.

There are two questions, one quantitative and one qualitative – one more than NPS. So we may think “fewer customers will answer a two-question survey.” Probably. However, a hundred answers to it are far more actionable than thousands of answers to NPS. Even with a million answers to NPS, so what? We’ll still struggle to take meaningful action from them.

How to measure it

So, how does CIS work? It is quite simple – we ask our customers the questions below:

Question 1: How much of your need was met by this service/feature?

  • (_) Totally (100%)
  • (_) Mostly (66.6%)
  • (_) Partially (33.3%)
  • (_) None (0%)

Question 2: Why did you give this rating?

  • [a few blank lines for answering]

Customer Impact Score is the average of how much of each customer’s need was met, across all answers: Totally (3) counts as 100%, Mostly (2) as 66.6%, Partially (1) as 33.3%, and None (0) as 0%.
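
A minimal sketch of the calculation in Python, assuming each answer is stored as the label chosen in Question 1 (names and data are illustrative):

    # Mapping from the Question 1 labels to the percentages defined above.
    WEIGHTS = {"Totally": 100.0, "Mostly": 66.6, "Partially": 33.3, "None": 0.0}

    def customer_impact_score(answers):
        """CIS: average percentage of need met across all answers."""
        return sum(WEIGHTS[a] for a in answers) / len(answers)

    # Hypothetical example: one answer of each kind averages to roughly 50%.
    # customer_impact_score(["Totally", "Mostly", "Partially", "None"]) -> 49.975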

Benefits:

  • We can validate if the customer’s goal is aligned with the goal of the product/service.
  • We can segment customers based on their answers, taking actions specific to each segment (even right at the moment of the answer).
  • Not every dissatisfied customer needs to be serviced fully. Some customers represent segments we are not interested in focusing on, and we can validate that from the qualitative answer.