Comparing brand experiences between consumers is difficult.
Apples with Apples?
Research groups are useful but expensive and time-consuming, and they tend to focus on the product rather than the experience. Net Promoter Score (NPS) programmes are excellent for Staff, Customers and Suits to ‘get’, and personally I am a fan, having implemented many a successful programme. However, how do you resolve the individual differences that may lead me to rate an experience an 8 while someone else gives the same experience a 7? Is it possible to calibrate what an 8 means, to minimise getting the wrong steer from a satisfaction programme’s results?
@SHRINIVASDHARMA has some interesting thoughts via the MOSTER system, which asks the customer up front: tell me what your ‘scale’ is. That scale is then applied on a weighted basis to their answers. I’m no mathematician, but this intuitively feels like it’s got legs. I’m interested in any other thoughts from the customer satisfaction (CSAT) community on this one.
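To make the idea concrete, here is a minimal sketch of what such a calibration might look like. The MOSTER system's actual maths isn't described here, so this is purely an illustrative assumption: each respondent states the lowest and highest scores they would realistically ever give, and their raw answers are min-max rescaled from that personal range onto a common 0–10 scale. The function name and parameters are hypothetical.

```python
def calibrate(score, personal_min, personal_max, target_min=0.0, target_max=10.0):
    """Rescale a respondent's raw score from their stated personal
    range onto a common target scale (simple min-max normalisation).

    This is an illustrative sketch, not the MOSTER weighting itself.
    """
    if personal_max <= personal_min:
        raise ValueError("personal_max must be greater than personal_min")
    fraction = (score - personal_min) / (personal_max - personal_min)
    return target_min + fraction * (target_max - target_min)


# Respondent A says they never score below 5 or above 9,
# so their 8 is three-quarters of the way up their own scale.
print(calibrate(8, 5, 9))    # → 7.5

# Respondent B uses the full 0-10 range, so their 7 is unchanged.
print(calibrate(7, 0, 10))   # → 7.0
```

Under this sketch, A's 8 and B's 7 land close together on the common scale, which is exactly the "is my 8 your 7?" problem the calibration is meant to address.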