November 11, 2025
From boardroom skepticism to belief in NPS, discover how Frederick Reichheld’s advocacy metric transformed how businesses measure customer loyalty.
In the early 2010s, I stood in the boardroom of a mid-sized software company in the Dallas-Fort Worth area, presenting a proposal for a customer satisfaction program. The CEO, a no-nonsense Texan, listened intently before challenging me: “You’re a consultant; you’d say just about anything to sell this program.” I responded with conviction, explaining that I wasn’t selling; I was advocating for a proven approach that had delivered results across a wide variety of industries.
That approach was Net Promoter Score (NPS), a metric considered the state of the art thanks to Frederick Reichheld’s December 2003 Harvard Business Review article, The One Number You Need to Grow. Reichheld’s work transformed how companies measured customer satisfaction by introducing a more demanding standard: advocacy. Instead of asking whether customers were satisfied, NPS asked whether they would recommend a brand to others. It was a powerful shift, and I believed in it.
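For readers newer to the metric, the standard NPS calculation buckets responses to the 0-10 likelihood-to-recommend question into promoters (9-10), passives (7-8), and detractors (0-6), and subtracts the detractor percentage from the promoter percentage. A minimal Python sketch, using made-up responses purely for illustration:

```python
def net_promoter_score(responses):
    """Standard NPS: percent promoters (9-10) minus percent detractors (0-6) on a 0-10 scale."""
    promoters = sum(1 for r in responses if r >= 9)
    detractors = sum(1 for r in responses if r <= 6)
    return 100.0 * (promoters - detractors) / len(responses)

# Made-up responses for a single brand (not real survey data)
sample = [10, 9, 8, 8, 7, 6, 9, 10, 5, 7]
print(net_promoter_score(sample))  # 40% promoters - 20% detractors = 20.0
```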
NPS was revolutionary at the time, but like all tools, it needs refinement. After years of deploying and analyzing NPS programs, two fundamental flaws have become clear to me:
The standard NPS question, “How likely are you to recommend our brand?” is asked in isolation. It assumes the customer has no other options. In reality, customers constantly compare brands against the other options they have for a particular product or service. Without competitive context, NPS lacks the depth and relevance that come from comparing a brand against its competition.
The NPS question uses a 0-10 Likert scale, a roughly 90-year-old approach that doesn’t reflect how people naturally evaluate experiences or form attitudes. Consumers don’t assign numeric scores to their daily interactions, and their responses are influenced by mood, familiarity, and survey fatigue. The result? Vague, undifferentiated data that often fails to provide actionable insight.
These issues have led to widespread consumer fatigue. The NPS question is so ubiquitous that many respondents don’t even read it; they default to a score (in my case, an 8) and move on. The proliferation of online surveys has trained people to answer quickly and move to the next question, which undermines the very purpose of the metric. However carefully researchers word the question, if the answer is just a score, there is a good chance the respondent never read it. Consumers are asked about their likelihood to recommend brands so often that many either no longer bother to respond or simply give their baseline score.
At SCORE Metrics, we’ve developed a solution that addresses both flaws in the NPS question as it is asked today: HauckEye’s Sort & Score technique.
Sort: Respondents rank brands in the client’s category from most to least likely to recommend. This introduces competitive context and forces meaningful comparisons.
Score: Respondents allocate 100 points across the brands to reflect their advocacy priorities. This replaces the Likert scale with a more intuitive and differentiated measure.
The result? Richer data, clearer insights, and a more accurate reflection of customer sentiment. The real benefit is that our clients learn how they compare, in the minds of their customers, with the other competitors in their industry. That comparison makes the data considerably more actionable, bringing real value to the measure.
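To make the mechanics concrete, here is a minimal Python sketch of how a single Sort & Score response might be checked: the allocation must sum to 100 points, and no brand may receive more points than a brand ranked above it. The brand names and function are illustrative assumptions, not HauckEye’s actual implementation.

```python
def validate_sort_and_score(ranking, points):
    """Check one Sort & Score response.

    ranking: brands ordered from most to least likely to recommend.
    points:  dict mapping each brand to its share of 100 points.
    """
    if set(ranking) != set(points):
        raise ValueError("ranking and points must cover the same brands")
    if sum(points.values()) != 100:
        raise ValueError("points must sum to exactly 100")
    # A lower-ranked brand may never receive more points than a higher-ranked one.
    for higher, lower in zip(ranking, ranking[1:]):
        if points[lower] > points[higher]:
            raise ValueError(f"{lower} is ranked below {higher} but has more points")
    return True

# Illustrative response; brand names are placeholders, not client data
ranking = ["Brand A", "Brand B", "Brand C", "Brand D"]
points = {"Brand A": 50, "Brand B": 30, "Brand C": 20, "Brand D": 0}
print(validate_sort_and_score(ranking, points))  # True
```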
In April 2024, working with Dr. Brad Jones of YouGov, HauckEye ran an experiment testing the Sort & Score method for NPS and for an attribute panel. We compared a Likert scale design of the kind used in today’s NPS approach (control cell) to our Sort & Score design (test cell), which introduces competitive context without using a scale.
The test was designed as a true experiment: each recruited respondent was equally likely to land in either the control or the test cell. The most stringent pharmaceutical trials use the same randomized approach to confirm the viability of a new medication. Experimental design is the basis of the scientific method and the best way to compare two methodological approaches.
This is a simple test-and-control experimental design; the variable of interest is the amount of differentiation between attributes in each cell. Our hypothesis was that differentiation would be significantly greater in the test cell, demonstrating that this alternative approach to data collection produces more differentiated and more meaningful results than today’s NPS.
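As a rough illustration of that assignment step, the sketch below randomly splits recruited respondents into the two cells with equal probability; the respondent IDs and cell labels are hypothetical, not YouGov’s actual fielding mechanics.

```python
import random

def assign_cells(respondent_ids, seed=42):
    """Randomly assign each recruited respondent to the control or test cell
    with equal probability (a simple 50/50 randomization)."""
    rng = random.Random(seed)
    return {rid: rng.choice(["likert_control", "sort_score_test"]) for rid in respondent_ids}

cells = assign_cells(range(1, 1171))  # 1,170 hypothetical IDs, roughly the combined sample below
print(sum(1 for cell in cells.values() if cell == "likert_control"))  # roughly half the respondents
```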
| Interview statistics | Likert (control) | Sort & Score (test) |
| --- | --- | --- |
| US nationally representative sample of adults provided by YouGov | n = 589 | n = 581 |
| Mean survey length in minutes | 6.9 | 9.3 |
| YouGov survey satisfaction: “One last thing before you go. How was this survey experience?” (1 = Excellent, 2 = Good, 3 = Fair, 4 = Poor) | Mean = 2.5 | Mean = 1.5 |
Our first observation was that our Sort & Score approach outperformed the series of Likert scale questions on YouGov’s survey experience satisfaction scale. (We accept the irony of using a Likert scale here to compare the two respondent experiences.) On average, the Sort & Score version in the test cell takes more than two minutes longer to complete. The evidence suggests the test cell exercise takes more thought than a series of repetitive Likert scale questions, and that extra time spent thinking about the question makes a big difference.
Traditionally, the NPS question is only asked of the brand of interest. To provide a comparative format, we asked the same NPS question of key brands in the mobile telephone service industry: How likely are you to recommend each of these brands to a friend or colleague?
NPS Survey Question
| | Likert Scale (control) cell | Sort & Score (test) cell |
| --- | --- | --- |
| Question sequence | On a scale from 1-10, where 1 means not at all likely and 10 means very likely, how likely are you to recommend each of these brands to a friend or colleague? [PRESENT EACH BRAND ONE AT A TIME] | Q1. Please sort these brands from most likely to recommend to a friend or coworker to the least. Q2. Please allocate 100 points across all of these brands to show how likely you are to recommend each one. If your top brand is the only one you would recommend, give it 100 points and the rest 0. If you are equally likely to recommend brands, share the points between them. Just remember that you can’t allocate more points to a lower-ranked brand than to a higher-ranked one. |
The data looks very different when the respondent is engaged in a more complex exercise set in a competitive context. There is much more differentiation across attributes at the aggregate level than when you ask individual Likert scale questions.
This experimental comparison of the two approaches to understanding consumer preferences shows that the traditional Likert scale is less effective at differentiating brands than the Sort & Score design. With the traditional approach, you would normally ask only about your own brand, leaving you with a single data point. With Sort & Score, you ask about your brand and its competitors simultaneously, so the results come with a competitive context. The fact that the data is strongly differentiated by brand provides much more information than the traditional approach. The traditional Likert scale question is tired and dated, making an alternative approach essential to understanding your market.
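One simple way to quantify that differentiation claim is to compare the spread of brand-level means in each cell, normalized for the different scales. The numbers below are invented for illustration; they are not the study’s results.

```python
from statistics import pstdev

# Hypothetical brand-level means (not the actual study data)
likert_means = [6.8, 6.5, 6.3, 6.1]          # 0-10 likelihood-to-recommend scale
sort_score_means = [38.0, 27.0, 22.0, 13.0]  # points allocated out of 100

def relative_spread(means, scale_max):
    """Standard deviation of brand means divided by the scale maximum,
    so cells measured on different scales can be compared."""
    return pstdev(means) / scale_max

print(round(relative_spread(likert_means, 10), 3))       # ~0.026: brands barely separate
print(round(relative_spread(sort_score_means, 100), 3))  # ~0.090: brands separate clearly
```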
Sort & Score aligns with how people naturally think about brands. It encourages thoughtful evaluation and provides a relative measure of performance. Organizations gain a clearer picture of where they stand and how much they need to improve.
The traditional “likelihood to recommend” question provides a score for your brand. It says: congratulations, your customers on average give you a 7.3. But what is that score really telling you? Do you honestly know what to do, or what you can do, with that information?
But what if you found out that your customers were more likely to recommend two of your competitors over your brand? The SCORE Metrics approach, using HauckEye’s Sort & Score technique, tells you where your brand stands relative to your competition. And frankly, that is far more valuable than knowing you have improved your average NPS by 0.2 in the past quarter and 0.1 over the past year. I would rather measure myself against my competitors than against myself in a vacuum. I prefer game speed over practice any day.
NPS was a leap forward in its time, but it’s no longer sufficient on its own. Just as customer satisfaction research evolved into NPS, it’s time for NPS to evolve into something better. HauckEye’s Sort & Score offers a practical, effective way to do just that. If you have been struggling with a moribund NPS program, consider using a competitive measure like Sort & Score to improve your understanding of your competitive positioning within your business.