Why NPS became popular
Simplicity is the main appeal. One question. One number. Easy to understand, easy to track, easy to explain to executives. You don’t need statistical expertise to grasp that +40 is better than +20, or that dropping from +35 to +15 indicates problems.
The claim behind NPS is that it predicts business growth. Companies with higher NPS supposedly grow faster because Promoters drive referrals, repeat business, and positive word-of-mouth. Detractors do the opposite – they churn, complain, and warn others away.
This makes NPS attractive to leadership. Instead of tracking dozens of satisfaction metrics, you have one number that allegedly connects customer experience to business outcomes. If NPS goes up, growth should follow. If NPS drops, revenue suffers. Simple cause and effect.
The reality is messier. NPS correlates with some business outcomes, in some industries, some of the time. In others, the relationship is weak or non-existent. But the simplicity, and the promise of predicting growth, made NPS spread rapidly across industries – including contact centres.
How contact centres use NPS
Most commonly, NPS gets measured through post-interaction surveys. After a call ends or a chat closes, customers receive the question. Their response feeds into overall NPS tracking.
Some organisations measure relationship NPS – overall willingness to recommend the company – separately from transactional NPS measured after specific interactions. The theory is relationship NPS reflects brand perception whilst transactional NPS reflects service quality.
Contact centres track NPS by channel, team, individual agent, issue type, time period, and any other dimension imaginable. Dashboards show NPS trends. Targets get set. Bonuses might depend on hitting NPS goals. The metric becomes central to how performance gets measured and managed.
The challenge is what you do with the number once you have it. Knowing your NPS is +25 tells you roughly where you stand. It doesn’t tell you why customers feel that way or what to fix to improve it.
The scoring system controversy
The NPS scoring bands create odd distortions. A customer rating you 6 out of 10 is a Detractor, counted the same as someone rating you 0. But someone rating you 7 or 8 – which sounds pretty positive – is a Passive who doesn’t affect the score at all.
This means improving someone from a 5 to a 6, or from a 9 to a 10, has zero impact on NPS. Only crossing a threshold – from Detractor to Passive, or from Passive to Promoter – changes the score. You could dramatically improve customer sentiment across your entire base whilst NPS stays flat, because people moved from 3 to 6 and from 7 to 8 rather than crossing a band boundary.
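To make the threshold effect concrete, here is a minimal sketch of the standard NPS calculation – percentage of Promoters (9–10) minus percentage of Detractors (0–6). The two survey distributions are invented for illustration: every customer's rating rises, but nobody crosses a band boundary, so the score doesn't move.

```python
def nps(scores):
    """NPS = % Promoters (scores 9-10) minus % Detractors (scores 0-6)."""
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return round(100 * (promoters - detractors) / len(scores))

# Hypothetical survey of 100 customers, before and after a service change.
# Every rating improves, but all moves stay within their original band.
before = [3] * 30 + [7] * 30 + [9] * 40
after  = [6] * 30 + [8] * 30 + [10] * 40

print(nps(before))  # 10
print(nps(after))   # 10 -- identical score, despite broad improvement
```

Flip one of those moves to cross a boundary – say 6 to 7 – and the score jumps, which is exactly the distortion the bands create.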
The bands also ignore nuance. Someone rating you 9 is categorised identically to someone rating you 10, but those might represent meaningfully different levels of enthusiasm. Similarly, a 1 and a 6 are both Detractors despite very different levels of dissatisfaction.
Defenders argue the simplicity is the point – you don’t need nuance, you need clear categories. Critics say the arbitrary bands throw away useful information and create perverse incentives.
Gaming and manipulation
Like any metric tied to performance, NPS gets gamed. Agents learn to influence scores through survey manipulation rather than service improvement.
“I’ll send you a quick survey. If you could give me a 9 or 10, I’d really appreciate it.” This begging for scores inflates NPS without improving anything about the customer experience. It just trains customers that 9-10 is what they’re supposed to give regardless of actual satisfaction.
Some operations cherry-pick when to send surveys. Interactions that went well get surveyed. Difficult ones don’t. This produces artificially high NPS that doesn’t reflect overall service quality.
Others manipulate through timing. Send the survey immediately after resolution when the customer is relieved and happy. Don’t send it three days later when they might have realised the problem recurred or the solution was incomplete.
Survey design influences responses too. Framing matters. “How likely are you to recommend our award-winning service?” produces different responses than just asking the neutral question.
All of this gaming means published NPS might have little relationship to actual customer loyalty or satisfaction. You’re measuring survey manipulation effectiveness rather than service quality.
What NPS misses
Why people score the way they do gets lost. You know someone’s a Detractor, but you don’t know if they’re unhappy about product quality, price, service, or something completely different. The score provides no diagnostic value.
Context disappears. A 7 might mean “pretty good considering my expectations were low” or “disappointing given what I paid.” Same number, completely different meanings.
Segment differences get averaged out. Some customer segments might culturally rate everything lower whilst remaining perfectly loyal. Others rate generously but churn readily. NPS treats all scores identically regardless of what they mean for different groups.
Effort isn’t captured. A customer might give you a 10 because you eventually solved their problem, but it took three contacts over two weeks. High score, terrible experience. NPS misses this completely.
Trends within categories stay invisible. Your Promoter percentage might be stable whilst the distribution between 9s and 10s shifts dramatically. That might signal important changes, but NPS aggregation hides it.
Alternatives and complements
Customer Effort Score (CES) asks “How easy was it to resolve your issue?” This often predicts loyalty and repeat contact better than NPS because effort is what customers remember most vividly.
Customer Satisfaction (CSAT) asks “How satisfied were you with this interaction?” More direct than the recommendation proxy NPS uses. Someone might not recommend you but could be perfectly satisfied with the service received.
First Contact Resolution measures whether problems get solved without repeat contacts. This actually drives customer experience rather than just reflecting it.
Sentiment analysis of actual conversations reveals how customers feel based on what they say during interactions, not what they rate afterwards in surveys.
Most contact centres track multiple metrics rather than relying solely on NPS. The combination provides fuller understanding than any single number could.
Using NPS sensibly
If you’re going to use NPS, treat it as one indicator among several rather than the definitive measure of success.
Track trends, not absolute scores. Whether you’re at +30 or +50 matters less than whether you’re improving or declining. Movement indicates change. The specific number is less meaningful than the direction.
Segment thoughtfully. Don’t just track overall NPS. Break it down by customer type, issue type, channel, and team. Patterns at this level tell you where problems exist and where you’re succeeding.
Collect qualitative feedback alongside scores. Ask why someone scored the way they did. The open-ended responses provide the diagnostic value the number lacks. Someone rating you 6 because of product issues versus service failures needs different responses.
Don’t incentivise the score directly. When agents or teams get bonused on NPS, gaming becomes inevitable. Incentivise the behaviours that drive good experiences – first contact resolution, low effort, fast resolution – and let NPS reflect whether those behaviours work.
Avoid survey fatigue. Sending NPS surveys after every interaction annoys customers and tanks response rates. Sample intelligently rather than surveying everyone constantly.
Act on feedback, don’t just measure it. If you’re not using NPS data to drive improvements, stop collecting it. Asking customers for opinions then ignoring them damages trust more than not asking at all.
The relationship to contact centre performance
Good contact centre performance should drive higher NPS, but the relationship isn’t straightforward. You can deliver brilliant service on interactions that customers resent having to make in the first place. Your agents can be lovely whilst the underlying product or process is terrible.
NPS reflects the total customer experience, not just contact centre quality. When product issues, billing problems, or policy failures drive contacts, even perfect service delivery might produce Detractors. The customer’s unhappy about the situation, not the support, but the NPS score gets attributed to the contact centre.
This makes NPS problematic for measuring contact centre performance specifically. It picks up too much that contact centres don’t control. Better metrics focus on things the contact centre influences directly – resolution, effort, handle time, quality – whilst tracking NPS as a broader business indicator.
The verdict on NPS
NPS isn’t useless. It provides a simple, trackable indicator of customer sentiment. Tracked consistently over time, it shows whether things are improving or deteriorating. Broken down by segment, it reveals patterns worth investigating.
But it’s not magic. It doesn’t predict growth as reliably as proponents claim. It’s easily gamed. It lacks diagnostic value. It obscures nuance. And it’s not the only metric worth tracking.
Organisations that treat NPS as the sole measure of customer satisfaction are missing important signals. Those that ignore NPS entirely are throwing away a useful if imperfect indicator. The sensible approach sits between these extremes.
Use NPS as one input among several. Track it consistently. Segment it thoughtfully. Act on patterns it reveals. But don’t worship it as the ultimate truth about customer loyalty or dismiss it as meaningless theatre. Like most metrics, it’s useful when used properly and misleading when misapplied.
The goal is happy, loyal customers who keep buying and tell others good things about you. Whether that shows up as high NPS, low effort, good CSAT, or some combination matters less than whether you’re delivering the experience that drives the outcome. Don’t let the metric become more important than the reality it’s meant to measure.
Your Contact Centre, Your Way
This is about you. Your customers, your team, and the service you want to deliver. If you’re ready to take your contact centre from good to extraordinary, get in touch today.

