NBA (Next Best Action)

How NBA works

The system considers multiple factors simultaneously: who the customer is, what they’ve done previously, what they’re contacting about now, what similar customers needed in similar situations, and what outcomes you’re trying to achieve.

Then it calculates which action is most likely to produce the desired result. If the goal is resolution, it suggests the solution that worked most often for this type of issue. If the goal is retention, it recommends the offer this customer segment responds to best. If the goal is upsell, it identifies which product they’re most likely to want based on their profile and behaviour.

The recommendation appears on the agent’s screen in real time. “Suggest troubleshooting step A first.” “Offer discount tier 2.” “Transfer to specialist team.” The agent can follow the recommendation or override it based on their judgment of the specific situation.
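The selection step described above can be sketched as a simple lookup over historical outcomes. This is a minimal illustration, not any vendor’s actual algorithm; all names (segments, issues, actions) are hypothetical, and real systems weigh far more context than this.

```python
from collections import defaultdict

# Hypothetical outcome history: (segment, issue, action) -> [successes, attempts].
history = defaultdict(lambda: [0, 0])

def record_outcome(segment, issue, action, succeeded):
    """Log whether an action produced the desired result for this situation."""
    stats = history[(segment, issue, action)]
    if succeeded:
        stats[0] += 1
    stats[1] += 1

def next_best_action(segment, issue, candidates):
    """Recommend the candidate action with the best historical success rate."""
    def rate(action):
        successes, attempts = history[(segment, issue, action)]
        return successes / attempts if attempts else 0.0
    return max(candidates, key=rate)
```

With a few outcomes recorded, `next_best_action("premium", "billing", [...])` returns whichever retention offer has worked most often for that segment and issue, which is the core of the “what worked previously” logic.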

Why NBA matters

Consistency improves. Without NBA, outcomes depend heavily on which agent someone gets. One agent knows the best troubleshooting sequence through experience. Another hasn’t encountered this issue before and tries things randomly. NBA gives both agents access to what works, reducing the performance gap between experienced and new staff.

Decisions happen faster. Instead of agents thinking through options or searching for what to try next, the recommendation appears immediately. This speeds up handling without sacrificing quality, because the suggestion comes from analysis of what worked previously.

Outcomes get better. When recommendations are based on what drove success in thousands of previous interactions, they’re more likely to work than individual agents guessing. First contact resolution improves because agents try the right solution first rather than working through possibilities.

New agents perform sooner. Someone three weeks into the role can deliver outcomes close to someone three years in because they’re guided by what the experienced population does. The learning curve flattens when institutional knowledge becomes accessible through recommendations rather than locked in people’s heads.

Common NBA uses in contact centres

Retention decisions when customers want to cancel. The system analyses this customer’s value, their cancellation reason, what offers similar customers accepted, and recommends which retention offer to make. Not every customer gets the same offer. The recommendation reflects individual likelihood to stay based on their specific situation.

Troubleshooting sequences for technical support. Instead of agents working through steps randomly or following generic scripts, NBA suggests which diagnostic step to try first based on the symptoms described and what typically resolves this issue fastest.

Product recommendations during sales or upsell conversations. Based on what this customer already has, what they’ve shown interest in, and what similar customers bought, the system suggests which product to mention. This feels less pushy than scripted upselling because recommendations are actually relevant.

Escalation timing by recommending when to involve specialists or managers. Some situations resolve better with frontline handling. Others need expertise or authority immediately. NBA helps agents recognise when to escalate versus when to persist with their own resolution attempts.

Follow-up actions after interactions end. Should this customer get a satisfaction survey? A callback to confirm resolution? An email with additional information? NBA recommends appropriate follow-up based on interaction type, outcome, and customer preferences.
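The follow-up case above is the simplest of these uses and can be sketched as a rules table keyed by interaction type and outcome, with customer preferences as an override. The rule keys and action names here are invented for illustration.

```python
# Hypothetical follow-up rules: (interaction type, outcome) -> follow-up action.
FOLLOW_UPS = {
    ("support", "resolved"): "satisfaction_survey",
    ("support", "unresolved"): "callback",
    ("sales", "declined"): "info_email",
}

def recommend_follow_up(interaction_type, outcome, preferences=None):
    """Suggest a follow-up action, respecting customer contact preferences."""
    action = FOLLOW_UPS.get((interaction_type, outcome))
    if preferences and action in preferences.get("opted_out", []):
        return None  # the customer has opted out of this contact type
    return action
```

In practice the table would be learned from outcome data rather than hand-written, but the shape of the decision is the same.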

NBA versus scripting

Traditional scripts tell everyone to do the same thing in the same order. “First ask for account number. Then verify with date of birth. Then ask how you can help.” This works when interactions are predictable and one-size-fits-all makes sense.

NBA adapts recommendations to the individual situation. Two customers calling about the same issue might get different recommended solutions because their contexts differ – one has basic service, the other premium. One just renewed their contract, the other is out of commitment period. NBA accounts for these differences where scripts treat everyone identically.

Scripts provide certainty but inflexibility. NBA provides guidance but requires judgment. The right approach depends on your operation, your customer base, and your agents’ capability to apply recommendations sensibly rather than following them blindly.

What makes NBA recommendations good

Based on outcomes, not assumptions. Recommendations should come from data showing what worked previously, not what someone thinks should work. If data shows offering discount A retains customers better than discount B for this situation, recommend A regardless of what policy manuals say should happen.

Contextually appropriate. The same customer issue might warrant different recommendations depending on customer value, time of day, agent skill level, or current service level. Good NBA accounts for context rather than treating every instance identically.

Explainable to agents. When the system recommends something, agents should understand why. “This solution resolved 85% of similar cases in the past week” helps agents trust and apply recommendations. Mystery suggestions they’re told to follow without understanding erode confidence.
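One way to make explanations routine is to carry the supporting evidence with every recommendation, so the “why” is never an afterthought. A minimal sketch, with an assumed schema:

```python
from dataclasses import dataclass

@dataclass
class Recommendation:
    action: str
    success_rate: float  # share of similar cases this action resolved
    sample_size: int     # how many similar cases the rate is based on

    def explanation(self):
        """Human-readable reason an agent can see alongside the suggestion."""
        return (f"{self.action}: resolved {self.success_rate:.0%} of "
                f"{self.sample_size} similar cases")
```

A recommendation without a populated `success_rate` and `sample_size` is exactly the kind of mystery suggestion the paragraph above warns against.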

Overrideable without penalty. Agents sometimes know things the system doesn’t. The customer already tried the recommended solution. The situation has nuance the data doesn’t capture. Agents must be able to override recommendations when their judgment says otherwise, without being punished for not following the system.

Updated regularly. What worked last quarter might not work now. Recommendations need refreshing as customer behaviour changes, products evolve, or policies update. Stale NBA is worse than no NBA because it confidently suggests outdated actions.

Where NBA goes wrong

Optimising for the wrong outcome. If NBA optimises for shortest handle time, it’ll recommend quick fixes that don’t solve problems. If it optimises for highest upsell value, it’ll recommend pushy offers that damage satisfaction. The system delivers what you measure, so measure what matters.

Ignoring agent judgment. When NBA becomes mandatory rather than advisory, agents lose the ability to adapt to situations the system can’t account for. They become system followers rather than problem solvers. This works fine until they encounter something NBA wasn’t designed for, and then they’re stuck.

Recommendations based on bad data. If the underlying data is messy, biased, or incomplete, recommendations will be wrong. Garbage in, garbage out. NBA built on poor interaction analytics produces confident bad advice rather than uncertain good advice.

Too many recommendations. Some implementations bombard agents with suggestions for every possible decision. What to say next. Which article to reference. Whether to transfer. What offer to make. Agents drown in recommendations and start ignoring all of them because it’s overwhelming.

No feedback loop. If the system never learns whether its recommendations worked, it keeps suggesting the same things regardless of actual outcomes. Good NBA tracks whether agents followed recommendations and what happened, then adjusts future suggestions based on results.
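A feedback loop also needs the system to keep testing alternatives rather than locking onto one answer forever. One common technique for this is epsilon-greedy selection, sketched below as an illustration; production systems typically use more sophisticated approaches.

```python
import random

def choose_action(actions, success_rates, epsilon=0.1):
    """Mostly recommend the best-known action, but occasionally try an
    alternative so outcome data keeps accumulating for every option."""
    if random.random() < epsilon:
        return random.choice(actions)  # explore: sample a random action
    # exploit: recommend the action with the best known success rate
    return max(actions, key=lambda a: success_rates.get(a, 0.0))
```

With `epsilon` at zero this degrades into exactly the failure mode described above: the system repeats its current best guess and never learns whether anything else works better.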

The agent experience

NBA done well feels like having an experienced colleague suggesting helpful next steps. “Based on what you’re seeing, try this first.” The agent remains in control but benefits from institutional knowledge they might lack individually.

NBA done badly feels like being nagged by software that doesn’t understand the situation. Irrelevant suggestions. Recommendations that ignore what the customer just said. Prompts to do things that don’t make sense. Agents learn to ignore it or work around it, and the technology becomes expensive clutter rather than genuine help.

The difference comes down to whether recommendations genuinely help agents achieve better outcomes with less effort, or whether they’re just another system to battle whilst trying to help customers.

Data requirements for effective NBA

NBA needs three things: customer data (who they are, what they’ve done, what they prefer), interaction data (what’s happening now, what’s been said, what’s been tried), and outcome data (what worked or failed in similar situations).

Without customer data, recommendations treat everyone identically. Without interaction data, recommendations ignore context. Without outcome data, recommendations are guesses rather than evidence-based suggestions.
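The dependency on all three inputs can be made explicit with a simple guard: if any input is empty, the system should flag that its suggestion is a guess rather than evidence. A minimal sketch, assuming dict-and-list inputs:

```python
def can_recommend(customer, interaction, outcomes):
    """Check the three NBA inputs are present; return (ok, missing names)."""
    missing = [name for name, data in
               [("customer", customer),
                ("interaction", interaction),
                ("outcomes", outcomes)]
               if not data]
    return (len(missing) == 0, missing)
```

A real implementation would check completeness and freshness of each input, not just presence, but the principle is the same: partial data produces partial value.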

The quality and completeness of this data determines NBA effectiveness. Partial data produces partial value. Wrong data produces wrong recommendations delivered confidently. Good NBA requires investment in data quality, not just NBA technology.

NBA and agent autonomy

The tension in NBA implementation is balancing guidance with autonomy. Too much guidance and agents become order-takers who cannot think independently. Too little and you’ve built expensive technology nobody uses because agents prefer their own judgment.

The sweet spot is recommendations that augment judgment rather than replace it. Agents see suggestions but decide whether they apply. They understand why recommendations exist but aren’t punished for overriding when situations warrant it. They’re guided but not controlled.

This requires trust both ways. Organisations must trust agents to apply recommendations sensibly. Agents must trust recommendations come from solid data and genuine insight rather than arbitrary system logic.

Measuring NBA impact

Track whether agents use recommendations and what happens when they do. If agents consistently ignore NBA, either the recommendations are poor or change management failed. If they follow recommendations but outcomes don’t improve, the underlying logic needs fixing.

Compare outcomes when recommendations are followed versus overridden. Sometimes agent judgment outperforms NBA and you learn where the system needs improvement. Sometimes NBA recommendations deliver better results and you identify agents who need coaching on when to trust the system.
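That comparison reduces to splitting interactions by whether the recommendation was followed and comparing resolution rates. A minimal sketch, assuming each interaction record carries `followed` and `resolved` flags:

```python
def compare_outcomes(interactions):
    """Average resolution rate when the NBA recommendation was followed
    versus overridden. Each record has 'followed' and 'resolved' booleans."""
    def rate(group):
        return sum(i["resolved"] for i in group) / len(group) if group else None
    followed = [i for i in interactions if i["followed"]]
    overridden = [i for i in interactions if not i["followed"]]
    return {"followed": rate(followed), "overridden": rate(overridden)}
```

If the overridden group consistently outperforms the followed group, that is the signal the recommendation logic needs fixing; the reverse points to coaching.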

Monitor first contact resolution, customer satisfaction, handle time, and whatever outcomes NBA is designed to improve. If these aren’t measurably better with NBA than without it, something’s wrong with implementation or the recommendations themselves.

Getting NBA right

Start with clear objectives about what outcomes NBA should improve. Resolution? Retention? Satisfaction? Handle time? You cannot optimise for everything simultaneously, so prioritise what matters most.

Build on solid data. Fix your interaction analytics and customer data quality before implementing NBA. Recommendations are only as good as the data driving them.

Involve agents in design and testing. They know what kind of guidance would help versus what would annoy them. Their feedback shapes NBA that gets used rather than ignored.

Start narrow and expand gradually. Prove NBA works for one high-volume scenario before rolling it out broadly. Learn what works, fix what doesn’t, then scale to other use cases.

Most importantly, make recommendations genuinely helpful. The test is simple: do agents feel NBA makes their job easier and outcomes better? If yes, you’ve built something valuable. If no, you’ve built technology for technology’s sake that’ll be ignored or resented whilst consuming budget and generating no value.

Your Contact Centre, Your Way

This is about you. Your customers, your team, and the service you want to deliver. If you’re ready to take your contact centre from good to extraordinary, get in touch today.