Why forecasting matters
Everything in contact centre planning depends on the forecast. Workforce planners use it to determine how many agents to schedule. Finance uses it to budget labour costs. Operations uses it to set service level expectations. Recruitment uses it to decide when to hire.
When forecasts are accurate, staffing matches demand. Service levels hit target. Costs stay predictable. Agents have manageable workload. Customers get answered quickly.
When forecasts are rubbish, chaos follows. You’re perpetually caught short or overstaffed. Service levels swing wildly between brilliant and disastrous depending on whether volume landed where you predicted. Agents burn out from constant firefighting or get demoralised sitting idle. Costs spiral because you’re either paying for overtime or wasting money on capacity you don’t need.
The difference between a contact centre that feels in control and one that feels constantly overwhelmed often comes down to forecast accuracy.
How forecast accuracy gets measured
Most contact centres track forecast accuracy using mean absolute percentage error (MAPE). That’s a fancy way of saying: on average, how far off were your predictions as a percentage of actuals?
Example: You forecast 1,000 calls. You got 900. Your error is 100 calls, which is roughly 11% of actual volume (100 ÷ 900). That’s your MAPE for that interval.
Calculate this across all your forecast intervals (typically 15- or 30-minute periods throughout the week) and average the results. That’s your overall forecast accuracy.
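As a rough sketch, here is that calculation in Python (all volumes illustrative):

```python
def mape(forecasts, actuals):
    """Mean absolute percentage error: average of |actual - forecast| / actual."""
    errors = [abs(a - f) / a for f, a in zip(forecasts, actuals) if a > 0]
    return 100 * sum(errors) / len(errors)

# Illustrative half-hour interval volumes for part of a day
forecast = [1000, 480, 520, 610]
actual = [900, 500, 500, 600]

print(f"MAPE: {mape(forecast, actual):.1f}%")            # average % error across intervals
print(f"Accuracy: {100 - mape(forecast, actual):.1f}%")  # accuracy = 100 minus MAPE
```

Note that intervals with zero actual volume are skipped here, because dividing by zero actuals is undefined; real workforce tools handle that edge case in various ways.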
Industry standards (where accuracy means 100% minus MAPE) consider:
- Above 95% accuracy: Excellent – you’re nailing it
- 90-95% accuracy: Good – within acceptable range for most operations
- 85-90% accuracy: Acceptable but needs improvement
- Below 85% accuracy: Problem territory – forecasting is actively hurting performance
These benchmarks apply to short-term forecasts (week-ahead or day-ahead). Long-term forecasts (months or quarters ahead) naturally have lower accuracy because more variables can change.
Why forecasting is brutally difficult
Predicting human behaviour is hard. Customers don’t contact you on a neat, predictable schedule. They respond to triggers you can’t always anticipate or control.
Unexpected external events
Marketing sends an email campaign without telling operations. Volume spikes 40% with no warning. Your forecast was perfect for normal Tuesday patterns but completely wrong for Tuesday-plus-surprise-campaign.
Weather changes. System outages. Competitor pricing changes. News stories. Product recalls. Social media complaints going viral. Regulatory changes. Any of these can instantly make your carefully constructed forecast obsolete.
The more volatile your environment, the harder forecasting becomes. Retail contact centres around Black Friday or Christmas face wildly unpredictable patterns. Utilities during storms deal with volume spikes that make normal forecasts meaningless.
Long-term pattern changes
Customer behaviour shifts gradually. More people prefer chat over phone. Self-service deflects certain query types. Product improvements reduce support contacts. These long-term trends slowly make historical patterns less predictive of future volume.
Forecast models based on last year’s data might miss that the overall shape of demand has changed. What worked for predicting last January’s volume won’t work for this January if customer channel preferences have shifted dramatically.
Seasonality within seasonality
Most contact centres understand weekly patterns (Monday is busy, Friday is quiet) and annual patterns (tax deadline spikes, holiday shopping peaks). But patterns exist within patterns.
First Monday of the month behaves differently than other Mondays because bills arrive. School holidays change weekday patterns. Bank holidays create unusual clustering. Paydays drive specific query types.
Forecast models need to account for these layered patterns or they’ll consistently miss on specific days that don’t fit the standard weekly template.
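One common way to capture these layered patterns is to feed the model calendar flags alongside day-of-week. A minimal sketch (the bank holiday dates are illustrative; a real system would load them from a calendar feed):

```python
from datetime import date, timedelta

# Illustrative UK bank holidays -- in practice, load from a maintained calendar
BANK_HOLIDAYS = {date(2024, 5, 6), date(2024, 5, 27)}

def calendar_flags(d: date) -> dict:
    """Flags a forecast model can use as features alongside day-of-week."""
    return {
        "day_of_week": d.strftime("%A"),
        "first_monday_of_month": d.weekday() == 0 and d.day <= 7,
        "is_bank_holiday": d in BANK_HOLIDAYS,
        "day_after_bank_holiday": (d - timedelta(days=1)) in BANK_HOLIDAYS,
        "is_month_end": (d + timedelta(days=1)).day == 1,
    }

print(calendar_flags(date(2024, 5, 6)))  # first Monday of May 2024, also a bank holiday
```

With flags like these, a model can learn that "Monday" and "first Monday of the month" are different animals, rather than averaging them together.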
Handle time is harder than volume
Volume forecasting gets most attention, but handle time forecasting matters just as much for staffing calculations. You can forecast volume perfectly and still get staffing wrong if handle times land differently than predicted.
Handle time varies by query type, agent experience, time of day, and system performance. When your CRM is slow, handle times stretch. When experienced agents are on shift, handle times shrink. When complex queries cluster together, averages shift.
Most contact centres forecast average handle time, but that hides dangerous variation. Your forecast might nail the average whilst completely missing that morning volume was simple queries (short handle time) and afternoon volume was complex complaints (long handle time).
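A quick illustration of how the daily average hides that split (all numbers hypothetical):

```python
# Hypothetical intervals: (calls, average handle time in seconds)
morning = (400, 240)    # mostly simple queries -- short handle time
afternoon = (200, 600)  # mostly complex complaints -- long handle time

calls = morning[0] + afternoon[0]
total_seconds = morning[0] * morning[1] + afternoon[0] * afternoon[1]
daily_aht = total_seconds / calls

print(f"Daily average AHT: {daily_aht:.0f}s")
# Staffing the afternoon off the daily average undercounts workload
# by (600 - daily_aht) seconds on every afternoon contact.
print(f"Afternoon shortfall per contact: {600 - daily_aht:.0f}s")
```

The daily average looks respectable, but anyone scheduled off it will be overstaffed in the morning and swamped in the afternoon.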
What kills forecast accuracy
Poor data quality
Forecast models learn from historical data. If that data is messy, incomplete, or incorrectly categorised, forecasts built on it will be wrong.
Common data problems:
- Disposition codes applied inconsistently so query types are miscategorised
- System glitches creating gaps in historical volume data
- Handle time calculations including or excluding after-call work inconsistently
- Abandons not properly tracked so you don’t know true demand
Garbage in, garbage out. Before worrying about forecast methodology, fix your data.
Ignoring business context
Pure statistical models find patterns in historical data but don’t understand business context. They don’t know about upcoming product launches, marketing campaigns, or policy changes that will shift customer behaviour.
The best forecasting combines statistical modelling with human judgment. Forecasters talk to marketing, operations, and product teams to understand what’s changing. They adjust baseline statistical forecasts with business intelligence about upcoming events.
Without this context, you’ll consistently get surprised by volume spikes that were entirely predictable if anyone had bothered to ask marketing what they were planning.
Not updating assumptions
Forecast models use assumptions about patterns and relationships. When reality changes but assumptions don’t, accuracy degrades gradually until you’re essentially guessing.
Maybe your model assumes 30% of volume arrives in the first hour. That was true last year. This year it’s shifted to 40% because customer behaviour changed. Your model keeps predicting the old pattern whilst reality has moved on.
Regular model validation and assumption updates are essential but often get skipped during busy periods. By the time you notice accuracy has tanked, you’ve been operating with broken forecasts for weeks.
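A lightweight drift check can catch this before accuracy tanks. This sketch compares the model's assumed first-hour share against recent actuals (the threshold and volumes are illustrative, not a standard):

```python
def first_hour_share(hourly_volumes):
    """Share of the day's volume arriving in the first hour."""
    return hourly_volumes[0] / sum(hourly_volumes)

ASSUMED_SHARE = 0.30     # what the model was built on
DRIFT_THRESHOLD = 0.05   # flag if reality has moved more than 5 points

# Illustrative recent days of hourly volumes (first hour, then the rest)
recent_days = [
    [400, 300, 200, 100],
    [420, 280, 190, 110],
]

avg_share = sum(first_hour_share(d) for d in recent_days) / len(recent_days)
if abs(avg_share - ASSUMED_SHARE) > DRIFT_THRESHOLD:
    print(f"Assumption drift: first-hour share is now {avg_share:.0%}, "
          f"model assumes {ASSUMED_SHARE:.0%}")
```

The same pattern applies to any assumption baked into the model: state it explicitly, recompute it from recent data on a schedule, and alert when the gap grows.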
Improving forecast accuracy
Start with better data
Invest in clean, consistent contact data before worrying about sophisticated forecast models. Ensure disposition coding is accurate. Track abandons properly. Capture true demand, not just answered contacts. Clean up outliers and system glitches in historical data.
Even simple forecast methods work reasonably well with good data. Sophisticated models fail spectacularly with rubbish data.
Account for calendar effects
Public holidays, paydays, month-end, school holidays – all create patterns that pure day-of-week models miss. Modern workforce optimisation tools include calendar awareness so forecasts automatically adjust for these effects.
Your forecast model should know that the first Monday after a bank holiday weekend behaves differently than a normal Monday.
Build in business intelligence
Create processes for marketing, product, and operations teams to flag upcoming events that might affect contact volume. Email campaigns, price changes, product launches, policy updates – anything that might drive customer contacts needs to feed into forecasting.
This doesn’t mean perfect prediction of every spike, but it prevents repeatedly getting blindsided by entirely predictable events.
Monitor and adjust in real-time
Your forecast is wrong the moment reality starts. The question is how wrong and whether you can adjust quickly enough to limit the damage.
Real-time monitoring compares actual volume to forecast throughout the day. When variance exceeds thresholds, escalate for decision-making. Can you pull agents from back-office work? Offer overtime? Adjust breaks? Push self-service messaging?
The faster you spot forecast misses, the more options you have to recover.
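The threshold check itself is simple; the value of a real-time system is in the escalation workflow around it. A minimal sketch (the 10% threshold is an illustrative choice, not a standard):

```python
def check_variance(actual, forecast, threshold_pct=10):
    """Compare interval actuals to forecast; return an alert string above threshold."""
    if forecast == 0:
        return None  # no meaningful percentage variance against a zero forecast
    variance = 100 * (actual - forecast) / forecast
    if abs(variance) > threshold_pct:
        direction = "over" if variance > 0 else "under"
        return f"{direction} forecast by {abs(variance):.0f}% - escalate"
    return None

print(check_variance(actual=560, forecast=450))  # well over threshold -> escalate
print(check_variance(actual=455, forecast=450))  # within threshold -> None
```

Run against each interval as actuals land, this gives intraday teams an early trigger for the recovery options above: pulling agents from back-office work, offering overtime, or adjusting breaks.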
Use multiple forecast models
Different forecast methods excel in different conditions. Simple moving averages work well for stable patterns. Time series models like ARIMA handle seasonality better. Machine learning approaches can spot complex relationships but need lots of clean data.
Many workforce planning systems run multiple models and either blend them or select the best performer based on recent accuracy. This reduces the risk of any single model failing catastrophically.
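Selecting the best performer can be as simple as scoring each model's recent MAPE and picking the lowest. A sketch with two hypothetical candidate models:

```python
def interval_mape(forecasts, actuals):
    """MAPE over a recent window of intervals."""
    errors = [abs(a - f) / a for f, a in zip(forecasts, actuals) if a > 0]
    return 100 * sum(errors) / len(errors)

def pick_best_model(model_forecasts, actuals):
    """Choose the model with the lowest MAPE over the recent window."""
    return min(model_forecasts, key=lambda m: interval_mape(model_forecasts[m], actuals))

# Illustrative recent intervals and two candidate models' forecasts for them
actuals = [500, 520, 480, 510]
candidates = {
    "moving_average": [490, 490, 490, 490],
    "arima": [505, 515, 485, 500],
}

print(pick_best_model(candidates, actuals))  # the model with the lower recent error
```

Blending (e.g. averaging the candidates' forecasts, weighted by recent accuracy) follows the same idea and is often more robust than winner-takes-all.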
Measure and learn
Track forecast accuracy consistently. Not just overall MAPE but accuracy by day of week, time of day, and query type. Where are you consistently wrong? Which patterns do your models miss?
This analysis identifies systematic biases. Maybe you always underforecast Monday mornings or always overforecast Friday afternoons. Once you know where you’re consistently wrong, you can adjust.
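Spotting direction matters here: MAPE alone can't distinguish over- from underforecasting, so bias analysis uses the signed error. A sketch with hypothetical records:

```python
from collections import defaultdict

# Hypothetical records: (day of week, forecast, actual)
records = [
    ("Mon", 900, 1050), ("Mon", 880, 1000),  # consistently underforecast
    ("Fri", 700, 640), ("Fri", 720, 650),    # consistently overforecast
]

by_day = defaultdict(list)
for day, f, a in records:
    by_day[day].append((a - f) / a)  # signed error: positive = underforecast

for day, errs in by_day.items():
    bias = 100 * sum(errs) / len(errs)
    print(f"{day}: mean signed error {bias:+.0f}%")
```

The same grouping works by time of day or query type: a persistent positive or negative mean signed error in any segment points at a systematic bias worth correcting.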
The cost of poor forecasting
Forecast errors cost money in both directions. Underforecast and you’re paying overtime, burning out agents, and delivering poor service that damages customer satisfaction and retention. Overforecast and you’re paying agents to sit idle, wasting labour budget you could have used elsewhere.
But the costs aren’t symmetrical. Most contact centres would rather slightly overforecast than underforecast because the service level and customer experience damage from being caught short hurts more than the wasted capacity from having too many agents.
This creates pressure to build buffer into forecasts. Your model says 450 calls, but you plan for 500 “just in case.” This improves service resilience but reduces forecasting discipline. Over time, forecasts become increasingly conservative and divorced from what models actually predict.
Forecast accuracy and AI
As contact centres adopt AI for deflection and agent support, forecasting becomes more complex. AI for customers changes both total volume (through deflection) and the mix of contacts reaching agents (simple queries disappear, complex ones remain).
This shifts historical patterns, making old data less predictive. Your forecast model trained on pre-AI data will consistently overestimate volume post-AI implementation. But the reduction won’t be uniform – some query types deflect effectively whilst others don’t.
Similar complications arise with AI agent assistance. Handle times might reduce but not consistently across all interaction types. Forecasters need to understand how AI adoption changes the fundamental patterns their models rely on.
The realistic target
Perfect forecast accuracy is impossible. Customer behaviour is inherently unpredictable, and unexpected events constantly disrupt patterns. The goal isn’t perfection – it’s being accurate enough that staffing decisions work most of the time.
For most contact centres, 90-95% forecast accuracy provides the foundation for reliable operations. You’ll still get surprised occasionally, but you’re right often enough that service levels, costs, and agent experience remain stable.
Below that, you’re essentially operating blind. Above that, you’re probably overinvesting in forecasting sophistication when simpler methods would work fine.
The key is recognising that forecasting is part science, part art, and part acceptance that sometimes you’ll just be wrong despite your best efforts. What matters is being right often enough and recovering quickly when you’re not.
Your Contact Centre, Your Way
This is about you. Your customers, your team, and the service you want to deliver. If you’re ready to take your contact centre from good to extraordinary, get in touch today.

