AI is in the news more than ever, thanks to ChatGPT and generative AI. Businesses across all industries are making plans (or have already made them) to invest strategically in AI. In the contact center space, 99% of companies recently surveyed by NICE say they plan to invest in AI analytics-driven quality management. This dramatic percentage reflects the fact that organizations know their current methods of assessing agent performance are subpar. More crucially, some of these outdated sampling practices have led to misinformed decision-making. Other studies agree: according to Aberdeen, 75% of executives want to make better use of their interaction data by using AI.

Let's take a closer look at how AI can improve how businesses gather and use data. Many contact centers rely on random sampling of interactions to evaluate agent performance and gain insights from customer interactions. They use these samples to identify areas of improvement for agents, ensure that agents are adhering to call scripts or regulatory requirements, and surface common issues that can inform training materials and targeted coaching. In some cases, these results even affect agent compensation.

Random sampling is not without its challenges, however. Chief among them is the problem of inadequate or unrepresentative sampling. NICE commissioned a survey of 400 senior decision-makers (supervisors, managers, directors, and VPs who work in customer care, customer service, or contact center departments with at least 200 agents, across all industries in the U.S. and the U.K.) to better understand the relationship between agent soft skills, customer satisfaction, and the potential of artificial intelligence (AI) to revolutionize how we evaluate agent performance. One of the survey's key focuses was the sampling practices of contact centers, as well as leaders' perception of how AI could improve those practices and their CX goals and outcomes.
Here’s what we learned.
Sampling is inadequate: Contact centers rely on skewed or random data to make critical decisions
Contact centers may not sample every interaction, but they often implement strategies to ensure a representative and meaningful sample. These can include random sampling, stratified sampling based on interaction types or customer segments, or sampling targeted at another specific evaluation purpose. The goal is to strike a balance between resource constraints, operational efficiency, and the ability to gain reliable insights that drive continuous improvement in customer service.

In reality, however, the sampling performed in most contact centers is far from representative; it encompasses a very small percentage of the interactions typically handled each month. According to our survey, the average contact center evaluates just 14 voice and digital interactions each month, and more than a quarter of centers currently evaluate fewer than 10. Given that all of the respondents work for contact centers with more than 200 agents, this is an insignificant sample size, statistically speaking, and not representative of agent performance.

In addition, nearly two-thirds of the contact center leaders we surveyed choose samples based on post-interaction customer satisfaction surveys, which are known to attract either highly satisfied or highly unsatisfied customers, further skewing the sampling process. CSAT surveys also tend to have a relatively low response rate, so they represent only a small sample of customers.

Other methods of selecting interactions for evaluation include:
Targeted based on speech analytics categories (55%)
An automatically selected random sample (51%)
Targeted based on specific data points (48%)
Targeted based on desktop analytics categories (42%)
Manually selected random samples (30%)
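To make the distinction between these selection strategies concrete, here is a minimal sketch of stratified sampling versus a single pooled random draw. The function name, the `channel` field, and the per-stratum quota are illustrative assumptions, not anything from the survey:

```python
import random

random.seed(42)  # fixed seed so the example is reproducible

def stratified_sample(interactions, key, per_stratum):
    """Draw an equal random sample from each stratum (e.g., each channel
    or interaction type) instead of one pooled random sample, so no
    segment is over- or under-represented."""
    strata = {}
    for item in interactions:
        strata.setdefault(key(item), []).append(item)
    sample = []
    for group in strata.values():
        sample.extend(random.sample(group, min(per_stratum, len(group))))
    return sample

# A month of interactions spread across three hypothetical channels.
calls = [{"id": i, "channel": random.choice(["voice", "chat", "email"])}
         for i in range(1000)]

picked = stratified_sample(calls, key=lambda c: c["channel"], per_stratum=5)
print(len(picked))  # 15: exactly 5 per channel
```

A plain `random.sample(calls, 15)` could easily land mostly on voice calls; the stratified version guarantees each channel is evaluated.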
Despite the lack of a statistically significant or holistic view, 85% of stakeholders use this data to make critical business decisions.
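A rough margin-of-error calculation shows why 14 evaluations per month is statistically insignificant. This is a back-of-the-envelope sketch using the standard normal-approximation formula for a proportion; the 5,000-interaction figure is an illustrative assumption, not a survey number:

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Approximate 95% margin of error for an estimated proportion
    (e.g., the share of calls meeting a quality standard) measured
    from a sample of n interactions, using the worst case p=0.5."""
    return z * math.sqrt(p * (1 - p) / n)

# The survey's average: 14 interactions evaluated per month.
print(f"n=14:    +/- {margin_of_error(14):.0%}")    # roughly +/- 26 points
# Evaluating every interaction shrinks the uncertainty dramatically,
# e.g. a hypothetical 5,000 interactions per month.
print(f"n=5000:  +/- {margin_of_error(5000):.0%}")  # roughly +/- 1 point
```

In other words, a quality score computed from 14 sampled interactions could be off by more than 25 points in either direction, which is why decisions built on it are so easy for agents to dispute.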
Teams don’t trust the process: Agents dispute performance feedback due to unrepresentative samples
The goal of any quality management program is to assess agent performance and provide feedback, but programs that rely on evaluators listening to a small random sample of calls and interpreting the results are inherently biased. This erodes confidence in the process. Left feeling that their evaluations are unfair, agents are often resistant to the feedback they receive. In fact, 41% of contact center leaders say one of their top quality management challenges is that agents don't buy into their current feedback. Other top challenges, according to our survey, are that evaluators use a sample size too small to represent overall agent performance (38%) and that random sampling is not representative of agent performance (38%).

When feedback is inconsistent and the sample size is too small, it's no surprise that agents won't accept the results and therefore won't buy into the program.
A path forward
The survey results clearly illustrate that stakeholders are struggling to improve quality management. AI can solve this problem by analyzing 100% of interactions to improve operational efficiency and deliver more positive experiences.

Learn more about what we uncovered in our survey about the current state of quality management, and why a growing number of contact center leaders are turning to AI to modernize their processes.