
Agent Performance Metrics

Excerpted findings from our 2016 Agent Performance Final Report.

The contact center agent’s role has transformed dramatically over the past decade, as have the objectives and measures that operations rely on to drive individual and group performance. In today’s customer-centric organizations, agents are less likely to be held accountable for efficiency metrics over which they have little control and which can drive undesirable behaviors that negatively impact the service experience. We’ve seen more centers emphasizing voice of the customer feedback in their performance measurement lineup to better reflect the agents’ emerging status as brand ambassadors. But we wondered how many centers have revised their performance measurement practices.

In July-August 2016, Contact Center Pipeline partnered with Transera, a BroadSoft Company, to conduct a survey to examine how centers are measuring individual agent and group performance, and to gather leaders’ insights on how well those KPIs align with overall business objectives. A total of 249 participants provided input on KPIs as well as some of the processes and tools in place to drive agent and group performance. The following is an excerpt of our findings.

Which Agent KPIs Are Contact Centers Tracking?

Contact centers traditionally have applied the rule of thumb that “if you can’t measure it, you can’t manage it”—and certainly there is no lack of metrics that can be easily accessed from today’s contact center systems. Figure 1 shows the most common KPIs tracked by survey participants overall, broken out by single-site vs. multisite operations. In addition to the KPIs listed in the chart, tech support centers also tracked escalations, accuracy and availability. Customer service centers also reported measuring order accuracy, average handle time, speed to answer and utilization rate, while sales centers added sales per hour, abandonment rate, attendance and number of callbacks.

Agent KPI Effectiveness

Every agent performance KPI is undoubtedly influenced by other metrics and, therefore, presents only one piece of the performance puzzle. For instance, customer satisfaction ratings are affected by how long the caller is on hold, how well the agent handles the call, whether or not the customer’s issue is resolved on the first contact, among other things. Thus, to be effective, KPIs should be evaluated in conjunction with other key metrics.
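
As a rough illustration of reading one KPI alongside the metrics that influence it, the sketch below summarizes a customer satisfaction rating together with hold time and first-contact resolution for a single agent. The field names, sample records and interpretation are hypothetical and are not part of the survey findings.

```python
# Illustrative only: field names (csat, hold_seconds, fcr) and the sample
# records are hypothetical. The point is to read a CSAT score in the context
# of the related metrics that influence it, not to define a scoring formula.
from statistics import mean

interactions = [
    {"agent": "A101", "csat": 3, "hold_seconds": 310, "fcr": False},
    {"agent": "A101", "csat": 5, "hold_seconds": 45,  "fcr": True},
    {"agent": "A102", "csat": 4, "hold_seconds": 60,  "fcr": True},
]

def kpi_context(agent_id, records):
    """Summarize CSAT together with the metrics that commonly drive it."""
    rows = [r for r in records if r["agent"] == agent_id]
    return {
        "avg_csat": round(mean(r["csat"] for r in rows), 2),
        "avg_hold_seconds": round(mean(r["hold_seconds"] for r in rows), 1),
        "fcr_rate": sum(r["fcr"] for r in rows) / len(rows),
    }

print(kpi_context("A101", interactions))
# A low avg_csat paired with a long avg_hold_seconds points to a queueing or
# staffing issue rather than an agent-skill issue -- the reason KPIs should
# be evaluated together rather than in isolation.
```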

That being said, we asked participants to rate the effectiveness of the individual KPIs that they use to determine agent performance. Over two-thirds (70%) cited call quality as a “very effective” metric for determining agent performance, with one-quarter (25%) rating it as “somewhat effective.” Over half (54%) of centers said that customer satisfaction survey ratings were “very effective,” with another 37% saying that they were “somewhat effective.” FCR was considered “very effective” by 51% of participants, and 41% found schedule adherence to be a “very effective” measure. (See Figure 2.)

Another widely tracked KPI, length of calls, was found to be “very effective” by 28% of participants, with over half (55%) stating that it was “somewhat effective.” And while 51% of centers found number of calls to be only “somewhat effective” as an agent KPI, another 32% said it was “very effective.”

Less than a quarter (23%) of survey participants reported having a sales function within their center. However, of those that provided sales, almost half (48%) pointed to number of sales or upsells as a “somewhat effective” KPI, with another 41% rating it as “very effective.”

Call quality (67%), customer satisfaction survey ratings (53%) and FCR (47%) were rated by single-site centers as “very effective” KPIs. The majority (73%) of multiple-site operations ranked call quality as a “very effective” metric, and 59% of multisite centers also found FCR to be “very effective” for determining performance. Customer satisfaction survey ratings (56%) rounded out the Top 3 “very effective” KPIs cited by multiple sites.

Aligning Contact Center KPIs with Business Goals

Over the last decade, contact centers have largely moved away from the aggressive cost-cutting tactics employed during the Great Recession to focus more on optimizing operational efficiency and performance. Today, centers are poised on the brink of the next evolution into high-value customer-facing operations capable of delivering key insights to drive customer loyalty, innovation and business growth.

But as the contact center’s role begins to coincide more closely with the organization’s strategic objectives for market and financial success, are frontline performance metrics delivering the desired business outcomes? We asked survey participants how well the KPIs they use to measure individual and group performance map to the true business goals of the company.

Sixty-one percent (61%) of participating centers felt that customer satisfaction survey ratings “mapped well” to their companies’ business goals, as did call quality (59%) and FCR (49%). (See Figure 3.)

When considering their KPIs as a whole, contact centers seemed less than confident that their choice of metrics aligned well with the organization’s strategic objectives. Only 42% of service centers and 41% of tech support centers rated their total KPIs as “mapping well” to the true business goals of the company. About half of service centers (51%) and tech support centers (52%) felt that their total KPIs “somewhat mapped.” For sales centers, only 37% believed that their KPIs, as a total, aligned effectively with the organization’s business goals, and 58% of leaders classified their KPIs as “somewhat mapped” to overall goals.

Tracking Individual and Group Performance

The majority of participants (90%) indicated that they use multiple contact center systems or applications to track and measure performance (e.g., ACD, QM, CRM, WFM and OE).

We asked the participants who indicated that they used multiple systems or applications for tracking agent performance how the data from these systems is being integrated. Forty-four percent (44%) said that they manually integrate the data using Excel or some other spreadsheet software. Another 29% rely on automated reports that integrate data from multiple systems, while 18% use a custom-developed data warehouse or business intelligence (BI) application; and 9% said that the data is not integrated at all.
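
As a hedged illustration of what the automated approaches might look like, the sketch below joins per-agent records from two hypothetical exports (an ACD feed and a quality monitoring feed) into a single scorecard using pandas. The systems, column names and values are assumptions for illustration, not a description of any participant's environment.

```python
# Hypothetical sketch of automated integration: joining per-agent data pulled
# from two separate systems (here, an ACD export and a QM export) into one
# view, replacing manual copy/paste between spreadsheet tabs. Column names
# and values are invented for illustration.
import pandas as pd

acd = pd.DataFrame({
    "agent_id": ["A101", "A102", "A103"],
    "calls_handled": [78, 64, 91],
    "avg_handle_time_sec": [412, 388, 455],
})

qm = pd.DataFrame({
    "agent_id": ["A101", "A102"],          # A103 has no evaluations yet
    "quality_score": [92.5, 88.0],
})

# A left join keeps every agent from the ACD feed even if QM data is missing.
scorecard = acd.merge(qm, on="agent_id", how="left")
print(scorecard)
```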

As one might expect, a manual integration process was more common in single-site centers than multisite centers: 47% of single-site centers reported using a manual approach to integrate data from multiple systems vs. 38% of multisite operations. Multisite operations were more likely to have a custom-developed data warehouse or BI application—23% of multisite centers relied on custom solutions vs. 15% of single-site centers.

Only 23% of multisite centers rated their system, application or process for tracking and measuring agent performance as “very effective” vs. 29% of single-site operations. For the most part, 64% of multisite centers felt that their system was “somewhat effective,” and 56% of single sites said the same. Fifteen percent (15%) of single-site centers said that their process was “not very effective,” compared to 12% of multisite centers.

Agent KPI Tracking and Measurement Frequency

The regular tracking and measurement of KPIs helps to ensure a strong alignment between agent performance and strategic business objectives. More frequent tracking allows managers to quickly identify and fix situations before they escalate, to pinpoint individual improvement opportunities and provide timely, relevant feedback to agents, and to recognize agents for exceptional performance.
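
To make the benefit of frequent tracking concrete, here is a minimal sketch, assuming invented metric names and coaching thresholds, of a daily check that flags agents whose numbers have drifted below target so supervisors can follow up the same day rather than at month end.

```python
# Minimal daily-check sketch. Metric names (quality_score, adherence_pct) and
# thresholds are hypothetical; a real center would feed this from its QM and
# WFM systems.
DAILY_THRESHOLDS = {"quality_score": 85.0, "adherence_pct": 90.0}

daily_stats = {
    "A101": {"quality_score": 91.0, "adherence_pct": 96.5},
    "A102": {"quality_score": 82.5, "adherence_pct": 93.0},
    "A103": {"quality_score": 88.0, "adherence_pct": 87.0},
}

def flag_for_coaching(stats, thresholds):
    """Return {agent: [metrics below threshold]} for same-day follow-up."""
    flags = {}
    for agent, metrics in stats.items():
        misses = [name for name, floor in thresholds.items() if metrics[name] < floor]
        if misses:
            flags[agent] = misses
    return flags

print(flag_for_coaching(daily_stats, DAILY_THRESHOLDS))
# {'A102': ['quality_score'], 'A103': ['adherence_pct']}
```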

Overall, less than one-quarter (22%) of participants tracked and measured agent performance on a real-time basis; 37% reported tracking performance on a daily basis; 22% did so weekly; and 16% said they track performance on a monthly basis. Figure 4 shows how single-site centers compared to multiple sites in their tracking frequency.

Almost a third (31%) of contact centers that reported using automated or custom systems for integrating agent performance data use a real-time tracking process compared with 11% of those that said they either manually integrate performance data or don’t integrate the data at all.

Call center size plays a role in how frequently performance is tracked and measured. Larger centers (500+ agents) reported tracking agent performance more frequently, with one-quarter (25%) using a real-time process for tracking and measuring performance, while over half (53%) used a daily process. Thirteen percent (13%) said they do so weekly, and 10% do so on a monthly basis.

Small centers (<50 agents), on the other hand, tracked performance less frequently. Only 18% tracked performance on a real-time basis. Just over a quarter (26%) tracked on a daily basis, while 30% did so weekly. Another 22% said they tracked performance monthly, and 5% said quarterly.

Only 19% of survey participants overall felt that their process for tracking and measuring agent performance was “very easy.” More than half (51%) described it as “somewhat easy,” while just over a quarter (27%) said that it was “somewhat difficult.” More single-site centers (24%) than multisite centers (10%) described their process as “very easy,” and 5% of multiple site centers said the process was “very difficult.”

Expand Your Focus to Deliver Strategic Value

Just as the agent’s role has evolved in recent years, the contact center itself has undergone a dramatic transformation. Once largely tasked with cost-cutting and driving efficiency, today’s center is uniquely positioned to take on a more strategic role in customer-driven organizations. To deliver more value to the business, center leaders will need to look beyond the individual interactions that take place within their operations and focus on enterprisewide activities that impact the customer experience.

The full report includes survey findings on call routing strategies and methods for collaboration between agents and SMEs.

To download the report, visit blog.contactcenterpipeline.com/2016aps.

Susan Hash

Susan Hash served as Editorial Director of Contact Center Pipeline magazine and the Pipeline blog from 2009-2021. She is a veteran business journalist with over 30 years of specialized experience writing about customer care and contact centers.
Twitter: @susanhash
