The contact center industry has been performing what it calls “quality management” for decades. Yet the customer experience has not gotten demonstrably better, according to many analysts and surveys. Maybe that’s because the process isn’t actually a quality management process at all.
Quality management (QM) emerged almost 100 years ago in the manufacturing sector. The goal of QM was to ensure uniformity and adherence to specifications in manufactured goods. Effective QM programs relied upon statistical sampling at high confidence levels. In this way, management could be certain that all the machines producing parts and all the laborers assembling those parts into finished products were conforming to the very rigid specifications in place, and that the finished products were identical.
In practice, QM works as follows. A specification for a widget is produced. It might look something like this:
- Made from oak.
- 7.25 inches long by 3.75 inches square.
- Has two 3/8-inch holes drilled at 3 inches and 4 inches as measured from the bottom.
Now suppose we have a manufacturing plant with 20 machines producing these widgets and that each machine needs to be precisely “tooled” to produce the widgets as specified. Further suppose that each machine can produce 70 widgets per day and that all 20 run five days a week. Thus, in one month, each machine will produce 1,400 widgets (70 widgets a day times 20 working days). How can we be certain that there are no variations among the machines?
That’s what statistical sampling is about. There are formulas to determine how many samples should be studied before you can conclude with some level of certainty that all the machines are producing widgets to specification.
The formula is:
n = (N × Z² × .25) / {[d² × (N − 1)] + (Z² × .25)}
In this formula, “n” is the sample size required; “N” is the size of the population under study; and “d” is the margin of error that corresponds to the desired confidence level. Typically, good surveys use a 95% confidence level, so “d” would be equal to .05. Sometimes a confidence level of 90% is good enough to draw defensible conclusions, and “d” would be equal to .1. Finally, “Z” is the number of standard-deviation units of the sample distribution that corresponds to the desired confidence level. “Z” equates to 1.6449 when the confidence level is 90% and to 1.96 when the confidence level is 95%.
Let’s use a 90% confidence level and plug in the population of 1,400 widgets produced by each machine in a month (70 widgets a day times 20 working days).
Solving the formula:
Sample size = (1,400 × 1.6449² × .25) / {[.1² × (1,400 − 1)] + (1.6449² × .25)}
Sample size ≈ 64.6
We would need to draw 65 randomly selected widgets from each machine every month and examine them carefully for conformance with the specification in order to be confident that each machine was producing identical widgets.
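The sample-size arithmetic above can be sketched as a small Python helper. The function name and the round-up behavior are our own choices for illustration, not part of any standard formula library:

```python
import math

def required_sample_size(population: int, z: float, d: float) -> int:
    """Minimum sample size for a finite population.

    population: N, the number of items produced
    z: standard-deviation units for the desired confidence level
       (1.6449 for 90%, 1.96 for 95%)
    d: margin of error (.1 and .05 in the text's examples)
    """
    numerator = population * z**2 * 0.25
    denominator = d**2 * (population - 1) + z**2 * 0.25
    # Round up: a fractional widget can't be inspected
    return math.ceil(numerator / denominator)

# 1,400 widgets per machine per month at a 90% confidence level
print(required_sample_size(1400, z=1.6449, d=0.1))  # → 65
```

Note how sensitive the result is to the confidence level: raising it to 95% (z = 1.96, d = .05) pushes the required sample to roughly 302 widgets per machine.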
Quality Management in the Contact Center
The QM process worked marvels in manufacturing environments so it was only a matter of time until someone thought to apply the same methodology to services like handling customer calls. But a lot was lost in translation. What went wrong?
The problem is twofold. First, the typical contact center QM program samples somewhere between four and eight calls per agent per month. If we assume an agent can handle 70 calls per day, that means they will handle 1,400 in a month. Using the formula shown above, the contact center QM team would need to sample 65 calls per agent per month. No contact center comes anywhere near that number. With such a small sample size, the likelihood of actually uncovering material deficiencies is astonishingly slim.
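To see just how slim, consider a hypothetical agent whose calls exhibit a particular deficiency 5% of the time (the 5% rate is our illustrative assumption). If deficiencies occur independently across calls, the chance that a random sample of n calls contains at least one deficient call is 1 − (1 − p)ⁿ:

```python
def detection_probability(defect_rate: float, sample_size: int) -> float:
    """Chance that at least one deficient call lands in the sample,
    assuming deficiencies occur independently across calls."""
    return 1 - (1 - defect_rate) ** sample_size

# A typical QM sample vs. the statistically required one
print(round(detection_probability(0.05, 6), 2))   # → 0.26
print(round(detection_probability(0.05, 65), 2))  # → 0.96
```

At six calls a month, a deficiency that shows up on one call in twenty will most likely go unnoticed; at the statistically required 65, it will almost certainly be caught.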
Which brings us to the second problem—the specification. In manufacturing environments, the specification is detailed and extremely well-defined. In the contact center environment, the specification is embodied in the “call requirements” document. While there are parts of the typical call requirements document that are specific, such as whether the standard opening and closing were used, there are far too many elements of the specification that are purely judgmental and very much non-specific, such as “Used an effective tone of voice throughout the call” and “Took ownership of the call.”
Taken together, these two problems with contact center QM often relegate the process to an institutionalized form of nitpicking. It’s no wonder that some agents regard the process with a measure of disdain.
Bringing Customers into the Process
Contact centers have been recording interactions for decades and have always lamented their inability to listen to more of those conversations in order to better understand what customers want. Speech and text analytics, coupled with customer surveying, enable the contact center to understand its customers more completely than ever before.
Speech analytics technology, akin to magic, “listens” to every call and tags it with any number of categories. In so doing, quality management is totally transformed: instead of “wrapper information” like the call length, the queue it arrived in and the agent who handled the call, the QM specialist has “content information” consisting of what the call was actually about, the topics discussed and how the caller felt about the outcome.
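The distinction between wrapper and content information can be illustrated with two hypothetical record shapes; the field names and values here are ours, not any vendor’s schema:

```python
# Wrapper information: metadata any call recorder already captures
wrapper_info = {
    "call_id": "c-1042",
    "duration_sec": 312,
    "queue": "billing",
    "agent": "agent-17",
}

# Content information: categories a speech analytics engine
# derives from the conversation itself
content_info = {
    "call_id": "c-1042",
    "topics": ["late fee", "autopay enrollment"],
    "reason": "dispute charge",
    "sentiment": "negative -> neutral",
}

# A QM specialist can now select calls by what was said,
# not just by where they landed
is_dispute = content_info["reason"] == "dispute charge"
print(is_dispute)  # → True
```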
Soliciting opinions from people in a systematic manner dates back to at least the 19th century, and probably much earlier. Essentially there are two kinds of surveys. One is the post-interaction survey and the other is sometimes referred to as a panel survey or cohort survey.
Post-interaction surveys tend to be offered to customers who have called into the contact center. These simple surveys typically seek to learn whether the customer’s problem was resolved, yielding first-contact resolution information, and to gain some insight into the courtesy exhibited by the agent.
Panel surveys, on the other hand, are conducted with carefully selected participants. For example, panels can be assembled consisting of top customers or new customers. The surveys typically explore topics of interest in some depth and require a deeper level of participation than post-interaction surveys. The point of panel surveys is to acquire insight into customer perceptions and attitudes.
So, the contact center QM process isn’t really about quality. But it is about something just as important—branding.
What a list of “call requirements” actually represents is the way QM professionals want the interaction to be experienced. It is, in reality, branding the interaction in a rudimentary fashion.
Brand management is the analysis and planning of how a brand is perceived in the market. Brand managers define the tangible elements of the brand, such as the product itself, the look, the price and the packaging, as well as intangible elements like the experience the consumer has had with the brand and the relationship they have with it. Brand management aims to create an emotional connection between products, companies and their customers and constituents.
Developing a brand is a necessary and appropriate internal activity undertaken by each company in their respective markets. But it is the consumer who perceives the brand, and what the consumer thinks and believes trumps whatever company employees may think.
Transforming Quality Management to Brand Management
Speech analytics and customer surveying applications combine to transform the contact center quality management process into brand management. Tapping into the voice of the customer through direct surveying can reveal what customers think and feel about the brand and what they like and don’t like about the products and service.
Clearly, “listening” to every call removes the sampling problem from traditional contact center quality management. Moreover, it greatly enhances the QM team’s ability to uncover material deficiencies in skill and knowledge. And speech analytics can unburden a QM team from low-value, high-effort tasks, such as ensuring that identification routines and authority-required rote disclosures are faithfully rendered.
Brand management is what the misnamed QM team is really all about.