We’ve analyzed quality assurance (QA) from every possible angle: scoring models, calibration sessions, and compliance metrics. But how often do we look at QA through the eyes of the people it impacts most: the agents?
That’s my focus in this article. I’ll explore QA and training in the August issue, and how QA data can be turned into insights in the October issue.
QA isn’t about checking boxes or filling quotas. It’s about ensuring consistency and driving improvement. At its best, QA reinforces what’s working, pinpoints what isn’t, and creates a continuous cycle of learning that strengthens the agents, the team, and ultimately the customer experience (CX).
Yes, today’s technology can analyze tone, detect politeness, and highlight friendly exchanges between agent and caller.
That’s useful, but it’s not the whole picture.
I’ve reviewed calls that looked flawless according to AI sentiment, only to uncover coaching moments after listening to them myself.
On one call, for instance, the agent politely told a member, “If you place me on hold, I’ll have to disconnect.” The tone was calm, but the message missed the mark.
It reminded me that no algorithm (at the time of this writing) can replace human understanding: the empathy and discernment that come from real listening.
I say that not just as a call center leader, but as someone who started on the phones.
My First QA Experience
I’ll never forget my first bad QA score. It was 2008, and evaluations were still done on Excel spreadsheets.
The analyst handed me a printed copy of the spreadsheet - a number at the top, a list of misses below - and nothing else. My “errors” were vague: I had missed the greeting, didn’t educate the customer, and provided inaccurate information.
But here’s the thing: I had greeted the caller. I was confident that I had provided the correct information. And I desperately wanted to know what I had supposedly done wrong so I could fix it.
But the form didn’t tell me that. Instead, it only left me frustrated and full of questions.
Most of my colleagues shrugged off poor QA scores; at the time, they didn’t affect pay or bonuses, so why care?
But I couldn’t let it go. I needed clarity.
So, I approached the QA analyst - carefully, trying not to sound defensive - but eager to learn. That conversation evolved into a 30-minute session in which she walked me through the details of my call. It was my first month on the phones, and that meeting changed everything.
I later realized that the call had come through on the Indiana Care Select line, but I had greeted the caller with, “Thank you for calling Hoosier Healthwise.” That’s where I lost points, both for using the wrong greeting and for failing to provide the caller with the correct information.
Regarding the caller education section, I had overlooked informing the caller that the first 10 transportation rides are available to them without prior authorization.
Interestingly, a year or two later the QA form was updated, particularly once quality scores began to directly impact employee bonuses and pay increases.
From that day forward, I knew what “good” sounded like. I had a clear picture of what success looked like, and I worked hard to ensure every call was worthy of a 100% QA score. Not because of the score itself, but because I understood what mattered.
That early experience taught me a lesson I carry with me today. QA without context is just a number: we feel good when we receive a good score and frustrated when we get a bad one.
But QA with feedback, coaching, and a little humanity becomes a tool for growth: for the agent, the team, and the customer.
The Agent’s View of QA
If you spend time in call center communities on Reddit or Facebook - or sit down with your agents in a coaching session - a familiar theme emerges: the word “QA” rarely feels like it stands for Quality Assurance. To many, it feels more like “Quality Apprehension”.
Over the years, I’ve heard the same frustrations surface again and again.
- Nitpicking: Agents are penalized for minor checklist oversights that have little to no impact on the CX.
- Lack of context: QA scorecards often note deductions without clear explanations or examples of what went wrong.
- Punitive culture: QA feels like a tool for punishment rather than growth. A score to survive, not a conversation to improve.
On online forums, agents often describe QA as “the department that waits for you to mess up.” One Reddit user summed it up perfectly: “They call it coaching, but it feels like a report card I didn’t ask for.”
When quality reviews prioritize error detection over effort recognition, agents begin to disengage. They start doing just enough to avoid getting flagged: checking boxes instead of connecting with customers. Over time, this mindset breeds frustration, detachment, and eventually burnout.
And here’s the real irony: these are the very people QA was created to support. When the process adds pressure instead of perspective - when it creates fear instead of feedback - we’ve missed the mark.
A truly effective QA program should leave agents thinking, “I learned something today,” not “I hope I don’t get another bad score.”
QA Feedback Template
Here is a handy sample template that you can use and adapt for your call/contact center.
- Where: At the 06:12 (call closure) mark.
- What (objective): Did not provide the call reference number to the caller.
- Why it matters: Without the reference number, customers can’t track the progress of their request; providing it also builds trust.
- How to improve: Say: “Your reference number is 12345. Keep it for follow-up.” Tip: add a sticky note to remember to provide the caller with the call reference number at the end of the call.
- Action (if repeated): First instance: coaching noted. If repeated: verbal warning and refresher micro-training.
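As a sketch, the fields of this template can be captured in a small data structure so every evaluator records the same pieces of feedback. The class and field names below are my own illustration, not a standard:

```python
from dataclasses import dataclass

@dataclass
class QAFeedbackItem:
    """One piece of QA feedback, mirroring the template above."""
    where: str              # timestamp or call section
    what: str               # objective description of the miss
    why_it_matters: str     # customer/business impact
    how_to_improve: str     # concrete suggestion or phrase to use
    action_if_repeated: str # escalation path if the miss recurs

item = QAFeedbackItem(
    where="06:12 (call closure)",
    what="Did not provide the call reference number to the caller",
    why_it_matters="Customers can't track progress without it",
    how_to_improve='Say: "Your reference number is 12345. Keep it for follow-up."',
    action_if_repeated="Coaching noted; verbal warning if repeated",
)
print(item.where)
```

Keeping feedback in one consistent shape like this makes it easy to compare evaluators during calibration.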
What Quality Should Be
For agents, QA’s job is to make work transparent and fair, to help improve the agent’s performance, not to create a “gotcha!” moment. When we design QA with the frontline in mind, it changes how agents respond to each evaluation: from fear and box-checking to curiosity and growth.
Here are a few core elements a QA program must deliver to really serve agents.
Regular calibration so everyone evaluates the same way
Calibration is where QA teams, leadership, and sometimes the agents listen to the same calls and align on how they’ll score them.
Do this regularly - ideally weekly - to keep evaluators consistent, catch drift quickly, and respond to changes (such as products, services, or policies) before they cause confusion within the team. Weekly calibration also helps QA staff surface ambiguous rubric items so you can fix them fast.
Be specific, timely, and subjective in the right way
Feedback should be delivered while the call is still fresh in the agent’s mind, ideally within a few days, not weeks or months later.
A QA review from early in the month (or worse, a quarter ago) leaves the agent wondering, “Why is this being brought to my attention now?”
Every piece of feedback should include three things.
- Where the issue occurred. Identify the exact timestamp or section of the call (for example, two minutes and five seconds into the call).
- What went wrong. Provide an objective description of the behavior (for example, the agent placed the caller on hold without obtaining permission).
- How to improve. Offer a clear, actionable suggestion (for example, before placing a caller on hold, always ask for permission, explain the purpose, and provide an estimated hold time).
This balance keeps feedback subjective in tone - personal, human, and constructive - while still rooted in objective evidence from the call. Agents appreciate this level of detail because it feels fair and actionable rather than vague or punitive.
One-on-one coaching for wins and misses
I feel that feedback shouldn’t be reserved only for when something goes wrong. Schedule one-on-one sessions that celebrate great calls as well as those that require corrections. Hearing “here’s what you did well” boosts an agent’s morale and makes corrective coaching feel less threatening.
I’ve noticed that using the actual call during the session helps the agent to understand what needs tweaking faster than reading about it in the QA form notes.
Make scorecards teachable, not just “tallyable”
Scorecards must do more than add up points. Add a feedback field to each scored item so the agent always knows why they lost points on that item and how to improve while it is in front of them, top-of-mind, rather than finding out at the end of the QA form.
Keep the rubric tightly focused (five to seven high-value items) and pair each numeric score with a short, specific coaching line or a suggested phrase the agent can use on the next call. A clear, concise comment is far more helpful than a vague “missed greeting” line.
Apply progressive coaching: warn, don’t punish (unless necessary)
I like to give agents the benefit of the doubt. Because most performance issues stem from training gaps, not willful misconduct, start with coaching and corrective notes. If the behavior persists, move to a verbal warning, and then to written warnings according to your progressive discipline policy.
For small misses (e.g., forgetting to read a call reference number), an initial friendly warning with clear expectations and tips for remembering (like a sticky note on the agent’s monitor), plus a reminder that continued misses will affect scores, often corrects the problem without damaging morale.
However, not all issues can - or should - begin with gentle correction. For serious matters, swift and decisive action is essential, particularly when patterns or trends emerge. Here are some examples.
- Agent rudeness or hostility to callers (including the use of profanity).
- Unprofessional conduct (such as laughing at a caller).
- Falsifying information.
- Failure to comply with legal or regulatory requirements (such as improper disclosures, call authentication failures, and data security breaches).
These situations may bypass progressive coaching altogether and move directly to formal investigation and disciplinary steps, which can include written warnings or separation from employment, depending on the severity of the incident and company policy.
QA plays a critical role here by accurately documenting incidents, preserving call evidence, and escalating concerns through the appropriate compliance and HR channels.
This becomes especially important when agents are issued written warnings or are exited due to serious quality or compliance violations. These processes ensure accountability while protecting customers, the organization, and the contact center’s integrity.
The goal isn’t punishment: it’s protection, prevention, and providing exceptional customer service. Clear boundaries reinforce professionalism and safety, while coaching remains the primary tool for development when performance issues stem from skill gaps rather than misconduct.
Involve agents in building the rules
Bring experienced agents into rubric design and calibration sessions; this makes them feel included in decision-making and improves morale. When agents help define quality, they’re more likely to trust it and apply it, and they become catalysts for acceptance among peers.
Co-creation also uncovers real-world nuances that evaluators and leaders might miss, like effective workarounds for system issues or phrases that help build rapport but don’t match a script.
Use multiple signals, not just one QA score
A single QA score doesn’t tell the whole story. Before deciding that an agent needs corrective action, it’s essential to look at the bigger picture. For example, how do their QA results compare with customer satisfaction (CSAT) scores, first-call resolution (FCR), or repeat-contact rates?
One low score might reflect a challenging caller or a system issue rather than poor performance. For example, I recall an agent who received a few low QA scores for sounding rushed with callers.
The leadership team wanted me to issue a verbal warning because it was happening consistently. However, when they looked at her CSAT scores and the caller feedback notes she had recorded, it was clear the callers felt she gave them what they needed quickly and professionally.
Further, the team noted that she took more calls than her peers in her training class. When multiple signals were considered, the feedback shifted from punishment to a coaching conversation, helping her balance tone of voice and empathy with speed.
Selecting an appropriate sample size
Evaluating just a few calls a month doesn’t paint a complete picture of an agent’s performance. Too-small samples - say, two calls per month - lead to unreliable conclusions: one tough call can unfairly skew a score.
A better approach is to set a consistent sampling plan, such as reviewing a fixed number of calls per agent each week or month, and to ensure the calls represent different channels, shifts, and call types.
It’s also a good practice to create a log of the agent’s scores for each month and to place the average score on their monthly scorecard.
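A minimal sketch of such a log and its monthly average, assuming a plain per-agent list of scores (the agent name and numbers are made up):

```python
# Hypothetical per-agent log of QA scores from this month's reviews.
scores_log = {
    "Agent A": [92, 88, 95, 90],
}

def monthly_average(agent, log):
    """Average an agent's logged scores for the monthly scorecard."""
    scores = log[agent]
    return round(sum(scores) / len(scores), 2)

print(monthly_average("Agent A", scores_log))  # 91.25
```

Even a simple average like this, tracked month over month, shows whether coaching is moving the needle.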
AI and QA
Thanks to AI-powered QA platforms, many modern centers can now audit 100% of interactions - voice or digital - rather than relying on small, randomized samples.
But full coverage doesn’t mean infallible coverage. AI tools excel at detecting keyword omissions, authentication slips, or script deviations, yet they can miss nuances of tone, accent, context, and authentic empathy - often the very things that make or break an interaction.
I believe the most effective QA programs should use AI for broad coverage, flagging calls and directing them to human reviewers for validation and coaching, especially on complex calls.
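One way to sketch that hybrid flow is a simple routing rule. The field names and confidence threshold below are illustrative assumptions, not any particular platform’s API:

```python
def route_for_review(call, confidence_threshold=0.85):
    """Decide whether an AI-scored call needs a human reviewer.

    Illustrative rules: a compliance flag, low AI confidence, or a
    complex call type all go to a human for validation and coaching.
    """
    if call.get("compliance_flag"):
        return "human_review"
    if call.get("ai_confidence", 0.0) < confidence_threshold:
        return "human_review"
    if call.get("call_type") in {"escalation", "complaint"}:
        return "human_review"
    return "ai_only"

print(route_for_review({"ai_confidence": 0.97, "call_type": "billing"}))  # ai_only
```

The point of the design is that AI handles the breadth while humans handle the judgment calls: anything ambiguous falls through to a person.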
Create a Dispute Process
Of course, even the best-designed QA programs will face disagreement, which makes a clear dispute process essential.
Sometimes, an agent may feel that a score or mark-off doesn’t reflect what really happened on a call, and that’s okay. What matters is having a clear, respectful way for agents to dispute or question a QA result.
Why it matters
When agents know they can challenge a score fairly, it builds trust in the QA process. It shifts QA from a judgment to a partnership.
Here’s how to implement a dispute-handling process.
- Document the process. Clearly outline how an agent can submit a dispute, whether it’s through a form, email, or your QA system. Keep it simple: who to contact, what information to include (e.g., call ID, reason for dispute), etc.
- Set a review window. For example, agents can raise a dispute within seven business days of receiving their QA scores. This ensures timely reviews while keeping everyone accountable.
- Establish an escalation path. Disputed calls should be re-evaluated by a neutral QA reviewer or team lead who wasn’t involved in the original scoring.
- Close the loop. Once the review is complete, communicate the final decision and explain it, even if the score remains the same. Agents appreciate knowing why.
- Track dispute trends. If multiple agents dispute the same rubric item or feedback type, it may signal unclear QA criteria or inconsistent calibration: something worth revisiting in your next QA review session.
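The trend-tracking step above can be sketched with a simple tally. The rubric item names and the “more than twice” threshold are illustrative:

```python
from collections import Counter

# Hypothetical dispute log: one rubric item per dispute raised this month.
disputes = [
    "missed greeting", "hold etiquette", "missed greeting",
    "missed greeting", "caller education",
]

trends = Counter(disputes)
# Flag rubric items disputed more than twice as calibration candidates.
to_recalibrate = [item for item, n in trends.items() if n > 2]
print(to_recalibrate)  # ['missed greeting']
```

A repeatedly disputed item is usually a sign the rubric wording is ambiguous, not that agents are wrong.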
Remembering QA’s Purpose
QA has a clear purpose: to uncover opportunities for improvement and support agents in delivering their best work.
When done right, QA isn’t a policing tool. It’s a platform for growth: a mirror that reflects both strengths and learning opportunities. It’s about creating a culture where agents feel valued, motivated, and inspired to deliver their best.
Because in the end, QA isn’t just about measuring quality: it’s about creating it.