Why Screening Harder Won’t Win – Part 1
There is a new approach to attracting and keeping talent.

Contact centers must deliver more than they ever have before. Interactions are more complex, customers are less patient, products are more configurable, and channels now span voice, chat (including video), email, and social platforms.

In response, organizations have steadily raised the bar for what “good” talent looks like. But there is a fundamental constraint the industry cannot technology its way around: the labor pool is not expanding at the same pace as skill expectations.

And when organizations respond by tightening requirements and screening harder, they may narrow the pool in ways that do not reliably improve performance, ultimately leaving too few candidates to fill open roles.

If the contact center industry had a recurring storyline, it would be this: the next technology wave is always expected to solve the talent problem.

The current version centers on AI. In McKinsey’s words, “Many are being inundated with solutions from AI vendors amid predictions that calls requiring human agent support will virtually disappear in the next few years.”

But the reality inside contact centers remains more complicated, because, as McKinsey also points out, “humans remain valued for their ability to handle complex and emotionally nuanced interactions, too.”

Automation and Talent

There has always been a conversation, a dialectic, between automation and people in the workplace, one now brought into sharp relief by the practical arrival of AI. But in practice, automation does not eliminate the need for talent, and it never has. Instead, it changes what talent is needed.

As AI absorbs routine tasks, the interactions left for humans become more complex, emotionally charged, and consequential. That shift is raising the skills bar, even as labor markets continue to constrain supply.

At the same time, hiring challenges persist. Time to fill remains high. Attrition remains stubborn. And despite new tools, innovative technologies, and louder promises, contact centers are no closer to solving their talent problem than they were years ago.

Avoiding the Destructive Spiral

This tension raises an uncomfortable question. If expectations continue to rise, but the labor pool does not expand in parallel, what are the unintended consequences? And who is most likely to be excluded along the way?

The answer is not to keep the bar where it is, or to lower it. The problem is narrowing the pool of candidates able to clear it through requirements and proxies that do not improve performance.

Such practices can disproportionately screen out capable candidates, shrinking the workforce pipeline until there are too few people to staff open positions. The result can be long queues, angry customers, and frustrated agents, whose turnover may feed a destructive spiral.

...the interactions left for humans become more complex, emotionally charged, and consequential.

In this article, divided into three parts, I focus on three skill areas where the hiring bar is rising fast: digital and AI fluency (Part 1, here), emotional intelligence (Part 2), and language skills (Part 3, which concludes the series).

For each, I outline the unintended consequences of how organizations are currently screening for these skills. I then suggest a more sustainable approach: measuring skills directly and prioritizing readiness and ramp potential, rather than relying on background-based proxies that narrow the talent pool.

Importantly, this approach does not require complex technology or purchasing new tools. Instead, it requires clearer definitions of job skills and more structured and consistent evaluations.

Contact centers have always been skills-based jobs. Even in “entry-level” roles, success has depended on communication ability, problem solving, and emotional control under pressure. What has changed is the level and breadth of those expectations.

As routine work is absorbed by automation and as service interactions become more complex, organizations continue to raise the skills bar, often without fully accounting for how those higher requirements narrow the available talent pool.

Increasing Digital/AI Literacy Requirements

Digital skill expectations in contact centers have expanded rapidly in a short time, driven initially by the shift of large contact center populations to home work in response to the COVID-19 pandemic.

To enable agents to work remotely, hiring and workforce readiness guidance emphasized the practical ability to function independently at home: reliable internet, appropriate hardware, headset quality, secure connectivity, and baseline troubleshooting skills without onsite IT support.

That focus has not disappeared, but the definition of digital literacy has changed. Today, digital literacy is not simply the ability to use a computer and follow a process. In many modern contact center environments, it now includes these abilities:

  • Navigating multiple systems and knowledge tools in parallel.
  • Interpreting policy and account information in real time.
  • Documenting accurately while maintaining customer rapport.
  • Completing authentication steps and compliance scripts correctly.
  • Switching between tools quickly under time pressure.

Omnichannel service further raises the bar:

  • Writing becomes part of the role, not an occasional task. Agents must produce clear, appropriately toned written responses in chats and emails, while also working within structured workflows and meeting quality expectations.
  • Video-enabled customer interactions introduce some additional skill demands, but, as I will explore in depth in a separate discussion (see BOX), not in the way they are often assumed.

AI adoption adds yet another layer. As AI copilots, automated knowledge systems, and AI-generated call summaries become more common, contact center talent is increasingly expected to demonstrate AI literacy as well.

This does not mean understanding how models work. Instead, it means knowing how to use AI-enabled systems effectively, including these abilities:

  • Asking strong questions and using query tools correctly.
  • Recognizing when an AI suggestion is incomplete, incorrect, or misapplied.
  • Validating outputs against policy and customer context.
  • Overriding automated guidance using human judgment when needed.

Clicking “accept” is easy. Knowing when to pause and verify is the skill.

A quieter risk is how organizations translate this shift into pre-hire requirements. When “AI literacy” becomes a hiring filter, employers often rely on resume-based proxies such as prior exposure to AI tools or “GenAI experience” listed in job history. But tool exposure does not equal safe and effective use.

In AI-enabled workflows, the representative becomes not only a customer advocate, but also a real-time quality control layer for automated support.

Consider two candidates:

  • One candidate has worked in an AI-enabled environment and lists AI tool experience on their resume, but they relied on the system passively.
  • The other candidate has no formal AI tooling experience, but they demonstrate strong learning agility, process discipline, and the ability to detect errors and apply judgment under ambiguity.

In a typical hiring process, the second candidate is filtered out early, even though those verification and judgment skills are the true predictors of performance in AI-enabled workflows.

But the unintended consequence is a narrower applicant pool shaped by access and prior opportunity, rather than readiness to succeed.

The scenarios requiring candidates with effective AI literacy are not theoretical:

  • An AI-generated call summary may sound polished, but it omits key details required for downstream resolution or compliance.
  • In other cases, an AI knowledge suggestion might point to the wrong policy or apply the right policy to the wrong customer context, requiring the representative to recognize the mismatch quickly.

In both situations, the agent’s job is no longer just following guidance. Instead, it is actively validating it.

This is why AI can raise, not lower, the skill requirements for many roles. In AI-enabled workflows, the representative becomes not only a customer advocate, but also a real-time quality control layer for automated support.

Many of these skills are developed through on-the-job exposure and coaching, particularly in environments that require navigating multiple systems, managing high interaction volume, and operating within tightly structured workflows.

Candidates who have not had access to those environments may possess the underlying capability to succeed but lack conventional signals of readiness, and so may be unwisely rejected.

Do Video-Based Roles Require Different Skills?

Video-enabled customer interactions add a set of skill demands on top of those required by other channels.

Compared to voice or text-based roles, video increases visibility. Customers can see facial expressions, body language, and response timing in real time. This raises expectations around presence, attentiveness, and emotional control, making small signals more salient.

Video also changes what representatives are exposed to. Agents see customer reactions, environments, and emotional cues that would otherwise remain invisible.

This increases cognitive and emotional load. A key skill in video-based roles is not simply noticing these cues, but regulating one’s response to them, staying focused on the task, and avoiding overreaction to visual information that may be incomplete or misleading.

This added load may be experienced unevenly. For example, individuals working in a non-native language, or those already expending cognitive effort on real-time translation, language monitoring, or heightened self-regulation, may experience a higher overall cognitive demand when visual cues are added.

The issue is not lower capability, but the accumulation of simultaneous demands: language processing, emotional regulation, task execution, and visual interpretation.

In that sense, video places greater emphasis on emotional regulation, composure under observation, and the ability to maintain rapport while managing systems and information in parallel.

However, most of what drives success in video-based roles is not really new. The core requirements are the same: understanding customer needs, applying information accurately, exercising judgment, and managing emotion in demanding situations. Video amplifies these skill requirements rather than replacing them.

And while video became part of everyday communication during the COVID-19 pandemic, the familiarity, informality, and emotional openness in personal and internal meeting video calls do not translate directly to customer interaction.

Representatives must manage rapport, compliance, and visible emotional cues simultaneously while navigating systems in real time.

This distinction between personal and professional video, and between informality and formality, also matters for hiring. Organizations sometimes respond to video by screening for “camera presence” or presentation style.

When those judgments are unstructured, they can drift quickly from job-relevant behavior into subjective impressions of polish or cultural familiarity.

A more effective approach is to define what video adds to the role and assess those behaviors directly. If anything, video makes the need for clear skill definition and consistent measurement more important.

The Flaws of Experience

As these requirements rise, hiring processes that rely heavily on resumes or prior job titles can unintentionally favor familiarity over potential. Resumes can indicate whether someone has been in these environments, but they rarely indicate whether someone was effective in them.

In other words, experience can be an imperfect proxy. It may screen out candidates with strong underlying capability who have not had the opportunity to demonstrate it, while screening in candidates whose prior exposure does not translate into strong performance.

...organizations should assess the specific skills that predict whether someone can become proficient quickly.

Skills-based testing is one evidence-based alternative that reduces reliance on resume-based proxies by replacing them with direct evidence. Rather than inferring capability from job history, organizations can evaluate job-relevant behaviors directly through simulations, work samples, and structured assessments.

Importantly, returning to the discussion of AI skills, the goal should not be to confirm full mastery of AI-enabled workflows before day one. In most cases, that mastery develops on the job.

Instead, organizations should assess the specific skills that predict whether someone can become proficient quickly: learning agility, process discipline, attention to detail, and the ability to verify and apply information accurately in real time.

The hiring goal is not to find candidates who have already mastered the workflow. It is to identify candidates who have the prerequisite skills to master it quickly.

The result of relying on experience, by contrast, is not necessarily better selection, but narrower selection based on background rather than capability.

In practice, that often means favoring those who have had prior exposure to emerging tools over those who demonstrate the judgment required to use them well, particularly when organizations treat AI familiarity as proof of AI capability.

The question is: who does that leave out? Candidates whose capability exceeds their resume.

When hiring systems rely heavily on experience thresholds, credential requirements, and subjective notions of readiness, they systematically disadvantage people who have the underlying skills to succeed but have not yet had access to the environments that signal those skills in familiar ways.

Here are two examples:

  1. Candidates who learn quickly, apply information accurately, and perform well under pressure, but who lack the conventional markers that screening systems are designed to recognize. In practice, this often includes early-career candidates, people from non-traditional backgrounds, career switchers, and those whose prior roles did not carry the right job titles, even when the work itself developed relevant capability.
  2. Candidates who communicate differently in interviews or who do not conform to informal expectations of polish, despite being highly effective once expectations and workflows are clear.

The common thread is not lower ability but limited access to prior opportunity.

As skill expectations continue to rise faster than labor supply, excluding these groups is not just an equity concern. It is a capacity problem: organizations risk choking off the very pipeline they need to sustain performance over time.

The question is not whether standards should rise. They will. The real risk is mistaking exposure for capability and narrowing the talent pool based on signals that do not predict success.

April Cantwell

April Cantwell, Ph.D., is Director of People Science at Harver, where she helps organizations turn hiring data into better decisions and better outcomes. For more than 20 years, she has worked at the intersection of applied research and real-world talent strategy, specializing in assessment design, workforce analytics, and practical, evidence-based hiring.
