Avoiding the 5 Training and Coaching Fails
The key reasons why these programs don’t work and what can be done.

Let’s face it, we have all been on the receiving end of some pretty lackluster customer service experiences.

Most can be described as tolerable at best and downright infuriating at worst. While many organizations struggle to go from Good to Great, to borrow the title of Jim Collins' award-winning book, it's time to confront the brutal facts.

If you impartially examine the status quo of operating contact centers and the fully loaded financial implications to the business, it's hard to avoid the conclusion that some of the most common practices, even some we think of as best practices, are not good enough for today's customers.

Things have changed but many practices have not. Agents are handling more channels and systems, spread out over more hours, with customers who are less patient, have higher expectations, have often tried other channels unsuccessfully and are more frustrated, with more varied and unexpected issues. If you think that sentence was a run-on, it’s only to help mirror the pile-on felt by agents.

What this means is there are fewer easy answers, requiring more support resources, more problem-solving, more time, and more emotional intelligence than ever before.

While my upcoming book explores the need to re-think the whole operation, one key area that has major influence on our employees and customers is Training and Coaching.

This may seem like two areas, but it shouldn’t be. Training and Coaching are two sides of the same coin that should prepare agents for their jobs, update the knowledge they need, ensure adoption of changes, improve skill gaps, develop advanced skills, flow into career development, and more.

This has major implications for the operation and the company’s bottom line. A well-trained and supported team is more proficient and engaged, which reduces errors and repeat calls, increases productivity, and lowers employee attrition.

Such teams then increase customer loyalty and advocacy, which contributes to lower customer acquisition spending and increased growth. Which company would not want that in today's challenging "new normal" economic climate?

Here are 5 common practices that fail and how to improve them:

Practice 1: Training is often treated as a new hire activity, one that the official "Trainers" focus on exclusively.

Once out of training, things that an agent needs to learn about and feel prepared for come from multiple sources both in and out of the contact center.

Unfortunately, most often there is no central process to intake this information, and so it falls into many unconnected paths that have more to do with where it came from and less to do with its importance, complexity, or urgency.

These include Team Leader (TL) training, IT leaders, or other departments, who often send it out by email or place it on shared drives or knowledge bases (KBs) with no live facilitation or opportunity for questions. In some cases, agents hear about changes from peers or, worse, from customers.

In some companies, for small to medium complexity changes, the training team helps to create content for leaders to deliver in meetings or in small training sessions, or to be used in an online training module.

For big changes, the training team may even design and facilitate the training themselves due to their adult learning skillset.

However, most contact centers don’t operate this way.

If it's not new hire training, the training team is often not involved. Instead, these changes become part of the overwhelming stream of messages and communications an agent needs to navigate and digest, coming from different sources both in and out of the center and delivered through different, often inappropriate, modalities (email, KB, CRM updates, etc.).

Do Differently

Contact centers should stop looking at training as a team or an event. Instead, they should rethink their overall learning strategy to create an environment of:

  • Constant learning.
  • Knowledge accuracy excellence.
  • Advanced skill and career development.

This doesn't mean Training now does everything; rather, all who contribute to the above work collaboratively toward a cohesive strategy with clear, shared goals.

One thing you can start right away is to create an intake process for all information coming at the center from all sources. Assess it based on complexity, urgency, and scope of impact to determine the best modality to prepare agents.

If the information is simple and few customers are impacted, emails or shared sites may be sufficient. If it is complex and it impacts most customers, it might need proper instructional design and facilitation.

Practice 2: Knowledge base content is owned by a different group, sometimes a different department.

Knowledge is perishable. Different studies that plot out what’s often called the “Forgetting Curve” have shown that 70%-90% of training is forgotten if you don’t use what you’ve learned immediately after training and frequently enough to lock it in.

In today’s contact centers of high variability, unpredictability, and complexity of issues, it’s most likely at the highest end of that curve.

Do Differently

Knowledge resources aren’t just a tool to get answers for customers, they are a critical part of learning and proficiency. Today’s contact center agents need to be trained less on products and policies and more on navigating the unknown.

In addition to learning things like how to make customers feel heard, valued, informed, and confident in the center's commitment to help, agents really need to learn how to use and be comfortable with the KB.

Whether you have a shared folder filled with documents or you have a real-time speech analytics artificial intelligence (AI) engine listening to calls and suggesting articles to agents, you must thoroughly train agents on the different scenarios in order for them to be comfortable with such tools.

As an extension of classroom training, the KB accuracy and usability need to fit into your centralized learning strategy with consideration for things like:

  • Language (will the KB article help or raise more questions and confusion?)
  • Format (is it scannable with priority elements easy to find?)
  • A method of auditing (is the KB out of date? Are there conflicting articles?)
  • UX (how easy is it to navigate and digest the right information at the right time?)

Remember, the main purpose of a KB is to help manage the overwhelming amount of information an agent needs to know so they can help customers with confidence. If any part of that experience is not serving that goal, you have work to do.

Practice 3: Training is measured by metrics like pass rates and in some centers, training surveys are done after training ends to rate the content and facilitation.

Part of the reason is Practice 1 above: because training is considered a new hire activity, this is how it is often measured.

Also, most training classes use training systems that are out of sync with production systems, or they include role-playing but, due to large class sizes, most trainees don't get to participate beyond an unsupervised approach (and therefore get no feedback).

Or there is a job shadow component but due to shifts, call arrival patterns, work-from-home, and technical issues, trainees often don’t get a picture of what the job really looks like day in and day out.

Lastly, as mentioned in Practice 2, knowledge is perishable. Given all of this, new hires are not in a position to evaluate new hire training at the time of its completion nor is this a measure of the broader learning and development imperative.

Do Differently

On a micro level, when thinking about measuring new hire training, consider a broader set of approaches.

Surveys immediately after training are great for measuring things like the facilitator's style or preferred modalities, but not for things like the relevance of the content, the time spent on each topic, or the preparedness of the agent. You can measure those in surveys at later intervals (one month later, three months later; you could go longer in some industries or environments).

You can also look at things that are more shared indicators, such as turnover in the first year and success measures like quality, CSAT, and others. If you are a larger team with many trainers or even many sites, you can break it out by cohort, facilitator, geography, queue, etc.

A warning: this is informative, not conclusive. Training is one of many variables in these results, so don't blame training alone if QA scores are low. Rather, use the trends to guide further investigation before forming conclusions you can act on.

The above also helps you to look at your learning strategy on a more macro level. Otherwise, you are only focusing on the first leg of that journey.

Training, KB usage, quality monitoring (QM) scores, coaching surveys, FCR, CSAT, complaints, system usage, and more are all things to look at to see if your strategy is working and/or if you are holding true to it.

Practice 4: Quality Monitoring is often either a separate team or done by Team Leads (TLs) or by a combination of TLs and QM.

QM is one of the main ways to identify knowledge and skill gap trends, and it can be a great way both to recognize your agents and to gauge the impact of training and coaching.

However, in many contact centers, QM is seen more as an agent performance measure. While that is a large function it serves, viewing it that narrowly is why it’s often not considered part of a learning and development (L&D) strategy.

And because it’s rooted in manufacturing principles, it can be overly focused on catching and reacting to “defects,” which contributes to its common negative stigma.

Many centers focus so heavily on making scoring objective and expectations clear that agents become locked into the forms, even when the customer is trying to take the conversation elsewhere: "I wanted to help them but would have been dinged on my QA."

Often called "talking to the QA form," this practice is the antithesis of an L&D tool, as it only serves to sustain the basic training that applies to the simplest interactions. It caps the development of required advanced skills like authenticity, which by their nature are harder to fit into a form designed for pure objectivity.

Do Differently

Shift the perspective on QM as part of L&D. At the very least, it’s a listening post that can help you understand the effectiveness of your training, inform changes you need to make to training, and/or inform refresher training modules that should be created and/or leveraged.

To unlock even more potential, use QM to identify agents' most advanced skills so you can celebrate them and share their approaches to raise all boats.

Also, constantly challenge the form itself as well as your own processes and policies that an agent may have bent in an immaterial way to help the customer.

One easy way to do this is to play previously scored calls to a group of leaders in and out of the center. Don’t use the scorecard. Instead, after each call is played, pose some of the following questions to the group:

  • If you were the customer, how satisfied would you be at the end?
  • Did you think the customer felt heard, understood, valued, cared for, and confident in the agent?
  • If a friend next to them heard only their side of the conversation, would the friend think it was a good experience, an ok one, or a bad one?
  • What would a conversation look like if their spouse said, “Hey, did you call ABC company today about that issue? What happened?” Think about the words they would use to describe it, the tone they would use. Would they be enthusiastic extolling the agent’s virtues or briefly say it’s been resolved or angrily go on a rant about it?

Be sure to really challenge the group to put themselves in the customer's shoes, not the company's, and be brave enough to explore conflicting answers, challenging the group to explain why they felt that way.

After you have played all the calls, only then share the QM scores those calls received. If you are being completely honest, and set aside things the agent did "wrong" per the form or process that have no material implications for the customer, 99% of centers will find that some scores reflect a different experience.

Though some changes may be restricted in highly regulated industries, you will find many things you can and should change.

Practice 5: Coaching is mainly triggered reactively by an issue such as a low QM score, missed KPIs, or behavioural issues.

Coaching is a key part of employees’ L&D, but in many contact centers it’s done for a very specific purpose by different people, with different skills and objectives. Even centers that want to proactively coach for advanced skill development, employee recognition, and career development find sessions cancelled due to service levels.

Coaching is key to a learning strategy that provides a sustainable highly productive workforce that delivers great customer experiences.

In so many cases coaching is actually perceived not as a learning tool but as a disciplinary tool (“Oh, they put you on a coaching plan? Sorry to hear that”).

Do Differently

The irony of this approach is that any short-term gains are unsustainable, and it often worsens the issues it’s trying to mitigate.

When proactive coaching is cancelled due to service levels, particularly with new agents or with anyone after changes are made, it actually slows adoption and proficiency leading to more errors, repeat calls, and longer interactions. All of which worsens capacity by making good service levels harder or more costly. Let alone the revenue impact of poor service.

Coupled with reactive coaching to address a performance issue, this approach also lowers employee engagement, causing reduced productivity and higher turnover. It puts you in a vicious loop of always facing capacity issues, cancelling proactive coaching, coaching to issues, and losing talent…over and over again.
Train your leaders on effective coaching that inspires versus deflates and create a policy that coaching cannot be cancelled beyond a minimum number per agent per month.

Based on contact centers with almost no external turnover and leading NPS, aim for one per agent per week with a minimum of at least two per agent per month. This allows you to cancel some coaching sessions if needed to manage tight intervals without falling into the vicious capacity cycle.

Neal is writing a new book entitled "CUSTOMER SERVICE IS BROKEN – Why Best Practices Are Hurting Customer and Employee Experience and How to Fix Them," which is planned for release in 2023.

Neal Dlin

Neal Dlin is a Human Experience (HX) award-winning executive, consultant, keynote speaker, and executive coach, Vice President of Customer Service Experience at Aviso Wealth and president of Chorus Tree, Inc. His successful and status quo-quashing approach has helped organizations transform their operations through the lens of our most common human needs.
