
The Evolving Voice Security Threats

And how contact centers can defend themselves.

There is nothing particularly new nor surprising about fraud attempts leveled against contact centers. After all, these customer-focused operations are also direct gateways into user accounts and organizational data: both irresistible draws for cyber-thievery.

What is new, however, is the seismic shift in attack targets, tactics, and tools converging into one persistent cybersecurity threat vector. One that, in many cases, is defying traditional approaches to fraud detection in the contact center.

As a result, security, notably voice security, has reached critical mass and become a top business issue that must be understood and addressed.

Favorite Target: The Voice Channel

As our analysis shows, the contact center voice channel, long under-recognized as a potential avenue for cyberattacks, has emerged as a favored pathway for criminal intruders.

Hijacked accounts, network breaches, exfiltrated data, and ransomware payouts are just a few of the lucrative “deliverables” gained from a successful voice-oriented attack. While financial gain is the ultimate driver for most of these schemes, if a data breach is involved the additional fallout for the organization can be enormous, according to reports like those from Statista.

Voice Attacks on the Rise

Voice-based attacks on the call center have been on an upward trajectory over the past few years.

According to a cross-industry survey highlighted in TransUnion’s “2023 State of Omnichannel Authentication” report on call center fraud, more than half of respondents reported an observable rise in attacks on their call centers since 2021. Within the financial sector, that response soared to 90%, with a large portion of those respondents estimating a steep 80% rise over the past two years.

Note that in 2021, respondents pointed to the web and mobile channels as the primary targets of attack. Today, according to the 2023 report, those channels account for a mere 10% of attack attempts as cybercriminals have shifted to the more lucrative and, apparently, more breachable, voice channel.

Understanding Top Voice-Based Threats

So, what exactly is it about the contact center voice channel that makes it so susceptible to cyberattack? Here are just a few of the voice channel’s uniquely exploitable elements.

1. Call Spoofing: Anonymity is just too easy.

A voice caller is invisible, identified only by the data associated with the digital call footprint and the content of the voice interactions. Criminal scammers will obviously say they are someone they are not. With call digitalization through VoIP (Voice over Internet Protocol) calling, these threat agents can also easily hide their digital identities with simple call “spoofing” technology.

A spoofed call is one where the caller ID number and/or caller name (CNAM) has been altered to mask the actual source of the call. While spoofed numbers are sometimes used for legitimate purposes to protect a caller’s privacy, call spoofing is a common hallmark of robocalls and scammers.

While the Federal Communications Commission (FCC) has stepped in, requiring carriers to certify that the caller IDs of calls they pass through match the actual call sources (via the STIR/SHAKEN protocol), STIR/SHAKEN has, to date, fallen far short of eliminating the practice of call spoofing.
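For illustration only, the Python sketch below shows how a downstream system might read the SHAKEN attestation level ("A" full, "B" partial, "C" gateway) from the PASSporT token carried in a SIP Identity header. It decodes the token payload using only the standard library; the function name and the commented usage are assumptions for this example, and a production implementation would also verify the token's signature against the certificate referenced in its header.

```python
import base64
import json

# Minimal sketch: read the SHAKEN attestation level from a SIP Identity header.
# This decodes the PASSporT payload only; production code must also verify the
# JWS signature against the certificate referenced by the token's x5u field.

def attestation_level(identity_header: str) -> str:
    """Return the PASSporT 'attest' claim: 'A' (full), 'B' (partial), 'C' (gateway)."""
    token = identity_header.split(";")[0].strip()    # drop ;info=...;alg=...;ppt=... params
    payload_b64 = token.split(".")[1]                # JWS compact form: header.payload.signature
    payload_b64 += "=" * (-len(payload_b64) % 4)     # restore base64url padding
    claims = json.loads(base64.urlsafe_b64decode(payload_b64))
    return claims.get("attest", "C")

# Hypothetical usage: treat anything below full attestation as higher spoofing risk.
# level = attestation_level(sip_message.headers["Identity"])
# if level != "A":
#     flag_for_additional_authentication()
```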

2. Social Engineering: Psychological manipulation has become a business.

Voice-to-voice communication is highly effective at addressing urgent or complex customer service issues, which is why the voice channel is still highly favored by customers looking for a quick resolution to their most pressing problems.

At the same time, voice communication is also infused with emotional elements that are not present in text-based interactions. And that is why voice appeals to today’s new breed of criminal impostors who are particularly skilled at using that human-to-human contact to psychologically manipulate their targets.

Usually armed with personal information harvested from social media or stolen in prior hacks to reinforce their deceptions, they are very good at gaining the trust of their victims, and then tricking them into divulging protected or otherwise sensitive information.

3. Vishing (Voice Phishing): The new star in the ransomware event.

By now, pretty much anyone with an email account knows about phishing. These are fraudulent emails sent to a broad range of addresses, “fishing” for recipients who might click on a malicious link that takes them to a scam website or, worse, downloads malware onto their computers or phones.

Advanced email filters, combined with widespread public awareness, have helped thwart phishing schemes. And so, cybercriminals have now taken phishing tactics to the voice channel.

A vishing (voice phishing) attack is often part of a broad campaign that starts with an auto-dialer hitting a range of numbers.

If any of those calls is answered, it connects the call recipient to a criminal agent posing as a legitimate contact, who then applies manipulative social engineering tactics to extract personal information or credentials used to steal funds, breach accounts, or launch ransomware schemes.


Vishers can also target specific, high-value individuals, which is what makes contact center agents, with their access to customer records and network credentials, such alluring targets.

A voice phishing campaign targeting a contact center often starts with a pattern of “reconnaissance” calls to identify susceptible targets, followed by more direct, orchestrated calls from live threat agents posing as customers or other trusted sources.

We would like to think that sophisticated organizations are savvy to these approaches and can easily recognize and fend off vishing deceivers. However, the recent history of successful vishing attacks on a string of high-profile organizations, including, as widely reported, Caesars, MGM, Robinhood, and Twitter (now X), has proven otherwise.

GenAI: Rocket Fuel for Voice Cybercriminals

Since late 2023, the problem of voice-based attacks has become exponentially worse with the widespread availability and application of artificial intelligence (AI)-powered software.

Generative AI (GenAI), the technology behind the recent burst of high-tech, synthetic content-generating applications like ChatGPT and Prime Voice AI, is now in the hands of criminal impostors. And they are only too happy to fortify their deceptions with AI-generated scripts, deep-fake images, and cloned voices.

Drawing from the vast repository of stolen personally identifiable information (PII) for sale on the Dark Web (eight billion records and growing, according to Fraud.net), these voice scammers, vishers, and phishers can now amplify their deceptions. They are using GenAI technology to quickly gather intel on high-value targets, create false identities, adapt their behaviors, and circumvent detection.

What’s more, rogue developers are also likely creating their own illicit GenAI tools that can quickly write malware code for more potent, infinitely adaptable ransomware used by cyber-intruders intent on extortion.


As detailed in the 2023 “I Chatbot” Recorded Future cybersecurity research report, the broad availability of open-source (freemium) GenAI platforms like ChatGPT and Prime Voice AI “lowers the barrier to entry for low-skilled and inexperienced threat actors seeking to break into cybercrime.” That is because of these tools’ easy, out-of-the-box functionality.

Not only are threat agents now able to exploit these new tools for criminal gain, but the ease of access and “no training needed” functionality of GenAI applications is spawning a whole new crop of aspiring hackers, vishers, and voice scammers eager to test their new capabilities on human targets.

As reported by telecom cybersecurity and software developer Enea, the fact that phone-based attacks have seen a staggering 1,265% increase since the November 2022 launch of ChatGPT is likely no coincidence.

How to Secure Voice: It’s about a Multi-Layered Defense

Now armed with technology that can clone voices, create adaptive scripts on the fly, alter caller IDs, automate detection-evading IVR interactions, and harvest authentication-busting personal data, threat actors are finding ever more effective ways to evade conventional fraud detection mechanisms.

This calls for a reexamination of the operation’s current fraud detection capabilities and the introduction of new technologies that can be layered into existing systems to further close security gaps in voice fraud detection and mitigation.

When looking at a complete voice channel security approach, systems architects should consider the following three components (See Chart 1).

1. Voice Traffic Filtering Firewall

Using sophisticated analysis of signaling data enhanced through machine learning and AI, a voice traffic filtering system, applied at the call onset, can filter out/redirect the vast majority of clearly unwanted calls before they enter the call flow or reach an agent desktop.

This front line of defense not only relieves the burden of unwanted calls impacting IVR routing, but also enhances the efficiency of other, downstream call verification/authentication processes. More importantly, effective call filtering significantly relieves agents from the disruption of KPI-impacting calls while protecting them against exposure to criminal contact.

The more advanced voice traffic filtering solutions additionally include “do no harm” failsafe features that ensure live callers are not inadvertently blocked by false positive filtering responses. This is especially important in call centers, whose mission is to engage with customers.
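As a purely illustrative sketch, the snippet below shows the kind of rule layering a pre-answer filter might apply: an allowlist check first (the “do no harm” principle), then blocklist, call-velocity, and attestation checks. The thresholds, lists, and field names are assumptions, not features of any particular product, and commercial voice firewalls replace these simple rules with machine-learning models over far richer signaling data.

```python
from dataclasses import dataclass

# Simplified, hypothetical sketch of pre-answer voice traffic filtering.
# All lists, thresholds, and field names below are illustrative assumptions.

@dataclass
class InboundCall:
    ani: str                 # calling number (ANI)
    attestation: str         # STIR/SHAKEN attestation: "A", "B", or "C"
    calls_last_hour: int     # call velocity observed from this number

KNOWN_CUSTOMERS = {"+13125550147"}   # "do no harm" allowlist of verified customers
BLOCKLIST = {"+19005550100"}         # numbers tied to prior fraud activity
VELOCITY_LIMIT = 20                  # calls/hour before automation is suspected

def route_call(call: InboundCall) -> str:
    if call.ani in KNOWN_CUSTOMERS:
        return "allow"               # never block a verified customer
    if call.ani in BLOCKLIST:
        return "block"
    if call.calls_last_hour > VELOCITY_LIMIT:
        return "divert"              # e.g., send to an IVR challenge
    if call.attestation == "C":
        return "challenge"           # low attestation: require extra authentication
    return "allow"
```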

2. Call Authentication

Call authentication within the call flow includes a number of data-based validation processes. These include automatic number identification (ANI) validation, Caller ID authentication with STIR/SHAKEN call spoofing detection, validation of call data against CRM records, and voice audio matching against stored customer voice data.

Call authentication has generally been used to support frictionless authorization of known customer callers (optimizing customer experience [CX]), but it can also serve to separate legitimate calls from those indicating potential fraud.

Call authentication processes may also include more advanced analysis of call patterns, with machine learning used to detect and flag behaviors outside the norm that are consistent with illicit activity.
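A hypothetical sketch of that layering appears below: each data-based check contributes to a single confidence score that can drive frictionless handling for clearly legitimate callers and step-up verification for everyone else. The weights and the 0.7 threshold are invented for illustration and are not drawn from any specific vendor.

```python
# Illustrative sketch of layering call-authentication signals into one score.
# Weights and thresholds are assumptions, not values from any specific product.

def authentication_score(ani_valid: bool, attestation: str,
                         crm_match: bool, voice_match: float) -> float:
    """Combine data-based checks into a 0..1 confidence that the caller is genuine."""
    score = 0.0
    score += 0.25 if ani_valid else 0.0                         # ANI passes validation
    score += {"A": 0.25, "B": 0.10}.get(attestation, 0.0)       # SHAKEN attestation level
    score += 0.20 if crm_match else 0.0                         # call data matches CRM record
    score += 0.30 * voice_match                                 # 0..1 voice audio similarity
    return score

# Example policy: score >= 0.7 -> frictionless handling; otherwise step-up verification.
```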

3. Fraud Detection

Fraud detection is the last line of defense for calls that reach the agent desktop. And to be effective it may need to be upgraded and reinforced to prevent attacks from getting through.

Historically, agents have used knowledge-based authentication (KBA) questions, such as asking for the last four digits of a Social Security number or for a personal PIN or password, to separate legitimate callers from impostors.

Clearly, this form of manual vetting adds friction to the caller experience but, more significantly, it is easily fooled. A report from Pindrop notes that fraudsters speaking to a live agent are able to pass KBA verification 40% to 60% of the time.


Today, security-centric contact centers, particularly those in the financial sector, may utilize sophisticated voice biometrics applications trained to analyze specific qualities in a live caller’s voice as they interact with an agent.

These “audio signatures” are run against a database of audio samples from known fraud callers. And, if a match is found, the system delivers an alert to the agent.

In their most sophisticated form, these fraud detection applications go beyond database matching. They now utilize predictive AI and machine learning to detect even the most subtle anomalies consistent with fraudulent activity.
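The sketch below illustrates only the basic matching step, under the assumption that a separate speaker-embedding model (a stand-in here, not shown) has already converted the live call audio and the stored fraud samples into numeric voiceprints; cosine similarity against those stored voiceprints then drives the agent alert. Real systems also add anti-spoofing and liveness checks to catch cloned voices.

```python
import numpy as np

# Minimal sketch of "audio signature" matching against known-fraudster voiceprints.
# A real speaker-embedding model (assumed, not shown) would produce the vectors;
# production systems also apply liveness/anti-spoofing checks for cloned voices.

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def check_against_fraud_db(caller_embedding: np.ndarray,
                           fraud_embeddings: list[np.ndarray],
                           threshold: float = 0.85) -> bool:
    """Return True (alert the agent) if the live caller matches a known fraud voice."""
    return any(cosine_similarity(caller_embedding, known) >= threshold
               for known in fraud_embeddings)
```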

Best Practices: Now Available for Voice Security!

We conducted a survey in 2022 and 2023 of 300 enterprise telecom experts, many in the security arena, to find out more about their positions regarding voice threat defense.

While 85% of respondents agreed that it’s time to elevate voice as a true threat vector, there was little consensus as to who, or what areas of their organizations, are responsible. One in five respondents, in fact, admitted to having no idea.

Those insights spurred the development of the Foundational Best Practices for Voice Cybersecurity. Because voice security is a burgeoning threat vector, these best practices provide additional guidance for the organizational business units most impacted by voice-based threats: IT, Risk Management/Cybersecurity, and Contact Center.

Here is a quick list of the 16 best practices.

  1. Acknowledge the threat vector.
  2. Get educated on the attack surface.
  3. Move beyond misperception.
  4. Understand legal, regulations, and compliance.
  5. Track trending legal actions.
  6. Understand current and potential impact/risk.
  7. Allocate budget.
  8. Integrate policy and risk management.
  9. Improve your security framework.
  10. Do No Harm mandate.
  11. Intelligent custom rules.
  12. Leverage a voice firewall.
  13. Leverage secure voicemail.
  14. Leverage voice telemetry.
  15. Leverage call filtering.
  16. Leverage enhanced call control.

Summary: Understand, Architect, Protect

Of course, this is not the final word when it comes to protecting the contact center voice channel from persistent and evolving threats. It is, however, essential to understand that an adequate defense now means fighting a multi-front battle requiring continuing awareness, vigilance, and well-aligned, adaptable technologies that, together, can create an impervious shield against a new breed of AI-empowered attackers.

Vicki Sidor

As Vice President, Sales and Channel for enterprise voice security software developer Mutare, Vicki Sidor guides the company’s initiatives and strategies related to voice network performance and security that help clients identify and solve complex business challenges impacting revenue, efficiency, and compliance.
