Organizations of every type and operation have always had to secure their customers and employees, comply with the laws and regulations that mandate those protections, and still deliver quality customer experiences (CXs).
There are often trade-offs: security methods can be customer-friendly, but they can also be obtrusive and time-consuming. Think of the anti-theft tags in stores, the electronic screeners at entrances, and, yes, the multiple verification steps required before engaging with contact center agents.
As business moves online, so do the criminals, who are executing increasingly sophisticated cyberattacks. At the same time, customers have rising expectations for excellent CXs.
To explore this issue further, with advice for contact centers to help meet these critical needs, we had a virtual conversation with James Laird, Director, Research & Innovation, Verint.
Q. Describe the cybersecurity lay of the land. Is it becoming more or less risky for customers to engage with companies? For contact centers, including through their agents, to engage with customers? And what about companies and customers who could be bad actors?
The risk is rising because enterprises secure their digital perimeters while leaving contact centers exposed.
Most organizations run advanced cybersecurity: endpoint detection and response (EDR), security information and event management (SIEM), and zero-trust. Yet their contact centers – handling millions of interactions – sit outside these defenses.
This disconnect makes contact centers vulnerable to attackers who socially engineer agents, bypass multifactor authentication (MFA), and/or extract personal data to penetrate wider systems.
AI-powered deepfakes and synthetic identities amplify the threat:
- Attackers can now clone a customer’s voice from seconds of publicly available audio.
- Synthetic identities blend real and fabricated data to pass traditional know-your-customer (KYC) checks.
The contact center is a prime target because emotional manipulation and urgency can override procedural safeguards.
An agent under pressure to deliver good CX shouldn’t be expected to detect a convincing deepfake. Instead, AI-powered detection needs to sit alongside the threat, analyzing conversational patterns and behavioral signals in real time.
There are also “pig butchering” (or “fattening the pig”) scams, in which organized crime networks impersonate genuine contact center agents to build deceptive relationships with victims over time before executing large financial thefts. Cross-interaction analysis can help uncover them early.
“Most organizations run advanced cybersecurity...Yet their contact centers – handling millions of interactions – sit outside these defenses.” —James Laird
This is an arms race. Smart suppliers are investing heavily in R&D and working with best-of-breed partners to ensure customers benefit from layered, continuously evolving defenses.
Closing this gap means integrating contact center intelligence into the wider cyber ecosystem so that security operations center (SOC) teams, fraud teams, and contact center teams (including the agents) operate with shared insights.
Q. Let’s discuss specifics. What are, and rank, the top five risks for contact centers?
The top five risks for contact centers are:
- Soft Fraud (First-Party Fraud). Genuine customers inflating insurance claims, filing false chargebacks, exploiting return policies, or exaggerating damages. Often rationalized as “harmless,” these schemes cost billions annually.
- Agent-Facilitated Hard Fraud. Social engineering attacks manipulating well-meaning agents into bypassing security protocols, enabling account takeovers (ATOs) or unauthorized transactions.
- Credential Compromise. Mass ATO attempts using breached password databases, combined with social engineering, to bypass authentication.
- Insider Threats. Employees accessing or modifying customer data inappropriately, either maliciously or through negligence.
- Regulatory Noncompliance. Failure to detect and report fraud patterns leading to regulatory penalties and reputational damage.
Aging Customers, Rising Risks
Older individuals have always been vulnerable to criminals, and it is no different today, online and over the phone.
So, we asked James Laird, “What measures can companies and contact centers take to help protect them from cyberthieves?”
“Vulnerable customers benefit from stronger detection and more empathetic intervention,” says James. “Behavioral analytics can flag unusual account changes, repeated contacts, or rapid-fire transactions.
“Real-time AI coaching helps agents notice indicators from customers like confusion, third-party influence, or emotional distress and guides them to slow down, verify independently, or escalate. High-risk transactions should trigger enhanced checks even if standard authentication passes.
“By combining metadata with speech analytics, unusual patterns emerge,” reports James. “Like a customer changing beneficiary details within a window of a significant withdrawal, for example.
“I recently saw one of our customers do exactly this: the right data coupled with genuine agent empathy prevented a significant loss,” says James. “That’s how you earn trust.
“Trusted contact programs and cooling-off periods add extra protection. New approaches, such as scam detection, disruption, and intelligence platforms, show how layered defenses can meaningfully reduce harm.”
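The window-based pattern James describes, a beneficiary change close in time to a significant withdrawal, can be expressed as a simple rule. Below is a minimal sketch in Python; the event schema, the 24-hour window, and the 10,000 threshold are illustrative assumptions, not any vendor's actual detection logic.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

# Hypothetical event record; field names are illustrative, not a product schema.
@dataclass
class AccountEvent:
    account_id: str
    kind: str        # e.g. "beneficiary_change", "withdrawal"
    amount: float    # 0.0 for non-monetary events
    timestamp: datetime

def flag_beneficiary_risk(events, window=timedelta(hours=24), threshold=10_000.0):
    """Flag accounts where a beneficiary change falls within `window`
    of a withdrawal at or above `threshold`, in either order."""
    flagged = set()
    by_account = {}
    for e in events:
        by_account.setdefault(e.account_id, []).append(e)
    for account_id, evs in by_account.items():
        changes = [e.timestamp for e in evs if e.kind == "beneficiary_change"]
        big_withdrawals = [e.timestamp for e in evs
                           if e.kind == "withdrawal" and e.amount >= threshold]
        # Flag if any change/withdrawal pair sits inside the time window.
        if any(abs(c - w) <= window for c in changes for w in big_withdrawals):
            flagged.add(account_id)
    return flagged
```

In practice such a rule would be one signal among many, feeding the enhanced checks and agent guidance described above rather than blocking transactions outright.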
Q. What strategies are working? Which ones are no longer effective?
The most effective strategies today are dynamic and AI-driven:
- Conversational analytics can spot rehearsed narratives or inconsistencies in real time.
- Behavioral biometrics flag unusual intelligent virtual assistant (IVA) or navigation patterns.
- Agent-assist tools surface fraud indicators instantly without disrupting customer flow.
- Cross-interaction analysis uncovers patterns individual agents can’t see.
What no longer works: knowledge-based authentication alone, rigid verification scripts, and expecting agents to detect sophisticated threats unaided. Static, checklist-based defenses simply can’t keep pace with attackers who adapt quickly.
Q. Is AI a security asset or a threat? What is your assessment of it on balance?
AI is both, but on balance it’s a powerful security asset when used responsibly. As I discussed in response to your first question, threat actors can now generate deepfakes or synthetic identities at scale. But defenders can analyze patterns, behaviors, and anomalies far faster than humans alone.
The OODA Loop – Observe, Orient, Decide, Act – matters more than ever. Organizations that quickly cycle through it, using AI to accelerate each stage, stay ahead.
My policing experience taught me criminals always adapt; AI just raises the tempo. The winners will be those that match that speed with shared intelligence, continuous monitoring, and rapid response.
Q. You raised insider threats. Are you seeing contact center agents becoming more vulnerable, either willingly as criminals or through threats, like blackmail?
While insider threat investigation isn’t our core remit, we see warning signals through our interaction analytics, and the industry should be paying attention. We’ve seen (and continue to see) published reports like the following:
- In 2025, a major cryptocurrency exchange lost up to $400 million after offshore support agents were bribed to extract customer data (BBC).
- A late 2024 investigation revealed frontline employees at several U.S. and Canadian banks selling client data via messaging platforms (Bloomberg News via New York Post).
- A major UK retailer saw over nine million records exposed through compromised contractor credentials (BBC).
Q. What new cybersecurity-related legislation and regulations have been enacted and are forthcoming in the U.S., Canada, and in other countries and regions?
Globally, regulators are shifting toward resilience, transparency, and AI governance:
- In the U.S., SEC rules now mandate faster breach disclosure, and privacy laws like the California Privacy Rights Act (CPRA) are expanding.
- Canada is looking to modernize its privacy legislation with new oversight on AI.
- The EU is rolling out the Digital Operational Resilience Act (DORA) for operational resilience, the AI Act for risk-based AI controls, and NIS2 for broader cybersecurity requirements.
- The UK is tightening telecom and online fraud protections.
- Across APAC, countries like Australia, Singapore, and China are strengthening privacy and critical-infrastructure rules.
The trend is clear: more accountability, more reporting, and more scrutiny of AI-enabled risk.
Q. What are your recommendations to contact centers to keep their customers and employees secure and their operations compliant?
Security shouldn’t come at the expense of trust or CX. Companies need confidence that callers are genuine, and customers need to trust the organizations protecting their data.
“Disengagement is a vulnerability; investing in agent experience is a security strategy as much as a CX strategy.”
The most effective approach is end-to-end: verify callers before they reach agents, support agents in real time, and analyze interactions afterwards to strengthen defenses. Avoid rigid scripts or putting all responsibility on agents: they need guidance, not pressure.
AI can help across every stage when it’s transparent and accountable. We continuously monitor the evolving threat landscape and close the gaps that allow risk to permeate, with AI-powered bots detecting risk before, during, and after every call.
But the best defense is people. It begins with HR, in partnership with security, screening applicants, particularly for prime-target contact centers such as (but not limited to) those in financial services. Agents then need to be trained to spot risks and to follow the procedures for handling them.
Equally important is supporting the agents. Those who feel equipped and aligned to company culture are inherently less susceptible. Disengagement is a vulnerability; investing in agent experience is a security strategy as much as a CX strategy.
When risks emerge, interaction analytics surface anomalies invisible to any supervisor — unusual data access patterns, conversational deviations, repeated unexplained account interactions — flagged in real time.
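One of those anomalies, repeated unexplained account interactions, can be approximated with a sliding-window count over a contact log. This is a minimal sketch assuming a simple (account_id, timestamp) log; the seven-day window and five-contact threshold are made-up illustrative parameters.

```python
from datetime import datetime, timedelta

def repeated_contact_anomalies(contacts, window=timedelta(days=7), max_contacts=5):
    """Given (account_id, timestamp) pairs, return the account IDs contacted
    more than `max_contacts` times within any sliding `window`."""
    anomalous = set()
    by_account = {}
    for account_id, ts in contacts:
        by_account.setdefault(account_id, []).append(ts)
    for account_id, stamps in by_account.items():
        stamps.sort()
        start = 0
        for end in range(len(stamps)):
            # Shrink the window from the left until it spans at most `window`.
            while stamps[end] - stamps[start] > window:
                start += 1
            if end - start + 1 > max_contacts:
                anomalous.add(account_id)
                break
    return anomalous
```

A real deployment would combine a signal like this with conversational and behavioral analytics rather than rely on contact frequency alone.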
The best organizations combine intelligent monitoring with a culture where agents feel safe reporting coercion. Prevention, detection, and culture work hand in hand.