Most people don’t trust the companies they buy from. AI seems to be making customers more wary, instead of building better experiences and trust as promised.
According to the Twilio “2025 State of Customer Engagement Report,” only 15% of global consumers say they fully trust brands with their personal data. And 61% openly question whether businesses act in their best interests at all.
This erosion in trust isn’t happening in a vacuum. In a time marked by inflation, economic anxiety, layoffs, and cost-of-living pressures, customers are more attuned to how brands treat them. They are less tolerant of friction, inconsistency, or unclear data practices. When wallets tighten, expectations rise.
For an industry obsessed with signals and sentiment scores, those numbers should land hard. Trust is no longer a marketing message; it’s a core performance metric. If your customers don’t trust you, your AI isn’t helping. It’s just accelerating confusion, inconsistency, and doubt.
The Credibility Crisis
Over the last decade or so, personalization has become the go-to strategy for brands looking to stand out. But we’ve hit a ceiling. Most consumers still don’t feel understood, and the few who do often aren’t sure whether the attention is helpful or creepy.
While 97% of businesses plan to increase AI investments in the next five years, fewer than half of consumers believe brands are transparent about how they use the technology. Only 34% of consumers say they’re aware when their data is being used to train AI systems.
This has caused trust to stagnate and, in some of the markets surveyed across 18 countries, including the U.S., United Kingdom, France, Germany, Japan, Australia, and Brazil, even decline.
Customer loyalty indicators like repeat purchases and brand recommendations are also slipping in key markets, with only 44% of consumers describing themselves as “very” or “extremely” loyal, down from 48% last year.
The gap between expectation and experience is widening, putting pressure on customer experience (CX) teams to not just perform, but reassure.
This is not just a confidence gap; it’s a credibility crisis. And it’s happening right where trust should be built: in the contact center, in the mobile experience, and in every digital interaction that makes or breaks a customer relationship.
Trust Is Not Universal
Even when you’re transparent, such as telling customers they’re speaking to an AI (see “Why Open the AI Black Box?” below), what builds trust in one region can undermine it in another.
Why Open the AI Black Box?
You call a company to return a purchase or rebook after a canceled flight, only to be trapped in an endless maze of AI voice prompts. You’re not sure if you’re talking to a person or a bot. You’re transferred and asked to repeat yourself, pressing zero does nothing, and after hours on hold the call drops. We’ve all been there.
Now imagine that same interaction, but the brand knows who you are and what you’re likely calling about. There’s no need to repeat yourself, and you’re told upfront that you’re speaking to an AI assistant trained to resolve your issue, with the option to switch to a human at any point.
You may still not love the chatbot, but at least you know the rules of engagement. That’s transparency. This is why it’s important to open the AI “black box” and show customers what’s inside. It matters now more than ever, not just to capture revenue, but to build customers’ trust and loyalty.
One of the trickier truths for CX leaders to navigate is that trust looks different around the world. In markets like Latin America and the Asia-Pacific region, consumers are generally more open to sharing personal data, especially if it leads to better, more relevant experiences. But in North America and parts of Europe, skepticism runs deeper.
Beyond attitudes toward data-sharing, we’re also seeing divergent trends in brand loyalty and service expectations.
For example, consumers in the U.S. and U.K. are quicker to switch providers over trust issues, while those in countries like India, Brazil, and Australia show higher tolerance, but also higher expectations, for speed and personalization.
The data splits along generational lines, too. Gen Z, for instance, is far more protective of their location and social data than older groups, yet this same cohort expects brands to understand them on a near-individual level.
The message is clear: trust isn’t given; it’s earned interaction by interaction, and it’s context-dependent.
The same goes for communication preferences. Some markets prefer chat, others text. Some want rich media like video or voice; others still lean on email.
But regardless of format, one theme is consistent: customers want to feel in control of their experience, not analyzed behind the scenes without their input.
AI Needs Context
Let’s be clear: this is not an anti-AI argument. Used wisely, AI can be the single most powerful way to improve service, boost responsiveness, and unlock real-time personalization that customers actually value. But that only works when it’s paired with the right operational foundations and data.
Think of AI like a fast-moving train. If your data governance, customer consent models, and human fallback options aren’t in place, you’re speeding toward a cliff. But with the right context (clear disclosures, explainable decisions, opt-in settings), you’re not just moving faster. You’re moving smarter.
Some brands have already figured it out. One major global retail banking company recently integrated AI-driven segmentation across its contact center and marketing systems. Before rolling it out, the bank launched an education campaign explaining to customers how their preferences would be used and offered clear settings to opt in or out.
The result wasn’t just stronger engagement but higher trust scores across key satisfaction surveys, proof that clear communication can be as impactful as the technology itself.
Five Ways to Make Transparency a Practice
If you’re wondering where to begin, here’s where leading CX teams are focusing.
- Make it clear to customers when AI is being used. Don’t leave them guessing. Start interactions with clarity and purpose.
- Explain how data is used. Be proactive in disclosing what data you collect, why it matters, and how it benefits the customer.
- Give users control. Since preferences vary by region, age, and more, allow customers to set preferences, opt out, or adjust how their data is used. This isn’t just ethical; it’s what 84% of them want.
- Prioritize consistent experiences. If your chatbot behaves one way and your agent another, that inconsistency breeds doubt.
- Make it easy to reach a human. Automation should never be a dead end. Build bridges, not walls.
Transparency earns permission. It earns attention. And it earns loyalty. So, as you continue scaling and automating, don’t forget the basics: explain what you’re doing, show why it matters, and give customers the confidence to believe you’re on their side.
Because in the end, no technology builds trust on its own. People do.