In December 2025, the Federal Trade Commission (FTC) fined Instacart $60 million for trapping customers in automated loops with no way out. A few weeks later, California’s AB 578 went into effect, requiring food delivery platforms to provide access to a human when automation fails.
This isn’t a food delivery story. It’s a contact center story.
The complaint data that drove that legislation exists in our industry too. The regulatory attention landed on food delivery first because that’s where the consumer evidence was loudest. It’s moving.
The movement is gaining international momentum, with Spain recently passing legislation requiring large companies to answer customer service calls within three minutes and prohibiting the exclusive use of automated systems.
And the operational gaps regulators are targeting – broken escalation paths, missing context, automation that doesn’t know when it’s failing – are the same ones contact center leaders have been quietly managing around for years.
What to Fix?
So, let’s talk about what to actually fix.
1. Escalation paths
This is simpler than you might think, but messier than you'd expect.
AB 578’s requirement is straightforward: when automation can’t resolve a request, a human must be available. That’s it.
The question is whether your operation actually clears that bar. Not in theory but in practice.
- How many steps does it take a customer to reach a human when the bot fails?
- Is that path visible to them or do they have to fight for it?
Run the flow yourself. If it takes more than two steps from failure to human, you have a gap. If the path isn’t clearly surfaced, surface it.
This isn’t a complex fix. It’s a design choice that was made incorrectly and hasn’t been revisited.
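One way to make "run the flow yourself" repeatable is to model the IVR flow as a graph and measure the shortest path from the failure point to a human. This is a minimal sketch; the flow structure and node names (`bot_failure`, `agent_queue`, and so on) are illustrative assumptions, not any real platform's configuration.

```python
from collections import deque

# Hypothetical IVR flow: each node maps to the options a customer can choose next.
# Node names are illustrative, not from any real platform.
FLOW = {
    "bot_failure": ["main_menu"],
    "main_menu": ["billing", "self_service", "agent_queue"],
    "billing": ["self_service"],
    "self_service": ["main_menu"],
    "agent_queue": [],  # a live human picks up here
}

def steps_to_human(flow, start="bot_failure", human="agent_queue"):
    """Breadth-first search: fewest customer actions from failure to a human."""
    seen, queue = {start}, deque([(start, 0)])
    while queue:
        node, depth = queue.popleft()
        if node == human:
            return depth
        for nxt in flow.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, depth + 1))
    return None  # no path at all: the worst kind of gap

print(steps_to_human(FLOW))  # 2 steps: failure -> main_menu -> agent_queue
```

If this function ever returns a number above two, or `None`, that is the gap the audit is looking for.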
2. Context transfers
This is where the real damage happens.
The escalation path is table stakes. Context transfer is where most contact centers lose the most ground, and where the next wave of regulation is pointing.
California’s AB 1018, which is currently working its way through the legislature, would require organizations to retain records of automated decisions for the life of the system plus five years. Fragmented handoffs stop being just a service problem under that standard. Instead, they become a records liability.
But forget the compliance framing for a second. When a customer moves from a bot to an agent and the conversation history doesn’t follow them, the agent starts cold. The customer has to repeat everything. That interaction – the one that was already failing before it reached a human – now has to rebuild trust from scratch.
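In practice, the handoff described above is just a payload that travels with the escalation. Here is a minimal sketch of what that payload might carry; the field names and the `HandoffContext` structure are assumptions for illustration, not a standard schema.

```python
from dataclasses import dataclass

# Illustrative handoff payload; field names are assumptions, not a standard schema.
@dataclass
class HandoffContext:
    customer_id: str
    transcript: list[str]          # what the bot and the customer actually said
    attempted_intents: list[str]   # what the customer tried
    failure_point: str             # where the automation broke down

def agent_briefing(ctx: HandoffContext) -> str:
    """One-glance summary so the agent never starts cold."""
    return (
        f"Customer {ctx.customer_id} tried: {', '.join(ctx.attempted_intents)}. "
        f"Automation failed at: {ctx.failure_point}. "
        f"Last exchange: {ctx.transcript[-1]}"
    )

ctx = HandoffContext(
    customer_id="C-1042",
    transcript=["Bot: How can I help?", "Customer: Cancel my order", "Bot: I didn't understand."],
    attempted_intents=["cancel_order"],
    failure_point="intent recognition",
)
print(agent_briefing(ctx))
```

The point is not the specific fields. It is that if nothing like this object crosses the bot-to-agent boundary, the customer is starting over by design.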
One in three customers leaves a brand after a single bad experience. Most don’t complain first. They just leave. That’s the cost of starting over.
Getting context transfer right is good operations. It’s also increasingly the floor that regulators are moving toward. When those two things point in the same direction, it’s worth paying attention.
3. Automation failures
The harder question is this: does your automation know when it’s failing?
Most IVR and virtual agent systems are built to contain volume. That’s a legitimate goal. But containing volume and recognizing failure are different design objectives, and most systems optimize hard for the first one without building much capacity for the second.
I’ve seen this pattern a lot: a system that routes customers in circles – offering options that don’t resolve anything, re-presenting the same menu, suggesting self-service that doesn’t apply – without ever triggering an escalation.
It’s doing its job on paper. Containment rates look fine. Meanwhile, the customer has been in the IVR for nine minutes and is about to churn.
Under AB 1018’s proposed requirement for plain-language explanation of automated decision-making in real time, that design becomes a compliance exposure. But honestly, it’s a problem worth fixing before any regulator asks about it.
The test I’d use: could you explain to a customer, on that call, what just happened and why the system responded the way it did? If the answer is no – if the routing logic is opaque even internally – that’s your gap.
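Recognizing failure means defining explicit triggers and, just as importantly, a plain-language reason the system can surface when one fires. This sketch assumes a simple event log per session; the thresholds and event shape are illustrative, not a recommendation.

```python
# Sketch of explicit escalation triggers; thresholds and event shape are
# illustrative assumptions, not a recommended standard.
def should_escalate(events, max_menu_repeats=2, max_seconds=300):
    """Return (escalate, plain_language_reason) so the decision is explainable."""
    menu_views = [e for e in events if e["type"] == "menu_shown"]
    # Duplicates across the session: total views minus distinct menus seen.
    repeats = len(menu_views) - len({e["menu_id"] for e in menu_views})
    elapsed = events[-1]["t"] - events[0]["t"] if events else 0
    if repeats >= max_menu_repeats:
        return True, "Customer is seeing the same menus again without resolution."
    if elapsed >= max_seconds:
        return True, f"Customer has been in automation for {elapsed // 60} minutes."
    return False, "Automation is still making progress."

events = [
    {"type": "menu_shown", "menu_id": "main", "t": 0},
    {"type": "menu_shown", "menu_id": "billing", "t": 60},
    {"type": "menu_shown", "menu_id": "main", "t": 120},
    {"type": "menu_shown", "menu_id": "main", "t": 180},
]
print(should_escalate(events))  # escalates: the main menu keeps repeating
```

If you cannot write a function like this because the routing logic is opaque even internally, that opacity is the finding.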
What This Means For Your Agents
Here’s the part that gets lost in compliance conversations: there’s an upside to this moment.
As automation absorbs more routine volume, the interactions reaching human agents are getting harder.
They are more complex, more emotionally charged, and more consequential. These are the moments that determine whether a customer stays.
Research consistently shows customers will pay more for a better experience. But what they’re really paying for in those high-stakes moments is the feeling that someone already understands their situation.
That requires agents to have full context when they pick up. What the bot said. What the customer tried. Where things broke down. An agent who starts with that context can solve the problem. But an agent who starts cold has to re-litigate it first.
We in the contact center industry have a responsibility here. Not just to meet a regulatory bar, but to give agents a real shot at doing their job well when it matters most.
Three Things to Audit Now
Before the next bill lands, here’s where to start.
1. Escalation paths
Map the actual steps from automation failure to a human agent. More than two? Simplify. Not clearly surfaced to the customer? Fix that.
2. Context transfers
Confirm that conversation history, account context, and bot interaction data follow the customer when they escalate. If agents are starting cold, fix the handoff.
3. Automation failure recognition
Review whether your virtual agent or IVR has defined escalation triggers: the specific conditions that route an interaction to a human instead of continuing to loop. If it doesn’t, build them in.
None of this requires waiting for legislation. The customer stuck in your IVR for nine minutes was always a problem worth solving. The regulatory environment is just making the cost of not solving it more explicit.