Automation, Refunds, and Rights to a Human

The wide-ranging implications of California’s new laws.

When a customer’s dinner never arrives, most people don’t want app credits, maze-like navigation menus, or a chatbot loop; they want their money back and a human who can fix the problem.

California’s new Assembly Bill 578 makes that expectation a legal requirement for food delivery platforms: full cash refunds to the original payment method and access to a human customer service representative when automation can’t resolve the issue.

It’s a narrow law on paper, but it’s also one of the clearest U.S. signals yet that regulators are ready to step into automation-first customer journeys.

What AB 578 Really Does

California’s AB 578, which took effect on January 1, 2026, requires food delivery platforms that operate there to refund customers for orders that are not delivered or are delivered incorrectly. They must also return that money - including taxes, fees, and tips - to the original form of payment rather than issuing app-only credits.

Platforms can deny a refund only if they can show the customer is responsible or if fraud evidence exists. The law also protects couriers by prohibiting platforms from clawing back refunded gratuities from drivers.

Contact leaders will recognize the transparency and service obligations here. Delivery apps must provide itemized breakdowns of each transaction and, crucially, must offer access to a human customer service representative when a customer’s problem cannot be resolved through automated systems.

In other words, California is not banning automation; it’s codifying that automation cannot be the only path when there is a live dispute about money, service, or responsibility.

Even if you never touch food delivery, this is a big deal. AB 578 is a concrete statutory example of a pattern we’re starting to see across channels and sectors (also see Figure 1).

Namely, automation is acceptable, even expected, but only if customers can see what’s happening, understand their rights, and reach a human when the stakes are high.

A Broader “Automation Plus Human Fallback” Trend

California has already sent other signals in this direction. SB 243, the state’s new “companion chatbot” law, targets AI systems that provide human-like, emotionally supportive interactions. It requires:

  • Clear disclosure that the user is talking to a chatbot.
  • Safety protocols around self-harm and sexual content.
  • Additional protections for minors.

The common thread with AB 578? Discomfort with “silent automation”: systems that look and feel human but aren’t, and which may not have obvious escape hatches when something goes wrong.

At the federal level, the proposed “Keep Call Centers in America Act of 2025” (introduced as S. 2495 in the Senate and H.R. 4954 in the House) pushes the same themes into the broader contact center and outsourcing space.

  • Businesses handling customer service would have to disclose the physical locations of their agents at the start of interactions. If the agents are overseas, they must inform customers of their right to transfer to U.S.-based human agents.
  • For AI or automated systems, companies would have to clearly disclose that automation is being used and offer a transfer to a human agent upon request.

Add in new and pending chatbot transparency laws in states like New York, along with AI and customer experience (CX) bills in jurisdictions such as Maine, Utah, Nevada, and Illinois, and a pattern emerges (also see Figure 2).

The pattern is this: regulators are not trying to freeze customer service in the past, but they do want three things for customers navigating automated experiences - disclosure, agency, and human escalation. Automation is fine as long as it is transparent and always comes with a real path back to a human.

If you run outbound programs, this feels familiar. TCPA and FCC rules already constrain automated voice and AI-assisted calling, requiring consent for many types of calls and texts and giving consumers clear opt-out rights.

The same underlying values are now showing up on the inbound and service side: customers should know when automation is involved, should have a say in how far it goes, and should be able to reach a human when the interaction affects their money, safety, or legal position.

Implications for Digital-First, Automation-Heavy CX

For digital-first platforms, AB 578 is a warning shot against “IVR lock-in” and chatbot traps.

If your business model relies heavily on self-service flows, you now need to ask hard questions about where those flows can safely stop and where the law, or simply customer expectation, will demand a human.

Food delivery is the first category explicitly singled out in California. But it’s not hard to imagine similar rules extending to travel cancellations, subscription renewals, insurance claims, or recurring billing disputes.

From a design perspective, that means mapping your automated journeys with the same rigor you apply to compliance controls.

So you need to ask yourself, and have answers for, this question: “Where are my customers most likely to contest charges, allege fraud, or raise issues that could escalate to regulators or social media?”

Those points should have prominent, documented pathways to human agents, not just buried “contact us” options.

Outbound teams are affected too. When a refund or complaint triggers follow-up calls or messages - think collection of negative balances, outreach about disputed transactions, or make-good offers - the same customer who just battled your automated gauntlet may be less tolerant of robocall-style outreach.

The safest posture is to treat outbound and inbound as a unified CX and compliance surface. Disclosures, consent, human access, and record keeping should be harmonized, not siloed across separate teams with different thresholds.

Why “Highest Common Denominator” Is Safer

Brands operating across multiple states and countries wrestle with a messy patchwork of rules around refunds, chatbot transparency, agent location, and escalation rights.

A California-only AB 578 workflow, a different one for New York’s chatbot rules, and yet another for Canadian and European jurisdictions with their own consumer protection, language, and privacy laws might look efficient on paper, but it becomes fragile in practice.

Agents can get confused about which rules apply, especially when they don’t know where the customer is located. Documentation fractures across systems. And proving compliance in a cross-border complaint or class action gets harder, not easier.

An alternative play is to treat AB 578 and its peers as a preview of where the floor is headed and build a higher internal standard that can travel. That might mean:

  • Adopting clear, consistent bot and AI disclosures everywhere, mandated or not.
  • Making a human escalation path obvious in every high-stakes flow, regardless of state (or country).
  • Defaulting refunds to the original card when you’re at fault with narrow, evidence-based exceptions.
  • Logging automated interactions and escalation decisions so that Governance, Risk and Compliance (GRC) and Legal can actually find what they need.
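The “highest common denominator” approach above can be sketched as a simple policy merge that always keeps the strictest value for each control. This is a hypothetical illustration, not any real compliance library; the jurisdiction names and control names are assumed for the example:

```python
# Hypothetical sketch: merge per-jurisdiction CX rules into a single
# "highest common denominator" policy. Boolean controls combine with OR
# (if any jurisdiction requires a control, every customer gets it);
# numeric limits take the strictest (lowest) value.

def strictest_policy(jurisdiction_rules):
    merged = {}
    for rules in jurisdiction_rules.values():
        for control, value in rules.items():
            if control not in merged:
                merged[control] = value
            elif isinstance(value, bool):
                merged[control] = merged[control] or value  # any "required" wins
            else:
                merged[control] = min(merged[control], value)  # strictest limit wins
    return merged

# Assumed example rules, loosely inspired by the laws discussed above.
rules = {
    "CA":      {"bot_disclosure": True,  "human_escalation": True,  "max_bot_attempts": 2},
    "NY":      {"bot_disclosure": True,  "human_escalation": False, "max_bot_attempts": 3},
    "default": {"bot_disclosure": False, "human_escalation": False, "max_bot_attempts": 5},
}

policy = strictest_policy(rules)
# Every customer now gets bot disclosure, human escalation, and a
# two-attempt cap, regardless of where they happen to be located.
```

The design choice here is that agents and systems apply one policy everywhere, so nobody has to determine the customer’s location correctly in real time to stay compliant.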

Viewed through a GRC lens, these are not just user experience (UX) preferences; they are controls. They define how the organization treats consumer harm, complaint handling, and regulatory exposure in real time.

A Practical Playbook for Contact Centers

Here’s what leaders can do right now:

  1. Inventory automation. Map where bots, IVRs, and automated emails or texts are making decisions about money, access, or legal outcomes. Prioritize flows that deny refunds, close tickets, or impose fees.
  2. Nail down “human required” scenarios. Use AB 578 as a template: non-delivery, botched service, fraud claims, security events, and any situation that reasonably implicates consumer harm should have guaranteed human handling.
  3. Build in disclosure and easy exit. Make it explicit when a customer is interacting with AI or automation (spell out “you’re talking to AI” upfront) and make the human button impossible to miss, especially after one or two failed automated attempts.
  4. Align outbound and inbound rules. Ensure the same consent, disclosure, and escalation standards apply to both inbound and outbound when you’re calling or texting customers about the outcomes of those disputes.
  5. Make it GRC, not a one-time UX tweak. Monitor where automation breaks, track complaints about “I can’t reach a human,” and feed that data back into both product design and compliance oversight.
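Steps 2 and 3 above boil down to a routing rule: some scenarios always get a human, and everything else escalates after a bounded number of failed automated attempts or on request. A minimal sketch, with hypothetical intent names and an assumed two-attempt threshold (AB 578 itself does not specify a number):

```python
# Hypothetical escalation rule for a service bot. Intent names and the
# attempt threshold are illustrative assumptions, not statutory text.

HUMAN_REQUIRED_INTENTS = {"non_delivery", "fraud_claim", "refund_denied", "security_event"}
MAX_FAILED_BOT_ATTEMPTS = 2  # assumed "easy exit" threshold

def route(intent, failed_bot_attempts, human_requested):
    """Return 'human' or 'bot' for the next turn of the conversation."""
    if intent in HUMAN_REQUIRED_INTENTS:
        return "human"  # guaranteed human handling for high-stakes scenarios
    if human_requested:
        return "human"  # transfer on request, as proposed in S. 2495 / H.R. 4954
    if failed_bot_attempts >= MAX_FAILED_BOT_ATTEMPTS:
        return "human"  # easy exit after repeated automation failures
    return "bot"
```

The point of encoding this as an explicit, logged rule rather than burying it in conversation design is step 5: GRC and Legal can audit exactly when and why a customer was, or was not, offered a human.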

Where Legislation Is Headed

AB 578 won’t be the last word on how refunds are processed or when humans must step in. It is, however, an unusually clear example of legislators codifying what many consumers already assume: automation may be the front door, but it cannot be the only door.

As more states experiment with refund and CX rules - and as federal lawmakers probe AI and offshoring in contact centers - brands that build around transparency, consent, and human fallback now will be in a far better position than those that cling to opaque, automation-only models.

For contact center leaders, that’s not just a compliance story. It’s an opportunity to get ahead: craft outreach and service that lean into automation’s speed while honoring AB 578’s core truth that when something goes sideways - especially with money - the consumer deserves a real person who can fix it.

Melody Morehouse

Melody Morehouse, MBA directs Regulatory Compliance at Gryphon AI, driving federal & state regulatory intelligence including TCPA/TSR/FDCPA/CAN-SPAM, plus contact strategy. With deep expertise in telecommunications, consumer privacy, and marketing, she turns complex regulations into practical controls for compliant outbound reach.
