The Case for Human Judgment in AI-Powered Customer Experience

Customer service leaders have spent years chasing automation as the answer to scale and cost pressures. Despite rapid advances in AI, the customer experience data tells a different story, one that shows where technology stops and human judgment must step in.

Authenticx, a conversation intelligence firm that studies how people speak to healthcare organisations, has analysed more than half a billion customer interactions this year alone. Its findings suggest that automation often increases frustration rather than reducing it, especially in industries where emotion and risk are high.

Amy Brown, CEO of Authenticx, begins: “Customers have a higher degree of negative sentiment when they’re talking to bots than when they’re talking to humans. They don’t feel like they’re getting the nuanced support that’s specific to their situation.”

The Limits of Automation

Brown’s observations come largely from the healthcare sector, where Authenticx’s clients face some of the most sensitive and high-stakes interactions. “These are very vulnerable situations in which people are desperate for help. When they feel the AI bot isn’t understanding their reality, they get even more frustrated,” she says.

That frustration often stems from how chatbots are introduced, and from how rarely that introduction is handled well.

“Clients have deployed AI bots without properly training them and assessing what things would be appropriate for a bot to reply to versus a human. It’s not about building a better bot but being intentional with design,” Brown explains.

Before automation begins, she argues, companies should take time to listen. “Do more research and development before deploying a bot. Listen to the conversations already happening between customers and agents, and build bot capabilities around those real-life scenarios.”

When to Bring Humans Back In

One of the most telling signals in Authenticx’s data is how customers behave when they’re stuck. “Some companies work very hard to not have a human brought into the conversation,” Brown adds. “You see customers asking to speak to a person, staying in this endless loop of frustration because the bot isn’t transferring them.”

Brown notes that the solution is structural: customer journeys must be designed with “exit ramps” that allow customers to reach a human at any point. She says:

“Having those exit ramps from a bot to a human is super important. In healthcare, when people are making an inbound call, it’s because they’re already stuck in a problem. If the bot hasn’t been trained to solve that problem, they hit a roadblock quickly.”
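In engineering terms, an exit ramp is simply an escalation check that runs on every turn of the conversation rather than being buried at the end of a flow. The sketch below is illustrative, not anything Authenticx has published; the trigger phrases, the three-turn threshold and the transfer_to_agent hook are all hypothetical.

```python
# Hypothetical sketch of an "exit ramp": every bot turn first checks
# whether the customer should be handed to a human, so no one is
# trapped in a loop. Phrases and thresholds are illustrative only.

HUMAN_REQUEST_PHRASES = ("speak to a person", "talk to a human", "real person", "agent")

def wants_human(message: str) -> bool:
    """True if the customer is explicitly asking for a person."""
    text = message.lower()
    return any(phrase in text for phrase in HUMAN_REQUEST_PHRASES)

def handle_turn(message: str, unresolved_turns: int) -> str:
    """Route one conversational turn, checking escalation before answering."""
    if wants_human(message) or unresolved_turns >= 3:
        return "transfer_to_agent"  # exit ramp: never loop past this point
    return "bot_reply"

if __name__ == "__main__":
    print(handle_turn("I need to talk to a human", unresolved_turns=1))   # transfer_to_agent
    print(handle_turn("What are your opening hours?", unresolved_turns=0))  # bot_reply
```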

Companies that classify interactions by risk — deciding which tasks can safely be automated and which require human discretion — are the ones most likely to maintain trust. “If there’s a compliance issue, or a customer in a mental health crisis, that’s when you want a human in the loop,” she says.

“When the need is purely administrative, that’s where AI can perform reliably.”
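Brown’s split between administrative and high-stakes work lends itself to a routing table. The sketch below uses invented category labels; a real deployment would classify interactions with a trained model rather than a lookup, but the shape of the decision is the same.

```python
from enum import Enum

class Route(Enum):
    BOT = "bot"
    HUMAN = "human"

# Hypothetical risk tiers: administrative tasks stay automated,
# high-stakes categories always get a person. Labels are illustrative.
RISK_ROUTING = {
    "address_change": Route.BOT,
    "billing_status": Route.BOT,
    "refill_reminder": Route.BOT,
    "compliance_issue": Route.HUMAN,
    "mental_health_crisis": Route.HUMAN,
}

def route(category: str) -> Route:
    # Default unknown categories to a human: in high-risk domains,
    # the safe failure mode is over-escalation, not under.
    return RISK_ROUTING.get(category, Route.HUMAN)

print(route("billing_status"))        # Route.BOT
print(route("mental_health_crisis"))  # Route.HUMAN
```

Defaulting unknown categories to a human mirrors Brown’s point that in healthcare, over-escalation is the safer failure mode.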

The Real Cost Equation

For all the talk about automation savings, Brown points out that fully replacing humans with automation rarely delivers the intended outcome.

“When trained well, which takes a lot of time and investment, bots can replace some interactions,” she says. “But you have to have humans in the loop making sure the bots are doing the right thing.”

Those oversight roles, from data scientists to quality managers, create new costs that offset headcount reductions. “It’s less of a cost-savings opportunity and more of a chance to elevate the human workforce,” Brown says. “Let bots do what they do predictably well, and let people focus on higher-value work.”

That recalibration of value, she believes, is part of a broader reckoning. “There was this belief that bots would replace human beings and save a lot of money. In reality, it’s not as easy as we once thought. There is no short, easy path to profitability,” she says.

Teaching AI Empathy

Authenticx’s models are built by analysing human-to-human conversations to identify patterns that signal empathy and problem-solving. “Our models are trained by studying what empathy looks like, what professionalism sounds like, what effective problem resolution feels like,” Brown explains. “Once you understand that, you can build a rule set for AI models to follow.”
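Authenticx has not published its model internals, but the idea of turning observed conversational patterns into a rule set can be sketched in miniature. The marker phrases and weights below are invented purely for illustration.

```python
# Illustrative only: a toy rule set that scores an agent utterance for
# empathy markers, in the spirit of deriving rules from real
# conversations. Phrases and weights are invented, not Authenticx's.

EMPATHY_MARKERS = {
    "i understand": 2.0,
    "i'm sorry you're dealing with": 3.0,
    "let me help": 1.5,
    "take your time": 1.0,
}

def empathy_score(utterance: str) -> float:
    """Sum the weights of every empathy marker present in the utterance."""
    text = utterance.lower()
    return sum(weight for phrase, weight in EMPATHY_MARKERS.items() if phrase in text)

print(empathy_score("I understand, let me help you with that."))  # 3.5
```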

Still, she stresses that AI will never fully anticipate the next scenario. “New situations can come up at any point, like market shifts and global events, and a bot may not have been trained for them,” she says. “That’s why it’s so important to always have human oversight. You can do a lot of innovative things with AI, but you should never let it run without thoughtful intervention.”
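One lightweight form of the oversight Brown describes is routinely pulling bot-handled transcripts for human review. The sketch below assumes a made-up record shape; the sampling rate and the flagging rule are placeholders, not a recommendation.

```python
import random

# Hypothetical sketch: audit a random sample of bot-handled conversations
# each day, and always include those flagged (e.g. for negative sentiment).

def select_for_review(conversations, sample_rate=0.05, seed=None):
    """Return the conversations a human should audit.

    Each conversation is a dict with an 'id' and a 'flagged' bool.
    Fields are illustrative, not a real schema.
    """
    rng = random.Random(seed)
    flagged = [c for c in conversations if c["flagged"]]
    unflagged = [c for c in conversations if not c["flagged"]]
    k = max(1, int(len(unflagged) * sample_rate)) if unflagged else 0
    return flagged + rng.sample(unflagged, k)

convos = [{"id": i, "flagged": i % 17 == 0} for i in range(100)]
print(len(select_for_review(convos, seed=42)))  # all flagged plus a 5% sample
```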

Designing a Smarter Partnership

If customer service were to be redesigned for today’s AI-driven environment, Brown says it should start with reality rather than aspiration.

“You can’t build a positive customer experience if you don’t start with the truth of what’s happening,” she says. “Listen to real-world experiences, then make thoughtful decisions about which interactions are best served by AI and which need humans.”

The relationship between AI and humans, she argues, shouldn’t be competitive but collaborative. “I think of it as a partnership. There’s something in the middle — a continuum where humans and AI work together based on risk and reward for the customer,” she adds.

That partnership may be the real opportunity to engineer systems that know when empathy is required. “It’s not very popular to slow down and be intentional. But that’s what’s needed if we’re going to use the technology in a way that doesn’t backfire,” Brown concludes.