CX Leaders Advance 2026 · Session Report

Navigating the
GenAI Paradox

Balancing Time-to-Value with Trust & Compliance

Every CX leader right now is caught between two forces: the board pushing for GenAI yesterday, and the compliance team asking what happens when it goes wrong. Moving fast enough to matter, carefully enough to stay out of the headlines.

This tension is the GenAI Paradox.

The framework cuts through it: a scoring matrix that gives your team a shared language to move fast where it's safe, and stop where it isn't.

Akos Tolnai, CCXP

60-minute breakout  ·  CX Leaders Advance 2026  ·  Toronto, Canada  ·  April 28–29

Based on original consumer research: 104 responses · Canada & US · Feb–Mar 2026


The Triage Canvas and Case Cards from this session are free to download until May 31.

Get the free toolkit
Act 1 · The Pressure Is Real

The Race
Is On.

The pressure to deploy GenAI is no longer a suggestion. It's a mandate. But before we get to the framework, let me tell you what the data said. Because it changed the story entirely.

Your customers already
live in an AI world.

The world your customers live in has fundamentally changed. They carry powerful AI in their pocket. They use it every day to summarise documents, answer complex questions, plan travel in seconds. And they're starting to wonder why your support channel doesn't feel anything like it.

Going into this research, we assumed that customers who use AI daily had been rewired to accept a trade-off: a good-enough, fast answer over a perfect one. Speed was winning the preference war. Or so we thought.

Hypothesis: Daily AI use had rewired customers to prefer speed. "80% right, right now" would beat "100% right, eventually."

Our Hypothesis Going In
Speed ("good enough") winning over Accuracy ("perfect, but wait").

Daily AI use had rewired customers to accept
"80% good, right now." Or so we assumed.

Your CEO isn't asking.
They're telling.

Your CEO, your board, your competitors: everyone is telling you to implement GenAI, and to do it yesterday. The conversation has moved off the technology agenda and onto the strategy agenda, and it comes with a deadline.

Every board deck has a GenAI slide. Every analyst report says your competitors are moving. And the message to leaders is clear: figure it out fast, and don't end up in the news in the process.

The pressure to move fast and the need to move carefully are not sequential problems. They arrive in your inbox at the same time. That is the paradox.

🚀
The Board: Move Now
Competitors are shipping. Investors are watching. Customer expectations are already set by consumer AI. Delay is a strategic choice with strategic consequences.
vs
🛡️
Compliance: Not So Fast
Hallucinations are real. Regulatory exposure is real. In finance, healthcare, or any regulated vertical, a wrong AI answer isn't a service failure. It's a liability event.

What do your customers
actually want?

A framework worth trusting needs to be anchored in real consumer data. Before presenting the matrix, I ran an independent survey across Canada and the United States: 104 responses, weighted toward educated, professionally active adults. The kind of customers many of you serve.

One important framing point: this data tells you what consumers want. That is a critical input for scoring risk honestly. But what works for consumers in general does not override your industry's compliance reality. A financial services firm and a quick-service restaurant face entirely different accountability structures. The scoring framework stays the same. The data helps you calibrate where your customers actually sit on the tolerance scale.

Survey sample: n=104 · Canada 54%, US 37% · Ages 35–54: 73% · Bachelor's or higher: 85% · AI Adopters (daily/weekly): 72% of sample

The data pushed back.
Hard.

We asked: "When you need support from a company, which matters more: speed with a good-enough answer, or accuracy even if you have to wait?" We expected the AI-savvy crowd to choose speed. They didn't.

Group  ·  Accuracy  ·  Speed  ·  Don't Know
All respondents (n=104)  ·  78%  ·  19%  ·  3%
AI Adopters (daily/weekly, n=75)  ·  72%  ·  27%  ·  1%
Traditional users (rarely/never, n=27)  ·  93%  ·  0%  ·  7%

Before you read this as "people are fine waiting longer for a correct answer," consider who answered. Among AI Adopters (people who use AI daily or weekly), 72% chose accuracy. They have already experienced fast AND accurate via the tools in their pocket. They are not choosing accuracy over speed. They are refusing to accept that the trade-off is still real. The bar hasn't been lowered. It's been raised.

72%
chose Accuracy, even among the most AI-savvy respondents in the sample
0%
of Traditional Users chose Speed.
Not a single person.
Act 2 · The Real Paradox

The bar hasn't
been lowered.
It's been raised.

Customers don't want fast or accurate. They've experienced what truly good AI feels like, and now they want both, simultaneously. The old forced choice is gone forever.

Fast AND accurate.
Not one or the other.

For decades, companies faced a genuine constraint: you could either deploy a fast automated system (scripted menus, IVR trees, keyword bots, available 24/7 but essentially dumb) or route customers to a trained human agent (slow, expensive, but accurate). There was no third option. This was the forced choice. Everyone accepted it as structural.

GenAI changed the constraint. For the first time, a system can be genuinely fast and genuinely intelligent. The forced choice is technically no longer necessary. But here is the problem: your customers already know this, because they have ChatGPT in their pocket. They've experienced the third option. And now they expect it from you.

The "dumb bot" isn't just unhelpful anymore. In a world where a thoughtful, contextual answer takes two seconds, a menu of scripted options is an insult to your customer's intelligence. 83% of our respondents confirmed exactly that.

83%
frustrated or infuriated
by scripted chatbots
2%
find them efficient
and easy

"How do you feel about support chatbots that can only offer a menu of options?" (n=104)

Infuriating, a waste of time
46%
Frustrating, I look for a workaround
37%
Acceptable for simple problems
13%
Efficient and easy
2%
94%

of AI Adopters already use personal AI to research their problem before contacting support.

They are not waiting for your bot. They are bypassing it entirely. They arrive at your support channel already armed with context, already expecting a human-grade answer.

"I am never interacting with an automated system on purpose. Every time this happens it's because I have exhausted every other option. Every second with a chatbot is a waste of my time."

Survey respondent  ·  Rarely uses AI  ·  Age 25–34  ·  United States

Notice who said this: someone who rarely uses AI themselves. Not a power user, not an early adopter. Someone who has already tried the FAQ, tried Google, and is asking you for help as a last resort. A menu of scripted options is a door slammed in their face.

Score your own GenAI initiatives with the framework. Download the canvas and case cards · free until May 31.

Get the free toolkit
Act 3 · The Risk of Getting It Wrong

"How do we do this
without ending up
on the news?"

⚠️
Loss of Control
Air Canada · 2024

Air Canada's support chatbot told a grieving customer that a bereavement fare discount applied to already-purchased tickets. The policy didn't exist. The customer relied on it. Air Canada argued in court that it wasn't responsible for what its chatbot said. The court disagreed. Air Canada was held fully liable. First ruling of its kind: you own what your bot says.

↗ Read the CBC report
📰
Brand & Reputation Damage
DPD · United Kingdom · 2024

A customer convinced DPD's branded AI support agent to write a poem criticising the company and to describe itself as useless. Screenshots went viral across social media. The bot said it on DPD's own platform, in DPD's support channel, under DPD's name. No warning. No override. The company had no control over what it said once the conversation went off-script.

↗ Read The Register report
⚖️
Regulatory & Legal Exposure
New York City MyCity AI · 2024

New York City launched an official AI assistant for businesses, built on city-provided information. The chatbot told small business owners to take actions that violated local employment law. Not vendor error, not a third-party tool: the City's own official AI, giving legally incorrect guidance to the people it was built to help. The city had to publicly retract the guidance and suspend the service.

↗ Read The Markup report

Are you accountable for
what your AI tells
your customers?

We know we have to act. But here's what every leader quietly wrestles with. GenAI can hallucinate. It can go off-brand. In a regulated industry, a wrong answer from your AI support agent isn't just a bad customer experience. It's a liability your legal team will be managing for months.

The hardest part? Your CEO is asking you to be accountable for the outputs of a technology that not even the people who built it can fully explain. That's not a technology problem. It's a governance problem. And it lands in your inbox.

Our survey gave us a precise picture of where trust breaks down with customers, and the data matters for anyone building an internal business case.

Customers who love AI personally apply a much stricter standard when a company is handling their problem. Personal AI use does not equal institutional trust in AI.

"Would you use a new AI support agent that might occasionally give incorrect information?"

Yes, willing to try  ·  No, want a human  ·  Don't know

Among AI Adopters, 55% were willing to try an imperfect agent, while 29% of even daily AI users flatly refused.

Among those who said Yes: why?

"Curious to see how it performs"
41%
"I can spot a bad answer"
22%
"Waiting is worse than the risk"
12%
"Speed is my top priority"
7%

41% said curiosity. Not speed, not trust. That's not adoption. That's a trial run. They're evaluating you. One bad answer doesn't just end the session. It ends the relationship.

Act 4 · The Navigator's Framework

A tool to decide
what to build,
and what to kill.

The CX GenAI Prioritization Matrix. Two axes: potential CX impact, and customer-centric risk. Score before you plot. Plot before you build.

Score the risk honestly.
Don't guess it.

Our survey shows the risk is almost always higher than CX leaders assume. Even customers who love AI personally apply a much stricter standard when a company is handling their problem. Before any idea lands on the matrix, run it through this three-factor scorecard.

Risk Scorecard · 3 Factors
01
What is the customer's emotional state?
High-stakes issue (billing, health, travel disruption) vs. low-stakes (FAQ, order status). Emotionally elevated customers need human-grade accuracy.
02
What is your audience's AI tolerance?
Our data: 29% of even daily AI users would flatly refuse an imperfect AI agent. Know your segment before assuming openness to automation.
03
What is your compliance burden?
Finance, healthcare, government = high scrutiny. A wrong AI answer in a regulated context isn't a service failure. It's a legal one.

Why the scorecard comes first

The instinct is to plot ideas directly on the matrix. Resist it. The scorecard forces an honest conversation with your team and often surfaces assumptions you didn't know you were making.

Our data showed 29% of the most AI-savvy customers would still refuse an imperfect AI agent outright. In most enterprise customer bases, that number is likely higher. You need to know your number before you ship anything.

The scorecard also gives you language for the conversation with legal, compliance, and executive stakeholders: not "we want to build an AI agent," but "we've scored this against three risk dimensions, and here is where it lands."

Plot & Decide

Once you've scored the risk, every GenAI initiative lands somewhere on this grid. The quadrant tells you the right posture and gives you a shared language to prioritise, sequence, and kill ideas.

↑ CX Impact

🟢 Quick Wins

Just Do It
High impact, low risk. Build momentum and prove ROI here first.
e.g. AI summarising case notes for agents

🟡 Strategic Bets

Plan Carefully
High impact, high risk. Needs executive air cover, pilots, and guardrails.
e.g. Fully autonomous customer journey agent

⬜ Incremental Gains

Automate & Delegate
Low impact, low risk. Good for ops efficiency. Don't oversell internally.
e.g. AI-powered ticket categorisation

🔴 Danger Zone

Avoid At All Costs
Low impact, high risk. The hardest no to say, but the most important one.
e.g. GenAI writing public apologies for outages
← Low Risk  ·  Audience Tolerance & Compliance Risk  ·  High Risk →

The Danger Zone is where the hardest conversations happen. High risk, low reward, but often where the most exciting-sounding ideas land. This framework lets you say no with data, not just instinct.
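The score-then-plot flow can be sketched in code. This is a minimal, hypothetical Python sketch, not the session's published scoring rules: the 1-to-5 scale per factor, the simple average, and the 3.0 quadrant threshold are all illustrative assumptions your team would calibrate in the workshop.

```python
# Hypothetical sketch of the Prioritization Matrix as code.
# The 1-5 scale, averaging, and threshold are illustrative assumptions.

def risk_score(emotional_stakes: int, audience_intolerance: int,
               compliance_burden: int) -> float:
    """Average the three scorecard factors, each rated 1 (low) to 5 (high)."""
    for factor in (emotional_stakes, audience_intolerance, compliance_burden):
        if not 1 <= factor <= 5:
            raise ValueError("each factor must be scored 1-5")
    return (emotional_stakes + audience_intolerance + compliance_burden) / 3

def quadrant(cx_impact: float, risk: float, threshold: float = 3.0) -> str:
    """Map an initiative onto the four quadrants of the matrix."""
    if cx_impact >= threshold:
        return "Strategic Bet" if risk >= threshold else "Quick Win"
    return "Danger Zone" if risk >= threshold else "Incremental Gain"

# AI case-note summarisation for agents: high impact, low scored risk.
print(quadrant(cx_impact=4, risk=risk_score(1, 2, 2)))  # Quick Win
# GenAI writing public apologies: low impact, high scored risk.
print(quadrant(cx_impact=2, risk=risk_score(5, 4, 4)))  # Danger Zone
```

The value of writing it down this way is that the threshold becomes an explicit, arguable number rather than an instinct: legal and CX can disagree about a score of 3 versus 4, which is exactly the honest conversation the scorecard is meant to force.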

Plot your own initiatives on the canvas. Download the free workshop toolkit.

Get the free toolkit
Act 5 · Strategy & Call to Action

Three tactical plays.
One path forward.

Not every organisation can move at the same speed. These plays let you sequence your GenAI adoption from lowest risk to highest, building confidence and evidence at each step before committing to the next.

Match the play
to your position.

Where you start depends on your industry, your compliance burden, and your leadership's appetite for risk. These three plays are not a menu to pick from. They are a progression. Most organisations should start at Play 01 regardless of ambition, because it builds the internal evidence base that makes Play 02 and 03 defensible.

01 Quick Win
Internal First: Start where the customer can't see the AI.
Deploy AI between your systems and your agents. Keep humans as the customer interface.
AI summarises call history so agents don't spend three minutes reading before responding. AI suggests the next-best action based on the case context. AI categorises and routes tickets before any human reads them. If the model gets something wrong, a trained human catches it before it reaches the customer. Zero customer-facing compliance exposure. Full productivity gain. This is where you prove ROI internally and build the confidence to go further.
What it unlocks
AI-assisted agent response drafting and tone guidance
Automated case summarisation at the start of every call
Smart ticket routing and priority scoring
Real-time knowledge surfacing during live conversations
02 Strategic Bet
Opt-In Sandbox: Test with customers who volunteer to test.
A defined pilot group, explicit consent, and clear terms. A Strategic Bet with a defined risk perimeter.
Select a group of customers who explicitly opt in under clear terms. Frame it directly: "This feature uses AI and may occasionally be imperfect. You can always ask for a human agent." The terms and conditions define the experiment, set expectations, and limit liability. You get real-world performance data before building for everyone. They get early access and feel like partners, not test subjects. Our survey found 55% of AI Adopters willing to try an imperfect agent, with 41% driven by curiosity. That is a significant opt-in pool. Use it before you build for the whole customer base.
What it unlocks
Real performance data that no lab test can replicate
Defined liability boundaries through explicit T&Cs
A clear path from pilot to production, with evidence
Customer insight that makes your board case credible
03 Long Game
Structured Content: Become the source every AI reaches for.
Design your knowledge base to be machine-readable and agent-accessible. Outsource the inference risk to the platform the customer already trusts.
94% of AI Adopters already use personal AI to research their problem before contacting support. They are doing this whether your content is ready for it or not. The strategic move is to design your help articles, product policies, and FAQs to be clean, accurate, and accessible via structured sitemap or API. When a customer's AI retrieves information about your product, it finds your authoritative version. The AI inference happens entirely on a third-party platform. Your liability is limited to the accuracy of what you publish. You are not the AI operator. You are the knowledge publisher. And if your content is better structured than your competitors', you are the default source for any AI that covers your category.
What it unlocks
Content becomes a scalable self-service layer at no model risk
Compliance exposure stays at source quality, not model inference
Positions you as the authoritative AI reference in your category
Returns compound as AI adoption grows across your customer base
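What "machine-readable help content" can look like in practice: one widely used option is schema.org FAQPage markup, which AI crawlers and answer engines can parse directly. The sketch below is illustrative; the question, answer, and helper name are placeholders, and schema.org is one format choice among several (a structured sitemap or a dedicated API are others the play mentions).

```python
# Illustrative sketch: render help-centre Q&A pairs as schema.org
# FAQPage JSON-LD. The content here is placeholder, not real policy.
import json

def faq_jsonld(entries: list[tuple[str, str]]) -> str:
    """Serialise (question, answer) pairs as FAQPage structured data."""
    doc = {
        "@context": "https://schema.org",
        "@type": "FAQPage",
        "mainEntity": [
            {
                "@type": "Question",
                "name": question,
                "acceptedAnswer": {"@type": "Answer", "text": answer},
            }
            for question, answer in entries
        ],
    }
    return json.dumps(doc, indent=2)

markup = faq_jsonld([
    ("Can I change my order after checkout?",
     "Yes, within 30 minutes via your account page."),
])
print(markup)
```

The design point is the one the play makes: the model inference happens on someone else's platform, so your exposure is limited to the accuracy of what you publish, and publishing it in a structured, authoritative form is what makes your version the one the customer's AI retrieves.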

These plays are not mutually exclusive. Most mature programmes run all three simultaneously: agents are using Play 01 today, a pilot cohort is running Play 02, and the content team is executing Play 03 in the background. The sequencing is about where you start, not where you stop.

Ask This on Monday Morning
"What is our official triage model for GenAI in our customer journey?"
If no one can answer it, you now have the framework to lead that conversation.
1
The bar is higher than you thought
78% of consumers still prioritise accuracy. Customers want accurate and fast, simultaneously. Only GenAI can credibly promise both. That's exactly why this matters so much, and why the stakes are so high if you get it wrong.
2
Willing adopters are on a trial run
41% said curiosity. Not speed, not trust. They're evaluating you. Score the risk before you build anything customer-facing. One bad answer doesn't just end a session. It ends the relationship, and probably starts a review.

Navigate the
Paradox.

Scan to connect on LinkedIn
Akos Tolnai, CCXP
linkedin.com/in/akostolnai
Free download · Until May 31, 2026
Get the Workshop Toolkit
The Triage Canvas and Case Cards from the CXLA 2026 session.
Enter your name and email to download.