We have all had
this conversation before. You call a company with a real problem. Not a menu
problem. Not a “press 1 for billing” problem. A human problem. You explain it
carefully and patiently, and you wait.

Then the voice
changes. 

You’re transferred,
or escalated, or “seamlessly handed off.” And suddenly you’re explaining
everything again from the beginning. Whatever you said before didn’t travel
with you. The system didn’t listen to you; it processed you. That
difference matters. And it explains why so much AI-powered customer service
feels impressive on the surface but deeply frustrating in practice. 

Customer service
has become one of the earliest and most aggressive use cases for artificial
intelligence. Chatbots greet us on websites. Voice assistants answer phones.
Some companies are even deploying AI “humans” that speak fluidly, pause
naturally and sound uncannily real. 

On paper, this makes sense. Call centers are expensive, labor-intensive, highly measurable and metrics-driven. On the surface, those characteristics make them seem well suited for AI. Adding to the argument, most companies treat customer service as a cost center, not a differentiating, value-driving part of their business. If AI can reduce that cost and increase efficiency, why wouldn’t companies adopt it?

 And yet, nearly
everyone agrees the AI service experience is getting worse. 

This isn’t because
AI is dumb. And it’s not because customers are unreasonable. The problem runs
much deeper than training data or better prompts. It’s a system architecture
problem.

 To understand why,
you have to go back to how digital systems were built in the first place.
Computers evolved to manage structured information. Rows and columns. Fields
and records. Names, account numbers, balances, timestamps. 

This wasn’t an accident; it was an economic and technical necessity. Structured data
is cheaper to store, easier to process, faster to retrieve and far more
efficient to move across networks. Engineers and scientists found ways to
minimize the number of bytes required to store measurements. They invented
compression algorithms to lower the energy cost of wirelessly transmitting
data. They created standardized methods for structuring data and databases to
make retrieval and analysis easier.

The design bias
towards energy and storage minimization shaped everything that followed, from
customer relationship management systems and billing platforms to ticketing and
administration tools. Software systems and products optimized around discrete
facts that could be cleanly captured, standardized and reliably repeated.

So when you call a
company today, the system almost certainly knows who you are. It knows how many
times you’ve called before. It knows how long those calls lasted. It may even
know which products you own and which scripts were read to you last time.

What it doesn’t
know is how that experience felt. 
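The gap is easy to see if you sketch what a typical CRM interaction record actually holds. The record below is a minimal illustration, not any specific vendor's schema; the field names are hypothetical. Every attribute is a discrete, structured fact, and the one thing the essay is about has no field at all.

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical CRM interaction record. Every attribute is a discrete,
# structured fact of the kind legacy systems were built to store.
@dataclass
class InteractionRecord:
    account_id: str
    call_count: int
    last_call_duration_s: int
    products_owned: list
    last_script_id: Optional[str] = None
    # Note what is absent: no field for how the customer felt, what
    # frustrated them, or how their tone changed during the call.

record = InteractionRecord(
    account_id="A-10293",
    call_count=4,
    last_call_duration_s=1260,
    products_owned=["broadband", "mobile"],
    last_script_id="RETENTION-07",
)

# The structured fields answer "what happened", not "how it felt".
print(record.call_count)                            # 4
print("sentiment" in record.__dataclass_fields__)   # False
```

The record answers every "what happened" question and none of the "how did it feel" questions.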

Real customer
support doesn’t live in fields and forms. It lives in context. Humans are not
good at remembering exactly what we said, but we are extremely good at
remembering what we meant. Computers are the opposite.

True customer support lives in whether your last
interaction left you calm or furious and what specifically triggered that
frustration. In whether your tone softened as the conversation progressed or
hardened. In the words you chose, the pauses you took, the moment when patience
turned into resignation.

Humans absorb this
kind of information effortlessly. We summarize it instinctively. We adjust our
approach in real time. But none of that fits neatly into a database column.
There is no standardized field for emotional residue. No dropdown menu for
“customer is technically satisfied but deeply annoyed.” And so most systems
simply ignore it.

There is interesting research into improving the contextual detail stored in knowledge graphs and in vector stores with semantic clustering. There are efforts to build computational memory decay that mimics how human associations and meanings change over time. But true human-like AI memory is still a research project, not a deployable capability.
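One of those research directions can be sketched in a few lines. The idea behind memory decay is to discount older stored interactions when recalling context, so recent experience dominates unless an old memory is overwhelmingly relevant. The half-life value and the records below are illustrative assumptions, not a production design.

```python
# Time-weighted recall over stored interaction summaries, loosely
# mimicking memory decay. HALF_LIFE_DAYS is an illustrative assumption.
HALF_LIFE_DAYS = 30.0

def decay_weight(age_days: float) -> float:
    """Exponential decay: weight halves every HALF_LIFE_DAYS."""
    return 0.5 ** (age_days / HALF_LIFE_DAYS)

# (age in days, relevance score from a hypothetical semantic search)
memories = [
    (2.0, 0.70),    # recent, moderately relevant
    (90.0, 0.95),   # old, highly relevant
    (400.0, 0.99),  # very old, nearly exact match
]

# Rank by relevance discounted by age: recent context wins unless an
# old memory is overwhelmingly relevant.
ranked = sorted(memories, key=lambda m: m[1] * decay_weight(m[0]),
                reverse=True)
print(ranked[0])  # (2.0, 0.70) — the recent memory outranks the old ones
```

Even this toy version shows the trade-off: the decay curve is a design choice, and choosing it badly either drowns the system in stale detail or discards the history that made the context meaningful.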

AI has run headlong
into the unstructured data wall. Emotion, tone, cadence, silence and nuance
all live in unstructured data – audio streams, transcripts, timing patterns and video. Brute-force capture of that data is expensive. Storing it at scale is
expensive. And re-processing it across multiple interactions is even more
expensive. That expense is not just in dollars, but in energy.

At call-center
scale, retaining and repeatedly analyzing rich interaction data for millions of
customers quickly becomes a data center problem. And when faced with that
reality, most companies do what they’ve always done: they summarize, compress and discard.

The result is an AI
system that sounds attentive but remembers almost nothing that actually
matters.

 Nowhere is this
more obvious than when a call gets escalated. An AI system decides it can’t
resolve the issue and hands the conversation to a human. The human receives a
transcript, a ticket number, maybe a short summary. What they don’t receive is
the emotional context. Was there rising frustration? Were there failed attempts? Are there landmines to avoid?

When humans hand
off conversations to one another, we do this naturally. “They’re upset because
they’ve been transferred three times.” “Be careful how you phrase this.”
“They’re calm now, but they weren’t earlier.” Machines struggle here, not
because they lack intelligence, but because the system was never designed to
carry that kind of meaning forward.
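What "carrying meaning forward" could look like is straightforward to sketch. The function below bundles the factual state of a call with the contextual state that usually gets dropped at the escalation boundary. All field names and values are hypothetical illustrations, not a real ticketing API.

```python
# A sketch of a richer escalation handoff. The field names are
# hypothetical; the point is that emotional context travels with the
# ticket instead of being discarded at the boundary.
def build_handoff(ticket_id: str, transcript: str, transfers: int,
                  failed_attempts: list, tone_trajectory: list) -> dict:
    """Bundle both the factual and the contextual state of a call."""
    attempted = [a.lower() for a in failed_attempts]
    return {
        "ticket_id": ticket_id,
        "transcript": transcript,            # what today's systems pass
        # ...and what they usually drop:
        "transfer_count": transfers,
        "failed_attempts": failed_attempts,
        "tone_trajectory": tone_trajectory,  # e.g. calm -> frustrated
        "landmines": (["do not suggest another restart"]
                      if "restart" in attempted else []),
    }

handoff = build_handoff(
    ticket_id="T-5521",
    transcript="(full transcript)",
    transfers=3,
    failed_attempts=["Restart", "firmware update"],
    tone_trajectory=["calm", "impatient", "frustrated"],
)
print(handoff["landmines"])  # ['do not suggest another restart']
```

The hard part is not the data structure; it is producing the `tone_trajectory` and `landmines` reliably in the first place, which is exactly the unstructured-data problem described above.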

Memory, however, is
only half the problem. The other half is decision-making. Legacy enterprise
software was not built to decide. It was built to select.
Workflows and resolution paths are pre-coded. Decision trees were designed
years ago by people trying to anticipate the most common scenarios.

If your problem
fits one of those scenarios, the system works beautifully. But if it doesn’t,
everything grinds to a halt. Humans are remarkably good at handling one-off,
corner-case situations. We improvise and reason and we adapt when the rulebook
runs out. And I know I’m preaching to the choir when I remind you that nearly any time a problem is significant enough that someone actually calls a customer service number today, it is because we’ve hit one of those corner cases. It really is impossible to pre-design a decision tree that covers a majority of the problems customers encounter.
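The select-don't-decide behavior described above reduces to a lookup. The scenarios and intents below are illustrative assumptions; the structure is the point: when the caller's problem is not a key in the table, the system has no way to reason its way to a new path.

```python
# A sketch of why pre-coded resolution paths freeze on corner cases.
# Scenario names and workflows are illustrative assumptions.
RESOLUTION_TREE = {
    "billing_duplicate_charge": "refund_workflow",
    "password_reset": "send_reset_link",
    "cancel_subscription": "retention_script",
}

def resolve(intent: str) -> str:
    """Select from pre-designed paths; there is no branch for 'improvise'."""
    try:
        return RESOLUTION_TREE[intent]
    except KeyError:
        # Nowhere else to go: the only fallback is a human.
        return "escalate_to_human"

print(resolve("password_reset"))  # send_reset_link — a covered scenario works
print(resolve("moved_house_mid_cycle_on_legacy_plan"))  # escalate_to_human
```

A human agent faced with the uncovered intent would improvise; the table can only fall through to its single catch-all.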

Machines, when constrained by traditional architectures, cannot improvise novel decisions. They can only choose from the options they were given. When
none apply, the system freezes because it has nowhere else to go. This is why
so many AI interactions feel confident right up until the moment they feel
helpless.

For decades,
computing has moved in the opposite direction of what AI now demands.

We have optimized
relentlessly for less memory, smaller packets and more compression. Systems favored colder storage and lower energy per bit. All of it made sense in
the Internet Age, when moving information efficiently was the primary
challenge.

In the Data
Economy, if we expect machines to replace, or even meaningfully augment, human
interaction, they will need more memory, not less. And not just more storage,
but more kinds of memory. Contextual memory. Emotional memory.
Longitudinal memory that persists across interactions.

The human brain
doesn’t remember everything verbatim. It remembers meaning. Patterns. Emotional
weight. That’s the standard customers are unconsciously applying when they
interact with AI. And right now, AI falls short.

The reason is not
technical alone. It’s financial. Most AI deployments today are judged almost
entirely by cost reduction. How many human agents can be replaced by AI agents.
How fast calls can be handled. How cheaply volume can be processed. Early AI deployments
have over-favored improvement to the bottom line.

Far fewer companies
are asking whether AI improves the top line. Whether it builds trust. Whether
it increases loyalty. Whether it makes customers want to stay. In many cases,
businesses are saving money while quietly eroding goodwill.

AI customer service
isn’t failing because artificial intelligence is incapable. It’s failing
because we taught machines how to talk before we taught them how to remember,
and before we allowed them to truly decide. We forced intelligent systems to
operate inside architectures built for spreadsheets, not people. Systems
designed to process humans, not to listen to them. Until that changes, AI will
keep sounding smarter while feeling colder. And customers will keep repeating
themselves, wondering how a machine that can speak so fluently can forget so
completely.