John Encizo is the field CTO and director of enterprise AI for North America at Lenovo. Photo courtesy of John Encizo

In a hospital, the rules are simple and unforgiving. Data that gets compromised can hurt someone. Systems that go down can kill someone. 

Rule number one is do no harm, and that rule doesn’t stop at the patient.

John Encizo started his career doing desktop support at a Tennessee hospital during the Y2K years. He learned early that when systems fail, the consequences don’t stay in the server room.

He’s carried that instinct through more than twenty years of IT, from Blue Cross Blue Shield to IBM to Lenovo, where he is now field CTO and director of enterprise AI for North America. The problems have gotten bigger and the stakes sometimes harder to see, but the underlying principle has stayed the same. 

Right tool, right job, right time. Everything else is what happens when you get that wrong.

He spends his days inside organizations, helping them live by that motto.

He will be presenting at the CIO Association of Canada Peer Forum, happening in Vancouver on April 15-16, where his session will explore why scaling agentic AI so often starts with auditing the decades-old processes organizations haven’t examined in years.

Lenovo’s own CIO Playbook 2026, which surveyed more than 3,100 IT and business decision-makers globally, found that only 10% of organizations say they are ready to deploy agentic AI at scale right now, while focus on it is growing more than 50% year over year. 

Other research shows only 25% of organizations have converted at least 40% of AI pilots into production systems. The ambition is outrunning the infrastructure, the processes, and, in many cases, the self-awareness to recognize it.

The organizations Encizo works with often sit at one of two extremes. Some are risk-averse by nature or by necessity. Others are moving fast and not sure if they should be. Neither camp is doing as well as it thinks, and the distance between them isn’t as large as it seems.

The first camp has a hesitancy problem. 

“They’re probably ready to do more with AI than they think they are,” Encizo says. “But they are hesitant because of regulatory or organizational hurdles.”

Then there’s the other camp.

“I’ve got some customers that are like, ‘We don’t need your help. We’ve got OpenClaw,’” he says. “When the chief data scientist from Meta can’t figure out how to keep it from deleting her emails, you guys that have been experimenting with this in production think you’re ready to go? Unfortunately, we’re going to have a lot of early bleeders.”

What both sides share is a fairly optimistic read on their own position.

When ready isn’t ready

The organizations getting it right tend to have already done the work nobody wanted to do before any AI showed up.

They started experimenting early, ran small pilots that moved to production rather than staying in sandboxes indefinitely, and learned how things break before they tried to scale anything. 

Guardrails went in and they levelled up their people. The hard, often boring work of mapping workflows happened before any technology got layered on top.

“Those are the kinds of organizations where they’re in a spot where building agents starts to become something that they can do,” he says. “Because they understand managing these kinds of complex projects at scale.”

Most organizations that skip those steps run into a problem that isn’t as obvious as a system crash, but can be harder to fix. They’ve built systems to store information, but those systems aren’t capable of governing how decisions get made.

With agentic AI in the mix, that can get messy.

“How do you map out decision making when decisions are made in computer time, not people time?” asks Encizo. “Our risk profile hasn’t caught up for that.”

What makes it harder is everything that was never written down.

Mapping what AI is going to touch means going beyond documented processes into the institutional knowledge that lives in people’s heads.

“There’s a lot of corner cases, tribal knowledge that has to be pulled out,” Encizo says. “Like, why did we do this? Well, we did this because of this one thing that happened in this scenario.”

AI can simulate understanding context, he says, but it doesn’t understand why. In an autonomous system, that missing “why” becomes a risk.

It can be low stakes, like an agent cutting office supply costs that cancels the coffee order because nobody has filed a formal complaint about the coffee. No one was complaining about the coffee because it was already there, but the agent didn’t connect those dots.

Or it can be high stakes, like an agent managing medication orders that stops reordering a drug because the prescription rate dropped, without knowing the prescription rate dropped because the drug was out of stock.

When agents fail, they fail as a team

Agent swarms are having a moment. 

The idea of multiple AI agents communicating, dividing up tasks, and handing work to each other sounds like the kind of efficiency that gets a standing ovation at a conference. The frenzied buzz of a swarm like this is also, it turns out, hard to stop once something goes wrong.

Agentic AI runs into the same problem as all LLMs, Encizo says, where they give you “sort-of-truths.”

“And now imagine you have a sort of factual answer with one agent that’s handing off an operation to an agent upstream,” he adds. “That agent takes that sort of truth that’s still valid from a formatting perspective, and now it does work on that, and amplifies that small error forward.”

He compares it to deflecting a meteor. 

You don’t need a dramatic intervention. If you nudge it far enough out, a tiny change in trajectory compounds into a miss over distance. Inside a multi-agent system, a small error works the same way, except now it’s heading toward you.

“How do I put in circuit breakers so that when decisions may start having cascading impacts, we know how to shut things down appropriately?” he says.

Most organizations haven’t built those circuit breakers, or even asked if they need them.

The data problem

The conversation Encizo has most often with organizations trying to get AI off the ground goes in circles. The data is a mess. They need AI to fix the data. But they need the data fixed before they can use AI. He compares it to the chicken-and-egg paradox.

Harvard Business School researchers spent five years studying AI-generated work schedules at large retail chains. The schedules were, in the researchers’ own words, effectively useless. The AI was fine. The data about when employees could actually work was wrong, and nobody had fixed it before handing it over.

Encizo’s answer is to stop trying to solve everything at once. Pick one workflow, map it end to end, raw materials in, finished output out, and fix the data as you work through it.

The reason data governance projects rarely produce results, he says, is that the ROI is almost impossible to point to. Nobody can say, “We spent eighteen months cleaning data; here is what it cost and here is what it produced.”

So the project stalls, the data stays messy and the AI waits.

“If all I do is I optimize one little component of the manufacturing line, all I do is I create bottlenecks elsewhere in the system,” he says.

The same logic applies to AI deployments that touch only part of a workflow. The system doesn’t get better, it just moves the problem somewhere else.

The view from across the border

When asked whether Canadian organizations approach this differently than their American counterparts, Encizo pauses. He is based in the U.S. and works across North America, so he has a view on this from the outside.

“I think that there’s a much more measured approach,” he says of Canadian organizations. “There’s a lot more of a measured pace, as opposed to, I swear everybody in the States feels like if they haven’t implemented 10 AI projects by, you know, the end of the quarter, they’re behind. And a lot of times that’s causing chaos.”

The two camps he described at the start of this conversation, the hesitant and the ones moving too fast, are playing out on a continental scale. Canada, in his experience, skews toward the hesitant. The U.S. skews hard toward the fast.

He’s been watching organizations adopt technology they didn’t fully understand for more than two decades, and he muses that the measured pace may be the right call for this particular moment.

“I’m a surly, recalcitrant old man, as my wife will tell you,” he says. “I think right now, we are squarely in a hype cycle for a lot of these technologies.”

The potential is real, but the industry’s desire for agentic AI has outpaced its readiness to actually deploy it well, he says. Painful lessons are coming regardless of how fast anyone moves.

“You can’t build up a corpus of knowledge on best practices and how to do things correctly in four years,” he says. “[It] just doesn’t happen.”

Spam filters used to be a selling point, he says, something vendors actually led with. Nobody leads with spam filters anymore because nobody thinks about them anymore. They’re just part of how email works. 

He thinks agentic AI will get there, too. He just doesn’t think most organizations are as close to that moment as they think they are.

The ones that succeed, he says, won’t be the ones that moved the fastest. They’ll be the ones that leveraged their partners, mapped their workflows, and understood what they were building before they built it.

“What’s the old adage?” says Encizo. “If you want to run fast, run alone. You want to run far, run together. That I think, particularly right now, is critical.”

Final shots

Lenovo’s CIO Playbook 2026 found only 10% of organizations are ready to deploy agentic AI at scale, while Deloitte’s State of AI 2026 found only 25% have converted at least 40% of pilots into production. The ambition is real, but the infrastructure is not keeping pace.

The biggest risk in multi-agent deployments is a small error that compounds across agents, until it becomes something much harder to reverse. Most organizations haven’t built the circuit breakers that would catch it.

The organizations furthest ahead started small, moved pilots to production, and learned how things break before they tried to scale. That sequence matters more than the technology they chose.

Digital Journal is the national media partner for the CIO Association of Canada.