Agentic AI systems have the power to fundamentally shift how organizations work, but only if they’re embedded in workflows.

When organizations treat agentic AI as a traditional tech deployment, they limit its full potential, because meaningful adoption requires significant structural change. For agentic AI to deliver real impact, organizations must ensure they have the right foundation for the technology.

Companies must rethink their governance capabilities, define clear objectives and guardrails for agents, and implement policies to maintain human accountability.

At the same time, leaders must give agentic AI the space and framework to contribute to transformation, or risk missing the meaningful gains from the technology.

“Realistically, it’s very easy to confuse nature and implementation,” said JJ Lopez Murphy, head of data science and AI at Globant.

He explains that it's easier to tick boxes saying you "did it" when the scope is small enough that most of the organization doesn't need to react, rather than confronting the fact that the nature of the work itself has to be upgraded.


“It is also a hedge against overarching training programs that overload people with materials and meetings, but with no meaningful change or impact in how people operate, at a huge cost to the company,” Murphy said.

Michael Bevilacqua, vice president of AI product management at Adeptia, says the critical skill is supervised autonomy — knowing when AI should do the work and when humans should ensure quality.

“With AI-powered data mapping, for example, AI handles 80% of the analysis, but humans review confidence scores and approve outputs,” he said.

He adds that employees must understand the agent is a first-class user of enterprise systems, not a chatbot, and that shifts the mental model from "asking questions" to "reviewing decisions."

“The behavior change is moving from doing the work to supervising the work — and knowing when to intervene,” Bevilacqua said.
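Bevilacqua's "supervised autonomy" pattern — the agent does the work, humans review the rest — could be sketched roughly as routing each output by its confidence score. The threshold, field names, and queue structure below are illustrative assumptions, not Adeptia's actual implementation.

```python
# Illustrative sketch of "supervised autonomy": route agent output by
# confidence score. The threshold and record fields are hypothetical.
from dataclasses import dataclass

@dataclass
class MappingResult:
    source_field: str
    target_field: str
    confidence: float  # 0.0-1.0, assigned by the agent

REVIEW_THRESHOLD = 0.8  # assumed cutoff; tune per workflow

def triage(results):
    """Split agent output into auto-approved and human-review queues."""
    approved, needs_review = [], []
    for r in results:
        (approved if r.confidence >= REVIEW_THRESHOLD else needs_review).append(r)
    return approved, needs_review

results = [
    MappingResult("cust_nm", "customer_name", 0.97),
    MappingResult("addr_2", "shipping_region", 0.55),
]
approved, needs_review = triage(results)
print(len(approved), len(needs_review))  # prints "1 1"
```

The point of the split is that the human's attention goes only to the low-confidence minority, which is where "knowing when to intervene" actually happens.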

Murphy says some fluency in prompting has become the bare minimum, and beyond basic prompt usage, employees must be able to talk properly with agents.

“When interacting with, coordinating and managing an agent, communicating in terms the agent understands is key,” he said.

This combats misunderstandings between agent and human and ensures productive interactions. From an accountability standpoint, teams must also be educated on how to avoid potential data privacy and ethics concerns.

“While agents can assist, human employees maintain the responsibility to govern, manage and audit inputs and outputs,” Murphy said.


Human resources (HR), IT and learning and development (L&D) leaders all have a role to play in defining when and how agents are used in daily work.

“IT will likely play a central role in accountability and governance, particularly in enablement and ease-of-use roles, but it should not bear full responsibility,” Murphy said.

Because agents interact with enterprise systems and data, IT must establish guardrails, access controls and oversight, while business leaders are best positioned to define where agents add value in real workflows.

“HR or L&D can help build the skills needed to use them effectively,” he said. “What will matter the most is clear communication and coordination across those roles.”

In broad strokes, IT provides governance and technical structure, business teams define outcomes, and learning functions ensure responsible use.

“When these teams work together, agent adoption will become intentional and sustainable rather than fragmented or reactive,” Murphy said.

One of the biggest shifts is moving beyond one-time AI training and focusing on workflow integration. Employees need to understand how to supervise models to refine outputs and take responsibility for the results.


That requires practical guidance, so employees understand how to reassess outputs and take responsibility for correcting them when needed.

When training employees to trust, supervise, and challenge agent outputs, Bevilacqua says transparency is essential.

“When employees can see the agent’s reasoning — which systems it queried, what transformation logic it applied, what confidence score it assigned — they build appropriate trust,” he said.

He cautions that AI without knowledge is just fast guessing, pointing out that organizations succeeding with agents show employees not just the output, but also the institutional knowledge and customer-specific context that inform each output.

“That makes supervision meaningful, not theatrical. Trust comes from understanding, not from blind acceptance,” Bevilacqua said.
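The kind of reasoning trace Bevilacqua describes — which systems the agent queried, what logic it applied, what confidence it assigned — might be surfaced to reviewers as a simple structured record. The schema and field names below are a hypothetical illustration, not a real product API.

```python
# Hypothetical audit record exposing an agent's reasoning to a human
# reviewer: systems queried, logic applied, and confidence assigned.
import json
from datetime import datetime, timezone

def build_trace(systems_queried, transformation_logic, confidence, output):
    """Assemble a reviewable trace for one agent decision."""
    return {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "systems_queried": systems_queried,
        "transformation_logic": transformation_logic,
        "confidence": confidence,
        "output": output,
    }

trace = build_trace(
    systems_queried=["crm", "erp"],
    transformation_logic="joined customer_id across CRM and ERP records",
    confidence=0.91,
    output={"customer_id": "C-1042", "region": "EMEA"},
)
print(json.dumps(trace, indent=2))
```

A record like this is what turns supervision from rubber-stamping into an informed review: the reviewer sees how the answer was produced, not just the answer.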

Murphy says organizations are recognizing that AI fluency is developing in layers.

“Every employee needs a baseline understanding, but more advanced use of agents requires deeper practice and hands-on experience in real workflows,” he said.

He notes support structures also matter just as much as skills: When leadership reinforces expectations and teams share what works, AI will naturally become a part of how work gets done rather than an isolated tool.

“Trust will be built among employees when they feel confident in guiding the technology and shaping its output, rather than just accepting the results it produces,” Murphy said.

Organizational success will shift from measuring training completion to measuring real outcomes.

From Murphy’s perspective, the real signal will not be how many employees took a course or opened an agentic tool, but whether teams are delivering work more efficiently and producing high-quality outcomes with more confidence.

“Organizations will assess how agents are embedded in workflows and whether that integration leads to measurable gains in quality and efficiency,” he said.

There should be faster feature delivery, more structured planning and greater independence in execution with agent support — shifts indicating that training is translating into real capability rather than surface-level adoption.

“Over the next few years, adoption will also become visible in behavior,” Murphy added. “When employees naturally integrate agents into their daily workflow and start taking responsibility for refining the output, that’s when experimentation will turn into sustained productivity.”