IDC – Agentic AI Governance: When AI Becomes Critical Infrastructure
Artificial Intelligence and DaaS
March 16, 2026
5 min
The McKinsey/Lilli incident should be read as a market signal. Once a system can see proprietary knowledge, shape work products, and connect to tools, it stops being a productivity layer. It becomes part of the operating core.
That matters because most companies are still thinking about AI risk in yesterday’s terms: data leakage, bad outputs, brand reputation damage. Those are serious issues, but the bigger risk is the authority being delegated to AI systems.
Implications now and next
Right now, most enterprises are still in a relatively contained phase. AI drafts, summarizes, searches, recommends. When something goes wrong, the damage is often painful but contained. A team wastes time. A document is exposed. A workflow stalls. Trust takes a hit.
The practical implication is clear: the severity of failure scales with an agent’s capabilities and permissions.
In a future shaped by a well-established agent economy, that perspective must evolve.
When AI agents end up touching X% of knowledge work, Y% of customer interactions, and Z% of routine approvals, the risk profile changes completely. The issue is not just whether an attacker can see something. It is whether they can quietly influence what the business does. This is the shift boards and executive teams need to understand.
In that future, the most important question will no longer be, “Was data exposed?” It will be, “What decisions were shaped by a compromised system?” If an agent can reprioritize work, alter recommendations, steer analysts toward the wrong evidence, or trigger downstream actions, then integrity matters as much as confidentiality.

With that in mind, any vendor or buyer building a Lilli-like platform should now assume they are operating decision infrastructure, not productivity software.
Recommendations for CIOs and CISOs
For CIOs, the priority is to run AI as a governed platform. Standardize approved frameworks, connectors, and protocols; separate confidential and public data; make agents inherit the user’s permissions rather than granting broad ambient access; and maintain a live inventory with owner, version, data sources, and external tool access for every production agent.
Just as important, design for bounded autonomy: default to read-only assistance, push high-impact or irreversible actions behind scoped tools and policy engines, and report three numbers monthly:
% of agents with write access,
% of critical workflows requiring human approval,
X minutes to disable an agent, model, or connector.
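The inventory and the first of those monthly metrics can be sketched in a few lines. This is an illustrative schema, not a product recommendation; the agent names, owners, and fields are hypothetical, and a real inventory would live in a CMDB or governance platform rather than in code.

```python
from dataclasses import dataclass, field

@dataclass
class AgentRecord:
    """One entry in a live inventory of production agents (illustrative schema)."""
    name: str
    owner: str                # accountable team or individual
    version: str
    data_sources: list[str] = field(default_factory=list)
    external_tools: list[str] = field(default_factory=list)
    write_access: bool = False  # default to read-only assistance

# Hypothetical inventory entries for illustration only
inventory = [
    AgentRecord("research-assistant", "knowledge-mgmt", "1.4.0",
                data_sources=["internal-wiki"]),
    AgentRecord("ticket-triage", "it-ops", "0.9.2",
                data_sources=["ticket-db"], external_tools=["jira"],
                write_access=True),
]

# Once the inventory exists, the monthly governance numbers fall out of it
pct_write = 100 * sum(a.write_access for a in inventory) / len(inventory)
print(f"{pct_write:.0f}% of agents have write access")
```

The point of the sketch is that the reporting burden is trivial once every production agent has a record with an owner, a version, and an explicit permission flag; the hard part is organizational discipline, not tooling.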
For CISOs, the core mistake to avoid is treating AI as a feature instead of a system. Threat-model it with AI-native frameworks such as MITRE’s Adversarial Threat Landscape for Artificial-Intelligence Systems (ATLAS), National Institute of Standards and Technology (NIST) Artificial Intelligence Risk Management Framework (AI RMF), and Open Worldwide Application Security Project (OWASP)’s GenAI and agentic guidance, then red-team the whole chain: inputs, retrieval, memory, tool use, rendering, and agent-to-agent handoffs.
The control baseline should be familiar but upgraded for agents: least-privilege tools, isolated memory, validated inputs, sanitized outputs, full logging of tool calls and decisions, security operations center (SOC) integration, and an AI-specific incident playbook with a real kill switch.
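Three of those controls, least-privilege tool scopes, full logging of tool calls, and a kill switch, can be concentrated at a single gateway through which every agent tool call passes. The sketch below is a minimal illustration under that assumption; the agent and tool names are hypothetical, and a production version would back the allow-list and disable flag with a policy engine the SOC controls, not an in-process variable.

```python
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("agent-gateway")

# Hypothetical allow-list: each agent may call only the tools it was scoped to.
TOOL_SCOPES = {"ticket-triage": {"read_ticket", "add_comment"}}

# The kill switch is just a flag the SOC can flip independently of the agent.
DISABLED_AGENTS: set[str] = set()

def call_tool(agent: str, tool: str, args: dict) -> str:
    """Gate every tool call: kill switch first, then least privilege, then log."""
    if agent in DISABLED_AGENTS:
        raise PermissionError(f"{agent} is disabled by kill switch")
    if tool not in TOOL_SCOPES.get(agent, set()):
        raise PermissionError(f"{agent} is not scoped for {tool}")
    log.info("%s %s called %s with %s",
             datetime.now(timezone.utc).isoformat(), agent, tool, args)
    return f"{tool} executed"  # placeholder for the real tool dispatch

# Flipping the switch revokes access on the next call, not the next deploy.
DISABLED_AGENTS.add("ticket-triage")
```

The design choice worth noting is that revocation takes effect on the next tool call rather than requiring a redeploy, which is what makes "X minutes to disable an agent" a reportable number rather than an aspiration.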
This is one of the few cyber investments with a measurable business case, a point our Future Enterprise Resiliency and Spending (FERS) survey series supports. In the FERS Wave 10 survey, from January 2026, organizations are already funding AI/agent security and governance at near parity with the rest of the AI stack, at 16.7% of planned AI investment on average worldwide.
What CEOs should tell their boards
CEOs should tell boards three things:
First: “We are not just deploying AI; we are delegating authority.”
Second: “Success will be measured by controlled autonomy, not by agent count.”
The board pack should show:
number of production agents
number with external tool access
percent with high-impact permissions
percent under red-team coverage
mean time to revoke access
third-party dependencies per critical workflow
Third: “This is now a supply chain and resilience issue.”
In other words, a weak link in any model provider, cloud platform, connector, or data source can disrupt core workflows, not just expose data.

Once again, the FERS survey series offers context for these three statements. In the Wave 8 survey, from October 2025, security, risk, and compliance ranked among the areas most immune to budget reductions worldwide, cited by 25% of organizations, and 29% ranked “Enhancing cyber recovery and resiliency” among the top areas for significant budget increases in 2026.
To close, let’s reframe the question itself.
The right board question is no longer, “How many agents do we have?” It is, “How much authority have we delegated, to which systems, under what controls?”
Boards do not need more demos. They need evidence that management can set clear bounds on autonomy, observe it continuously, and shut it down fast.

Alessandro Perilli
– Vice President, AI Research – IDC
Alessandro Perilli is a Vice President leading the Agentic AI Platforms and Strategies research program. Alessandro’s core research coverage includes emerging AI technologies, global AI market trends, the state of enterprise AI, and sovereign AI.