By Mohammed Anzy S

In just one year, conversations in enterprise corridors have shifted from “getting GenAI live” to discussing AI agents. Leaders now track how their agents’ behaviour changes from week to week and assess which models best assist customer decision-making. AI has become a vital operational layer that detects issues, escalates them, and improvises solutions. Its impact has been substantial: a mortgage application process that previously took two days is now completed in four hours, with an AI agent managing verifications and flagging only exceptions for escalation. Enterprise leaders’ roles, in turn, have shifted toward exercising judgment, weighing context, and building trust.

Agentic artificial intelligence has significant potential to function effectively, but it carries an equal risk of inconsistent behaviour across teams, regions, and time periods, often without clear explanations. This marks a turning point for enterprises: the real challenge isn’t just deploying the model but governing it during operation. As autonomy increases, controls are no longer just a checklist. The more autonomous these systems become, with agents that interpret context, decide, and act, the more enterprises shift from simply using AI to overseeing digital labour.

At higher levels of agentic maturity, where systems operate within guardrails and collaborate across workflows, governance becomes critical infrastructure. At these levels, three risks surface quickly: hallucinations escalate from minor errors to production incidents when agents trigger cross-system actions; inconsistent outcomes create a “trust tax” that slows scaling; and policy drift compounds as prompts, models, and data evolve.

Without structured supervision, incremental changes accumulate until systems operate on assumptions the organisation never approved. EY’s Outlook 2026 on agentic AI captures this roadblock at a macro level: India is moving from pilots to scaled adoption, with 47% of organisations operating multiple GenAI use cases and 10% scaling across functions. Yet 64.5% cite data governance and security as “very severe,” and 78% struggle with system integration. Enterprise ambition is high; the operational discipline to realise it has yet to catch up.

The AI Control Tower: one place to see, steer, and prove

Meeting this need for an operating system of governance calls for what I refer to as an ‘AI Control Tower’: a central governance layer that provides continuous visibility and control across models, agents, data flows, policies, and outcomes. In practice, a Control Tower should operate as a disciplined command centre. A model and AI agent inventory gives real-time visibility into what is deployed, where, with which permissions, and with which tools. Policy-as-code and versioning serve as guardrails, expressed as enforceable controls rather than written intent. Especially in regulated settings, security and compliance alignment must help the enterprise demonstrate control, not merely claim it.
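To make this concrete, here is a minimal policy-as-code sketch in Python. Everything in it (the policy fields, the agent and tool names, the approval threshold) is an illustrative assumption rather than any specific product’s API; the point is that guardrails live as versioned, machine-enforceable data, and every decision is recorded alongside the policy version that produced it.

from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass(frozen=True)
class Policy:
    # Hypothetical guardrail definition: versioned so changes are auditable.
    policy_id: str
    version: str
    allowed_tools: frozenset       # tools this agent may invoke
    max_transaction_value: float   # actions above this must escalate to a human

@dataclass
class AgentAction:
    agent_id: str
    tool: str
    transaction_value: float

@dataclass
class Decision:
    allowed: bool
    reason: str
    policy_version: str  # recorded so policy drift stays traceable
    timestamp: str = field(default_factory=lambda: datetime.now(timezone.utc).isoformat())

def evaluate(action: AgentAction, policy: Policy) -> Decision:
    """Enforce the policy and return an auditable decision record."""
    if action.tool not in policy.allowed_tools:
        return Decision(False, f"tool '{action.tool}' is not permitted", policy.version)
    if action.transaction_value > policy.max_transaction_value:
        return Decision(False, "exceeds approval threshold; escalate to a human reviewer", policy.version)
    return Decision(True, "within guardrails", policy.version)

# Illustrative use: a mortgage-verification agent attempts a payout above its threshold.
policy = Policy("mortgage-agent-guardrails", "2.3.1",
                frozenset({"verify_documents", "issue_refund"}), 10_000.0)
print(evaluate(AgentAction("mortgage-verifier-01", "issue_refund", 25_000.0), policy))

Because every decision record carries the policy version in force at the time, an auditor can reconstruct exactly which rules governed an agent’s action, which is the difference between claiming intent and demonstrating control.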

Why GCCs are positioned to lead this shift

If the next decade of enterprise AI is about disciplined execution, Global Capability Centres (GCCs) are structurally advantaged because they are built for operational maturity. GCCs already operate as cross-functional engineering hubs managing cloud platforms, data infrastructure, cybersecurity, and DevSecOps across time zones. The muscle required for an AI Control Tower (continuous monitoring, version control, escalation paths, compliance logging, and so on) is already embedded in their operating DNA.

India’s GCC ecosystem is no longer a “support desk” story. Government sources report over 1,700 GCCs with a combined revenue of $64.6 billion in FY24, employing more than 19 lakh people. Establishing supervised ‘AI command hubs’ at this scale is therefore a natural model for centralised ownership and continuous operations. When infrastructure, data controls, and governance are designed to be auditable, scaling becomes less chaotic.

This is where GCCs can help move from delivery to design authority. They can build AI control towers as enterprise-grade products, run them with SRE (Site Reliability Engineering) discipline, and institutionalise the roles that make them work.
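As a small illustration of what that SRE discipline could look like when applied to an agent fleet, the sketch below tracks an error budget against a reliability target; the 99% target, the failure counts, and the escalation threshold are assumptions for the example, not a prescribed standard.

# Hypothetical SLO check for an agent fleet: when failed or escalated agent
# actions burn through the error budget, risky changes are frozen and a human owner is alerted.
SLO_TARGET = 0.99  # illustrative: 99% of agent actions should complete within guardrails

def error_budget_remaining(total_actions: int, failed_actions: int, slo: float = SLO_TARGET) -> float:
    """Fraction of the window's error budget still unspent (1.0 = untouched)."""
    budget = total_actions * (1 - slo)  # number of failures the SLO tolerates
    if budget == 0:
        return 1.0 if failed_actions == 0 else 0.0
    return max(0.0, 1 - failed_actions / budget)

# Illustrative week: 20,000 agent actions, 150 failures against a 200-failure budget.
remaining = error_budget_remaining(20_000, 150)
if remaining <= 0.25:  # assumed escalation threshold
    print(f"Error budget at {remaining:.0%}: freeze risky changes and escalate")
else:
    print(f"Error budget healthy at {remaining:.0%}")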

Scale is won in embedded workflows

Scalability comes from integrating workflows, optimising processes, and enforcing effective governance. The organisations that succeed are not necessarily those that experiment the most, but those that can implement solutions rapidly while keeping control. Ultimately, clarity and control build the confidence required for substantial investment. The next phase of GCCs will centre on building and operating AI Control Towers that keep AI systems manageable, efficient, and safe, delivering reliable performance.

(The author is Vice President and Managing Director, Guidewire India, and the views expressed in this article are his own)