Artificial intelligence is moving from experimentation to operational deployment. The excitement around large language models (LLMs) introduced many organizations to what AI could do, sparking a wave of pilots and prototype agents.

But as enterprises push these systems into production, they’re encountering a fundamental constraint: general-purpose models lack the real-time operational context that enterprise decisions require.

Mike Hicks, Principal Solutions Analyst for Cisco ThousandEyes.

A chatbot can discuss financial regulations, but it cannot determine whether a specific trade violates internal policy. It can describe networking concepts, but it cannot diagnose why your application is slow right now without live telemetry. Simply put: AI is only as smart as the data and tools it can reach.

This gap is driving architectural changes across enterprise AI deployments. For enterprises, intelligence isn’t about broad answers; it’s about orchestrating precise, dependable action.

Part of the answer lies in small language models (SLMs): compact, domain-tuned models that can run closer to enterprise data, offering lower cost and latency while helping meet data sovereignty requirements.

Analysis of current workload patterns suggests that many agentic AI tasks could be handled by specialized SLMs, with larger models reserved for complex reasoning tasks.

In fact, research from NVIDIA and others indicates that many enterprise deployments combine a mix of SLMs and LLMs. But choosing the right model is only part of the enterprise AI challenge. For agents to act reliably, they also need a consistent way to access enterprise systems.
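As a rough sketch of what that mix can look like in practice, the snippet below routes tasks between a small and a large model based on a simple complexity flag. The endpoints, the Task fields, and the routing rule are illustrative assumptions, not any specific vendor’s setup.

```python
# Minimal sketch of heterogeneous model routing: narrow, repeatable work goes
# to a small language model, open-ended reasoning escalates to a larger model.
# SLM_ENDPOINT, LLM_ENDPOINT, and the requires_reasoning flag are hypothetical.
from dataclasses import dataclass

@dataclass
class Task:
    description: str
    requires_reasoning: bool  # set by an upstream classifier or heuristic

SLM_ENDPOINT = "https://models.internal/slm-network-ops"  # hypothetical
LLM_ENDPOINT = "https://models.internal/llm-general"      # hypothetical

def route(task: Task) -> str:
    """Pick a model endpoint: SLM for domain-specific workloads, LLM for complex reasoning."""
    return LLM_ENDPOINT if task.requires_reasoning else SLM_ENDPOINT

# Example: a routine telemetry summary stays on the SLM, root-cause analysis escalates.
print(route(Task("Summarize last hour of DNS latency", requires_reasoning=False)))
print(route(Task("Explain likely root cause across three correlated incidents", requires_reasoning=True)))
```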

That’s elevating the importance of the infrastructure layer that connects reasoning to operational reality.

At the heart of that infrastructure layer is the Model Context Protocol (MCP), an emerging open standard that enables AI models to connect with enterprise data sources and tools through a uniform and secure interface.

Released by Anthropic in late 2024 and subsequently donated to the Linux Foundation’s Agentic AI Foundation (AAIF), MCP acts as a universal translator: exposing data, telemetry, workflows, and actions in a consistent, structured way.
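As a rough illustration of that uniform interface, the sketch below exposes a single internal capability as an MCP tool, assuming the open-source MCP Python SDK. The server name, the tool, and its underlying telemetry lookup are hypothetical placeholders for real enterprise systems.

```python
# Sketch of exposing an internal capability through MCP.
# Assumes the open-source MCP Python SDK (`pip install mcp`); the tool, its
# payload, and fetch_latency_ms() are hypothetical placeholders.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("network-telemetry")  # server name shown to connecting agents

def fetch_latency_ms(service: str) -> float:
    # Placeholder for a real telemetry query (e.g. a monitoring API call).
    return 42.0

@mcp.tool()
def current_latency(service: str) -> dict:
    """Return the current round-trip latency for a named service."""
    return {"service": service, "latency_ms": fetch_latency_ms(service)}

if __name__ == "__main__":
    mcp.run()  # serve the tool over MCP's standard transport (stdio by default)
```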

This matters for three reasons:

Standardization makes large-scale agent ecosystems feasible. APIs vary across platforms and clouds; MCP abstracts that complexity so agents can access systems without bespoke engineering.

Contextualization gives agents real-time visibility into an organization’s topology, conditions, and system state, allowing them to query current conditions rather than operate on stale training data or approximations.

Governance ensures safety. MCP’s architecture allows for guardrails that define which systems agents can access and what actions they can perform, as sketched in the example below. With every action auditable, the question becomes not “Did the agent respond?” but “Did it complete the task safely and correctly?”
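One way to make the governance point concrete is a thin wrapper that checks an allowlist before any tool call and records every attempt for audit. The allowlist contents, the tool names, and the audit structure below are assumptions for illustration, not part of the MCP specification.

```python
# Sketch of a guardrail layer in front of agent tool calls: an allowlist of
# permitted actions plus an audit trail. Tool names and the policy contents
# are illustrative assumptions, not part of MCP itself.
from datetime import datetime, timezone

ALLOWED_TOOLS = {"current_latency", "query_incidents"}  # hypothetical policy
audit_log: list[dict] = []

class ToolNotPermitted(Exception):
    pass

def guarded_call(tool_name: str, arguments: dict, call_tool) -> dict:
    """Refuse tools outside the allowlist and record every attempt."""
    entry = {"tool": tool_name, "args": arguments,
             "time": datetime.now(timezone.utc).isoformat()}
    if tool_name not in ALLOWED_TOOLS:
        entry["outcome"] = "denied"
        audit_log.append(entry)
        raise ToolNotPermitted(f"{tool_name} is not on the allowlist")
    result = call_tool(tool_name, arguments)  # e.g. an MCP client session call
    entry["outcome"] = "ok"
    audit_log.append(entry)
    return result
```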

What matters, ultimately, is whether agents deliver measurable business outcomes.

Enterprises need agents that understand their environment, access the right data, select the right tools, and operate within the right controls.

The combination of specialized models and standardized infrastructure protocols represents a maturation in enterprise AI architecture.

Rather than deploying general-purpose models for all tasks, organizations are building heterogeneous systems: SLMs handle domain-specific workloads, larger models address complex reasoning, and MCP provides standardized, contextual access. Together, they make AI both capable and trustworthy.

Consider IT operations automation: an agent handling a network performance ticket could use MCP to access real-time telemetry from network monitoring systems, query historical incident databases, and execute pre-approved remediation workflows, all through standardized interfaces rather than custom integrations for each tool.
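A minimal sketch of that workflow, assuming the MCP Python SDK’s client session and three hypothetical tools (current_latency, query_incidents, restart_service) exposed by an internal operations server:

```python
# Sketch of the ticket workflow above as three MCP tool calls over one client
# session. Assumes the MCP Python SDK; the server command and the three tool
# names are hypothetical stand-ins for real enterprise systems.
import asyncio
from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

async def handle_ticket(service: str) -> None:
    server = StdioServerParameters(command="python", args=["ops_server.py"])  # hypothetical server
    async with stdio_client(server) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            # 1. Pull live telemetry instead of relying on stale training data.
            telemetry = await session.call_tool("current_latency", {"service": service})
            # 2. Check whether similar incidents have been seen before.
            history = await session.call_tool("query_incidents", {"service": service})
            # 3. Run a pre-approved remediation through the same standardized interface.
            await session.call_tool("restart_service", {"service": service})
            print(telemetry, history)

if __name__ == "__main__":
    asyncio.run(handle_ticket("checkout-api"))
```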

MCP’s structured access to enterprise tools and data enables a shift from information retrieval to reliable task completion. When an agent encounters an issue, say, a DNS failure, it can use the protocol to understand context, query additional data, and decide next steps rather than simply failing.

So when a major e-commerce platform experiences service degradation, a properly connected agent can correlate live performance with historical patterns and execute pre-approved remediations. What once required hours now happens in minutes, with full transparency.

Without context-aware infrastructure, agents can also fall into expensive loops with multiple models querying each other and consuming resources without progress. MCP prevents this by framing tasks around completion, not activity.
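One way to frame work around completion rather than activity is to bound the agent loop with an explicit completion check and an iteration budget. The sketch below shows that pattern; is_resolved() and next_action() are hypothetical stand-ins for real state checks and tool calls.

```python
# Sketch of completion-oriented agent control: stop when the task is verifiably
# done or when an iteration budget is exhausted, never loop on activity alone.
# is_resolved() and next_action() are hypothetical placeholders.

MAX_STEPS = 10  # hard budget so models cannot query each other indefinitely

def run_task(ticket: dict, is_resolved, next_action) -> bool:
    for step in range(MAX_STEPS):
        if is_resolved(ticket):      # completion check against live system state
            return True              # "Did it complete the task?" -- yes
        next_action(ticket, step)    # one bounded, auditable action per iteration
    return False                     # escalate to a human instead of spinning
```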

This article was produced as part of TechRadarPro’s Expert Insights channel where we feature the best and brightest minds in the technology industry today. The views expressed here are those of the author and are not necessarily those of TechRadarPro or Future plc. If you are interested in contributing find out more here: https://www.techradar.com/news/submit-your-story-to-techradar-pro