We all remember the first time we beheld the majestic power of generative AI. It plans vacations! It drafts my emails! It writes my essays! … then you accidentally include “Would you like me to soften the breakup message I drafted for you to be less confrontational?” in the text you send to your now ex- and highly offended partner, and you quickly realize the glaring limitation of a large language model (LLM) when it comes to making you more productive. The model could give you the words, but it couldn’t act on them to fix your problems. And so agents came along, which we thought would fix the inefficiency of copying and pasting a text response. But in practice, these tools were hard to scale because every connection was custom-built, one at a time. Want Claude to talk to Slack? Build a custom bridge. Want ChatGPT to talk to Google Drive? Build another custom bridge. These tools simply weren’t scaling in the way we thought would drive efficiency. Your dreams of building an autonomous breakup robot were just not coming to fruition.
That is, until Anthropic came up with a solution. Enter the Model Context Protocol (MCP), a standardized language that allows LLMs to be integrated with existing data sources and applications.
The MCP provides a structured framework to securely and predictably connect AI models to external tools and data sources. The capabilities MCP unlocks can make an AI system feel autonomous, which leads to the common misconception that MCP connectors and AI agents are the same thing. They aren’t. In this article, we will (1) unpack the basics of MCP and what makes the technology special, and (2) explain how MCP facilitates the use of, but remains distinct from, AI agents as a technological concept.
A Primer on the Model Context Protocol (MCP)
What Is MCP?
The most prevalent analogy compares MCP connectors to USB-C technology. In the same way that a USB-C port provides a standardized way to connect various devices to a computer, MCP is designed to standardize how AI models connect to and interact with external systems. Just as a personal computer might use a USB-C connection to attach a camera, allowing the computer to capture video and then broadcast that video via a stream, MCP allows an LLM to connect to a broader set of systems, gather data from or interact with those systems, and perform actions it could not complete alone, because the protocol standardizes the communication between the two endpoints.
MCP expedites any developer’s ability to turn an LLM from a smart-but-isolated brain into something that can actually reach out and interact with the world in a scalable, standardized way. Where traditional API integrations require developers to make programmatic calls, MCP is designed for AI models to use dynamically and independently during inference. This creates a formal interoperability layer for AI systems, allowing MCP to function as a “middleware” standard for AI-to-system communication.
The flow is as follows:
A user prompts an LLM to perform a task—let’s call it “build the quarterly report.”
The LLM queries the MCP server to pull the data for this quarter.
The MCP server translates that request and talks to an ERP tool, typically via the tool’s own API, asking for the data.
The ERP tool returns the file to the MCP server.
The MCP server volleys the information back to the LLM, which drafts it into a human-readable quarterly report.
Each external tool connected to the LLM has its own MCP server. In the USB analogy, this is like your mouse, your keyboard and your digital camera each having their own plug into the computer.
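The five-step flow above can be sketched in a few lines of code. Everything here is a hypothetical stand-in — the ERP client, the tool name and the drafting step are stubs for illustration, not part of any real MCP SDK:

```python
# Hypothetical sketch of the quarterly-report flow. The ERP client, the
# MCP server's tool handler, and the LLM drafting step are all stubs.

def erp_api_get_quarterly_data(quarter: str) -> dict:
    # Step 4: the ERP tool returns the data (stubbed with fixed numbers).
    return {"quarter": quarter, "revenue": 1_250_000, "expenses": 900_000}

def mcp_server_call_tool(name: str, arguments: dict) -> dict:
    # Steps 2-3: the MCP server receives a standardized tool call and
    # translates it into a call against the ERP tool's own API.
    if name == "get_quarterly_data":
        return erp_api_get_quarterly_data(arguments["quarter"])
    raise ValueError(f"unknown tool: {name}")

def llm_build_report(prompt: str) -> str:
    # Step 1: the user prompt triggers the task; step 5: the model turns
    # the data volleyed back by the MCP server into a readable report.
    data = mcp_server_call_tool("get_quarterly_data", {"quarter": "Q3"})
    margin = data["revenue"] - data["expenses"]
    return f"{data['quarter']} report: revenue {data['revenue']}, margin {margin}"
```

The point of the sketch is the division of labor: the model never touches the ERP API directly; it only speaks the standardized tool-call interface.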
What Are MCP Connectors?
While MCP is merely the protocol itself, references to “MCP connectors” typically refer to a pre-built MCP server that has already been packaged up and is ready to plug in for a specific tool or service. It is also common for companies to package the MCP server with the specific tool or service itself, creating the potential for further confusion. However, the MCP connector is distinct from the AI application itself. For the purpose of this article, MCP connectors mean the entire architecture that companies like Anthropic and OpenAI are marketing as a product feature to both customers and application owners, but not the underlying AI applications themselves.
What Makes MCP Unique?
Under the pre-MCP framework, an individual user, leveraging an agent, would connect to a vendor’s software via API. The agent would then use the API to make calls against that software to perform actions (e.g., pulling a set of data, or sending a message). Even though this API connection is done via an AI model, the AI model is acting pursuant to an individual request by a single user, then making an API call to take action.
The MCP introduces a standard wrapper that allows an AI model to call software without needing an explicit human instruction or individual API connection. This allows the AI model to interact freely with the software, and to access relevant data in the connected system without needing to make a specified API call.
The key aspect that changes with an MCP connection is the connection itself, not the software being connected to. MCP replaces the individualized API construction that was necessary before a standard protocol existed with a uniform interface. The underlying software solution does not change (e.g., whether you connect to Microsoft Outlook via API or via MCP, the actual functionality of Microsoft Outlook is the same).
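The contrast can be made concrete. Below, the same action is expressed first as a hand-built, vendor-specific HTTP request and then as a standardized MCP tool call (MCP messages use JSON-RPC 2.0, with tool invocations carried by the `tools/call` method). The endpoint URL, tool name and payload are illustrative, not real vendor values:

```python
# Pre-MCP: the developer hard-codes a vendor-specific HTTP request.
# (Illustrative endpoint and payload, not a real mail API.)
direct_api_request = {
    "method": "POST",
    "url": "https://example-mail-api.invalid/v1/messages/send",
    "json": {"to": "cfo@example.com", "subject": "Q3 report"},
}

# With MCP: the model emits one standardized JSON-RPC "tools/call"
# message; the MCP server owns the vendor-specific translation.
mcp_tool_call = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "send_message",
        "arguments": {"to": "cfo@example.com", "subject": "Q3 report"},
    },
}
```

Every MCP server speaks the second shape, which is what makes connections reusable rather than custom-built one at a time.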
MCP Primitives and What Makes MCP Distinctive
The conceptual heart of the MCP is its “primitives,” the standardized components that define how models interact with external systems. The three core primitives are resources, tools and prompts.
Prompts
Prompts in MCP are reusable, pre-built instruction templates that the user invokes in order to instruct the LLM on how it should interact with the tools or resources. These are sets of user-controlled instructions to standardize interactions and inputs.
This allows users to interact with models consistently, instead of inputting ad hoc instructions for each model interaction. For example, a standardized compliance-review template with structured inputs based on the nature of the review and the controls being confirmed.
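A minimal sketch of the compliance-review example, assuming a hypothetical template name and input fields (the response shape loosely mirrors an MCP `prompts/get` exchange, but the names here are invented for illustration):

```python
# Hypothetical reusable prompt template with structured inputs.
COMPLIANCE_REVIEW_TEMPLATE = (
    "Perform a {review_type} compliance review. "
    "Confirm the following controls: {controls}."
)

def get_prompt(name: str, arguments: dict) -> dict:
    # The user invokes a named template and supplies structured inputs,
    # instead of typing ad hoc instructions each time.
    if name != "compliance_review":
        raise ValueError(f"unknown prompt: {name}")
    text = COMPLIANCE_REVIEW_TEMPLATE.format(
        review_type=arguments["review_type"],
        controls=", ".join(arguments["controls"]),
    )
    return {"messages": [{"role": "user", "content": {"type": "text", "text": text}}]}
```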
Resources
Resources represent retrievable data sources made available to the model that can provide contextual information. They provide contextual grounding, which allows the model to retrieve information in a consistent way. Resources are driven by the application endpoint.
Resources enable models to operate with consistently updated enterprise information rather than relying solely on training data or individually uploaded context. For example, reading files across an organization’s databases to retrieve documents or specific information.
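Because resources are driven by the application endpoint, a server can be sketched as a registry of URI-addressed data that the model reads on demand. The URIs and contents below are hypothetical; the function names loosely mirror the MCP `resources/list` and `resources/read` operations:

```python
# Hypothetical resource registry: resources are addressed by URI, and
# the application endpoint decides what each URI resolves to.
RESOURCES = {
    "docs://policies/retention": "Records are retained for seven years.",
    "docs://policies/access": "Access reviews run quarterly.",
}

def list_resources() -> list:
    # The server advertises which URIs are readable.
    return sorted(RESOURCES)

def read_resource(uri: str) -> dict:
    # The model grounds itself in current enterprise data rather than
    # relying solely on training data or individually uploaded context.
    if uri not in RESOURCES:
        raise KeyError(uri)
    return {"contents": [{"uri": uri, "text": RESOURCES[uri]}]}
```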
Tools
Tools are executable functions the model can invoke to perform actions directly. For example, creating a support ticket, updating a database entry or making an API call.
Once given access to a tool, that tool will be controlled by the model. When a model determines that an action is required (either because of an automated process or due to direct user input triggering such action), it will invoke that tool. The host system can then validate, execute the function, and return a response. The model has the power to use the tool—meaning there is no human operating the lever. However, tools can include approval processes that require the model to get approval before using a tool to execute an action.
This structure significantly improves reliability compared to model outputs attempting to simulate API calls. APIs are not designed specifically for use by AI, and each API requires custom integration. Further, because API documentation is written for humans, it often lacks context that AI agents need to call the API properly. MCP creates a standard way for the AI to communicate with external systems, avoiding these issues.
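A sketch of a tool definition and the validate-approve-execute sequence described above. The tool name, schema and approval hook are hypothetical; the input schema mirrors how MCP tools advertise their parameters using JSON Schema:

```python
# Hypothetical tool definition: the schema tells the model, in a
# machine-readable way, exactly what inputs the tool expects.
TOOL = {
    "name": "create_ticket",
    "description": "Open a support ticket in the tracking system.",
    "inputSchema": {
        "type": "object",
        "properties": {"title": {"type": "string"}},
        "required": ["title"],
    },
}

def call_tool(name: str, arguments: dict, approved: bool) -> dict:
    # The model decides to invoke the tool; the host validates the
    # arguments, checks any required human approval, then executes.
    if name != TOOL["name"]:
        raise ValueError(f"unknown tool: {name}")
    if "title" not in arguments:
        return {"isError": True, "text": "missing required field: title"}
    if not approved:
        return {"isError": True, "text": "action requires human approval"}
    return {"isError": False, "text": f"ticket created: {arguments['title']}"}
```

The `approved` flag is the sketch's version of an approval process: the model holds the lever, but the host can refuse to pull it.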
Other Primitives
Some MCP implementations include further primitives that allow richer interaction between the AI model and the external service, though these primitives are less common and significantly less mature. The three primitives above assume that the AI exclusively consumes what the MCP server offers, with no ability for the application endpoint to alter the course of action. Sampling, a fourth possible primitive, allows the MCP server to reach back and ask the LLM to help decide what to do next. Other possible primitives are elicitation and logging.
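With sampling, the direction of the conversation reverses: the server sends a request to the host, asking the LLM for a completion. The message below follows the JSON-RPC shape of MCP's `sampling/createMessage` method; the question text is an invented example:

```python
# Sketch of a sampling request: the MCP server reaches back to the host
# and asks the LLM to help decide the next step.
sampling_request = {
    "jsonrpc": "2.0",
    "id": 7,
    "method": "sampling/createMessage",
    "params": {
        "messages": [
            {
                "role": "user",
                "content": {
                    "type": "text",
                    "text": "Two records conflict; which should take precedence?",
                },
            }
        ],
        "maxTokens": 100,
    },
}
```

In practice, hosts typically keep a human in the loop on sampling requests, since the server is effectively borrowing the user's model.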
MCP Connectors vs. Agentic AI: Related but Distinct
The increased autonomy that MCP connectors give to LLMs blurs the line between agentic and non-agentic AI. However, an MCP connector is not, in and of itself, an AI agent. Agentic AI describes systems capable of multi-step reasoning, planning and autonomous task execution. AI agents have autonomy and independent decision-making capabilities. MCP connectors, on the other hand, do not create autonomy; they provide structured capability access for the existing AI system (whether agentic or otherwise). While often used within agentic systems, MCP connectors and AI agents are conceptually and architecturally distinct. MCP can be thought of as a standardized set of “hands” and “eyes” used by the AI system acting as the “brain.”
Where the AI system is an autonomous agent, the agent proactively decides what to do, and MCP translates the agent’s interactions with external systems. Where the AI system is a non-autonomous application, the user decides what to do, and MCP reactively translates the user’s instructions into interactions with the external systems. In both cases, an MCP connector is a passive tool for the AI to use, one that allows for increased efficiency and predictability in interacting with an external system.
Conclusion
The creation of MCP, and the proliferation of MCP connectors, reflects the increasing desire for standardization in AI infrastructure in order to scale its use. Understanding MCP’s architecture and how it relates to other AI technologies, like agentic AI, is imperative to responsible and informed AI use.
In our next post, we will examine the operational and legal risks of using MCP connectors.
