In November 2024, Anthropic introduced Model Context Protocol (MCP), an open source abstraction layer designed to streamline how large language model (LLM) applications connect and interact with various data sources and systems—particularly for agentic use cases.
Functioning as a "one-to-many" abstraction layer, MCP aims to accelerate the development of dynamic LLM-powered tools by establishing a broader, standardized interface. Although MCP has existed quietly for some time, its adoption and recognition surged around March 2025.
So why the hype now? Initially met with skepticism—many dismissed it as just another “API for APIs”—MCP gained significant traction after competitor OpenAI publicly announced it would adopt Anthropic’s standard rather than build its own. Despite ongoing security concerns (explored below), the growing consensus is that a unified abstraction layer could be a critical leap forward in simplifying and scaling model interactions across platforms.
Prior to MCP, various tools had to be created to do function calling with LLMs. The challenge was then ensuring that each LLM treated these function calls equally. With a standardized protocol like MCP, this issue is abstracted across all the LLM vendors that adopt it. We expect many more to do so in the near future beyond just Open AI and Anthropic. Instead of engineering bespoke solutions for each data source or tool, developers can now rely on a shared foundation of MCP to simplify development and reduce long term maintenance overhead. Interoperability and speed of developing complex applications is really key here.
MCP acts as an abstraction layer between the LLM and the downstream services a user wants to interact with. It enables the LLM to generate a function call that can be executed by a downstream service, utilizing a user query enriched with context about the available tools for that service. MCP does this by augmenting the user prompt with relevant tool context before passing it to the LLM, enabling the LLM to generate a precise function call to execute an action or retrieve information. It effectively enables LLMs to stay contextually aware across multiple downstream services, thanks to features like real-time state synchronization and intelligent session handling. This makes it easier to build dynamic, responsive AI applications that can interact seamlessly with a wide range of environments with minimal effort for the LLM application developers.
Below is a breakdown of what this looks like in a typical MCP-enabled architecture:
These integrations are also community-driven, with many available for use or contribution through open source repositories like this one on GitHub, fostering collaboration and accelerating agentic LLM application adoption in the market.
We believe it's often easier to demonstrate an attack in a simple, naïve setup than to build a strong defense. In the case of MCP security, we’ve seen articles and tools (i.e., OWASP and Cloudflare) ranging from thoughtful and practical explorations of missing enterprise-ready features to headline pieces with no novel security risks—such as hypothetical attacks involving malicious MCP servers with hidden injections.
While AI is a new domain, many of the security issues it surfaces are not. In most cases, traditional security practices still apply. Supply chain security, code scanning, and version pinning continue to solve many of the risks raised in these discussions and articles.
That said, there are important security considerations when building LLM applications with MCP:
Security concerns around MCPs are valid, but we see MCP as a major opportunity for LLM security vendors. Standardization marks a turning point in most security domains. This has been proven with efforts like OCSF and OpenTelemetry. With broader adoption on the horizon, MCP presents a single point of integration to gain visibility into tool usage and trace agentic behavior across applications.
Here’s how we see the future of MCP security unfolding and what you need to keep in mind as you move forward:
Want to learn more about how we secure MCP-powered, agentic LLM applications through deep discovery and runtime analysis? Book a demo ->