MCP Security 101: A New Protocol for Agentic AI

Written by Neal Swaelens and Oleksandr Yaremchuk | Apr 8, 2025

In November 2024, Anthropic introduced Model Context Protocol (MCP), an open source abstraction layer designed to streamline how large language model (LLM) applications connect and interact with various data sources and systems—particularly for agentic use cases. 

Functioning as a "one-to-many" abstraction layer, MCP aims to accelerate the development of dynamic LLM-powered tools by establishing a broader, standardized interface. Although MCP has existed quietly for some time, its adoption and recognition surged around March 2025. 

So why the hype now? Initially met with skepticism—many dismissed it as just another “API for APIs”—MCP gained significant traction after competitor OpenAI publicly announced it would adopt Anthropic’s standard rather than build its own. Despite ongoing security concerns (explored below), the growing consensus is that a unified abstraction layer could be a critical leap forward in simplifying and scaling model interactions across platforms.

What Is MCP?

Prior to MCP, developers had to build bespoke tooling for function calling with each LLM, and then ensure that every model handled those function calls consistently. A standardized protocol like MCP abstracts this problem away across all the LLM vendors that adopt it, and we expect many more to do so in the near future beyond just OpenAI and Anthropic. Instead of engineering a custom integration for each data source or tool, developers can rely on MCP as a shared foundation, simplifying development and reducing long-term maintenance overhead. The key wins here are interoperability and the speed at which complex applications can be built.
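To make this concrete, here is what a standardized tool definition looks like. Per the MCP specification, a server advertises each tool with a name, a description, and a JSON Schema describing its inputs; the `get_invoice` tool below is a hypothetical example of that shape.

```python
# A sketch of an MCP-style tool definition, as a server would advertise
# it in response to the protocol's tools/list request. The "get_invoice"
# tool is hypothetical; the name/description/inputSchema shape follows
# the MCP specification.
invoice_tool = {
    "name": "get_invoice",
    "description": "Fetch an invoice from the billing system by its ID.",
    "inputSchema": {
        "type": "object",
        "properties": {
            "invoice_id": {"type": "string", "description": "Invoice identifier"}
        },
        "required": ["invoice_id"],
    },
}
```

Any LLM vendor that adopts MCP can consume this same definition, so the integration is written once rather than once per model provider.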

How It Works

MCP acts as an abstraction layer between the LLM and the downstream services a user wants to interact with. It augments the user prompt with context about the tools available from each service before passing it to the LLM, enabling the model to generate a precise function call that a downstream service can execute to perform an action or retrieve information. Because MCP handles concerns like real-time state synchronization and session handling, the LLM stays contextually aware across multiple downstream services. This makes it easier to build dynamic, responsive AI applications that interact seamlessly with a wide range of environments, with minimal effort from LLM application developers.

Below is a breakdown of what this looks like in a typical MCP-enabled architecture:

  1. When a user writes a prompt or query, the application retrieves the list of available tools from the MCP server before passing the query on.
  2. The MCP client adds the tool definitions as context to the query and passes it to the LLM.
  3. The LLM generates a function call based on the query and the tool context provided with it.
  4. The application passes the generated function call to the MCP server, which executes it against the downstream service via its REST API.
  5. The downstream service returns the output of the function call, which is used to augment the user's original query and is passed back to the LLM to generate the final output.
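In code, the loop looks roughly like the sketch below. All of the names here (`mcp_server`, `llm`, and their methods) are hypothetical stand-ins for an MCP client, a model API, and a downstream service rather than any particular SDK.

```python
def answer_with_tools(user_query: str, mcp_server, llm) -> str:
    """Hypothetical sketch of the five-step MCP request flow above."""
    # Steps 1-2: retrieve the available tools from the MCP server and
    # attach their definitions as context to the user's query.
    tools = mcp_server.list_tools()

    # Step 3: the LLM generates a structured function call from the
    # query plus the tool context.
    call = llm.generate_call(user_query, tools=tools)

    # Step 4: the application hands the generated call to the MCP
    # server, which executes it against the downstream service.
    result = mcp_server.call_tool(call.name, call.arguments)

    # Step 5: the tool output augments the original query, and the LLM
    # produces the final answer.
    return llm.generate_answer(user_query, tool_result=result)
```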

These integrations are also community-driven, with many available for use or contribution through open source repositories on GitHub, fostering collaboration and accelerating the adoption of agentic LLM applications in the market.

Security Risks of MCP

We believe it's often easier to demonstrate an attack in a simple, naïve setup than to build a strong defense. In the case of MCP security, we've seen articles and tools (e.g., from OWASP and Cloudflare) ranging from thoughtful, practical explorations of missing enterprise-ready features to headline pieces that surface no novel security risks, such as hypothetical attacks involving malicious MCP servers with hidden injections.

While AI is a new domain, many of the security issues it surfaces are not. In most cases, traditional security practices still apply. Supply chain security, code scanning, and version pinning continue to solve many of the risks raised in these discussions and articles.

That said, there are important security considerations when building LLM applications with MCP:

  • Limited invocation controls: Most MCP servers, both official and unofficial, do not offer native mechanisms to restrict which downstream functions an LLM can access. Their design favors broad access, which can result in oversharing and excessive permissions. For example, the unofficial Salesforce MCP server lacks authentication yet allows unrestricted access. This limitation is acknowledged and prioritized on the MCP roadmap. (A minimal allowlist sketch appears after this list.)

  • Unvetted MCP sources: Just as you wouldn’t install untrusted PyPI packages, Docker images, or npm modules, you should not integrate with unverified MCP servers without thoroughly scanning and assessing them for security risks.

  • Standard API security measures: Non-stdio implementations of MCP require publicly accessible API endpoints to communicate with providers like OpenAI and Anthropic. Consequently, standard API security practices—including authentication, authorization, rate limiting, Web Application Firewalls (WAF), and other security measures—are essential to safeguard these publicly exposed services from potential threats.

  • Lack of observability: Without built-in monitoring, it becomes difficult to trace activity or correlate actions back to specific prompts. This makes auditing and incident response significantly harder (see the tracing sketch after this list).

  • No approval workflows: MCP currently lacks out-of-the-box, human-in-the-loop workflows for critical actions. There is no way for users or centralized IT and security teams to review and approve high-risk function calls before execution.
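Until invocation controls arrive natively, one practical stopgap is to filter tools at the application boundary before they ever reach the LLM. Below is a minimal allowlist sketch; the `RestrictedMCPClient` wrapper and the `mcp_server` interface it wraps are hypothetical, not part of any MCP SDK.

```python
# Hypothetical allowlist wrapper around an MCP server client. Only
# tools explicitly permitted for this application are advertised to the
# LLM, and unlisted tools are rejected again at invocation time.
ALLOWED_TOOLS = {"get_invoice", "list_customers"}  # read-only examples

class RestrictedMCPClient:
    def __init__(self, mcp_server, allowed: set[str]):
        self._server = mcp_server
        self._allowed = allowed

    def list_tools(self) -> list[dict]:
        # Never advertise tools the application should not use.
        return [t for t in self._server.list_tools()
                if t["name"] in self._allowed]

    def call_tool(self, name: str, arguments: dict):
        # Enforce the allowlist at execution time too: a model can be
        # prompt-injected into calling tools it was never shown.
        if name not in self._allowed:
            raise PermissionError(f"Tool {name!r} is not permitted")
        return self._server.call_tool(name, arguments)
```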

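Observability has a similarly simple stopgap: log every invocation with a correlation ID that ties the tool call back to the prompt that triggered it. The sketch below assumes the same hypothetical `mcp_server` interface as above.

```python
import logging
import time

logging.basicConfig(level=logging.INFO, format="%(asctime)s %(message)s")
log = logging.getLogger("mcp.audit")

def traced_call(mcp_server, prompt_id: str, name: str, arguments: dict):
    """Log each tool invocation with a prompt correlation ID so actions
    can be traced back to the user query that triggered them."""
    start = time.monotonic()
    log.info("prompt=%s tool=%s args=%s", prompt_id, name, arguments)
    try:
        result = mcp_server.call_tool(name, arguments)
        log.info("prompt=%s tool=%s status=ok duration=%.3fs",
                 prompt_id, name, time.monotonic() - start)
        return result
    except Exception:
        log.exception("prompt=%s tool=%s status=error", prompt_id, name)
        raise
```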
The Future of MCP Security

Security concerns around MCP are valid, but we see MCP as a major opportunity for LLM security vendors. Standardization marks a turning point in most security domains, as efforts like OCSF and OpenTelemetry have proven. With broader adoption on the horizon, MCP presents a single point of integration to gain visibility into tool usage and trace agentic behavior across applications.

Here’s how we see the future of MCP security unfolding and what you need to keep in mind as you move forward:

  1. Protecting primitives is essential: Tools, as executable functions, present significant risks if improperly accessed or manipulated. Resources, which serve as data containers, require robust access controls and validation mechanisms to prevent data breaches or poisoning. Prompts, which guide model behavior, need protection against injection attacks and unauthorized modifications that could redirect AI outputs.

  2. Authentication must be enforced: The MCP specification deems authentication optional, but it should be mandatory for non-stdio implementations. MCP needs a clear identity model: identify who is making the call, define which tools on each MCP server that caller may use, map permissions per MCP server, and scan accordingly.

  3. Context must drive enforcement: Security depends on knowing the full picture, i.e., the prompt, the user issuing it, and the resulting API call. This context can drive dynamic permission updates and, with them, fine-grained control over allowed and disallowed tools.

  4. Runtime monitoring and tracing are critical: Visibility into every stage of an LLM’s execution path allows teams to trace behavior, investigate incidents, and maintain control over agentic workflows.

  5. Human-in-the-loop is required for critical actions: Insert approval gates where needed (a sketch follows this list). Open source tools like HumanLayer can help enforce review workflows for sensitive downstream operations.

  6. Only use verified MCP servers: Adopt only official or vendor-approved MCP servers, and make this an internal policy to reduce the risk of introducing untrusted components into LLM production environments.

  7. Apply traditional supply chain security controls: Cryptographic signing, version pinning, and package verification should be applied to MCP servers and their dependencies. These established practices carry over directly and remain effective at preventing supply chain attacks within AI systems (see the verification sketch below).
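As an illustration of point 5, the sketch below gates high-risk tool calls behind an explicit human approval. The risk classification and the console prompt are hypothetical placeholders for whatever review channel a team actually uses, such as a ticket queue or a chat approval routed through a tool like HumanLayer.

```python
# Hypothetical human-in-the-loop gate: tool calls classified as high
# risk must be approved by a reviewer before the MCP server runs them.
HIGH_RISK_TOOLS = {"delete_record", "send_payment"}  # example classification

def call_with_approval(mcp_server, name: str, arguments: dict):
    if name in HIGH_RISK_TOOLS:
        # In production this would route to a review queue or chat
        # approval; a console prompt keeps the sketch self-contained.
        answer = input(f"Approve call to {name} with {arguments}? [y/N] ")
        if answer.strip().lower() != "y":
            raise PermissionError(f"Call to {name!r} rejected by reviewer")
    return mcp_server.call_tool(name, arguments)
```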

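And for point 7, the same integrity checks used for any dependency apply to MCP server artifacts. A minimal sketch, assuming the release was audited once and its digest recorded alongside your code:

```python
import hashlib
import sys

# Digest recorded when the MCP server release was audited; the value
# below is a placeholder, not a real hash.
PINNED_SHA256 = "<sha256-of-audited-release>"

def verify_artifact(path: str) -> None:
    """Refuse to install or run an MCP server artifact whose SHA-256
    digest does not match the pinned, audited value."""
    with open(path, "rb") as f:
        digest = hashlib.sha256(f.read()).hexdigest()
    if digest != PINNED_SHA256:
        sys.exit(f"{path}: digest mismatch; refusing to install")
```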

Want to learn more about how we secure MCP-powered, agentic LLM applications through deep discovery and runtime analysis? Book a demo.