Key Takeaways
- MCP gives LLMs real-world capabilities, but that power comes with a widening attack surface. The Model Context Protocol standardizes how AI models access external tools and services. Without identity and access controls baked in, those capabilities can quickly spiral into risk.
- Nearly every MCP security risk shares a root cause: a lack of user context. When tool calls happen with insufficient context, there’s no reliable way to enforce security policies. LLMs end up operating untethered from the identity frameworks that govern the rest of the enterprise.
- The core threats include unchecked tool invocation, malicious tool definitions, unsecured API endpoints, and missing audit trails. Models can gain access to actions they should never perform, attackers can inject poisoned tools through uncontrolled registries, and tool calls often happen with no record of who requested them or why.
- The fix is weaving identity directly into MCP workflows. Strata pairs every tool request with the person behind it, along with their role, permissions, and risk level. If the identity doesn’t line up with what’s allowed, the call stops there.
Large language models (LLMs) aren’t just answering questions anymore. They’re booking travel, crunching data, and even pulling the levers of other software on your behalf. This shift is quietly building an entirely new layer of digital infrastructure. At the center of it is a standard with big implications: MCP — the Model Context Protocol.
As the connective tissue between LLMs and outside services, MCP provides a standardized way for models to access external tools and data. But with that power comes a widening attack surface, plus an urgent need to rethink how identity, access, and control should work when the user is no longer human (and the actions are automated at scale).
Before exploring how to secure MCP workflows, let’s start with the basics.
What is MCP?
MCP, or Model Context Protocol, defines how large language models interact with external tools in a structured, predictable way. Instead of leaving models to guess how a function works — or worse, giving them direct access to a backend service — MCP acts as a standardized interface between the model and the tool.
It tells the model exactly which tools exist, how to use them, and what kind of information they need to work. Think of it as handing the model an instruction manual for each tool, complete with a parts list and a clear set of steps. That way, developers can open up useful capabilities without giving the model free rein to poke at things it shouldn’t.
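To make that "instruction manual" concrete, here is a sketch of what a tool definition looks like. MCP describes each tool with a name, a description, and a JSON Schema for its inputs; the specific `book_flight` tool and its parameters are invented for illustration.

```python
import json

# An MCP-style tool definition: the "instruction manual" the model receives.
# The tool name and parameters here are hypothetical; the shape (name,
# description, inputSchema as JSON Schema) follows the protocol's tool format.
book_flight_tool = {
    "name": "book_flight",
    "description": "Book a one-way flight for the user.",
    "inputSchema": {
        "type": "object",
        "properties": {
            "destination": {
                "type": "string",
                "description": "Destination airport or city, e.g. 'NYC'",
            },
            "departure_date": {
                "type": "string",
                "description": "Departure date in ISO format, e.g. '2025-06-06'",
            },
        },
        "required": ["destination", "departure_date"],
    },
}

print(json.dumps(book_flight_tool, indent=2))
```

The schema is what keeps the model honest: it can only fill in the fields the tool declares, rather than improvising its own interface to the backend.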
How MCP works
The promise of agentic AI isn’t limited to clever prompts. It depends on giving models real-world capabilities through tools and APIs. However, granting that kind of access without guardrails is risky, which is why MCP was created.
The Model Context Protocol works a bit like an air traffic controller for AI. It decides which “flights” (tools) a model can call, how those flights are described, and what information they need before takeoff. In practice, it’s the glue holding an LLM’s capabilities together with the outside services it needs to be useful.
Picture it in action:
- Someone asks an LLM to book a Friday flight to New York.
- The model understands it will need to talk to a flight‑booking tool.
- It checks in with an MCP server to see what’s available and how to call it.
- Using that blueprint, it builds a structured request.
- The tool runs the request, sends back the details, and the model wraps it into its final answer.
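The steps above can be sketched as a single request/response exchange. MCP frames tool calls as JSON-RPC 2.0 messages (`tools/call`); the mock server and the `book_flight` tool below are illustrative, and transport and error handling are omitted.

```python
# Simplified sketch of the flow: the model builds a structured request
# (step 4), the server runs it and returns the details (step 5).
# The tool name and server logic are hypothetical.

# Step 4: the model's structured tool call, shaped as a JSON-RPC 2.0 message.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "book_flight",
        "arguments": {"destination": "NYC", "departure_date": "2025-06-06"},
    },
}

# Step 5: a mock server executes the call and replies with the result.
def handle(req: dict) -> dict:
    args = req["params"]["arguments"]
    confirmation = (
        f"Booked flight to {args['destination']} on {args['departure_date']}"
    )
    return {
        "jsonrpc": "2.0",
        "id": req["id"],
        "result": {"content": [{"type": "text", "text": confirmation}]},
    }

response = handle(request)
print(response["result"]["content"][0]["text"])
```

Notice what the exchange does not carry: nothing in the request says who asked for the flight. That gap is where the security story begins.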
This setup gives developers a decent amount of visibility and control over the model’s capabilities. However, it also opens a few new doors for attackers.
The security risks of MCP
New protocols often bring new problems, and MCP is no exception. By design, it allows models to call powerful external tools. But without protections, those capabilities can quickly spiral into risk.
Think of it this way: MCP turns the model into an orchestra conductor. But if you don’t vet the instruments or the sheet music, you could end up with chaotic noise.
Here are some of the most pressing issues:
1. Over-permissive tool access
If tool invocation isn’t governed by context or user identity, a model may gain access to actions it should never perform — like deleting databases, sending money, or changing permissions.
2. Untrusted MCP sources
In distributed environments, LLMs may query MCP registries that are not controlled by the developer. That opens the door to malicious tool definitions that masquerade as legitimate functionality.
3. Insecure endpoint exposure
Even with MCP in place, the tools themselves often live behind APIs. If those endpoints aren’t properly secured, an attacker can skip the model entirely and interact with the service directly.
4. Lack of auditability
Tool calls often occur without logging who requested them, what identity was in play, or why the call was made. That makes post-incident analysis nearly impossible.
5. Missing approval workflows
Some tools are powerful enough that their use should require human approval. If that layer doesn’t exist, the model becomes the final authority on actions that may carry real-world consequences.
Why identity context is the missing piece
Nearly every risk shares a root cause: a lack of user context. Traditional systems rely on user identity, roles, and policy enforcement to make access decisions. But in MCP-driven workflows, that context is often absent or, worse, simulated in an insecure way.
Without knowing who a request is being made for, what their entitlements are, or how risky the request might be, there’s no reliable way to enforce security policies. As a result, LLMs operate in a vacuum, untethered from the identity frameworks that govern the rest of the enterprise.
How Strata Identity secures MCP workflows
Strata Identity weaves identity checks directly into MCP-driven environments so that access decisions aren’t left to chance. Before anything runs, every request to use a tool gets paired with the person behind it — along with their role, permissions, and current risk level. If the identity doesn’t line up with what’s allowed, the call stops there.
Here’s how it works:
1. Identity-aware tool invocation
Every MCP function call is evaluated in real time against the user’s identity — roles, groups, entitlements, and risk profile. No identity match? No tool access.
2. Runtime policy enforcement
Rather than relying on static permission lists, Strata evaluates dynamic policies during execution to determine whether a call is allowed based on who’s behind it and what the context is.
3. End-to-end visibility
Strata links each tool invocation to a verified identity, providing a complete audit trail for every action taken — down to the prompt, user, and function response.
4. Just-in-time approvals
For sensitive actions, Strata inserts approval gates that pause execution until a human reviewer signs off. These gates are tied to user roles and contextual risk.
5. Trusted registry management
Strata integrates with MCP registries and enforces a curated list of known-good tool definitions. If a tool isn’t vetted and tied to policy, it can’t be invoked.
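To show the general pattern, here is a minimal, purely illustrative sketch of an identity-aware gate in front of tool calls. It is not Strata's implementation; the roles, tools, and policy model are invented. The point is the shape: every call is evaluated against who is behind it before anything runs.

```python
from dataclasses import dataclass, field

# Illustrative only: a toy identity-aware gate for MCP tool calls.
# Role names, tools, and the policy structure are hypothetical.

@dataclass
class User:
    name: str
    roles: set = field(default_factory=set)
    risk_level: str = "low"  # e.g. "low", "medium", "high"

# Which roles may invoke which tools, and which tools need human sign-off.
# Tools absent from this registry are treated as untrusted.
POLICY = {
    "book_flight": {"roles": {"employee", "admin"}, "needs_approval": False},
    "delete_database": {"roles": {"admin"}, "needs_approval": True},
}

def authorize_tool_call(user: User, tool: str) -> str:
    """Pair the tool request with the identity behind it before anything runs."""
    rule = POLICY.get(tool)
    if rule is None:
        return "denied: tool not in trusted registry"
    if not rule["roles"] & user.roles:
        return "denied: role not entitled"
    if user.risk_level == "high":
        return "denied: risk level too high"
    if rule["needs_approval"]:
        return "pending: human approval required"
    return "allowed"

alice = User("alice", roles={"employee"})
print(authorize_tool_call(alice, "book_flight"))      # allowed
print(authorize_tool_call(alice, "delete_database"))  # denied: role not entitled
```

A real deployment would resolve roles and risk from an identity provider at runtime rather than a static table, but the decision points are the same: registry check, entitlement check, risk check, approval gate.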
The future of MCP is identity-aware
MCP will be foundational for how LLMs interact with the world, but only if it’s done securely. And security, at its core, starts with identity. Without identity context, it’s impossible to say whether a function should be executed, whether the user is authorized, or whether the request is safe.
By weaving identity checks into MCP-driven environments, Strata ensures that every tool call is tied to a verified person, their permissions, and their current risk level before anything runs, and is blocked when they don’t match.
If you’re building with LLMs and exploring MCP, now’s the time to embed identity before things scale out of control.
Ready to see how it works? Get early access to Maverics Identity Orchestration for AI Agents.