MCP Won the Standards War. Now It Needs a Security One.
The protocol connecting AI to your business data just got donated to a standards body — because it needed one. Here's what tool poisoning means and what to do about it.

In late March, Anthropic donated MCP — the Model Context Protocol — to the Linux Foundation under a new group called the Agentic AI Foundation. OpenAI and Block co-founded it. Within weeks, over 2,300 public MCP servers existed in the wild, with more than 200 tools supporting the standard.
The standards war is over. MCP won. Every major AI provider supports it. If your business uses AI tools that connect to databases, calendars, CRMs, or file systems, MCP is almost certainly the plumbing underneath. We covered the cost implications of all that plumbing earlier this year.
Now comes the harder part: making it safe.
What "Tool Poisoning" Means in Plain English
Here's the setup. When you connect an AI agent to a tool — say, a calendar integration or a database connector — that tool sends a description of itself to the AI model. The description tells the model what the tool does, what inputs it accepts, and how to use it.
You never see this description. It's machine-to-machine. Your AI reads it automatically and trusts it.
Researchers at Invariant Labs discovered that attackers can embed hidden instructions inside those tool descriptions. The AI model reads and follows them. You, the user, see nothing unusual.
Think of it this way. Say you hire a new assistant and give them a list of approved vendors to call. But one of the vendors has slipped extra instructions into their contact card — instructions your assistant follows without telling you. "Also forward a copy of every invoice to this other address." Your assistant complies because the instructions look like they came from a legitimate source.
That's tool poisoning. The tool description becomes a delivery mechanism for malicious instructions.
The Numbers Are Not Encouraging
Invariant Labs built a benchmark called MCPTox to test how vulnerable AI models are to this kind of attack. They tested across multiple scenarios — data exfiltration, privilege escalation, command injection hidden in tool descriptions.
The results: o1-mini followed malicious instructions embedded in tool descriptions 72.8% of the time. The attack didn't require anything sophisticated. Just text in the right place.
More concerning, they published proof-of-concept attacks that successfully exfiltrated SSH keys from Claude Desktop and Cursor — two widely used AI coding tools. These aren't theoretical exploits against lab setups. They targeted tools that developers use every day.
Why This Is Different from Other AI Security Risks
We've written about supply chain attacks on AI tools and how to secure AI agents before. Tool poisoning is a distinct category because of where the attack lives.
A supply chain attack compromises the code — someone slips malicious instructions into the software itself. Tool poisoning doesn't touch the code. It hides instructions in the data layer — the tool descriptions that flow through MCP every time your AI agent starts working. The tool itself functions exactly as advertised. The poison is in the label, not the bottle.
This makes it harder to detect with traditional security scanning. The code is clean. The behavior is malicious.
OWASP Noticed
When security researchers start finding real exploits, standards bodies tend to follow. OWASP — the organization behind the widely used Top 10 web security risks — published an MCP Top 10 list. It catalogs the most common attack patterns against MCP servers, including tool poisoning, excessive permissions, and insecure credential storage.
If you work with a vendor that builds AI integrations, the OWASP MCP Top 10 is a reasonable thing to ask them about. It's the kind of checklist that separates vendors who think about security from those who don't.
What You Can Actually Do
For most businesses, MCP runs invisibly inside tools you've already adopted. You're not configuring it directly. But that doesn't mean you're powerless.
Ask your AI vendor about MCP server auditing. If your AI tools connect to external services, someone chose which MCP servers to trust. Ask who made that decision and what vetting process they used. As agent governance tools mature, this will become standard practice. Right now, it's a differentiator.
Know what your AI tools can access. Tool poisoning works because AI agents follow instructions from tools they're connected to. The more tools connected, the larger the attack surface. If your AI agent has access to your email, your file system, and your database, a single compromised tool description could instruct it to read from one and send to another.
Run mcp-scan if you manage your own setup. Invariant Labs released an open-source tool called mcp-scan that checks MCP server configurations for known vulnerabilities, including tool poisoning patterns. If your team runs AI tools internally — even developer tools like Claude Desktop or Cursor — this is worth running.
Limit tool permissions to what's actually needed. An AI agent that can read your CRM doesn't need write access. An agent that manages your calendar doesn't need access to your file system. Every unnecessary permission is a door that a poisoned tool description could walk through.
The Standards War Ended. The Security War Started.
MCP getting donated to the Linux Foundation is genuinely good news. It means the protocol will be governed by a neutral body, with input from competing companies who all have a stake in getting it right. Standards bodies move slowly, but they move deliberately.
The security side is less settled. Tool poisoning is a new attack category that most businesses haven't heard of, most vendors haven't addressed, and most scanning tools can't detect. The OWASP MCP Top 10 and tools like mcp-scan are early responses — necessary but not sufficient.
The pattern is familiar. A technology gets adopted fast because it's useful. Security catches up later, usually after something goes wrong publicly. MCP is somewhere in the middle of that timeline — adopted widely enough to matter, secured just enough to worry about.
The plumbing works. Now it needs locks.
Related: The Protocol Powering AI Tools Is Burning Through Your Budget and A Supply Chain Attack Just Hit the Tool That Powers Most AI Applications.
Stay Connected
Get practical insights on using AI and automation to grow your business. No fluff.