The Model Context Protocol (MCP) is a new protocol that allows AI systems to connect to applications and tools, enabling them to access specific data and perform tasks. This capability raises significant security concerns, particularly regarding how AI interacts with sensitive information and external systems.

This presentation was delivered at the ivision Security Symposium 2025. This post is based on a transcript of that presentation, and has been edited for clarity.


Presentation slide with "ivision" logo, titled "AI Gets a Library Card: Security Ramifications of the Model Context Protocol" by Brad Dixon.

Why in the world would AI need a library card?

AI is clever. However, it cannot do anything useful without the necessary information. Let’s say you’ve got ChatGPT or Anthropic’s Claude and you ask, “Hey, how did my company do over the last four quarters? What are the best-performing products?” It doesn’t know a thing about that. It cannot help you. If you’re a public company it might dig around the web and try to get an answer, but it will not be authoritative. It just doesn’t know.

Side-by-side comparison of ChatGPT and Lattice user interfaces, both labeled as Large Language Models (LLMs), with visible text boxes and examples of professional development goal generation.

For clarity, when I say AI I really mean the technology behind large language models, or LLMs. Public systems like ChatGPT use this, for example, but so do internal business tools.

At ivision we use Lattice for our mid-year reviews. I noticed it now includes a new AI feature. You can paste your self-review and click “Make more professional.” You can even add something like, “Make it more professional, but my boss Sarah only speaks Polish,” and the system will translate it into Polish. I found that amusing, though probably not useful for most of you.

You are seeing this AI capability appear in line-of-business applications and, of course, in more and more custom-built solutions. The goal is to create truly useful applications, and that is where MCP comes in.

Two side-by-side images: Left shows a vintage classroom with wood desks, chairs, a chalkboard, and various educational items labeled "Ambient Knowledge." Right depicts a modern library with labeled sections: Prompt, Conversational History, Tool Call and Response, User Message, illustrating "Context" in machine learning systems.

So a fundamental thing to think about is how AI knows anything. There are only two ways. The first is what it learns in the schoolhouse: the training. That training happens once per model release, and it is based on the billions or trillions of documents that the developers hoover up from every source, legal or not, to train these models.

The next way is something you control, and that is the context. Each time you hit Enter and send a request to an AI like ChatGPT, or your application makes a request, a few pieces of information are provided along with it.

First is the system prompt. These are instructions typically controlled by the application, telling the model how it should behave. Next is the conversational history, everything you have talked about so far in that thread with the AI. There are also tools that AI systems can call and get responses from, just like function calls; this is a big deal, and how tools work is the focus here. Finally, there is the user message, the part you type in.

All of this context information is used only for that one conversational turn and can be changed each time you go through. There is the ambient knowledge that comes from training, and there is the specific knowledge provided in context. If you want to make useful AI systems, it is all about what goes into the context of that request. That is where it happens. The models are shared by everybody, but the context is what makes your AI system actually useful.
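To make that concrete, here is a rough sketch of how the context for a single conversational turn might be assembled. This is illustrative only; the field names follow the general shape of chat-style LLM APIs rather than any one vendor’s exact schema.

```python
# Illustrative shape of one conversational turn's context. Field names are
# generic, not any specific vendor's API schema.
request = {
    # System prompt: instructions controlled by the application.
    "system": "You are a helpful financial analysis assistant.",
    "messages": [
        # Conversational history: everything said so far in this thread.
        {"role": "user", "content": "How did we do over the last four quarters?"},
        {"role": "assistant", "content": "Let me look that up.",
         "tool_calls": [{"name": "get_quarterly_revenue", "arguments": {"quarters": 4}}]},
        # Tool call response: placed into context just like a message.
        {"role": "tool", "name": "get_quarterly_revenue",
         "content": '{"Q1": 1.2, "Q2": 1.4, "Q3": 1.1, "Q4": 1.6}'},
        # User message: the part you typed for this turn.
        {"role": "user", "content": "Which products performed best?"},
    ],
}
# The model sees only this context plus its frozen training. Nothing persists
# between turns unless the application puts it back into the next request.
```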

What if someone dragged a financial document into ChatGPT and asked, “How have my products been doing over the last four quarters?” I built the same demo with a fake company and a fake document. Here is what happens behind the scenes. The PDF is uploaded and converted to text. The system then chops the text into pieces. Based on the question, it selects a few of those pieces and places them in the context. Next, the LLM gets to work. It even writes a small program to analyze the data, runs that program, looks at the result, and finally drafts the answer. That raises concerns:

  • Why was the entire document copied into a public service?
  • Which chunks were chosen out of 500 pages?
  • Is the generated program any good?
  • And will the answer be consistent the next time I ask?

Clearly, this is not the right way to handle the question.
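For the curious, here is roughly what that chop-and-select pipeline looks like under the hood. This is a deliberately naive sketch: real systems score chunks with embedding models, while this version uses simple word overlap just to make the selection step concrete.

```python
# A deliberately naive sketch of the retrieval step described above. Real
# systems use embedding models to score chunks; word overlap stands in here
# just to make the selection step concrete and runnable.
def chunk(text: str, size: int = 1000) -> list[str]:
    # Chop the extracted PDF text into fixed-size pieces.
    return [text[i:i + size] for i in range(0, len(text), size)]

def top_chunks(question: str, chunks: list[str], k: int = 5) -> list[str]:
    # Keep only the k chunks that best match the question. Out of 500 pages,
    # these few pieces are all the model ever sees.
    q_words = set(question.lower().split())
    return sorted(chunks, key=lambda c: len(q_words & set(c.lower().split())),
                  reverse=True)[:k]

pdf_text = "..."  # stands in for text extracted from the uploaded PDF
context = top_chunks("How have my products been doing?", chunk(pdf_text))
```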

So how do you make a useful AI system that can answer questions like this, questions that require specific information about your company or systems? The way this is happening in 2025 and 2026 is through tools, and those tools are reached by AI through MCP, the Model Context Protocol.

When you give AI the right tools for the job, you suddenly get the answers you want. This is the same question, just answered with an MCP tool I developed for the demo. In this example the AI has been given three or four different tools to query some fake financial data, and it comes up with a genuinely useful answer.
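The demo’s code wasn’t shown, but a minimal tool along those lines might look roughly like this using the official MCP Python SDK’s FastMCP helper. The server name, tool name, and revenue figures are all invented for illustration.

```python
# A minimal MCP server exposing one financial-data tool, sketched with the
# official MCP Python SDK (FastMCP). All names and numbers are invented.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("fake-financials")

REVENUE = {"Q1": 1.2, "Q2": 1.4, "Q3": 1.1, "Q4": 1.6}  # fake revenue, $M

@mcp.tool()
def quarterly_revenue(quarters: int = 4) -> dict[str, float]:
    """Return revenue for the most recent N quarters."""
    return dict(list(REVENUE.items())[-quarters:])

if __name__ == "__main__":
    mcp.run()  # defaults to the stdio transport for local development
```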

If you are looking at this and thinking, this is what my business needs, or you have heard your vendors say the applications you use can now plug into AI and be used by AI, you might be asking how that actually happens.

Diagram titled 'MCP: Like USB, but for Connecting Applications.' MCP (Standardized protocol) is at the center with bidirectional data flows linking AI applications (e.g., Claude Desktop, Sire) and data sources/tools (e.g., PostgreSQL, Git).

The way that’s happening very quickly is through this new protocol called MCP, the Model Context Protocol. It is a way for applications to connect to other applications. The primary use case, and where this is coming from, is the AI providers. This started with Anthropic, which released the spec in November 2024. A couple of months later OpenAI picked it up and said, “You know what, we like this too.” With those two big names behind MCP, it has just exploded.

It is now the way these AI systems connect to all sorts of applications. If you go into ChatGPT you have a tab where you can enable connectors, and one of those connectors is your Google Workspace. That is MCP in action: ChatGPT is the client, the MCP server run by Google is the server, and these two applications talk to each other to give the AI the data and tools it needs to get work done.
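Under the hood, the two sides speak JSON-RPC 2.0. A tool invocation looks something like the exchange below; the tool name and payloads are illustrative, though the method name and result shape follow the MCP specification.

```python
# An MCP tool invocation as JSON-RPC 2.0 messages (shown as Python dicts).
# The method name and result shape follow the spec; the payloads are made up.
call = {
    "jsonrpc": "2.0",
    "id": 7,
    "method": "tools/call",
    "params": {
        "name": "search_drive",                  # a tool the server advertised via tools/list
        "arguments": {"query": "Q3 board deck"},
    },
}
response = {
    "jsonrpc": "2.0",
    "id": 7,
    "result": {
        "content": [{"type": "text", "text": "Found 3 matching files: ..."}],
    },
}
```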

This has exploded; it’s a really big deal. At the CIO and vendor level, when they start saying, AI is going to change this world, what they are really talking about, from a technical perspective, is AI with tools running in a loop to do a job, and the way that AI is going to get to these tools is through MCP.

Now, the security guy is like, “I’m really curious about this.”

Screenshot of a website titled 'Open-Source MCP servers' displaying a list of available servers with categories like Remote, Python, and Developer Tools. Includes search options, server stats, and matching MCP tool recommendations.

How does this work? What are the big problems with it? This is really important because there are literally thousands of MCP servers already. A website called glama.ai lists about 2,500 as of last week when I took that screenshot. These servers let you hook your AI up to just about anything, including the local file system of the computer it’s running on, the webcam, business applications, and developer tools. If the question is how to connect AI to something, this year the answer is MCP.

Infographic with text "MCP is More than Tools" and six icons labeled: Resources (book), Tools (gear and wrench), Prompts (speech bubble), Sampling (magic wand), Roots (plant), and Elicitation (light bulb).

MCP is about a lot more than just tools. It can also give the AI resources, like files, and it can provide prompts: reusable templates that say, in effect, “If I want to do this task, here is how I prompt myself to get started.”

With sampling, the server can use the AI model itself, a kind of reverse query that originates from the tool rather than from the person. With elicitation, it can ask your users questions directly. I’m waiting for the first MCP-powered phishing campaign; it might happen.
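In the same Python SDK, the resource and prompt primitives look roughly like this; the URI and text are illustrative. Sampling and elicitation flow the other way, as requests the server sends back to the client, so they do not appear in this sketch.

```python
# Sketch of non-tool MCP primitives with the Python SDK (names illustrative).
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("demo")

# A resource: file-like data the client can read into context.
@mcp.resource("reports://latest")
def latest_report() -> str:
    return "FY2025 summary: ..."

# A prompt: a reusable template for starting a task.
@mcp.prompt()
def quarterly_review(product: str) -> str:
    return f"Analyze the last four quarters of results for {product}."
```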

Image with labeled icons: RPC, Authorization, Notifications, Streaming, Schema, Stateful, and Transport, questioning if these concepts are just REST APIs for LLMs. Title asks: 'Is this just REST APIs for LLMs?'

You may be looking at this and saying, “Hey Brad, there have been REST APIs in my line-of-business applications for a long time.” That’s true, but this is different in a couple of ways.

At its core it is a remote procedure call interface. Authorization is required so the AI knows what information it is allowed to access when answering a question. One of the big differences is the transport. With REST APIs you think of the web: HTTP, proxies, DNS, and the ability to intercept and inspect queries. MCP is transport agnostic. One transport it supports, primarily for developer purposes, is stdio, where the server runs as a local process on the same machine. So when I talk about hooking your AI up to your local webcam, that happens through a local process.

Anthropic provides an example MCP server that can read files from your file system and send them to Anthropic’s models to work with, and it can write and modify those files as well. This has obvious security ramifications.
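To make the local-process transport concrete, here is roughly how a client launches and talks to that filesystem server over stdio, using the MCP Python SDK. The directory path is a placeholder.

```python
# Connecting to the example filesystem MCP server over the stdio transport:
# the client launches the server as a local child process. Path is a placeholder.
import asyncio
from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

params = StdioServerParameters(
    command="npx",
    args=["-y", "@modelcontextprotocol/server-filesystem", "/home/me/docs"],
)

async def main() -> None:
    async with stdio_client(params) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            tools = await session.list_tools()
            # Every tool listed here is something an LLM could now invoke
            # against your local file system.
            print([tool.name for tool in tools.tools])

asyncio.run(main())
```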

Image shows a diagram with a robot holding a book labeled 'Library.' It splits into three paths titled 'No MCP Plans,' 'Use MCP,' and 'Build with MCP,' illustrating decision-making options.

I’m going to talk about three paths for businesses. The first path is the people saying, “No, I don’t want that yet; I’m going back to my business and stamping this out for now.” The second path is the people who want to use MCP. The third path is the people who might want to build with it in the coming year.

Collage of software interfaces with text "Not Doing MCP, Go Away". Includes user interface images from settings, MCP Call Server, VS Code, and Claude Desktop. Emphasizes user control and local process capabilities.

Maybe you feel this is not ready for your company yet and you do not want to implement it. First, be aware that end users who can install software can enable it themselves. They can install Claude Desktop and use local or remote MCP servers.

Other applications, including line-of-business apps, will likely embed MCP clients and servers so they can connect to AI locally. This capability may seep into the applications you already use or plan to use.

Once a process has file system access, it can reach many resources. For example, OneDrive files synced locally or, on a Mac, iCal connected to your Exchange server. A user could ask Claude to find a meeting time with specific people, and it would query those calendars. That might be helpful, but it may not be what you intend.

Expect MCP clients to appear in line-of-business applications. Developers should note that VS Code added an MCP client a few months ago, and more are coming. Not all of them will be AI apps; AI demand is driving MCP adoption, but MCP is a useful integration point on its own, much like USB, which lets you connect all sorts of things.

A flowchart illustrating the strategy 'Discover and Confine' with MCP options. Branches include 'No MCP Plans' leading to 'Discover Usage' and 'Confine Usage', and 'Use MCP' branching to 'Build with MCP'. Label: 'Standard Anti-Shadow IT Operation'. Logo: "ivision".

If you’re not ready for this yet, you’ll need to be active about going out and finding where it’s being used, seeing which line-of-business applications are starting to offer these capabilities and which SaaS applications are. Make sure they don’t get turned on and used behind your back. You need a bit of a campaign; it’s a standard anti-shadow IT project.

A slide titled "Use 3rd Party MCP with Commercial AI" featuring a mind map and bullet points. The map outlines steps for using MCP with commercial AI, discussing server authorization, authentication, and security. Bullets emphasize distrust of LLMs, security, server control, and OAuth complexities.

Now, let’s take an alternate approach: we want to support our users. AI has been useful, and if we give our AI systems better information and tools, they can answer questions and help people get their work done. We like that.

We must always remember that the AI, the LLM, is not a trusted party from a security perspective. Most people do not grasp this. I have watched an eight-year-old use ChatGPT and she has not figured out it is not a person. Adults may know it is not a person, but may place too much trust in the AI. As security practitioners, we must accept that the LLM itself is untrustworthy, even when a trustworthy user interacts with it.

When you hook an LLM to systems containing data you want to keep confidential, you need to think carefully about the risks.

Even if your company only buys software, SaaS and line-of-business developers are rapidly adding MCP servers to their products and rediscovering well-known security vulnerabilities. I will show some examples of this.

A unique burden falls on the person who decides which MCP systems or connectors to enable in ChatGPT or Claude. You must assess security exposure based on the combination of tools. A single match is one thing; a match plus a half-gallon of gasoline is quite another. Combining different MCP servers can give an LLM far more capability than you expect.

Everyone will want these systems to work with their SSO. Because MCP relies on advanced OAuth features that few other systems use, I anticipate bugs. New code gives us new problems.

A presentation slide featuring three screenshots about MCP server security issues, including headlines from The Register, EscapeRoute, and Supabase. Text reads "Vet MCP Servers Carefully."

So let’s say you’re just using MCP services offered by your software vendors. They are making new mistakes. This is just in the last month: security faults in MCP servers implemented by commercial software companies.

The top left example is Asana. They decided to make an MCP server but, for whatever reason, allowed it to read data from every one of their tenants. That is not a feature of their web app, and they certainly did not intend to do that; they just forgot on the MCP server. New code, new problems. They had this tragic authorization bug and had to pull it down and fix it. I do not know how it got out the door.

The middle one, Anthropic’s MCP server, had a garden-variety path traversal error. We have seen and largely stamped these problems out of web apps for decades, but new code means new problems.

The last one is Supabase, a database backend platform. A ticketing system built on it, combined with Supabase’s MCP server, allowed a data exfiltration leak.

I will show you how this works in a slide coming up. As a security person you want to look at this new feature and be cautious. Companies have been stumbling over themselves to roll out AI features and say, “Oh, it is like it was before, but now it has AI, and we are raising our prices 30%, too!” They love that. In their haste to get to that 30% more, they are making fundamental mistakes in application security. MCP is one of those areas. It has only been around since November, but they know it is happening and they are scrambling.

Be cautious. You can test MCP servers; they will be tested differently from web applications. If you are really going to use MCP to create a door to critical data or systems, even if you are not writing that code, put some effort into testing it. You might be the first one to do so.

A presentation slide titled "Not Your Mother's OAuth" shows a "New Connector" setup screen on the left and an OAuth process flowchart on the right, with terms like "User Agent" and "MCP Server."

I mentioned that this is not your mother’s OAuth. I won’t go into the details, but when you connect systems you are usually asked for a client ID and client secret. You preregister, fill out a form, get approved, and only then can your system call theirs.

For ChatGPT, Anthropic Claude, or other mega AI vendors, that process is a burden. They want their AI to pick up any available tool, use it, and get the necessary information. They need to authorize the user, not the system.

OAuth already includes a documented, standardized behavior called dynamic client registration. Hardly anyone has used it, but many will start now because of MCP. New code, new problems.
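Dynamic client registration is specified in RFC 7591. At the wire level it is a single POST that mints a client ID on the spot, which is part of why new implementations deserve scrutiny. The endpoint URL below is a placeholder.

```python
# Dynamic client registration (RFC 7591): instead of preregistering and
# waiting for approval, the MCP client registers itself at runtime with one
# POST. The endpoint URL is a placeholder.
import requests

resp = requests.post(
    "https://auth.example.com/oauth/register",
    json={
        "client_name": "Example MCP Client",
        "redirect_uris": ["http://localhost:8765/callback"],
        "grant_types": ["authorization_code"],
        "token_endpoint_auth_method": "none",  # public client, protected by PKCE
    },
    timeout=10,
)
resp.raise_for_status()
client_id = resp.json()["client_id"]  # minted on the spot, no approval queue
```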

Diagram showing 'Architectural Problem: Lethal Trifecta' with a Venn diagram and a screenshot. Venn diagram highlights: Access to Private Data, Ability to Communicate Externally, Exposure to Untrusted Content. Screenshot depicts a malicious tool call example.

Here’s the key takeaway. People are always concerned about prompt injection. Prompt injection is the mechanism, but what you really need to watch for is what Simon Willison calls the lethal trifecta:

  • access to private data
  • ability to communicate externally
  • exposure to untrusted content

When a vendor says, “We’ve got this new feature; you should enable it in your ChatGPT instance,” sit down and ask yourself: does this tool bring together these three elements? If it does, you have a heightened risk.

This happened in the Supabase example. They had a ticketing system where anyone could submit tickets: untrusted content. They also had access to private data, because their MCP server exposed a tool named execute_sql() to the LLM. Right there you know you have a problem. Using that tool, someone could submit a ticket instructing the model to fetch private integration keys from the Supabase database and add them as a comment, where the attacker could read them later. That completes the third element, the ability to communicate externally.

There are ways to design MCP servers so these three elements never appear together. Supabase could have taken a different path, but it didn’t.
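As a hedged sketch of what a different path could look like: replace the open-ended execute_sql() with narrowly scoped, read-only tools, so untrusted ticket text can never steer the model into arbitrary queries. The database file and schema below are hypothetical.

```python
# Breaking the trifecta by narrowing the tool surface: parameterized,
# read-only queries instead of a general execute_sql() tool. The database
# file and schema are hypothetical.
import sqlite3
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("ticketing")
# Open the database read-only so even a manipulated model cannot write.
db = sqlite3.connect("file:tickets.db?mode=ro", uri=True, check_same_thread=False)

@mcp.tool()
def get_ticket_status(ticket_id: int) -> str:
    """Look up one ticket's status."""
    # Parameterized query: the model supplies only a ticket ID, never SQL,
    # and there is no route to the table holding integration keys.
    row = db.execute("SELECT status FROM tickets WHERE id = ?", (ticket_id,)).fetchone()
    return row[0] if row else "not found"
```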

This is my key warning. Vendors will push MCP on you. It’s not just a flag you turn on and say, “Great, we have AI now.” Use caution, and always check whether the lethal trifecta is present when used in combination with the other MCP servers you have enabled. When it is, there is risk.
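As a rough illustration of that check, here is a sketch, with invented names and labels, that flags when a set of enabled MCP servers jointly covers all three legs of the trifecta. The capability labels would come from your own review of each server.

```python
# Hypothetical trifecta check across enabled MCP servers. Server names and
# capability labels are invented; in practice they come from your own review.
TRIFECTA = {"private_data", "external_comms", "untrusted_content"}

enabled_servers = {
    "crm-mcp":       {"private_data"},
    "ticketing-mcp": {"untrusted_content", "external_comms"},
}

combined = set().union(*enabled_servers.values())
if TRIFECTA <= combined:
    print("WARNING: enabled servers jointly form the lethal trifecta")
```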

Flowchart and list on AI security. On left, MCP options include "No MCP Plans," "Use MCP," "Build with MCP," leading to "Always Public," "Always Untrusted," etc. On right, points on prompt injection and "lethal trifecta" Venn diagram showing "Access to Private Data," "Ability to Communicate Externally," "Exposure to Untrusted Content."

So, last little bit here. If you’re going to be building MCP systems, there’s a lot to be concerned about. Prompt injection is not a solved problem in 2025. It can be mitigated, thwarted, and monitored, but it is not solved, and bolt-on security will not fix it. Every vendor that claims to have solved prompt injection really hasn’t; they should say, “We have helped.” That’s still valuable. WAFs in front of a web server are great; they don’t solve the underlying problems, but they can help manage them, and that’s useful. Mindful architecture, coupled with MCP, can be really effective.

Image shows a fictional library card for "GPT-42" with a robot illustration, humorously addressing AI and security. Text details MCP, which connects applications to AI, and provides contact information for AI security inquiries.

So that’s my last little bit. I wanted you all to be informed about something that’s coming at you like a freight train, and that’s MCP. It’s how AI will be able to use tools, and there are thousands of them coming out. New things give security people like me hives, because we know there will be bugs, so let’s go find them.

I also talked about three ways to get started in your analysis when you go back. If you have AI-related security questions, or just security questions in general, reach out to me and I’ll connect you with someone who can give you some insight.