A recent article by Invariant Labs has shown that MCP-based
agent frameworks can be vulnerable to "tool poisoning"
attacks. In a tool-poisoning attack, a malicious server hijacks tool
calls, using them to read sensitive data or execute arbitrary commands on the
host machine without ever notifying the user. This is a serious security
concern: it can lead to unauthorized access to sensitive information, lateral
movement within systems, and other malicious activity.
Many of the most exciting MCP demos are about controlling local
applications, such as web browsers, 3D modeling software, or video editors.
These demos show how an AI agent can interact with these applications in a
natural way, using the same tools that a human would use. This is a powerful
capability, but it also raises serious security concerns.
How do we balance the power of MCP with the need for security?
Servlets: Still MCP, Just Lighter and More Secure
At first glance, mcp.run might look like an MCP server marketplace. But
mcp.run does not just provide a marketplace for arbitrary MCP servers. Instead,
it provides a curated set of lightweight "servlets" that are designed to run
in a secure environment.
We call them "servlets" to emphasize that they don't run as full-fledged MCP
servers on the user's machine. Instead, they share a host process
that runs them in an isolated environment with limited access to the
host machine's resources. This kind of environment is usually called a "sandbox".
Servlets do not share data with each other, and the data that is shared with
the host process is limited to the specific pieces of information that are
needed for the servlet to work. This means that even if one servlet is
compromised, it cannot simply access the data or resources of another servlet:
those are still mediated by the permissions that the other servlet has
been granted. This is a key security feature of mcp.run, and it greatly
mitigates "tool poisoning" attacks.
Explicit Consent

Each servlet is given explicit access to a specific set of resources and
capabilities, defined in its configuration: it can only use what it has
explicitly declared. For example:
- The filesystem servlet can access only the portions of the file system it
has explicitly requested; it cannot read files outside those boundaries, and
it cannot access the Internet.
- The brave-search servlet can access the Brave Search API, but it cannot
access any other web resources.
- The fetch servlet is only able to retrieve the contents of a website.
- The eval-py servlet evaluates Python code, but it can access neither the
Internet nor the file system.
All of the capabilities that a servlet requires must be explicitly granted
upon installation; nothing else is allowed. The installation page
for a servlet displays the full list of capabilities that the servlet will be
granted, and the user must explicitly accept them before the servlet can
be installed; in some cases, the user can restrict them even further.
More Safeguards
It is also very common for an MCP server to read configuration parameters
from a file or from environment variables; this includes sensitive information
such as passwords or API keys. mcp.run servlets are instead configured
through a standardized interface, and credentials are stored securely,
so there is no need to keep sensitive information in configuration
files or environment variables. In fact, servlets cannot access environment
variables in any way.
Moreover, servlets cannot access other sensitive system resources such as
the clipboard; and even when the servlet is granted access to the file
system, it is limited to a specific directory that is defined in the
configuration. This means no servlet has unrestricted access to sensitive files
or directories on the host machine, such as your ~/.cursor/mcp.json or your
SSH keys, without explicit user consent.
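The directory restriction also has to survive path tricks like `..` segments
or absolute paths. A minimal sketch of how a host could confine file access,
assuming `allowed_root` stands in for whatever directory the user approved at
install time (the function name is hypothetical):

```python
# Hypothetical sketch: confine a servlet's file access to its granted
# directory, rejecting traversal escapes. Not mcp.run's actual implementation.
from pathlib import Path

def resolve_inside(allowed_root: str, requested: str) -> Path:
    """Resolve `requested` and refuse anything outside `allowed_root`."""
    root = Path(allowed_root).resolve()
    target = (root / requested).resolve()
    # After resolution, the target must be the root itself or live under it;
    # "../.." segments and absolute paths would fail this check.
    if root != target and root not in target.parents:
        raise PermissionError(f"{requested!r} escapes {allowed_root!r}")
    return target
```

Resolving before checking is the important design choice: comparing raw
strings would let `"../../etc/passwd"` slip through.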
API Integration: Servlets as a Security Boundary
One reason for MCP's success is that the protocol is simple at its core,
making it easy to write your own MCP server.
We believe that in the future, many organizations will want to implement their own MCP servers:
- you will have to write your own server when you want to plug in your own
APIs;
- you might want to orchestrate complex API flows and avoid filling the
context window with irrelevant data;
- but most importantly, and yet often overlooked, you will need to
write your own MCP server if you want to retain full control over the
content surface that will "leak" into the token stream.
Because of how LLMs currently work, tool calls require giving your AI service
provider unencrypted access to sensitive data. Even when tool calls are
performed locally, unless you run LLMs on-premises, a third-party service is
given access to all exchanged data.
While writing an MCP server is relatively easy, creating an mcp.run servlet
is even easier. Servlets run in a shared but secure host environment that
implements the protocol and maintains a stable connection. You only need to
implement the logic of your tools, and the host handles the remaining
details.
Writing a servlet is easy and fun; it lets you retain control over your
data, and it also brings performance benefits: controlling the amount of
data that is returned into the token stream ensures that the AI
service is not overwhelmed and can focus on relevant information.
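As a sketch of that idea, a servlet's tool handler might trim its output
before it ever enters the token stream. The handler shape and helper names
below are assumptions for illustration, not mcp.run's actual interface:

```python
# Hypothetical sketch of a servlet tool that trims its result before it
# reaches the model's context window. Names are illustrative, not mcp.run APIs.

def trim_for_context(text: str, max_chars: int = 2000) -> str:
    """Return at most roughly `max_chars` characters, cut at a word boundary."""
    if len(text) <= max_chars:
        return text
    cut = text[:max_chars]
    # Avoid splitting a word in half; mark the truncation explicitly.
    return cut.rsplit(" ", 1)[0] + " …"

def handle_tool_call(name: str, arguments: dict) -> dict:
    # The host process implements the protocol and dispatches tool calls
    # here; the servlet only implements the logic.
    if name == "summarize-page":
        return {"content": trim_for_context(arguments["raw_text"])}
    raise ValueError(f"unknown tool: {name}")
```

The servlet decides what the model gets to see, instead of dumping a raw API
response into the conversation.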
You can write your servlets in multiple programming languages: TypeScript,
Python, Go, Rust, C++, Zig, and you can even bring your own.
WebAssembly: A Portable Sandbox
"A sandboxed environment where you can run custom code, I know this: this is a
Container!" you might think. But it is not! mcp.run servlets run on a
WebAssembly runtime; in other words, they are running in a lightweight,
portable virtual machine with efficient sandboxing.
This also means that they do not need to run on your local machine. You can
fetch them from your other devices, such as phones or tablets, and run them
even there, processing data locally and only sending the results to the AI
service. You could even run them in a browser, without the need to install
any software.
Finally, you can offload execution to our servers to keep local processing
lightweight; but, if you so choose, you can also run them on premises and
process data in a secure environment you trust.
Speaking of secure environments you can trust, have you checked out
mcp.run Tasks?
Conclusion
We look forward to seeing what you will build with mcp.run! If you want to
learn more about mcp.run and how it can help bring practical, secure, and
customized AI automation to your organization, get in touch!