Looking for secure MCP controls to connect AI and your work apps? Check out
Turbo MCP, our self-hosted enterprise tool management solution:
https://dylibso.ai/#products
We are thrilled to announce the release of a new version of mcpx4j,
the Java connector and runtime for MCP.run servlets, now with native Android
support!
This new version depends on the latest Extism Chicory SDK and
most importantly, it includes support for a brand new experimental backend of
the Chicory compiler targeting Android.
If you are familiar with the Android platform, you know that Android uses Dalvik
bytecode (DEX) instead of standard Java bytecode. Until now, this meant that
dynamic loading of Wasm binaries on Android was only possible through Chicory's
interpreter. With this new backend, we can now compile Wasm binaries directly to
Dalvik bytecode at runtime, allowing us to run them natively on Android devices!
Follow the instructions in the Android Gemini README to
get started with the new mcpx4j version and the Android backend of the Chicory
compiler. The new backend can also be used with the
Extism Chicory SDK, and even with vanilla Chicory if you want to
experiment with it.
If you want to learn more about how all of this works,
head over to evacchi.dev
where you'll find a detailed write-up on the technical challenges we faced and
how we overcame them.
We are excited to release "The MCP Course You Need" with
Kubesimplify! This is a detailed course that starts from the basics
of the Model Context Protocol and then goes deep into a step-by-step guide on
how to interact with and build your very own MCP servers, with a big focus on
MCP.run!
The course is available for free on YouTube, and it is a great way
to get started with MCP and learn how to write and extend your own agents and
chat assistants. The course is designed for both beginners and experienced
developers, and it covers a wide range of topics, including:
The basics of the Model Context Protocol
How to plug MCP servers into a chat assistant
How to use MCP.run to build and deploy your tasks
How to use MCP.run to build and deploy your own tools
How to build your own MCP servlets on MCP.run
and it includes hands-on examples and exercises to help you learn by doing; we
will see how to interact with a Kubernetes cluster through a chat assistant, how
to build interactive chat bots on Telegram, and more!
This course is a great opportunity to learn about the Model Context Protocol and
how to use it to build powerful and flexible AI applications. We hope you will
enjoy it!
A recent article by Invariant Labs has shown that MCP-based
agent frameworks can be vulnerable to "tool poisoning"
attacks. In a tool-poisoning attack, a malicious server is able to hijack tool
calls. These hijacked calls can read sensitive data or execute arbitrary
commands on the host machine without notifying the user. This is a serious
security concern, as it can lead to unauthorized access to sensitive
information, potential lateral movement within systems, and other nefarious
activities.
Many of the most exciting MCP demos are all about controlling local
applications, such as web browsers, 3D modeling software, or video editors.
These demos show how an AI agent can interact with these applications in a
natural way, using the same tools that a human would use. This is a powerful
capability, but it also raises serious security concerns.
How do we balance the power of MCP with the need for security?
Servlets: Still MCP, Just Lighter and More Secure
At first glance, mcp.run might look like an MCP server marketplace. But
mcp.run is not a marketplace for arbitrary MCP servers. Instead,
it provides a curated set of lightweight "servlets" that are designed to run
in a secure environment.
We call them "servlets" to emphasize they don't run as full-fledged MCP
servers on the user's machine. Instead, they share a host process
that runs them in an isolated environment with limited access to the
host machine's resources. This is usually called a "sandbox".
Servlets do not share data with each other, and the data that is shared with
the host process is limited to the specific pieces of information that are
needed for the servlet to work. This means that even if one servlet is
compromised, it cannot just access the data or the resources of another servlet:
these will still be mediated by the permissions that the other servlet has been
granted. This is a key security feature of mcp.run, and it greatly mitigates
"tool poisoning" attacks.
Each servlet is given explicit access to a specific set of resources and
capabilities, which are defined in their configuration. This means that they
are only given access to resources and capabilities that they have explicitly
declared. For example:
The filesystem servlet can access only the file system portions that it has
explicitly requested; it cannot read files outside the given boundaries; and
it cannot access the Internet.
The brave-search servlet can access the Brave Search API, but it cannot
access any other web resources.
The fetch servlet is only able to retrieve the contents of a web site.
The eval-py servlet evaluates Python code, but it can access neither the
Internet nor the file system.
All of the capabilities that a servlet requires must be explicitly granted
upon installation; anything that has not been granted is denied. The
installation page for a servlet displays the full list of capabilities that
the servlet will be granted, and the user must explicitly accept them before
the servlet can be installed; in some cases, they can be restricted even
further if the user so decides.
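The deny-by-default grant model described above can be sketched in a few lines of plain Java. This is an illustration only, with hypothetical capability names, not mcp.run's actual implementation:

```java
import java.util.Set;

// Sketch of a deny-by-default capability model: a servlet declares the
// capabilities it needs, the user grants a set at install time, and every
// runtime access is checked against that granted set.
public class CapabilityGate {
    private final Set<String> granted;

    public CapabilityGate(Set<String> granted) {
        this.granted = Set.copyOf(granted);
    }

    /** Returns true only if the capability was explicitly granted. */
    public boolean isAllowed(String capability) {
        return granted.contains(capability);
    }

    public static void main(String[] args) {
        // Hypothetical capability names, for illustration only.
        var gate = new CapabilityGate(Set.of("net:api.search.brave.com"));
        System.out.println(gate.isAllowed("net:api.search.brave.com")); // granted at install time
        System.out.println(gate.isAllowed("fs:read"));                  // never granted: denied
    }
}
```

Anything the servlet did not declare, and the user did not accept, simply does not exist from the servlet's point of view.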
It is also very common for an MCP server to read configuration parameters
from a file or from environment variables; this includes sensitive information
such as passwords or API keys. But mcp.run servlets are instead configured
through a standardized interface, and the credentials are stored in a secure
way. Thus, there is no need to store sensitive information in configuration
files or environment variables. In fact, servlets cannot access environment
variables in any way.
Moreover, servlets cannot access other sensitive system resources such as
the clipboard; and even when the servlet is granted access to the file
system, it is limited to a specific directory that is defined in the
configuration. This means no servlet has unrestricted access to sensitive files
or directories on the host machine, such as your ~/.cursor/mcp.json or your
SSH keys, without explicit user consent.
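The directory confinement described above boils down to a canonical-path check. Here is a minimal, hypothetical sketch of the idea (not mcp.run's actual code): every servlet-supplied path is resolved against the granted root, and anything that escapes it, via ".." segments or an absolute path, is rejected.

```java
import java.nio.file.Path;

// Sketch of file-system confinement: a servlet granted access to one root
// directory must not be able to reach files outside it.
public class SandboxedFs {
    private final Path root;

    public SandboxedFs(Path root) {
        this.root = root.toAbsolutePath().normalize();
    }

    /** Resolves a servlet-supplied path, rejecting anything outside the root. */
    public Path resolve(String relative) {
        // normalize() collapses ".." segments before the containment check.
        Path candidate = root.resolve(relative).normalize();
        if (!candidate.startsWith(root)) {
            throw new SecurityException("path escapes sandbox: " + relative);
        }
        return candidate;
    }
}
```

With a root of `/srv/data`, `notes/todo.txt` resolves normally, while `../../etc/passwd` normalizes to `/etc/passwd` and is rejected.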
API Integration: Servlets as a Security Boundary
There are several reasons to write your own MCP server:
you will have to write your own server when you want to plug in your own
APIs;
you might want to orchestrate elaborate API flows and avoid filling the
context window with irrelevant data;
but, most importantly, and yet often overlooked, you will need to
write your own MCP server if you want to retain full control over the
content surface that will "leak" into the token stream.
Because of how LLMs currently work, tool calls require giving your AI service
provider unencrypted access to sensitive data. Even when tool calls are
performed locally, unless you run LLMs on-premises, a third-party service is
given access to all exchanged data.
While writing an MCP server is relatively easy, creating an mcp.run servlet
is even easier. Servlets run in a shared but secure host environment that
implements the protocol and maintains a stable connection. You only need to
implement the logic of your tools, and the host handles the remaining
details.
Writing a servlet is easy and fun; it allows you to retain control over your
data, and it also brings performance benefits: controlling the amount of
data that is returned into the token stream helps ensure that the AI
service is not overwhelmed and can focus on relevant information.
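That last point can be made concrete with a tiny helper (ours, purely hypothetical, not an mcp.run API) that caps how much text a tool call may return into the token stream:

```java
// Sketch of output clamping: trimming a tool result before it enters the
// token stream keeps the context window small and the model focused.
public class ToolOutput {
    /** Truncates a tool result to at most maxChars, marking the cut. */
    public static String clamp(String result, int maxChars) {
        if (result.length() <= maxChars) {
            return result;
        }
        return result.substring(0, maxChars) + "\n[truncated]";
    }
}
```

In a real servlet you would likely do smarter filtering than a character cap, such as returning only the fields the model actually needs, but the principle is the same: decide what enters the token stream, rather than forwarding raw API responses.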
You can write your servlets in multiple programming languages: TypeScript,
Python, Go, Rust, C++, Zig, and you can even bring your own.
"A sandboxed environment where you can run custom code, I know this: this is a
Container!" you might think. But it is not! mcp.run servlets run on a
WebAssembly runtime; in other words, they are running in a lightweight,
portable virtual machine with efficient sandboxing.
This also means that they do not need to run on your local machine. You can
fetch them from your other devices, such as phones or tablets, and run them
even there, processing data locally and only sending the results to the AI
service. You could even run them in a browser, without the need to install
any software.
Finally, you can offload execution to our servers for lighter-weight
processing; but, if you choose so, you can also run them on premises and
process data in a secure environment you trust.
Speaking of secure environments you can trust, have you checked out
mcp.run Tasks?
We look forward to seeing what you will build with mcp.run! If you want to learn
more about mcp.run and how it can help bring practical, secure, and customized
AI automation to your organization, get in touch!
Compile once, run anywhere? You bet! After our
mcp.run OpenAI integration and some
teasing, we're excited to
launch mcpx4j, our client library for the JVM ecosystem.
Built on the new Extism Chicory SDK, mcpx4j is a
lightweight library that leverages the
pure-Java Chicory Wasm runtime. Its simple design allows for seamless
integration with diverse AI frameworks across the mature JVM ecosystem.
To demonstrate this flexibility, we've prepared examples using popular
frameworks:
Spring AI
brings extensive model support; our examples focus on OpenAI and
Ollama modules, but the framework makes it easy to plug in a
model of your choice. Get started with our
complete tutorial.
LangChain4j
offers a wide range of model integrations. We showcase implementations with
OpenAI and Ollama, but you can easily adapt them to
work with your preferred model. Check out our
step-by-step guide to learn more.
One More Thing. mcpx4j doesn't just cross framework boundaries - it
crosses platforms too! Following our earlier
Android experiments, we're now
sharing our Android example with Gemini integration, along
with a complete step-by-step tutorial.
We hope you're having a great time with friends and family during these
holidays!
As previously discussed, WebAssembly is the foundation of this
technology. Every servlet you install on the mcpx server is powered by
a Wasm binary: mcpx fetches these binaries and executes commands at
the request of your preferred MCP Client.
This Wasm core is what enables mcpx to run on all major platforms from day
one. However, while mcpx is currently the primary consumer of the
mcp.run service, it's designed to be part of a much broader ecosystem.
In fact, while holiday celebrations were in full swing, we've been busy
developing something exciting!
Recently, we demonstrated how to integrate mcp.run's Wasm tools into a Java host
application. In the following examples, you can see mcp.run tools in action,
using the Google Maps API for directions:
You can now fetch any mcp.run tool with its configuration and connect it to
models supported by Spring AI (see the demos).
Similarly, you can connect any mcp.run tool to models supported by
LangChain4j, including Jlama integration (see the demos).
This goes beyond just connecting to a local mcpx instance (which works
seamlessly). Thanks to Chicory, we're running the Wasm binaries
directly within our applications!
With this capability to run MCP servlet tools via mcp.run locally in
our Java applications, we tackled an exciting challenge...
While external service calls are often necessary (like our demo's use of the
Google Maps API), AI is becoming increasingly personal and embedded in our daily
lives. As AI and agents migrate to our personal devices, the traditional model
of routing everything through internet services becomes less ideal. Consider
these scenarios:
Your banking app shouldn't need to send statements to a remote finance agent
Your health app shouldn't transmit personal records to external telehealth
agents
Personal data should remain personal
As local AI capabilities expand, we'll see more AI systems operating entirely
on-device, and their supporting tools must follow suit.
While this implementation is still in its early stages, it already demonstrates
impressive capabilities. The Wasm binary servlet runs seamlessly on-device, is
fully sandboxed (only granted access to Google Maps API), and executes quickly.
We're working to refine the experience and will share more developments soon.
We're excited to see what you will create with these tools! If you're
interested in exploring these early demos, please reach out!