Looking for secure MCP controls to connect AI and your work apps? Check out
Turbo MCP, our self-hosted enterprise tool management solution:
https://dylibso.ai/#products
Yesterday, Microsoft CEO Satya Nadella announced a major reorganization focused
on AI platforms and tools, signaling the next phase of the AI revolution.
Reading between the lines of Microsoft's announcement and comparing it to the
emerging universal tools ecosystem, there are fascinating parallels that
highlight why standardized, portable AI tools are critical for enterprise
success.
Microsoft's reorganization announcement highlights the massive transformation
happening in enterprise software. The success of this transformation will depend
on having the right tools and platforms to implement these grand visions.
Universal tools provide the practical foundation needed to safely adapt AI
capabilities across different contexts.
As we enter what Nadella calls "the next innings of this AI platform shift," the
role of universal tools becomes increasingly critical. They provide the
standardized, secure, and portable layer needed to implement ambitious AI
platform visions across different environments and use cases.
For enterprises looking to succeed in this AI transformation, investing in
universal tools and standardized approaches isn't just good practice; it's
becoming essential.
We're working with companies and agencies looking to enrich AI applications
with tools. If you're considering how agents fit into your infrastructure or
business operations, don't hesitate to reach out!
Compile once, run anywhere? You bet! After our
mcp.run OpenAI integration and some
teasing, we're excited to
launch mcpx4j, our client library for the JVM ecosystem.
Built on the new Extism Chicory SDK, mcpx4j is a
lightweight library that leverages the
pure-Java Chicory Wasm runtime. Its simple design allows for seamless
integration with diverse AI frameworks across the mature JVM ecosystem.
To demonstrate this flexibility, we've prepared examples using popular
frameworks:
Spring AI
brings extensive model support; our examples focus on OpenAI and
Ollama modules, but the framework makes it easy to plug in a
model of your choice. Get started with our
complete tutorial.
LangChain4j
offers a wide range of model integrations. We showcase implementations with
OpenAI and Ollama, but you can easily adapt them to
work with your preferred model. Check out our
step-by-step guide to learn more.
One More Thing. mcpx4j doesn't just cross framework boundaries - it
crosses platforms too! Following our earlier
Android experiments, we're now
sharing our Android example with Gemini integration, along
with a complete step-by-step tutorial.
Although MCP is an open standard, it has so far remained primarily in the
domain of Anthropic products. But what about OpenAI? Do we need to wait for
them to add support? How can we connect our tools to o3 when it releases this
month?
Thanks to the simplicity and portability of mcp.run servlets, you don't need to
wait. Today we're announcing the availability of our
initial OpenAI support for
mcp.run.
We're starting with support for
OpenAI's node library, but we have more
coming right around the corner.
We hope you're having a great time with friends and family during these
holidays!
As previously discussed, WebAssembly is the foundation of this
technology. Every servlet you install on the mcpx server is powered by
a Wasm binary: mcpx fetches these binaries and executes commands at
the request of your preferred MCP Client.
This Wasm core is what enables mcpx to run on all major platforms from day
one. However, while mcpx is currently the primary consumer of the
mcp.run service, it's designed to be part of a much broader ecosystem.
In fact, while holiday celebrations were in full swing, we've been busy
developing something exciting!
Recently, we demonstrated how to integrate mcp.run's Wasm tools into a Java host
application. In the following examples, you can see mcp.run tools in action,
using the Google Maps API for directions:
You can now fetch any mcp.run tool with its configuration and connect it to
models supported by Spring AI (see the demos)
Similarly, you can connect any mcp.run tool to models supported by
LangChain4j, including Jlama integration (see the demos)
This goes beyond just connecting to a local mcpx instance (which works
seamlessly). Thanks to Chicory, we're running the Wasm binaries
directly within our applications!
With this capability to run MCP servlet tools via mcp.run locally in
our Java applications, we tackled an exciting challenge...
While external service calls are often necessary (like our demo's use of the
Google Maps API), AI is becoming increasingly personal and embedded in our daily
lives. As AI and agents migrate to our personal devices, the traditional model
of routing everything through internet services becomes less ideal. Consider
these scenarios:
Your banking app shouldn't need to send statements to a remote finance agent
Your health app shouldn't transmit personal records to external telehealth
agents
Personal data should remain personal
As local AI capabilities expand, we'll see more AI systems operating entirely
on-device, and their supporting tools must follow suit.
While this implementation is still in its early stages, it already demonstrates
impressive capabilities. The Wasm binary servlet runs seamlessly on-device, is
fully sandboxed (only granted access to Google Maps API), and executes quickly.
We're working to refine the experience and will share more developments soon.
We're excited to see what you will create with these tools! If you're
interested in exploring these early demos, please reach out!
"The notion that business applications exist, that's probably where they'll all collapse in the agent era."β
If you haven't seen this interview yet, it's well worth watching. Bill and Brad
have a knack for bringing out insightful perspectives from their guests, and
this episode is no exception.
Satya refers to how most SaaS products are fundamentally composed of two
elements: "business logic" and "data storage". To vastly oversimplify, most SaaS
architectures look like this:
Satya proposes that the upcoming wave of agents will not only eliminate the UI
(designed for human use, click-ops style), but will also move the "CRUD"
(Create - Read - Update - Delete) logic entirely to the LLM
layer. This shifts the paradigm to agents communicating directly with databases
or data services.
As someone who has built many systems that could be reduced to "SaaS", I believe
we still have significant runway where the CRUD layer remains separate from the
LLM layer. For instance, getting LLMs to reliably handle user authentication or
consistently execute precise domain-specific workflows requires substantial
context and specialization. While this logic will persist, certain layers will
inevitably collapse.
Satya's key insight is that many SaaS applications won't require human users for
the majority of operations. This raises a crucial question: in a system without
human users, what form does the user interface take?
However, this transition isn't automatic. We need a translation and management
layer between the API/DB and the agent using the software. While REST APIs and
GraphQL exist for agents to use, MCP addresses how these APIs are used. It
also manages how local code and libraries are accessed (e.g., math calculations,
data validation, regular expressions, and any code not called over the network).
MCP defines how intelligent machines interact with the CRUD layer. This approach
preserves deterministic code execution rather than relying on probabilistic
generative outputs for business logic.
If you're running a SaaS company and haven't considered how agent-based usage
will disrupt the next 1-3 years, here's where to start:
Reimagine your product as purely an API. What does this look like? Can
every operation be performed programmatically?
Optimize for machine comprehension and access. MCP translates your SaaS
app operations into standardized, machine-readable instructions.
Publishing a Servlet offers the most straightforward
path to achieve this.
Plan for exponential usage increases. Machines will interact with your
product orders of magnitude faster than human users. How will your
infrastructure handle this scale?
While the exact timeline remains uncertain, these preparations will position you
for the inevitable shift in how SaaS (and software generally) is consumed in an
agent-driven world. The challenge of scaling and designing effective
machine-to-machine interfaces is exciting and will force us to think
differently.
There's significant advantage in preparing early for agent-based consumers. Just
as presence in the Apple App Store during the late 2000s provided an adoption
boost, we're approaching another such opportunity.
In a world where we're not delivering human-operated UIs, and APIs aren't solely
for programmer integration, what are we delivering?
If today's UI is designed for humans, and tomorrow's UI becomes the agent, the
architecture evolves to this:
MCP provides the essential translation layer for cross-system software
integration. The protocol standardizes how AI-enabled applications or agents
interact with any software system.
Your SaaS remains viable as long as it can interface with an MCP Client.
Implementation requires developing your application as an MCP Server. While
several approaches exist,
developing and publishing an MCP Servlet
offers the most efficient, secure, and portable solution.
As an MCP Server, you can respond to client queries that guide the use of your
software. For instance, agents utilize "function calling" or "tool use" to
interact with external code or APIs. MCP defines the client-server messages that
list available tools. This tool list enables clients to make specific calls with
well-defined input parameters derived from context or user prompts.
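To make that flow concrete, here is a sketch of the tool-listing and
tool-calling exchange, modeled as plain objects. The JSON-RPC framing and the
`tools/list` / `tools/call` method names come from the MCP specification; the
`get_weather` tool itself is a made-up example:

```typescript
// Sketch of an MCP tool exchange (JSON-RPC 2.0). The "get_weather" tool
// is illustrative, not part of any real server.
const listRequest = { jsonrpc: "2.0", id: 1, method: "tools/list" };

// The server answers with the tools it offers, each with a JSON Schema
// describing its input parameters.
const listResponse = {
  jsonrpc: "2.0",
  id: 1,
  result: {
    tools: [
      {
        name: "get_weather",
        description: "Fetch the current weather for a city",
        inputSchema: {
          type: "object",
          properties: { city: { type: "string" } },
          required: ["city"],
        },
      },
    ],
  },
};

// With the schema in hand, a client can build a well-formed call for any
// advertised tool, filling arguments from context or the user's prompt.
const call = {
  jsonrpc: "2.0",
  id: 2,
  method: "tools/call",
  params: { name: "get_weather", arguments: { city: "Berlin" } },
};
```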
A Tool follows this structure:
```typescript
{
  name: string;          // Unique identifier for the tool
  description?: string;  // Human-readable description
  inputSchema: {         // JSON Schema for the tool's parameters
    type: "object",
    properties: { ... }  // Tool-specific parameters
  }
}
```
For example, a Tool enabling agents to create GitHub issues might look like
this:
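The original post's figure for this example isn't reproduced here; the
following is a hedged reconstruction that follows the Tool structure above,
using only the field names mentioned in the surrounding text (`title`, `body`,
`labels`). The exact schema in the original may differ:

```typescript
// Hypothetical definition of the github_create_issue tool; the field
// details are assumptions based on the surrounding text.
const githubCreateIssue = {
  name: "github_create_issue",
  description: "Create a new issue in a GitHub repository",
  inputSchema: {
    type: "object",
    properties: {
      title: { type: "string", description: "Issue title" },
      body: { type: "string", description: "Issue body text" },
      labels: {
        type: "array",
        items: { type: "string" },
        description: "Labels to apply to the issue",
      },
    },
    required: ["title"],
  },
};
```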
With this specification, AI-enabled applications or agents can programmatically
construct tool-calling messages with the required title, body, and labels
for the github_create_issue tool, submitting requests to the MCP
Server-implemented GitHub interface.
Hundreds of applications and systems are already implementing MCP delivery,
showing promising adoption. While we have far to go, Satya isn't describing a
distant future; this transformation is happening now.
Just as we "dockerized" applications for cloud migration, implementing MCP will
preserve SaaS through the App-ocalypse.
The sooner, the better.
If you're interested in MCP and want to learn more about bringing your software
to agent-based usage, please reach out. Alternatively, start now by implementing
access to your SaaS/library/executable through publishing to
mcp.run.
TL;DR
Announcing mcpx, the extensible MCP Server, and mcp.run, its "app store" and registry for servlets. Search, install, and manage secure, portable tools for AI, wherever it goes: desktop, mobile, edge, server, and beyond.
A few weeks ago, Anthropic
announced the Model
Context Protocol (MCP). They describe it as:
[...] a new standard for connecting AI assistants to the systems where data
lives, including content repositories, business tools, and development
environments.
While this is an accurate depiction of its utility, I feel that it significantly
undersells what is yet to come from MCP and its implementers.
In my view, what Docker (containers) did to the world of cloud computing, MCP
will do to the world of AI-enabled systems.
Both Docker and MCP give machines a standard way to encapsulate code, along
with instructions for how to run it. Where the two clearly diverge (aside from
one being a packaging technology and the other a protocol) is that AI
applications are already finding their way into many more environments than
those where containers are the optimal software package.
AI deployment diversity has already surpassed that of the cloud. MCP gives us a
way to deliver and integrate our software with AI systems everywhere!
AI applications, agents, and everything in-between need deterministic
execution in order to achieve enriched capabilities beyond probabilistic outputs
from today's models. A programmer can empower a model with deterministic
execution by creating tools and supplying them to the model.
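As a minimal illustration of that idea (the names and shapes here are our own,
not part of any SDK): a tool is just deterministic code behind a
well-described interface. The model only selects a tool and its arguments; the
host executes the code and returns an exact result:

```typescript
// A registry of deterministic tools. The model never computes the answer
// itself; it only emits a tool name and arguments for the host to run.
const tools: Record<string, (args: Record<string, number>) => number> = {
  add: ({ a, b }) => a + b,
  multiply: ({ a, b }) => a * b,
};

// The host dispatches the model's tool call and returns an exact result,
// rather than relying on a probabilistic generation of the answer.
function dispatch(name: string, args: Record<string, number>): number {
  const fn = tools[name];
  if (!fn) throw new Error(`unknown tool: ${name}`);
  return fn(args);
}

// e.g. a model emitting { name: "multiply", arguments: { a: 6, b: 7 } }
console.log(dispatch("multiply", { a: 6, b: 7 })); // prints 42
```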
If this concept is unfamiliar, please refer to Anthropic's overview in this guide.
So, part of what we're announcing today is the concept of a portable Wasm
servlet, an executable code artifact that is dynamically & securely installed
into an MCP Server. These Wasm servlets intercept MCP Server calls and enable
you to pack tons of new tools into a single MCP Server.
More on this below, but briefly: Wasm servlets are loaded into our extensible
MCP Server: mcpx, and are managed by the
corresponding registry and control plane: mcp.run.
If
any of the predictions prove true
about the impact of AI on software, then it is reasonable to expect a multitude
of software to implement at least one side of this new protocol.
Since the announcement, developers around the world have created and implemented
MCP servers and clients at an astonishing pace; from simple calculators and web
scrapers, to full integrations for platforms like
Cloudflare and
Browserbase, and to
data sources like
Obsidian and
Notion.
Anthropic certainly had its flagship product, Claude Desktop, in mind as an MCP
Client implementation: a beneficiary of these new capabilities connecting it to
the outside world. But opening up the protocol, beyond serving their own
interest, has paved the way for many other MCP Client implementations to
leverage the same Server implementations and share these incredible new
capabilities.
So, whether you use Claude Desktop, Sourcegraph Cody, Continue.dev, or any other
AI application implementing MCP, you can install an MCP Server and start working
with these MCP tools from the comfort of a chat, IDE, etc.
Want to manage your Cloudflare Workers and Databases?
β Install the Cloudflare MCP Server.
Want to automate browsing the web from Claude?
β Install the Browserbase MCP Server.
Want to use your latest notes from your meetings and summarize a follow-up
email?
β Install the Obsidian MCP Server.
Exciting as this is, there is a bit of a low ceiling to hit when every new tool
is an additional full-fledged MCP Server to download and spawn.
Since every one of these MCP Servers is a standalone executable with complete
system access on your precious local machine, security and resource management
alarms should be sounding very loudly.
As the sprawl continues, every bit of code, every library, app, database and API
will have an MCP Server implementation. Executables may work when n is small:
you can keep track of what you've installed, update them, observe them, and
review their code. But when n grows to 10, 50, 100, 1000, what happens then?
The appetite for these tools is only going to increase, and at some point soon,
things are going to get messy.
Today we're excited to share two new pieces of this MCP puzzle: mcpx (the
extensible, dynamically updatable MCP Server) and
mcp.run (a corresponding control plane and registry)
for MCP-compatible "servlets" (executable code that can be loaded into mcpx).
Together, these provide a secure, portable means of tool use, leveraging MCP
to remain open and broadly accessible. If you build your MCP Server as a
"servlet" on mcp.run, it will be usable in the most
contexts possible.
All mcpx servlets are actually WebAssembly modules under the hood. This means
that they can run on any platform, on any operating system, processor, web
browser, or device. How long will it be until the first MCP Client application
is running on a mobile phone? At that point your native MCP Server
implementation becomes far less useful.
Can we call these tools via HTTP APIs? Yes, the protocol already specifies a
transport to call MCP Servers over a network. But it's not implemented in Claude
Desktop or any MCP Client I've come across. For now, you will likely be using
the local transport, where both the MCP Client and Server are on your own
device.
Installed servlets are ready to be called over the network. Soon you'll be
able to call any servlet using that transport in addition to downloading &
executing it locally.
One major differentiator and benefit of choosing mcp.run
and packaging your MCP servers as mcpx servlets is portability. AI
applications are going to live in every corner of the world, in all the systems
we interact with today. The tools these apps make calls to must be able to run
wherever they are needed - in many cases, fully local to the model or other core
AI application.
If you're working on an MCP Server, ask yourself: can your current
implementation easily run inside a database? In a browser? In a web app? In a
Cloudflare Worker? On an IoT device? On a mobile phone? mcp.run
servlets can!
We're not far from seeing models and AI applications run in all those places
too.
By publishing MCP servlets, you are future-proofing your work and ensuring
that wherever AI goes, your tools can too.
To solve the sprawling MCP Server problem, mcpx is instead a dynamic,
re-programmable server. You install it once, and then via
mcp.run you can install new tools without ever touching
the client configuration again.
We're calling these tools/prompts/resources (as defined by the protocol)
"servlets"; they are managed and executed by mcpx. Any servlet installed to
mcpx is immediately available to any MCP Client, and can even be
discovered dynamically at runtime by an MCP Client.
You can think of mcpx kind of like npm or pip, and
mcp.run as the registry and control plane to manage your
servlets.
We all like to share, right? To share servlets, we need a place to keep them.
mcp.run is a publishing destination for your MCP
servlets. Today, your servlets are public, but soon we will have the ability to
selectively assign access or keep them private at your discretion.
As mentioned above, currently servlets are installed locally and executed by a
client on your machine. In the future, we plan on enabling servlets to run in
more environments, such as expanding mcp.run to act as a
serverless environment to remotely execute your tools and return the results
over HTTP.
You may even be able to call them yourself, outside the context of an MCP Client
as a webhook or general HTTP endpoint!
Last week, mcpx and mcp.run won Anthropic's MCP
Hackathon in San Francisco! This signaled to our team that we should add the
remaining polish and stability to take these components into production and
share them with you.
Today, we're inviting everyone to join us. So, please head to the
Quickstart page for instructions on how to install mcpx and start
installing and publishing servlets!
Here are a few things you can try:
get Claude to search for a tool in the registry (it will realize it needs
new tools on its own!)
install and configure a tool on mcp.run, then call
it from Claude (no new MCP Server needed)
publish a servlet, compiling your library or app
to WebAssembly (reach out if you need help!)
As things are still very early, we expect you to hit rough edges here and there.
Things are pretty streamlined, but please reach out if you run into
anything too weird. Your feedback (good and bad) is welcomed and appreciated.
We're very excited about how MCP is going to impact software integration, and
want to make it as widely adopted and supported as possible -- if you're
interested in implementing MCP and need help, please reach out.