Building OpenAI Agents with mcp.run tools via SSE
The OpenAI Agents SDK is a Python framework for building systems that intelligently accomplish tasks. In this tutorial, we will build an agent that researches, writes, and publishes a draft of a blog post as a GitHub Gist. This is a simple example, but it could be extended in all kinds of ways using additional tools.
Prerequisites
Before starting, ensure you have the following:
- uv installed on your system
- An OpenAI Account with API access
- An OpenAI API Key
- A GitHub Account for mcp.run authentication and for use with the GitHub servlet
Required Tools
- dylibso/fetch
- dylibso/github: GitHub will be used to upload a Gist when the blog post is complete.
Install the servlets by:
- Visiting each URL
- Clicking the `Install` button
- Verifying they appear in your install profile
Generating the SSE URL
You will need an mcp.run account. When logged in, visit the settings page to generate an SSE URL.
Locate the button that says "Generate 'myusername/default' SSE" and click it, then copy the generated SSE URL.
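The agent script later in this tutorial reads the URL from an environment variable named `MCP_RUN_SSE_URL` (that name is just this tutorial's convention). Export it in your shell before running the agent:

```shell
# Replace the placeholder with the SSE URL you copied from mcp.run
export MCP_RUN_SSE_URL="<your generated SSE URL>"
```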
Creating an OpenAI Agent
Since the OpenAI Agents Python SDK supports SSE directly, you can plug your generated URL into the Agent and have all the tools installed to your profile immediately available to the LLM.
The Python code below uses `agents.mcp.MCPServerSse` to connect to your mcp.run SSE server, which is then passed to the `agents.Agent` when it is initialized:
```python
# /// script
# requires-python = ">=3.12"
# dependencies = [
#     "openai-agents",
# ]
# ///
import asyncio
import os

from agents import Agent, Runner
from agents.mcp import MCPServerSse

# SSE URL generated in the step above
DEFAULT_SSE_URL = os.environ["MCP_RUN_SSE_URL"]


# Create an MCP server connection for the given SSE URL
def connect_sse(url: str, name: str):
    return MCPServerSse(params={"url": url}, name=name)


async def agent(prompt: str):
    print("Connecting to mcp.run SSE server")
    async with connect_sse(DEFAULT_SSE_URL, "mcp.run") as mcp_server:
        print("Initializing agent")
        agent = Agent(
            name="Assistant",
            instructions="Use tools to accomplish the task at hand.",
            mcp_servers=[mcp_server],
        )
        print(f"Running: {prompt}")
        result = await Runner.run(starting_agent=agent, input=prompt)
        return result.final_output
```
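As written, the script raises a bare `KeyError` if `MCP_RUN_SSE_URL` is not set. An optional, standard-library-only helper (the function name `require_sse_url` is our own, not part of the SDK) exits with a clearer hint instead:

```python
import os
import sys


def require_sse_url() -> str:
    """Return the mcp.run SSE URL from the environment, or exit with a hint."""
    url = os.environ.get("MCP_RUN_SSE_URL")
    if not url:
        sys.exit(
            "MCP_RUN_SSE_URL is not set. Generate an SSE URL on your "
            "mcp.run settings page and export it before running the agent."
        )
    return url
```

You could then assign `DEFAULT_SSE_URL = require_sse_url()` instead of indexing `os.environ` directly.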
Running the Agent
Once the agent code is set up, you can run it with the prompt of your choice. In this case we will generate a blog post about MCP and save it to GitHub:
```python
async def run():
    prompt = """
    Research and write a blog post about model context protocol.
    - Use https://modelcontextprotocol.io/introduction as a starting point.
    - Be curious, feel free to follow a few relevant links
    - Make sure to cite sources.
    - Edit the post to ensure it is high quality and interesting.
    When you're done create a gist with the edited blog post
    """

    # Run the agent
    result = await agent(prompt)

    # Print the result
    print(result)


if __name__ == "__main__":
    asyncio.run(run())
```
If this code is saved to a file `agent.py`, it can be executed using `uv`:

```shell
uv run ./agent.py
```
This example could easily be expanded to open a PR directly to your blog's repository, post to Notion or Slack, or use Perplexity's Sonar API to do some additional research. Once you have your client of choice configured to use mcp.run, adding a new tool is just a matter of clicking `Install` on the servlet page.
Support
If you get stuck and need some help, please reach out! Visit our support page to learn how best to get in touch.