Programmatic Access

Use mcp.run's OpenAI-compatible API in your code.

Endpoint

https://mcp.run/api/v1/openai/{scope}/{profile}
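
Replace {scope} and {profile} with your own scope and profile names. As a minimal sketch, the base URL can be assembled like this (the names below are placeholders):

```python
# Sketch: build the base URL from a scope and profile,
# following the endpoint pattern shown above.
def base_url(scope: str, profile: str) -> str:
    return f"https://mcp.run/api/v1/openai/{scope}/{profile}"

print(base_url("your-scope", "your-profile"))
```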

Authentication

Authorization: Bearer YOUR_API_TOKEN

cURL

# Chat completion
curl -X POST "https://mcp.run/api/v1/openai/your-scope/your-profile/chat/completions" \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer YOUR_API_TOKEN" \
  -d '{
    "model": "your-model",
    "messages": [{"role": "user", "content": "Hello"}]
  }'

# List models
curl "https://mcp.run/api/v1/openai/your-scope/your-profile/models" \
  -H "Authorization: Bearer YOUR_API_TOKEN"

Python

pip install openai

import openai

client = openai.OpenAI(
    api_key="YOUR_API_TOKEN",
    base_url="https://mcp.run/api/v1/openai/your-scope/your-profile"
)

response = client.chat.completions.create(
    model="your-model",
    messages=[{"role": "user", "content": "Hello"}]
)

print(response.choices[0].message.content)

Streaming

stream = client.chat.completions.create(
    model="your-model",
    messages=[{"role": "user", "content": "Tell me a story"}],
    stream=True
)

for chunk in stream:
    if chunk.choices[0].delta.content:
        print(chunk.choices[0].delta.content, end='')

JavaScript/TypeScript

npm install openai

import OpenAI from 'openai';

const client = new OpenAI({
  apiKey: 'YOUR_API_TOKEN',
  baseURL: 'https://mcp.run/api/v1/openai/your-scope/your-profile'
});

const response = await client.chat.completions.create({
  model: 'your-model',
  messages: [{ role: 'user', content: 'Hello' }]
});

console.log(response.choices[0].message.content);

Streaming

const stream = await client.chat.completions.create({
  model: 'your-model',
  messages: [{ role: 'user', content: 'Tell me a story' }],
  stream: true
});

for await (const chunk of stream) {
  process.stdout.write(chunk.choices[0]?.delta?.content || '');
}

Parameters

  • model - Model ID (required)
  • messages - Array of chat messages (required)
  • temperature - Sampling temperature, 0-2 (default: 0.7)
  • max_tokens - Maximum number of tokens in the response
  • stream - Set to true to stream the response as chunks
  • tools - Set to "auto" to make your profile's servlets available as tools
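
Putting the parameters together, a request body might look like the following sketch (the values are illustrative, not recommendations):

```python
import json

# Illustrative request body combining the parameters above.
payload = {
    "model": "your-model",
    "messages": [{"role": "user", "content": "Summarize this in one line."}],
    "temperature": 0.2,   # lower = more deterministic output
    "max_tokens": 256,    # cap the response length
    "stream": False,      # set True to receive incremental chunks
    "tools": "auto",      # expose the profile's servlets as tools
}

print(json.dumps(payload, indent=2))
```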

Tool Usage

Your servlets are automatically available:

response = client.chat.completions.create(
    model="your-model",
    messages=[{"role": "user", "content": "What's the weather in NYC?"}],
    tools="auto"
)

# Check if tools were called
if response.choices[0].message.tool_calls:
    print("Tools were used")
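
To see which servlets were actually invoked, you can walk the tool calls on the message. This hypothetical helper assumes the OpenAI-style response shape, where each tool call carries a function name:

```python
# Hypothetical helper (for illustration): list the tool names a
# completion message invoked, assuming the OpenAI-style tool_calls
# shape where each entry has a .function.name attribute.
def called_tools(message) -> list[str]:
    tool_calls = getattr(message, "tool_calls", None) or []
    return [call.function.name for call in tool_calls]
```

In practice you would pass `response.choices[0].message` to it.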

Error Handling

try:
    response = client.chat.completions.create(...)
except openai.AuthenticationError:
    print("Invalid token")
except openai.RateLimitError:
    print("Rate limited")
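
When you hit rate limits, retrying with exponential backoff usually helps. A minimal sketch follows; the stand-in RateLimitError class is defined locally for illustration, and in real code you would catch openai.RateLimitError instead:

```python
import time

class RateLimitError(Exception):
    """Stand-in for openai.RateLimitError, defined here for illustration."""

def with_retries(call, max_attempts=3, base_delay=1.0):
    # Retry the callable on rate limiting, doubling the delay each time.
    for attempt in range(max_attempts):
        try:
            return call()
        except RateLimitError:
            if attempt == max_attempts - 1:
                raise  # out of attempts; surface the error
            time.sleep(base_delay * (2 ** attempt))
```

Usage: wrap the request in a zero-argument callable, e.g. `with_retries(lambda: client.chat.completions.create(...))`.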

Common Issues

  • 401 Unauthorized - invalid or missing API token
  • 404 Not Found - wrong scope or profile name in the URL