RapidDev - Software Development Agency

How to use MCP with OpenAI Agents SDK


What you'll learn

  • How to connect MCP servers to OpenAI Agents SDK
  • How to use MCPServerStdio for local servers and MCPServerStreamableHttp for remote ones
  • How MCP tools are converted to OpenAI function calling format
  • How to build an agent that uses MCP tools alongside native OpenAI tools
Intermediate · 7 min read · 15 min · OpenAI Agents SDK (Python), any MCP server · March 2026 · RapidDev Engineering Team
TL;DR

The OpenAI Agents SDK added MCP support, letting you connect any MCP server as a tool provider for OpenAI-powered agents. Use the MCPServerStdio or MCPServerStreamableHttp class to connect to servers, then pass them to your Agent definition. The SDK converts MCP tools into OpenAI function calling format automatically. This lets you use the same MCP servers across OpenAI agents, Claude Desktop, Cursor, and other hosts.

Connect MCP Servers to OpenAI Agents

OpenAI's Agents SDK includes built-in MCP client support, bridging the gap between the MCP ecosystem and OpenAI models. You can connect any MCP server to an OpenAI agent, and the SDK automatically converts MCP tools into OpenAI's function calling format. This means the same MCP server you use with Claude Desktop or Cursor also works with GPT-4o-powered agents. This tutorial covers both local (stdio) and remote (HTTP) server connections.

Prerequisites

  • Python 3.10+ installed
  • OpenAI Agents SDK installed (pip install openai-agents)
  • An OpenAI API key
  • An MCP server to connect (npm package, local server, or Docker)

Step-by-step guide

1

Install the OpenAI Agents SDK with MCP support

Install the OpenAI Agents SDK, which includes the MCP client classes: MCPServerStdio for local stdio servers and MCPServerStreamableHttp for remote HTTP servers. Optionally install the MCP Python SDK if you also want to build your own servers.

shell

pip install openai-agents

# Optional: MCP Python SDK for building servers
pip install "mcp[cli]"

Expected result: The openai-agents package is installed with MCP support included.

2

Connect a local MCP server using MCPServerStdio

Use MCPServerStdio to connect to a local MCP server that communicates over stdio, passing the launch command and its arguments. The SDK starts the server process, performs the MCP handshake, and discovers the available tools. Use it as an async context manager so the connection and the child process are cleaned up properly.

python

from agents.mcp import MCPServerStdio

# Create the server connection; the SDK launches the process
# and performs the MCP handshake.
async with MCPServerStdio(
    params={
        "command": "npx",
        "args": ["-y", "@modelcontextprotocol/server-filesystem", "/Users/you/projects"],
    }
) as server:
    # server is now connected and tools are discovered
    tools = await server.list_tools()
    print(f"Available tools: {[t.name for t in tools]}")

Expected result: The MCP server is connected and its tools are discovered.

3

Create an agent that uses MCP tools

Pass the connected MCP server to an Agent definition using the mcp_servers parameter. The SDK converts MCP tools to OpenAI function calling format automatically. The agent can then use these tools during its reasoning process just like native function calls.

python

import asyncio

from agents import Agent, Runner
from agents.mcp import MCPServerStdio


async def main():
    async with MCPServerStdio(
        params={
            "command": "npx",
            "args": ["-y", "@modelcontextprotocol/server-filesystem", "/Users/you/projects"],
        }
    ) as filesystem_server:
        agent = Agent(
            name="file-assistant",
            instructions="You help users explore and understand their project files.",
            mcp_servers=[filesystem_server],
        )

        result = await Runner.run(
            agent,
            "What TypeScript files are in the src directory?",
        )
        print(result.final_output)


asyncio.run(main())

Expected result: The agent uses the filesystem MCP server to list TypeScript files and returns a natural language answer.

4

Connect a remote MCP server via HTTP

For remote MCP servers running on another machine or cloud, use MCPServerStreamableHttp. This connects to the server's HTTP endpoint instead of launching a local process. This is useful for shared team servers or cloud-deployed MCP services.

python

from agents import Agent, Runner
from agents.mcp import MCPServerStreamableHttp

async with MCPServerStreamableHttp(
    params={
        "url": "https://mcp.your-company.com/api",
        "headers": {"Authorization": "Bearer your-token"},
    }
) as remote_server:
    agent = Agent(
        name="remote-assistant",
        instructions="You help users with remote data.",
        mcp_servers=[remote_server],
    )
    result = await Runner.run(agent, "What is the latest data?")
    print(result.final_output)

Expected result: The agent connects to the remote MCP server over HTTP and uses its tools.

5

Combine multiple MCP servers in one agent

Pass multiple MCP servers to a single agent. The agent can use tools from all connected servers during its reasoning. This is powerful for building agents that combine data from different sources — for example, reading files with the filesystem server while searching the web with Brave Search.

python

import asyncio

from agents import Agent, Runner
from agents.mcp import MCPServerStdio


async def main():
    async with MCPServerStdio(
        params={
            "command": "npx",
            "args": ["-y", "@modelcontextprotocol/server-filesystem", "/Users/you/projects"],
        }
    ) as fs_server, MCPServerStdio(
        params={
            "command": "npx",
            "args": ["-y", "@modelcontextprotocol/server-brave-search"],
            "env": {"BRAVE_API_KEY": "your-key"},
        }
    ) as search_server:
        agent = Agent(
            name="research-assistant",
            instructions="You help users research topics using their files and the web.",
            mcp_servers=[fs_server, search_server],
        )

        result = await Runner.run(
            agent,
            "Read my project's README and search the web for similar projects.",
        )
        print(result.final_output)


asyncio.run(main())

Expected result: The agent uses tools from both MCP servers to complete the research task.

6

Pass environment variables to MCP servers

Use the env key to pass environment variables (API keys, database URLs) to local stdio servers. Remote HTTP servers run in their own environment, so pass credentials to them via request headers instead. For teams building complex multi-server agent architectures, RapidDev can help design and implement the MCP integration layer.

python

from agents.mcp import MCPServerStdio

async with MCPServerStdio(
    params={
        "command": "npx",
        "args": ["-y", "@modelcontextprotocol/server-postgres"],
        "env": {
            "POSTGRES_CONNECTION_STRING": "postgresql://user:pass@localhost:5432/mydb",
        },
    }
) as db_server:
    # db_server tools are now available
    pass

Expected result: The MCP server receives environment variables for authentication and configuration.

Complete working example

agent_with_mcp.py
"""OpenAI Agent with MCP server integration.

Run: python agent_with_mcp.py
Requires: pip install openai-agents
Requires: OPENAI_API_KEY environment variable
"""

import asyncio
import os

from agents import Agent, Runner
from agents.mcp import MCPServerStdio


async def main():
    # Connect to the filesystem MCP server
    async with MCPServerStdio(
        params={
            "command": "npx",
            "args": [
                "-y",
                "@modelcontextprotocol/server-filesystem",
                os.path.expanduser("~/projects"),
            ],
        }
    ) as filesystem_server:
        # Create an agent with MCP tools
        agent = Agent(
            name="project-explorer",
            instructions=(
                "You are a helpful assistant that explores project files. "
                "When asked about files, use the filesystem tools to read "
                "and list files. Provide concise summaries of what you find."
            ),
            mcp_servers=[filesystem_server],
        )

        # Run the agent with a user query
        result = await Runner.run(
            agent,
            "List the top-level files and directories in my projects folder. "
            "For any README.md files you find, give me a one-sentence summary.",
        )

        print("Agent response:")
        print(result.final_output)


if __name__ == "__main__":
    asyncio.run(main())

Common mistakes when using MCP with OpenAI Agents SDK

Not using async with for MCP server connections

Why it's a problem: without the async context manager, the server process may never be cleanly terminated, leaking child processes.

How to avoid: always open MCP server connections as async context managers (async with), which guarantees proper initialization, cleanup, and process termination.

Forgetting to set the OPENAI_API_KEY environment variable

Why it's a problem: the Agents SDK requires a valid OpenAI API key; without it, agent runs fail with authentication errors.

How to avoid: set it before running: export OPENAI_API_KEY=sk-your-key.

Using MCPServerStreamableHttp for local servers

Why it's a problem: using the wrong transport class causes connection failures.

How to avoid: use MCPServerStdio for local servers launched as child processes; reserve MCPServerStreamableHttp for servers running on a separate machine.

Not passing -y in npx args

Why it's a problem: without -y, npx may stop at an installation prompt, and the server process hangs waiting for input it can never receive.

How to avoid: when using npx in MCPServerStdio args, always include '-y' as the first argument.

Best practices

  • Always use async with for MCP server connections to ensure proper cleanup
  • Pass environment variables to stdio servers via the env key rather than relying on the shell environment
  • Combine MCP servers with native Agent tools for maximum capability
  • Test MCP servers independently with MCP Inspector before integrating with agents
  • Use MCPServerStdio for local servers and MCPServerStreamableHttp for remote ones
  • Set timeouts on agent runs to prevent hanging when MCP servers are slow
  • Include -y in all npx commands to prevent interactive prompts
  • Log MCP tool calls during development to debug agent reasoning
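
For the timeout practice above, a plain asyncio.wait_for wrapper is one way to cap a run. The sketch below uses a hypothetical stand-in coroutine (run_agent) in place of the real Runner.run call so it stays self-contained; swap in your actual agent run and tune the timeout to your workload.

```python
import asyncio


# Hypothetical stand-in for Runner.run(agent, query) -- replace with the
# real call in your project.
async def run_agent(query: str) -> str:
    await asyncio.sleep(0.1)  # simulates model + MCP tool latency
    return f"answer to: {query}"


async def run_with_timeout(query: str, timeout_s: float) -> str:
    # wait_for cancels the run if the MCP server (or the model) hangs.
    try:
        return await asyncio.wait_for(run_agent(query), timeout=timeout_s)
    except asyncio.TimeoutError:
        return "agent run timed out"


print(asyncio.run(run_with_timeout("list files", timeout_s=5.0)))
print(asyncio.run(run_with_timeout("list files", timeout_s=0.01)))
```

The first call finishes within the budget; the second hits the 10 ms limit and returns the fallback message instead of hanging.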

Still stuck?

Copy one of these prompts to get a personalized, step-by-step explanation.

ChatGPT Prompt

Show me how to use MCP servers with the OpenAI Agents SDK in Python. I want to connect a filesystem MCP server and a Brave Search server to an agent. Show the complete code using MCPServerStdio with async context managers.

MCP Prompt

Help me build an OpenAI agent that uses MCP tools. I want to connect the filesystem server and PostgreSQL server. Show the complete Python code with MCPServerStdio, environment variables for the database, and a meaningful agent task.

Frequently asked questions

Does this work with GPT-4o and other OpenAI models?

Yes. The Agents SDK converts MCP tools into OpenAI's function calling format, which works with GPT-4o, GPT-4o-mini, and other function-calling-capable models. The MCP server does not know or care which model is calling it.
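
As a rough illustration of that conversion, here is a minimal sketch of the schema mapping. The real implementation lives inside the SDK; the function name and the sample tool below are hypothetical.

```python
# Hypothetical sketch of the tool-schema mapping the SDK performs internally.
def mcp_tool_to_openai_function(tool: dict) -> dict:
    # An MCP tool exposes `name`, `description`, and a JSON Schema under
    # `inputSchema`; OpenAI function calling expects that schema under
    # `parameters` inside a `function` wrapper.
    return {
        "type": "function",
        "function": {
            "name": tool["name"],
            "description": tool.get("description", ""),
            "parameters": tool["inputSchema"],
        },
    }


# Sample MCP tool definition (shape follows the MCP spec).
read_file_tool = {
    "name": "read_file",
    "description": "Read a file from the allowed directories",
    "inputSchema": {
        "type": "object",
        "properties": {"path": {"type": "string"}},
        "required": ["path"],
    },
}

converted = mcp_tool_to_openai_function(read_file_tool)
print(converted["function"]["name"])  # read_file
```

Because the mapping is mechanical, any model that supports function calling can invoke any MCP tool without the server knowing which model is on the other end.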

Can I mix MCP tools with native Agent tools?

Yes. An Agent can have both mcp_servers and native tools defined. The SDK combines all available tools and presents them to the model for selection during reasoning.

Is the OpenAI Agents SDK free?

The SDK itself is free and open source. You pay for OpenAI API usage when running agents. MCP servers themselves are free — there is no additional cost for MCP integration.

Can I use TypeScript MCP servers with the Python Agents SDK?

Yes. MCPServerStdio launches any command as a child process. TypeScript servers run via npx or node, Python servers via python or uv. The SDK communicates with them via the MCP protocol regardless of the server's language.
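
For example, only the launch command changes between runtimes; the script name and paths below are placeholders for your own setup.

```python
# Placeholder launch configs -- substitute your own paths and packages.
typescript_server = {
    "command": "npx",
    "args": ["-y", "@modelcontextprotocol/server-filesystem", "/Users/you/projects"],
}
python_server = {
    "command": "uv",
    "args": ["run", "my_mcp_server.py"],
}

# Both plug into the same client class, e.g.:
#   async with MCPServerStdio(params=typescript_server) as server: ...
print(typescript_server["command"], python_server["command"])  # npx uv
```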

Can RapidDev help build agent workflows with MCP?

Yes. RapidDev helps teams design and build multi-agent systems that leverage MCP servers for data access, tool execution, and workflow automation. We specialize in connecting internal tools to AI agents via MCP.
