Model Context Protocol (MCP) is an open standard created by Anthropic and donated to the Linux Foundation that lets AI assistants connect to external tools, databases, and APIs through a universal JSON-RPC 2.0 interface. Think of it as a USB-C port for AI — one protocol that works across Claude Desktop, Cursor, VS Code, Windsurf, and thousands of community-built servers.
Why MCP Exists and What It Changes
Before MCP, every AI tool needed its own custom integration for every external service. Cursor had its own way of connecting to databases, Claude Desktop had another, and each new tool meant building from scratch. MCP standardizes this into a single protocol: build one MCP server, and it works everywhere. Created by Anthropic in late 2024 and donated to the Linux Foundation in December 2025, MCP has grown to over 97 million monthly SDK downloads and more than 10,000 community servers. It is the de facto standard for connecting AI models to the outside world.
Prerequisites
- Basic understanding of what AI assistants like Claude or ChatGPT do
- Familiarity with the concept of APIs (you do not need to know how to build one)
- A computer with any MCP-compatible host installed (optional for this overview)
Step-by-step guide
Understand the core problem MCP solves
AI models are powerful but isolated. They can only work with what you paste into the chat window. If you want Claude to read your database, check your GitHub issues, or search your company docs, someone has to build a bridge. Before MCP, every bridge was custom-built for each AI tool. MCP creates a universal bridge protocol so that a single server implementation works across every compatible AI host. This is the same problem USB solved for hardware — one connector, many devices.
Expected result: You understand that MCP is a standardized protocol that replaces custom one-off integrations between AI tools and external services.
Learn the three core capabilities of MCP
MCP servers expose three types of capabilities to AI hosts. Tools are actions the AI model can invoke on its own (like running a database query or creating a GitHub issue). Resources are data the application can pull in for context (like file contents or API documentation). Prompts are reusable templates that users can select to guide the AI (like a code review checklist). Each capability has a different control model: tools are model-controlled, resources are application-controlled, and prompts are user-controlled.
Expected result: You can explain the difference between MCP tools, resources, and prompts and when each is appropriate.
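To make the three capability types concrete, here is an illustrative sketch of the shapes a server might advertise for each. The field sets are simplified and the capability names (`query_database`, `code_review`, and so on) are invented for illustration — this is not the full MCP schema.

```typescript
// Illustrative capability listings (simplified; the real MCP schema has more fields).
const capabilities = {
  // Tools: model-controlled — the AI decides when to invoke them.
  tools: [{ name: "query_database", description: "Run a read-only SQL query" }],
  // Resources: application-controlled — the host pulls them in as context.
  resources: [{ uri: "file:///docs/api.md", name: "API documentation" }],
  // Prompts: user-controlled — the user picks them from a menu.
  prompts: [{ name: "code_review", description: "Review code against a checklist" }],
};
```

The key distinction to remember is who triggers each capability: the model, the application, or the user.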
Explore the MCP architecture
MCP uses a three-layer architecture. The Host is the AI application you interact with (Claude Desktop, Cursor, VS Code). Inside the host, an MCP Client manages the connection to exactly one MCP Server. Each server connects to one external service (a database, an API, or the local filesystem). A single host can run multiple clients, each connected to a different server, giving the AI access to many tools simultaneously. Communication uses JSON-RPC 2.0 over either stdio (for local servers) or Streamable HTTP (for remote servers).
```
// Conceptual architecture (not runnable code)
//
// Host (Claude Desktop)
// ├── MCP Client 1 ──── MCP Server (Filesystem) ──── Local files
// ├── MCP Client 2 ──── MCP Server (GitHub) ──────── GitHub API
// └── MCP Client 3 ──── MCP Server (Postgres) ────── Database
```
Expected result: You understand the Host → Client → Server → External Service architecture and how multiple servers can run in parallel.
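As a sketch of what that JSON-RPC 2.0 traffic looks like on the wire, here is an illustrative `tools/call` exchange. The payloads are simplified, and the `greet` tool and its arguments are hypothetical.

```typescript
// Illustrative JSON-RPC 2.0 exchange between an MCP client and server (simplified).
// The client asks the server to invoke a tool...
const request = {
  jsonrpc: "2.0",
  id: 1,
  method: "tools/call",
  params: { name: "greet", arguments: { name: "Ada" } },
};

// ...and the server replies with a result carrying the same id.
const response = {
  jsonrpc: "2.0",
  id: 1,
  result: { content: [{ type: "text", text: "Hello, Ada! Welcome to MCP." }] },
};
```

Whether these messages travel over stdio or Streamable HTTP, the JSON-RPC envelope is the same — that is what lets one server work with every host.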
See which AI tools support MCP today
As of March 2026, MCP is supported by Claude Desktop, Claude Code (CLI), Cursor, VS Code (GitHub Copilot), Windsurf, Cline, Zed, and the OpenAI Agents SDK among others. Each tool uses a slightly different configuration file: Claude Desktop uses claude_desktop_config.json, Cursor uses .cursor/mcp.json, VS Code uses .vscode/mcp.json (with a 'servers' key instead of 'mcpServers'), and Windsurf uses ~/.codeium/windsurf/mcp_config.json. Despite the different config file locations, they all speak the same MCP protocol to the servers.
Expected result: You know which AI tools support MCP and that configuration varies by host while the protocol stays the same.
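For example, a Claude Desktop configuration wiring up the Filesystem server might look like the sketch below (the directory path is a placeholder). Cursor's .cursor/mcp.json uses the same 'mcpServers' shape, while VS Code nests the same entries under a 'servers' key instead.

```json
{
  "mcpServers": {
    "filesystem": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-filesystem", "/path/to/allowed/directory"]
    }
  }
}
```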
Browse the MCP server ecosystem
The MCP ecosystem includes over 10,000 servers. Popular official servers include Filesystem (read/write local files), GitHub (manage repos, issues, PRs), PostgreSQL (query databases), Brave Search (web search), Puppeteer (browser automation), Memory (persistent key-value context), and Slack (channel messaging). Community servers cover everything from Jira and Linear to Google Drive and Sentry. You can find servers on the official MCP servers repository, npm, and community directories. Most servers can be run with a single npx command.
```shell
# Example: running the Filesystem MCP server with npx
npx -y @modelcontextprotocol/server-filesystem /path/to/allowed/directory
```
Expected result: You know where to find MCP servers and understand the breadth of available integrations.
Understand how to get started building or using MCP
If you want to use existing MCP servers, you just need to add them to your AI tool's configuration file and restart. If you want to build your own MCP server, the TypeScript SDK (@modelcontextprotocol/sdk) and Python SDK (mcp[cli] with FastMCP) make it straightforward. A minimal server is about 20 lines of code. The MCP Inspector tool lets you test servers interactively before connecting them to an AI host. For teams building complex AI-powered applications, RapidDev can help architect MCP server integrations that connect your existing tools and databases to AI workflows.
Expected result: You have a clear mental roadmap for either using existing MCP servers or building your own.
Complete working example
```typescript
// A minimal MCP server to show the basic structure
// This is what an MCP server looks like at its simplest

import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { z } from "zod";

// 1. Create a server instance
const server = new McpServer({
  name: "hello-world",
  version: "1.0.0",
});

// 2. Define a tool
server.tool(
  "greet",
  "Say hello to someone",
  { name: z.string().describe("The name to greet") },
  async ({ name }) => ({
    content: [{ type: "text", text: `Hello, ${name}! Welcome to MCP.` }],
  })
);

// 3. Connect via stdio transport
const transport = new StdioServerTransport();
await server.connect(transport);

// That is it. This server exposes one tool that any
// MCP-compatible host (Claude Desktop, Cursor, etc.) can call.
```
Common mistakes
Mistake: Confusing MCP with function calling
How to avoid: Function calling is model-specific (OpenAI functions, Claude tool_use). MCP is a transport protocol that works across any model and any host. MCP servers can be used by function-calling models, but MCP itself is not function calling.
Mistake: Thinking MCP requires a cloud server
How to avoid: Most MCP servers run locally on your machine using stdio transport. The server process starts when your AI host launches it and communicates over standard input/output. No cloud deployment needed for local tools.
Mistake: Writing to stdout in an MCP server
How to avoid: Stdio-based MCP servers use stdout for protocol messages. If your server writes logs or debug output to stdout, it will corrupt the JSON-RPC communication. Always use stderr (console.error) for logging.
Mistake: Assuming all AI tools use the same config format
How to avoid: Claude Desktop and Cursor use 'mcpServers' as the config key, but VS Code uses 'servers'. Windsurf has its own config path. Always check the specific host's documentation.
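The stdout rule is worth making concrete. Here is a minimal sketch of a logging helper for a stdio-based server — the `formatLog` and `log` helpers are hypothetical, not part of the MCP SDK:

```typescript
// Hypothetical logging helpers for a stdio-based MCP server.
// stdout carries the JSON-RPC protocol frames, so every diagnostic
// line must go to stderr instead.
function formatLog(message: string): string {
  return `[mcp-server] ${message}`;
}

function log(message: string): void {
  process.stderr.write(formatLog(message) + "\n"); // safe: stderr
  // console.log(message); // UNSAFE: would corrupt the JSON-RPC stream on stdout
}

log("server starting");
```

Routing everything through one helper like this makes it hard to accidentally reach for `console.log` in a hurry.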
Best practices
- Start by using existing MCP servers before building your own — the ecosystem has 10,000+ servers covering most common use cases
- Run MCP servers locally with stdio transport for development, then consider Streamable HTTP for production remote deployment
- Never write to stdout in stdio-based MCP servers — use stderr for all logging and debug output
- Use the MCP Inspector (npx -y @modelcontextprotocol/inspector) to test servers before connecting them to your AI host
- Keep MCP servers focused on a single domain (one server per service) rather than building monolithic servers
- Always validate tool inputs with schemas (Zod in TypeScript, type hints in Python) to prevent errors from malformed AI requests
- Pin your MCP SDK versions in package.json to avoid breaking changes between protocol versions
- Use environment variables for API keys and secrets — never hardcode credentials in MCP server source code
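As a sketch of the input-validation practice above, here is a dependency-free hand-rolled guard; in a real TypeScript server a Zod schema (as in the complete example) plays this role. The `parseGreetArgs` helper, its tool name, and its error messages are illustrative assumptions:

```typescript
// Illustrative input guard for a hypothetical "greet" tool.
// Hosts forward model-generated arguments, which may be malformed,
// so validate the shape before acting on it.
interface GreetArgs {
  name: string;
}

function parseGreetArgs(input: unknown): GreetArgs {
  if (typeof input !== "object" || input === null) {
    throw new Error("tool arguments must be an object");
  }
  const name = (input as Record<string, unknown>).name;
  if (typeof name !== "string" || name.trim() === "") {
    throw new Error("'name' must be a non-empty string");
  }
  return { name };
}
```

Failing fast with a clear error lets the host surface the problem to the model, which can then retry with corrected arguments.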
Still stuck?
Copy one of these prompts to get a personalized, step-by-step explanation.
Explain Model Context Protocol (MCP) in simple terms. What problem does it solve, how does the host-client-server architecture work, and what are the three capability types (tools, resources, prompts)? Give me a practical example of how MCP connects Claude Desktop to a PostgreSQL database.
I am new to MCP. Explain what Model Context Protocol is, how it works (hosts, clients, servers, transports), and list five popular MCP servers I should try first. Show me the config I would need to add one to my current editor.
Frequently asked questions
Is MCP only for Claude and Anthropic products?
No. MCP was created by Anthropic but donated to the Linux Foundation in December 2025. It is an open standard supported by Claude Desktop, Cursor, VS Code, Windsurf, OpenAI Agents SDK, Cline, Zed, and many other tools. Any AI host can implement the MCP client specification.
Do I need to know how to code to use MCP?
To use existing MCP servers, you only need to edit a JSON configuration file — no coding required. Building your own MCP server does require TypeScript or Python knowledge, but connecting pre-built servers is a copy-paste configuration task.
Is MCP free to use?
Yes. MCP is an open-source protocol. The TypeScript and Python SDKs are free and open source. Most official MCP servers are free. The AI hosts themselves (Claude Desktop, Cursor) have their own pricing, but MCP adds no additional cost.
How is MCP different from OpenAI function calling?
Function calling is a model-level feature where you define functions in API requests. MCP is a transport protocol that sits between AI hosts and external services. MCP servers can be called by models that use function calling, but MCP works across different models and hosts. One MCP server works in Claude, Cursor, and VS Code without changes.
Can RapidDev help set up MCP for my team?
Yes. RapidDev helps teams architect and build custom MCP server integrations that connect internal tools, databases, and APIs to AI workflows. This is especially valuable for organizations with complex internal systems that need custom servers beyond what the community provides.
How many MCP servers can I run at once?
Most hosts support multiple concurrent MCP servers. Claude Desktop and Cursor have no hard-coded limit. Windsurf has a 100-tool limit across all connected servers. In practice, running 5-10 servers simultaneously is common and works well.
Does MCP send my data to the cloud?
Not inherently. Stdio-based MCP servers run entirely on your local machine. Data flows between the AI host and the local server process without leaving your computer. Remote MCP servers using Streamable HTTP do communicate over the network, but you control where those servers are deployed.
What programming languages can I use to build MCP servers?
The official SDKs support TypeScript/JavaScript and Python. Community SDKs exist for Rust, Go, C#, Java, Ruby, and other languages. TypeScript and Python have the most mature tooling and documentation.
Talk to an Expert
Our team has built 600+ apps. Get personalized help with your project.
Book a free consultation