Connect MCP servers to LangChain by wrapping MCP tools as LangChain BaseTool instances. Use the MCP Client SDK to discover available tools, then create a LangChain StructuredTool for each one that forwards calls to the MCP server. This lets LangChain agents use any MCP server — filesystem, database, API — as a tool without rewriting the server code.
Using MCP Servers as LangChain Tools
LangChain is a popular framework for building LLM-powered applications with agents and tool use. MCP is the emerging standard for connecting AI to external tools and data. This tutorial bridges the two by showing how to wrap any MCP server's tools as LangChain StructuredTool instances. Once connected, a LangChain agent can call MCP tools alongside native LangChain tools, giving you access to the entire MCP ecosystem from within your LangChain application.
Prerequisites
- Node.js 18+ and npm installed
- A working MCP server to connect to (any server with at least one tool)
- LangChain.js installed (langchain and @langchain/core packages)
- Basic understanding of LangChain agents and tools
- An OpenAI or Anthropic API key for the LangChain LLM
Step-by-step guide
Install dependencies for MCP client and LangChain
Install the MCP client SDK alongside LangChain. The MCP SDK provides the Client class for connecting to servers, and LangChain provides the agent framework. You need both @langchain/core for the base tool interface and langchain for the agent executor. Install the LLM provider package too — @langchain/openai or @langchain/anthropic.
```shell
npm install @modelcontextprotocol/sdk langchain @langchain/core @langchain/openai zod
npm install -D typescript @types/node
```

Expected result: All packages installed: MCP SDK, LangChain core, OpenAI provider, and Zod.
Create an MCP client connection to your server
Use the MCP Client class to connect to an MCP server via stdio transport. The client spawns the server process, establishes the JSON-RPC connection, and can then discover and call tools. Keep the client connection open for the lifetime of your LangChain application. The client handles the MCP protocol negotiation automatically.
```typescript
// src/mcp-connection.ts
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";

export async function createMcpClient(
  command: string,
  args: string[],
  env?: Record<string, string>
): Promise<Client> {
  const transport = new StdioClientTransport({
    command,
    args,
    env: { ...process.env, ...env } as Record<string, string>,
  });

  const client = new Client(
    { name: "langchain-mcp-bridge", version: "1.0.0" },
    { capabilities: {} }
  );

  await client.connect(transport);
  console.error(`Connected to MCP server: ${command} ${args.join(" ")}`);
  return client;
}
```

Expected result: A function that creates and returns a connected MCP client instance.
Discover MCP tools and convert them to LangChain StructuredTools
After connecting, call client.listTools() to discover all available tools on the MCP server. Each MCP tool has a name, description, and JSON Schema for its inputs. Convert each one into a LangChain StructuredTool by mapping the JSON Schema to a Zod schema and implementing the _call method to forward invocations to the MCP server via client.callTool(). This dynamic conversion means any MCP server works without hardcoding tool definitions.
```typescript
// src/mcp-to-langchain.ts
import { StructuredTool } from "@langchain/core/tools";
import { z } from "zod";
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { jsonSchemaToZod } from "./schema-converter.js";

export async function mcpToolsToLangChain(client: Client): Promise<StructuredTool[]> {
  const { tools } = await client.listTools();

  return tools.map(mcpTool => {
    const zodSchema = jsonSchemaToZod(mcpTool.inputSchema);

    return new McpLangChainTool({
      name: mcpTool.name,
      description: mcpTool.description || `MCP tool: ${mcpTool.name}`,
      schema: zodSchema,
      client,
    });
  });
}

class McpLangChainTool extends StructuredTool {
  name: string;
  description: string;
  schema: z.ZodObject<any>;
  private client: Client;

  constructor(config: {
    name: string;
    description: string;
    schema: z.ZodObject<any>;
    client: Client;
  }) {
    super();
    this.name = config.name;
    this.description = config.description;
    this.schema = config.schema;
    this.client = config.client;
  }

  async _call(input: Record<string, unknown>): Promise<string> {
    const result = await this.client.callTool({
      name: this.name,
      arguments: input,
    });

    const content = result.content as Array<{ type: string; text?: string }>;
    const text = content
      .filter(c => c.type === "text" && c.text)
      .map(c => c.text)
      .join("\n");

    // Surface MCP-level failures to the agent instead of returning them as success
    if (result.isError) {
      throw new Error(text || `MCP tool ${this.name} failed`);
    }

    return text;
  }
}
```

Expected result: A function that converts all MCP server tools into LangChain StructuredTool instances automatically.
Build a simple JSON Schema to Zod converter
MCP tools define their input schemas using JSON Schema, but LangChain's StructuredTool requires Zod schemas. Build a converter that handles the common JSON Schema types: string, number, boolean, object, and array. Handle required vs optional properties and default values. This does not need to cover the full JSON Schema spec — just the subset that MCP tools actually use.
```typescript
// src/schema-converter.ts
import { z } from "zod";

export function jsonSchemaToZod(schema: any): z.ZodObject<any> {
  if (!schema || schema.type !== "object" || !schema.properties) {
    return z.object({});
  }

  const shape: Record<string, z.ZodTypeAny> = {};
  const required = new Set(schema.required || []);

  for (const [key, prop] of Object.entries(schema.properties)) {
    const p = prop as any;
    let field: z.ZodTypeAny;

    switch (p.type) {
      case "string":
        field = z.string();
        break;
      case "number":
      case "integer":
        field = z.number();
        break;
      case "boolean":
        field = z.boolean();
        break;
      case "array":
        field = z.array(z.any());
        break;
      default:
        field = z.any();
    }

    if (p.description) field = field.describe(p.description);
    if (!required.has(key)) field = field.optional();
    if (p.default !== undefined) field = field.default(p.default);

    shape[key] = field;
  }

  return z.object(shape);
}
```

Expected result: A converter that transforms MCP JSON Schema tool definitions into Zod schemas for LangChain.
Wire everything into a LangChain agent
Create a LangChain agent that uses the MCP tools alongside any native LangChain tools. Initialize the MCP client, convert tools, create the agent with an LLM, and run it. The agent will automatically decide which MCP tools to call based on the user's query. Close the MCP connection when the agent is done.
```typescript
// src/agent.ts
import { ChatOpenAI } from "@langchain/openai";
import { AgentExecutor, createOpenAIToolsAgent } from "langchain/agents";
import { ChatPromptTemplate } from "@langchain/core/prompts";
import { createMcpClient } from "./mcp-connection.js";
import { mcpToolsToLangChain } from "./mcp-to-langchain.js";

async function main() {
  // Connect to MCP server
  const mcpClient = await createMcpClient("node", ["path/to/mcp-server/dist/index.js"]);

  try {
    const mcpTools = await mcpToolsToLangChain(mcpClient);

    console.error(`Loaded ${mcpTools.length} MCP tools: ${mcpTools.map(t => t.name).join(", ")}`);

    // Create LLM and agent
    const llm = new ChatOpenAI({ model: "gpt-4o", temperature: 0 });

    const prompt = ChatPromptTemplate.fromMessages([
      ["system", "You are a helpful assistant with access to tools. Use them to answer questions accurately."],
      ["human", "{input}"],
      ["placeholder", "{agent_scratchpad}"],
    ]);

    const agent = await createOpenAIToolsAgent({ llm, tools: mcpTools, prompt });
    const executor = new AgentExecutor({ agent, tools: mcpTools, verbose: true });

    // Run the agent
    const result = await executor.invoke({
      input: "List all TypeScript files in the src directory",
    });

    console.log(result.output);
  } finally {
    // Cleanup: always close the MCP client, even if the agent throws
    await mcpClient.close();
  }
}

main().catch(console.error);
```

Expected result: A LangChain agent that discovers and calls MCP tools, returning results to the user.
Complete working example
```typescript
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";
import { StructuredTool } from "@langchain/core/tools";
import { ChatOpenAI } from "@langchain/openai";
import { AgentExecutor, createOpenAIToolsAgent } from "langchain/agents";
import { ChatPromptTemplate } from "@langchain/core/prompts";
import { z } from "zod";

// --- JSON Schema to Zod ---
function jsonSchemaToZod(schema: any): z.ZodObject<any> {
  if (!schema?.properties) return z.object({});
  const shape: Record<string, z.ZodTypeAny> = {};
  const req = new Set(schema.required || []);
  for (const [k, v] of Object.entries(schema.properties)) {
    const p = v as any;
    let f: z.ZodTypeAny = p.type === "string" ? z.string()
      : p.type === "number" || p.type === "integer" ? z.number()
      : p.type === "boolean" ? z.boolean() : z.any();
    if (p.description) f = f.describe(p.description);
    if (!req.has(k)) f = f.optional();
    shape[k] = f;
  }
  return z.object(shape);
}

// --- MCP Tool Wrapper ---
class McpTool extends StructuredTool {
  name: string;
  description: string;
  schema: z.ZodObject<any>;
  private client: Client;
  constructor(name: string, desc: string, schema: z.ZodObject<any>, client: Client) {
    super();
    this.name = name;
    this.description = desc;
    this.schema = schema;
    this.client = client;
  }
  async _call(input: Record<string, unknown>): Promise<string> {
    const res = await this.client.callTool({ name: this.name, arguments: input });
    const text = (res.content as any[]).filter(c => c.text).map(c => c.text).join("\n");
    // Surface MCP-level failures to the agent instead of returning them as success
    if (res.isError) throw new Error(text || `MCP tool ${this.name} failed`);
    return text;
  }
}

// --- Main ---
async function main() {
  const transport = new StdioClientTransport({
    command: process.argv[2] || "node",
    args: process.argv.slice(3),
  });
  const client = new Client({ name: "langchain-bridge", version: "1.0.0" }, { capabilities: {} });
  await client.connect(transport);

  const { tools } = await client.listTools();
  const lcTools = tools.map(t =>
    new McpTool(t.name, t.description || t.name, jsonSchemaToZod(t.inputSchema), client)
  );
  console.error(`Loaded tools: ${lcTools.map(t => t.name).join(", ")}`);

  const llm = new ChatOpenAI({ model: "gpt-4o", temperature: 0 });
  const prompt = ChatPromptTemplate.fromMessages([
    ["system", "You are a helpful assistant. Use tools to answer questions."],
    ["human", "{input}"],
    ["placeholder", "{agent_scratchpad}"],
  ]);

  const agent = await createOpenAIToolsAgent({ llm, tools: lcTools, prompt });
  const executor = new AgentExecutor({ agent, tools: lcTools });

  const result = await executor.invoke({ input: process.argv.at(-1) || "What tools do you have?" });
  console.log(result.output);
  await client.close();
}
main().catch(e => { console.error(e); process.exit(1); });
```

Common mistakes when using MCP with LangChain
- Not closing the MCP client connection, which leaves zombie server processes behind. How to avoid: always call client.close() in a finally block or cleanup handler when the LangChain agent finishes.
- Hardcoding MCP tool definitions instead of discovering them dynamically. How to avoid: use client.listTools() to discover tools at runtime, so your bridge works with any MCP server without code changes.
- Ignoring MCP error results (isError: true) and treating them as successful responses. How to avoid: check the isError flag on MCP tool results and throw an error in the LangChain _call method so the agent knows the tool failed.
- Using LangChain's DynamicTool instead of StructuredTool, losing input schema information. How to avoid: always use StructuredTool with a proper Zod schema; the schema helps the LLM understand what parameters each tool expects.
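The first fix can be captured in a small helper that guarantees cleanup. A minimal sketch, assuming only that the client exposes an async close() method (as the MCP Client does); the helper name is an invention for illustration:

```typescript
// Sketch: run agent work against a client and guarantee the connection
// is closed afterwards, whether the work resolves or throws.
async function withClient<C extends { close(): Promise<void> }, R>(
  client: C,
  work: (client: C) => Promise<R>
): Promise<R> {
  try {
    return await work(client);
  } finally {
    await client.close(); // runs on both success and failure
  }
}
```

Wrapping the agent run as `withClient(mcpClient, () => executor.invoke({ input }))` keeps the spawned server process from outliving the agent session.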
Best practices
- Use StructuredTool with Zod schemas so the LLM gets typed parameter information
- Discover tools dynamically with listTools() instead of hardcoding tool definitions
- Close MCP client connections when the agent session ends to prevent zombie processes
- Pass environment variables through the StdioClientTransport for server configuration
- Handle non-text MCP content types by converting them to descriptive strings
- Set temperature to 0 for tool-calling agents to get more deterministic tool selection
- Log discovered tool names to stderr for debugging connection issues
Still stuck?
Copy one of these prompts to get a personalized, step-by-step explanation.
Show me how to connect an MCP server to a LangChain agent in TypeScript. I want to dynamically discover MCP tools, wrap them as LangChain StructuredTools with Zod schemas, and use them in an OpenAI tools agent. Include the JSON Schema to Zod conversion.
Build a bridge between MCP servers and LangChain. Create an MCP client that connects via stdio, discovers tools with listTools(), converts JSON Schema to Zod, wraps tools as StructuredTool instances, and runs them in a LangChain AgentExecutor.
Frequently asked questions
Can I connect multiple MCP servers to one LangChain agent?
Yes. Create multiple MCP client connections and convert each server's tools to LangChain tools. Merge all tools into a single array and pass them to the agent executor.
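One detail to watch when merging is name collisions: two servers may expose a tool with the same name, and the agent needs unique names. A minimal sketch of a merge helper; the server-prefix convention and the helper itself are assumptions, not part of MCP or LangChain:

```typescript
// Sketch: merge tool lists from several MCP servers into one array,
// renaming any colliding tool with a per-server prefix.
function mergeToolSets<T extends { name: string }>(
  toolSets: Array<{ server: string; tools: T[] }>
): T[] {
  const seen = new Set<string>();
  const merged: T[] = [];
  for (const { server, tools } of toolSets) {
    for (const tool of tools) {
      if (seen.has(tool.name)) tool.name = `${server}_${tool.name}`; // disambiguate
      seen.add(tool.name);
      merged.push(tool);
    }
  }
  return merged;
}
```

The merged array can then be passed directly to the AgentExecutor in place of a single server's tools.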
Does this work with Anthropic Claude as the LangChain LLM?
Yes. Replace ChatOpenAI with ChatAnthropic from @langchain/anthropic. The tool wrapping is LLM-agnostic — any LangChain chat model that supports tool calling works.
How do I handle MCP tools that return images or binary data?
Check the content type in the MCP result. For images, convert to a description like '[Image: filename.png, 1024x768]'. LangChain tools return strings, so non-text content must be described textually.
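A minimal sketch of such a converter; the item shape mirrors the MCP content format (a type plus optional text, data, and mimeType fields), while the bracketed placeholder strings are just one convention, not a standard:

```typescript
// Sketch: flatten an MCP content array into one string for a LangChain tool,
// describing non-text items instead of inlining binary data.
type McpContentItem = { type: string; text?: string; data?: string; mimeType?: string };

function contentToString(content: McpContentItem[]): string {
  return content
    .map(item => {
      if (item.type === "text" && item.text) return item.text;
      if (item.type === "image")
        return `[Image: ${item.mimeType ?? "unknown type"}, ${item.data?.length ?? 0} base64 chars]`;
      return `[Unsupported content type: ${item.type}]`;
    })
    .join("\n");
}
```

Dropping this into _call in place of the text-only filter means image results at least tell the agent something useful happened.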
Is there an official LangChain-MCP integration package?
As of March 2026, the community maintains several bridge packages, but no official LangChain-MCP integration exists. The manual approach in this tutorial gives you full control over the conversion.
Can RapidDev help build custom LangChain-MCP integrations?
Yes, RapidDev has experience building production LangChain agents that use MCP servers for data access. They can help with architecture, tool selection, and deployment strategies.
What happens if the MCP server crashes mid-call?
The MCP client throws a transport error. Wrap callTool in a try-catch in the _call method and return a descriptive error message so the LangChain agent can recover gracefully.
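That try-catch can live in a small wrapper around the invocation. A minimal sketch; the wrapper name and the error-message format are assumptions for illustration:

```typescript
// Sketch: run a tool invocation and convert a thrown transport error into a
// descriptive string the agent can read, instead of crashing the whole run.
async function callToolSafely(
  toolName: string,
  invoke: () => Promise<string>
): Promise<string> {
  try {
    return await invoke();
  } catch (err) {
    const reason = err instanceof Error ? err.message : String(err);
    return `Tool "${toolName}" failed: ${reason}. The server may have crashed; try again or use another tool.`;
  }
}
```

Inside _call this would look like `return callToolSafely(this.name, () => this.runMcpCall(input))`, where runMcpCall is whatever performs the actual client.callTool invocation.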