
How to chain multiple MCP tools in a workflow

Chain multiple MCP tools into multi-step workflows where each tool's output feeds the next step. Build pipelines like: query database, process results, write to file, and send notification — all orchestrated by the AI or by a custom client script. This tutorial covers tool chaining patterns, error handling between steps, and building a workflow orchestrator MCP server.

What you'll learn

  • How to chain MCP tool calls into multi-step workflows
  • How to pass data between tool calls in a pipeline
  • How to handle errors and rollbacks in multi-step chains
  • How to build a workflow orchestrator MCP server
Advanced · 11 min read · 30-40 min · MCP TypeScript SDK v1.x, Node.js 18+, multiple MCP servers · March 2026 · RapidDev Engineering Team

Building Multi-Step Workflows by Chaining MCP Tools

Individual MCP tools are useful, but the real power comes from chaining them together. A database query feeds into a file write, which triggers a notification. This tutorial shows three approaches to tool chaining: letting the AI orchestrate naturally, building a custom client that scripts the chain, and creating a workflow MCP server with composite tools that execute multiple steps internally.

Prerequisites

  • Two or more MCP servers with tools to chain
  • Node.js 18+ and npm installed
  • Understanding of MCP tool call and result formats
  • Basic TypeScript knowledge
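If the tool call and result formats are unfamiliar, here is a simplified sketch of the result shape every chaining pattern in this tutorial consumes. The `ToolResult` interface and `extractText` helper below are illustrative stand-ins for the SDK's `CallToolResult` type, written inline so the example is self-contained:

```typescript
// Simplified sketch of an MCP tool result: an array of content items,
// plus an optional isError flag set by the server on failure.
interface ToolResult {
  content: Array<{ type: string; text?: string }>;
  isError?: boolean;
}

// Chaining means pulling the text payload out of one result...
function extractText(result: ToolResult): string {
  return result.content
    .filter((c) => c.type === "text")
    .map((c) => c.text ?? "")
    .join("\n");
}

// ...and feeding it (often after JSON.parse) into the next tool's arguments.
const example: ToolResult = {
  content: [{ type: "text", text: '{"rows": 3}' }],
};
console.log(JSON.parse(extractText(example)).rows); // → 3
```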

Step-by-step guide

1. Understand the three tool chaining patterns

There are three ways to chain MCP tools. First, natural AI orchestration: the AI calls tools sequentially, using each result to decide the next step. This is the simplest approach but depends on the AI's judgment. Second, scripted client orchestration: a custom MCP client calls tools in a predetermined sequence, passing data between steps programmatically. Third, composite server tools: build a single MCP tool that internally calls multiple operations and returns the combined result.

typescript
// Pattern 1: AI orchestration (natural)
// You just ask: "Query the orders table for this month, export as CSV,
// and notify the team on Slack"
// The AI calls: query_db → write_file → send_slack_message

// Pattern 2: Scripted client orchestration
// (extract the text payload from each result before passing it on — see
// the extractText helper in Step 2)
const dbResult = await client.callTool({ name: "query_db", arguments: { sql: "SELECT ..." } });
const csvResult = await client.callTool({ name: "write_csv", arguments: { data: extractText(dbResult), path: "report.csv" } });
await client.callTool({ name: "send_notification", arguments: { message: `Report ready: ${extractText(csvResult)}` } });

// Pattern 3: Composite server tool
server.tool("generate_and_notify_report", "...", schema, async (params) => {
  const data = await queryDatabase(params.sql);
  const path = await writeCsv(data, params.filename);
  await sendNotification(`Report ready: ${path}`);
  return { content: [{ type: "text", text: "Report generated and team notified" }] };
});

Expected result: Understanding of the three chaining patterns and when to use each.

2. Build a scripted workflow using the MCP client SDK

Create a script that connects to multiple MCP servers and chains tool calls in a predetermined sequence. Use the MCP Client class to connect to each server, call tools in order, and pass results between steps. Handle errors at each step so the pipeline fails gracefully with a clear error message indicating which step failed.

typescript
// src/workflow.ts
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";

async function connectServer(command: string, args: string[]): Promise<Client> {
  const transport = new StdioClientTransport({ command, args });
  const client = new Client({ name: "workflow", version: "1.0.0" }, { capabilities: {} });
  await client.connect(transport);
  return client;
}

function extractText(result: any): string {
  return (result.content as any[])
    .filter(c => c.type === "text")
    .map(c => c.text)
    .join("\n");
}

async function runPipeline() {
  // Connect to servers
  const dbClient = await connectServer("node", ["dist/db-server.js"]);
  const fileClient = await connectServer("node", ["dist/file-server.js"]);

  try {
    // Step 1: Query database
    console.error("[pipeline] Step 1: Querying database...");
    const queryResult = await dbClient.callTool({
      name: "query",
      arguments: { sql: "SELECT * FROM orders WHERE created_at >= NOW() - INTERVAL '7 days'" },
    });
    if ((queryResult as any).isError) throw new Error(`Query failed: ${extractText(queryResult)}`);
    // The query tool returns JSON of shape { columns: string[], rows: number, data: any[] }
    const data = JSON.parse(extractText(queryResult));
    console.error(`[pipeline] Got ${data.rows} rows`);

    // Step 2: Write CSV report
    console.error("[pipeline] Step 2: Writing CSV...");
    const csvResult = await fileClient.callTool({
      name: "write_file",
      arguments: {
        filePath: "reports/weekly-orders.csv",
        content: formatAsCsv(data),
      },
    });
    if ((csvResult as any).isError) throw new Error(`CSV write failed: ${extractText(csvResult)}`);

    // Step 3: Write summary
    console.error("[pipeline] Step 3: Writing summary...");
    const summary = `Weekly Orders Report\nGenerated: ${new Date().toISOString()}\nTotal rows: ${data.rows}\nTotal revenue: $${data.data.reduce((s: number, r: any) => s + (r.total || 0), 0).toFixed(2)}`;
    await fileClient.callTool({
      name: "write_file",
      arguments: { filePath: "reports/weekly-summary.txt", content: summary },
    });

    console.error("[pipeline] Complete!");
  } finally {
    await dbClient.close();
    await fileClient.close();
  }
}

function formatAsCsv(data: { columns: string[]; data: any[] }): string {
  const header = data.columns.join(",");
  const rows = data.data.map((row: any) =>
    data.columns.map(col => JSON.stringify(row[col] ?? "")).join(",")
  );
  return [header, ...rows].join("\n");
}

runPipeline().catch(e => { console.error("Pipeline failed:", e); process.exit(1); });

Expected result: A script that chains database query, CSV export, and summary generation across MCP servers.

3. Build a composite workflow MCP server

For workflows that should be available as MCP tools themselves, build a server with composite tools that execute multi-step operations internally. Each composite tool encapsulates an entire pipeline — query, transform, write, notify — as a single tool call. This is the cleanest approach for standardized workflows that users trigger regularly.

typescript
// src/workflow-server.ts
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { z } from "zod";
import pg from "pg";
import fs from "fs/promises";

const pool = new pg.Pool({ connectionString: process.env.DATABASE_URL });
const server = new McpServer({ name: "workflow-server", version: "1.0.0" });

server.tool(
  "generate_report",
  "Query data, generate CSV, and create summary report in one step",
  {
    query: z.string().describe("SQL SELECT query for the report data"),
    reportName: z.string().describe("Name for the report files (no extension)"),
    outputDir: z.string().default("/tmp/reports"),
  },
  async ({ query, reportName, outputDir }) => {
    const steps: string[] = [];

    try {
      // Step 1: Execute query (reject anything that writes)
      if (/\b(DELETE|DROP|TRUNCATE|UPDATE|INSERT|ALTER)\b/i.test(query)) {
        return { content: [{ type: "text", text: "Error: Only SELECT queries allowed" }], isError: true };
      }
      const result = await pool.query({ text: query, query_timeout: 30000 });
      steps.push(`Queried ${result.rowCount} rows`);

      // Step 2: Generate CSV
      await fs.mkdir(outputDir, { recursive: true });
      const csvPath = `${outputDir}/${reportName}.csv`;
      const header = result.fields.map(f => f.name).join(",");
      const rows = result.rows.map(r =>
        result.fields.map(f => JSON.stringify(r[f.name] ?? "")).join(",")
      );
      await fs.writeFile(csvPath, [header, ...rows].join("\n"));
      steps.push(`CSV written to ${csvPath}`);

      // Step 3: Generate summary
      const summaryPath = `${outputDir}/${reportName}-summary.json`;
      const summary = {
        reportName,
        generatedAt: new Date().toISOString(),
        rowCount: result.rowCount,
        columns: result.fields.map(f => f.name),
        csvPath,
      };
      await fs.writeFile(summaryPath, JSON.stringify(summary, null, 2));
      steps.push(`Summary written to ${summaryPath}`);

      return {
        content: [{ type: "text", text: `Report pipeline complete:\n${steps.map((s, i) => `  ${i + 1}. ${s}`).join("\n")}` }],
      };
    } catch (error) {
      return {
        content: [{
          type: "text",
          text: `Pipeline failed at step ${steps.length + 1}:\n` +
            `Completed: ${steps.join(", ") || "none"}\n` +
            `Error: ${error instanceof Error ? error.message : String(error)}`,
        }],
        isError: true,
      };
    }
  }
);

async function main() {
  await server.connect(new StdioServerTransport());
  console.error("Workflow MCP server running");
}
main().catch(e => { console.error(e); process.exit(1); });

Expected result: A composite MCP tool that executes a multi-step report generation pipeline.

4. Add error handling and partial rollback to chains

Multi-step workflows need robust error handling. Track completed steps, clean up partial outputs on failure, and report exactly where the chain broke. For critical workflows, implement compensation logic that undoes completed steps when a later step fails. This prevents half-finished operations from leaving inconsistent state.

typescript
// src/safe-pipeline.ts
interface PipelineStep<T> {
  name: string;
  execute: (context: T) => Promise<T>;
  rollback?: (context: T) => Promise<void>;
}

async function runPipeline<T>(steps: PipelineStep<T>[], initialContext: T): Promise<T> {
  const completed: PipelineStep<T>[] = [];
  let context = initialContext;

  for (const step of steps) {
    try {
      console.error(`[pipeline] Running: ${step.name}`);
      context = await step.execute(context);
      completed.push(step);
      console.error(`[pipeline] Completed: ${step.name}`);
    } catch (error) {
      console.error(`[pipeline] Failed: ${step.name} - ${error}`);

      // Roll back completed steps in reverse order (copy the array first so
      // the original order is preserved for the error report below)
      for (const completedStep of [...completed].reverse()) {
        if (completedStep.rollback) {
          try {
            console.error(`[pipeline] Rolling back: ${completedStep.name}`);
            await completedStep.rollback(context);
          } catch (rollbackError) {
            console.error(`[pipeline] Rollback failed: ${completedStep.name} - ${rollbackError}`);
          }
        }
      }

      throw new Error(
        `Pipeline failed at "${step.name}": ${error instanceof Error ? error.message : String(error)}\n` +
        `Completed steps: ${completed.map(s => s.name).join(", ") || "none"}\n` +
        `Rolled back: ${completed.filter(s => s.rollback).map(s => s.name).join(", ") || "none"}`
      );
    }
  }

  return context;
}

// Usage:
// const result = await runPipeline([
//   { name: "query", execute: queryStep },
//   { name: "write-csv", execute: writeCsvStep, rollback: deleteFileStep },
//   { name: "notify", execute: notifyStep },
// ], { sql: "SELECT ...", outputDir: "/tmp" });

Expected result: A pipeline runner with step tracking, error handling, and rollback capability.

5. Orchestrate cross-server tool chains for complex workflows

The most powerful pattern chains tools across multiple MCP servers: query a database, search related documents, post results to a project management tool, and send a notification. Build a client script that connects to all required servers and orchestrates the full workflow. For enterprises building complex multi-server workflows, RapidDev specializes in designing and implementing production-grade MCP orchestration pipelines.

typescript
// Example: Incident investigation pipeline
// Chains: Sentry → Database → Filesystem → Linear

async function investigateIncident(project: string) {
  const sentry = await connectServer("node", ["dist/sentry-server.js"]);
  const db = await connectServer("node", ["dist/db-server.js"]);
  const files = await connectServer("npx", ["-y", "@modelcontextprotocol/server-filesystem", "."]);
  const linear = await connectServer("node", ["dist/linear-server.js"]);

  try {
    // Step 1: Get recent errors from Sentry
    const errors = await sentry.callTool({
      name: "recent_errors", arguments: { project, hours: 1 },
    });
    const topError = JSON.parse(extractText(errors))[0];

    // Step 2: Get error details and stack trace
    const details = await sentry.callTool({
      name: "error_details", arguments: { issueId: topError.id },
    });
    const errorInfo = JSON.parse(extractText(details));

    // Step 3: Query related database records (escape quotes before
    // interpolating; prefer parameterized queries if the db tool supports them)
    const needle = topError.title.slice(0, 50).replace(/'/g, "''");
    const dbResult = await db.callTool({
      name: "query", arguments: {
        sql: `SELECT * FROM error_logs WHERE message LIKE '%${needle}%' LIMIT 5`,
      },
    });

    // Step 4: Read the failing source file
    const frame = errorInfo.frames?.[0];
    const sourceCode = frame ? await files.callTool({
      name: "read_file", arguments: { path: frame.file },
    }) : null;

    // Step 5: Create a Linear ticket with all context
    await linear.callTool({
      name: "create_issue",
      arguments: {
        title: `[Auto] ${topError.title}`,
        description: `Error count: ${topError.count}\nStack: ${frame?.file}:${frame?.line}\nLast seen: ${topError.lastSeen}`,
        teamKey: "ENG",
        priority: topError.count > 100 ? 1 : 2,
      },
    });

    console.error("Incident investigated and ticket created");
  } finally {
    await Promise.all([sentry.close(), db.close(), files.close(), linear.close()]);
  }
}

Expected result: A cross-server workflow that investigates an incident and creates a ticket automatically.

Complete working example

src/workflow-server.ts
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { z } from "zod";
import pg from "pg";
import fs from "fs/promises";

const pool = new pg.Pool({ connectionString: process.env.DATABASE_URL });
const BLOCKED = /\b(DELETE|DROP|TRUNCATE|UPDATE|INSERT|ALTER)\b/i;

const server = new McpServer({ name: "workflow", version: "1.0.0" });

server.tool("generate_report", "Run query → export CSV → create summary", {
  sql: z.string(), name: z.string(), dir: z.string().default("/tmp/reports"),
}, async ({ sql, name, dir }) => {
  if (BLOCKED.test(sql)) return { content: [{ type: "text", text: "Only SELECT allowed" }], isError: true };
  const steps: string[] = [];
  try {
    const r = await pool.query({ text: sql, query_timeout: 30000 });
    steps.push(`Queried ${r.rowCount} rows`);

    await fs.mkdir(dir, { recursive: true });
    const csv = [r.fields.map(f => f.name).join(","),
      ...r.rows.map(row => r.fields.map(f => JSON.stringify(row[f.name] ?? "")).join(","))
    ].join("\n");
    await fs.writeFile(`${dir}/${name}.csv`, csv);
    steps.push(`CSV: ${dir}/${name}.csv`);

    await fs.writeFile(`${dir}/${name}.json`, JSON.stringify({
      name, at: new Date().toISOString(), rows: r.rowCount,
      cols: r.fields.map(f => f.name),
    }, null, 2));
    steps.push(`Summary: ${dir}/${name}.json`);

    return { content: [{ type: "text", text: steps.map((s, i) => `${i + 1}. ${s}`).join("\n") }] };
  } catch (e) {
    return { content: [{ type: "text", text:
      `Failed at step ${steps.length + 1}. Done: ${steps.join("; ")}. Error: ${e}` }], isError: true };
  }
});

server.tool("pipeline_status", "Check if pipeline output files exist", {
  dir: z.string().default("/tmp/reports"), name: z.string(),
}, async ({ dir, name }) => {
  const csvExists = await fs.access(`${dir}/${name}.csv`).then(() => true).catch(() => false);
  const jsonExists = await fs.access(`${dir}/${name}.json`).then(() => true).catch(() => false);
  return { content: [{ type: "text", text: JSON.stringify({ csv: csvExists, summary: jsonExists }) }] };
});

async function main() {
  await server.connect(new StdioServerTransport());
  console.error("Workflow MCP server running");
}
main().catch(e => { console.error(e); process.exit(1); });

Common mistakes when chaining multiple MCP tools in a workflow

Mistake: Not handling errors between chain steps, causing cascading failures without context.

How to avoid: Wrap each step in try-catch, track completed steps, and report exactly which step failed with its error message.

Mistake: Not cleaning up partial outputs when a chain fails mid-way.

How to avoid: Implement rollback logic for write steps. Delete files that were created before the failure to avoid inconsistent state.

Mistake: Passing raw MCP result objects between steps instead of extracting the text content.

How to avoid: Always extract text from result.content before passing to the next step. Parse JSON results before using them.

Mistake: Not closing MCP client connections, leaving zombie server processes.

How to avoid: Close all client connections in a finally block, even when the pipeline fails.

Best practices

  • Track completed steps so error messages show exactly where the chain broke
  • Implement rollback logic for write operations in multi-step chains
  • Close all MCP client connections in finally blocks to prevent zombie processes
  • Extract and parse text content from results before passing to the next step
  • Use composite server tools for standardized workflows that run frequently
  • Use scripted client orchestration for complex cross-server pipelines
  • Log each step with [pipeline] prefix for clear workflow tracing
  • Set timeouts on each step to prevent the entire chain from hanging

Still stuck?

Copy one of these prompts to get a personalized, step-by-step explanation.

ChatGPT Prompt

Show me how to chain MCP tool calls into multi-step workflows. I need three patterns: AI orchestration, scripted client orchestration connecting to multiple servers, and composite server tools. Include error handling with step tracking and rollback. TypeScript.

MCP Prompt

Build an MCP workflow server with a generate_report tool that chains database query → CSV export → summary creation. Include error handling that reports which step failed. Also show a scripted pipeline using the MCP Client SDK to chain tools across multiple servers.

Frequently asked questions

Should I let the AI orchestrate tool chains or script them?

Let the AI orchestrate for ad-hoc, exploratory workflows. Script chains for recurring, predictable pipelines where you need reliability and consistency. Use composite tools when the pipeline should be callable as a single MCP tool.

How do I handle one step being slow in a long chain?

Set per-step timeouts and implement partial result reporting. If step 3 of 5 times out, report what steps 1-2 produced and let the user decide whether to retry.
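A per-step timeout can be sketched with Promise.race. The withTimeout helper below is illustrative (not an SDK API), and note that it only stops the pipeline from waiting — the underlying tool call keeps running unless you also wire up cancellation:

```typescript
// Illustrative per-step timeout wrapper: races the step against a timer so
// one slow tool call cannot hang the whole chain.
async function withTimeout<T>(promise: Promise<T>, ms: number, label: string): Promise<T> {
  let timer: ReturnType<typeof setTimeout> | undefined = undefined;
  const timeout = new Promise<never>((_, reject) => {
    timer = setTimeout(() => reject(new Error(`Step "${label}" timed out after ${ms}ms`)), ms);
  });
  try {
    return await Promise.race([promise, timeout]);
  } finally {
    clearTimeout(timer); // don't leave the timer (and its rejection) pending
  }
}

// Usage (hypothetical client and tool names):
// const result = await withTimeout(
//   client.callTool({ name: "query", arguments: { sql } }),
//   30_000,
//   "query",
// );
```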

Can I run chain steps in parallel instead of sequentially?

Yes, if steps are independent. Use Promise.all to run independent steps concurrently and Promise.allSettled if some steps can fail without blocking others.
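A minimal sketch of that fan-out/fan-in shape, with stand-in step functions in place of real client.callTool calls:

```typescript
// Stand-ins for two independent MCP tool calls (e.g. on different servers).
async function fetchOrders(): Promise<string> { return "orders"; }
async function fetchErrors(): Promise<string> { return "errors"; }

async function parallelThenJoin(): Promise<string[]> {
  // Independent reads run concurrently; allSettled lets one fail
  // without discarding the other's result.
  const settled = await Promise.allSettled([fetchOrders(), fetchErrors()]);
  const ok = settled
    .filter((s): s is PromiseFulfilledResult<string> => s.status === "fulfilled")
    .map((s) => s.value);
  const failed = settled.filter((s) => s.status === "rejected").length;
  console.error(`[pipeline] parallel phase: ${ok.length} ok, ${failed} failed`);
  return ok; // the dependent next step consumes whatever succeeded
}
```

Swap Promise.allSettled for Promise.all when any single failure should abort the whole chain.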

How many steps can a tool chain have?

There is no technical limit. Practically, keep chains under 10 steps for maintainability. If a chain grows beyond that, split it into sub-workflows.

Can RapidDev help design complex MCP workflows?

Yes. RapidDev designs and implements multi-server MCP orchestration pipelines for enterprise workflows, including error handling, monitoring, and automated scheduling.
