Adding Structured Logging and Monitoring to MCP Servers
MCP servers communicate over stdout using JSON-RPC, which means any stray console.log call corrupts the protocol stream. This tutorial shows how to set up structured logging that safely writes to stderr, captures tool call metrics, and integrates with error tracking (Sentry) and monitoring (Datadog) services. Proper observability is essential for debugging production MCP servers where you cannot attach a debugger.
Prerequisites
- A working MCP server with one or more tools
- Node.js 18+ and npm installed
- Optional: Sentry account and DSN for error tracking
- Optional: Datadog agent or Prometheus endpoint for metrics
Step-by-step guide
Install pino and configure stderr-only logging
Pino is a fast, structured JSON logger for Node.js. Configure it to write exclusively to stderr using the destination option. Set the log level from an environment variable. Pino's structured JSON output is ideal for log aggregation services that parse JSON logs. Never use console.log in an MCP server — it writes to stdout and corrupts the JSON-RPC stream.
```bash
npm install pino pino-pretty
```

```typescript
// src/logger.ts
import pino from "pino";

// Pino rejects a transport and a destination stream in the same call, so
// choose one per environment: pretty-print via pino-pretty in development,
// raw JSON straight to stderr in production. Both paths write to fd 2.
export const logger =
  process.env.NODE_ENV === "development"
    ? pino({
        level: process.env.LOG_LEVEL || "info",
        transport: {
          target: "pino-pretty",
          options: { destination: 2 }, // 2 = stderr file descriptor
        },
      })
    : pino(
        { level: process.env.LOG_LEVEL || "info" },
        pino.destination(2) // always write to stderr (fd 2)
      );

// Usage:
// logger.info({ tool: "read_file", path: "/src" }, "Tool called");
// logger.error({ err: error, tool: "write_file" }, "Tool failed");
```

Expected result: A logger instance that writes structured JSON to stderr, never to stdout.
Create a tool call logging wrapper
Build a higher-order function that wraps tool handlers with automatic logging. It logs the tool name and parameters at call start, measures execution duration, and logs the result status (success or error) with timing. The wrapper also records per-tool metrics and reports failures to Sentry through helpers defined in the next two steps, giving you a complete audit trail of every tool invocation without modifying individual tool handlers.
```typescript
// src/tool-logger.ts
import { randomUUID } from "crypto";
import { logger } from "./logger.js";
import { recordToolCall } from "./metrics.js"; // implemented in the metrics step below
import { captureToolError } from "./monitoring.js"; // implemented in the Sentry step below

export function withLogging<T>(
  toolName: string,
  handler: (params: T) => Promise<any>
): (params: T) => Promise<any> {
  return async (params: T) => {
    const requestId = randomUUID().slice(0, 8); // short correlation ID
    const start = performance.now();

    logger.info({
      requestId,
      tool: toolName,
      params: redactSensitive(params as Record<string, unknown>),
    }, `Tool call started: ${toolName}`);

    try {
      const result = await handler(params);
      const durationMs = Math.round(performance.now() - start);
      recordToolCall(toolName, durationMs, result.isError || false);

      logger.info({
        requestId,
        tool: toolName,
        durationMs,
        isError: result.isError || false,
      }, `Tool call completed: ${toolName} (${durationMs}ms)`);

      return result;
    } catch (error) {
      const durationMs = Math.round(performance.now() - start);
      recordToolCall(toolName, durationMs, true);
      if (error instanceof Error) {
        captureToolError(toolName, error, redactSensitive(params as Record<string, unknown>));
      }

      logger.error({
        requestId,
        tool: toolName,
        durationMs,
        err: error instanceof Error ? { message: error.message, stack: error.stack } : String(error),
      }, `Tool call failed: ${toolName}`);

      return {
        content: [{ type: "text", text: `Error: ${error instanceof Error ? error.message : String(error)}` }],
        isError: true,
      };
    }
  };
}

function redactSensitive(params: Record<string, unknown>): Record<string, unknown> {
  const redacted = { ...params };
  for (const key of ["password", "token", "apiKey", "secret"]) {
    if (key in redacted) redacted[key] = "[REDACTED]";
  }
  return redacted;
}
```

Expected result: A withLogging wrapper that logs tool invocations with timing, status, and redacted parameters, records per-tool metrics, and reports failures to Sentry.
Integrate Sentry for automatic error tracking
Sentry captures unhandled errors and can also track specific tool failures. Initialize Sentry at server startup, and add Sentry.captureException calls in the tool error handler. Tag each error with the tool name and request ID so you can filter and search in the Sentry dashboard. This catches errors you might miss in log files.
```bash
npm install @sentry/node
```

```typescript
// src/monitoring.ts
import * as Sentry from "@sentry/node";

export function initMonitoring(): void {
  if (process.env.SENTRY_DSN) {
    Sentry.init({
      dsn: process.env.SENTRY_DSN,
      environment: process.env.NODE_ENV || "development",
      tracesSampleRate: 0.1, // sample 10% of transactions
    });
    console.error("[monitoring] Sentry initialized"); // stderr, never stdout
  }
}

export function captureToolError(
  toolName: string,
  error: Error,
  params: Record<string, unknown>
): void {
  Sentry.withScope((scope) => {
    scope.setTag("mcp.tool", toolName);
    scope.setContext("tool_params", params);
    Sentry.captureException(error);
  });
}
```

Expected result: Sentry captures tool errors with tool name tags and parameter context for debugging.
Track tool call metrics for dashboards and alerting
Track key metrics for each tool: call count, error count, and response times (average and maximum here; add percentiles if you need them). Use a simple in-memory metrics collector that exposes data via a get_server_metrics tool or a Prometheus-compatible HTTP endpoint. These metrics let you build dashboards showing tool usage patterns and set alerts for error rate spikes or latency degradation.
```typescript
// src/metrics.ts
import { logger } from "./logger.js";

interface ToolMetrics {
  calls: number;
  errors: number;
  totalMs: number;
  maxMs: number;
  lastCallAt: number;
}

const metrics = new Map<string, ToolMetrics>();

export function recordToolCall(toolName: string, durationMs: number, isError: boolean): void {
  let m = metrics.get(toolName);
  if (!m) {
    m = { calls: 0, errors: 0, totalMs: 0, maxMs: 0, lastCallAt: 0 };
    metrics.set(toolName, m);
  }
  m.calls++;
  if (isError) m.errors++;
  m.totalMs += durationMs;
  m.maxMs = Math.max(m.maxMs, durationMs);
  m.lastCallAt = Date.now();

  // Alert on high error rate
  if (m.calls > 10 && m.errors / m.calls > 0.5) {
    logger.warn(
      { tool: toolName, errorRate: (m.errors / m.calls).toFixed(2) },
      `High error rate detected for ${toolName}`
    );
  }
}

export function getMetricsSummary(): Record<string, any> {
  const summary: Record<string, any> = {};
  for (const [tool, m] of metrics) {
    summary[tool] = {
      totalCalls: m.calls,
      errorCount: m.errors,
      errorRate: m.calls > 0 ? (m.errors / m.calls * 100).toFixed(1) + "%" : "0%",
      avgMs: m.calls > 0 ? Math.round(m.totalMs / m.calls) : 0,
      maxMs: m.maxMs,
      lastCallAt: new Date(m.lastCallAt).toISOString(),
    };
  }
  return summary;
}
```

Expected result: An in-memory metrics tracker that records call counts, error rates, and latency per tool.
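The collector above only exposes data through a tool. If you also want the Prometheus-compatible endpoint mentioned earlier, here is a minimal sketch using Node's built-in http module on a separate port. The module name, port, and metric names are illustrative assumptions, not part of the tutorial's files; in practice the prom-client package gives you proper counters and histograms, and this hand-rolled version just shows the shape.

```typescript
// src/prometheus.ts (hypothetical): expose metrics in Prometheus text format.
import http from "http";
import { getMetricsSummary } from "./metrics.js";

export function startMetricsEndpoint(port = 9464): void {
  http.createServer((req, res) => {
    if (req.url !== "/metrics") {
      res.writeHead(404).end();
      return;
    }
    // Render one gauge/counter line per tool from the in-memory summary.
    const lines: string[] = [];
    for (const [tool, m] of Object.entries(getMetricsSummary())) {
      lines.push(`mcp_tool_calls_total{tool="${tool}"} ${m.totalCalls}`);
      lines.push(`mcp_tool_errors_total{tool="${tool}"} ${m.errorCount}`);
      lines.push(`mcp_tool_avg_duration_ms{tool="${tool}"} ${m.avgMs}`);
    }
    res.writeHead(200, { "Content-Type": "text/plain; version=0.0.4" });
    res.end(lines.join("\n") + "\n");
  }).listen(port); // separate port; the MCP server itself stays on stdio
}
```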
Register a metrics tool and wire everything together
Register a get_server_metrics tool that returns the current metrics summary, letting the AI (or an admin) check server health. Then wire everything together in the entry point: initialize monitoring at startup and register each tool through the withLogging wrapper, which already handles logging, metrics recording, and Sentry capture. Each tool call is automatically logged, timed, and tracked.
```typescript
// src/index.ts
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { z } from "zod";
import { logger } from "./logger.js";
import { withLogging } from "./tool-logger.js";
import { initMonitoring } from "./monitoring.js";
import { getMetricsSummary } from "./metrics.js";

initMonitoring();

const server = new McpServer({ name: "monitored-server", version: "1.0.0" });

server.tool(
  "read_file",
  "Read a file's contents",
  { filePath: z.string() },
  withLogging("read_file", async ({ filePath }) => {
    const fs = await import("fs/promises");
    const content = await fs.readFile(filePath, "utf-8");
    return { content: [{ type: "text", text: content }] };
  })
);

server.tool(
  "get_server_metrics",
  "Get server performance metrics for all tools",
  {},
  async () => {
    const summary = getMetricsSummary();
    return { content: [{ type: "text", text: JSON.stringify(summary, null, 2) }] };
  }
);

async function main() {
  const transport = new StdioServerTransport();
  await server.connect(transport);
  logger.info("MCP server started with logging and monitoring");
}

main().catch((e) => {
  logger.fatal(e, "Server failed to start");
  process.exit(1);
});
```

Expected result: MCP server with structured logging, error tracking, and a metrics tool for observability.
Complete working example
```typescript
import pino from "pino";

// CRITICAL: MCP uses stdout for JSON-RPC. All logs MUST go to stderr (fd 2).
export const logger = pino(
  {
    level: process.env.LOG_LEVEL || "info",
    base: { service: "mcp-server" },
    timestamp: pino.stdTimeFunctions.isoTime,
    formatters: {
      level(label) { return { level: label }; },
    },
  },
  pino.destination(2) // fd 2 = stderr
);

// Tool call logging wrapper
export function withLogging<T>(
  toolName: string,
  handler: (params: T) => Promise<any>
): (params: T) => Promise<any> {
  return async (params: T) => {
    const start = performance.now();
    try {
      const result = await handler(params);
      const ms = Math.round(performance.now() - start);
      logger.info({ tool: toolName, ms, ok: !result.isError }, "tool.call");
      return result;
    } catch (error) {
      const ms = Math.round(performance.now() - start);
      logger.error({ tool: toolName, ms, err: error }, "tool.error");
      return {
        content: [{
          type: "text" as const,
          text: `Error: ${error instanceof Error ? error.message : String(error)}`,
        }],
        isError: true as const,
      };
    }
  };
}

// Redact sensitive fields before logging
export function redact(obj: Record<string, unknown>): Record<string, unknown> {
  const copy = { ...obj };
  for (const k of ["password", "token", "apiKey", "secret", "authorization"]) {
    if (k in copy) copy[k] = "[REDACTED]";
  }
  return copy;
}
```

Common mistakes when adding logging and monitoring to an MCP server
Using console.log instead of a stderr logger
Why it's a problem: Anything written to stdout corrupts the MCP JSON-RPC protocol stream.
How to avoid: Never use console.log in MCP servers. Use pino with destination(2) or console.error for all logging.
Logging sensitive parameters like passwords, API keys, or tokens
Why it's a problem: Secrets end up in plain text in log files and aggregation services.
How to avoid: Implement a redaction function that replaces known sensitive field names with [REDACTED] before logging.
Omitting timing information from logs
Why it's a problem: Without durations, performance debugging is impossible.
How to avoid: Use performance.now() to measure and log execution duration for every tool call.
Setting log level to debug in production
Why it's a problem: It creates excessive log volume and costs.
How to avoid: Use LOG_LEVEL=info in production and LOG_LEVEL=debug only for development or temporary troubleshooting.
Best practices
- Always log to stderr (fd 2) — stdout is reserved for MCP JSON-RPC communication
- Use structured JSON logs (pino) for machine-parseable output in production
- Include tool name, request ID, duration, and status in every log entry
- Redact sensitive parameters before logging
- Set up Sentry or equivalent for automatic error capture with tool context
- Track metrics per tool: call count, error count, average latency
- Alert on high error rates (>50%) or latency degradation (>2x baseline); a minimal latency-baseline sketch follows this list
- Use log levels appropriately: error for failures, warn for degradation, info for operations
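The error-rate alert is already built into recordToolCall; latency degradation needs a baseline to compare against. Here is a minimal sketch assuming a rolling per-tool average; the checkLatency helper, the EWMA weights, and the 2x threshold are illustrative assumptions, not part of the tutorial's modules.

```typescript
// latency-alert.ts (hypothetical): warn when a call exceeds 2x its rolling baseline.
import { logger } from "./logger.js";

const baselines = new Map<string, number>(); // tool name -> rolling average ms

export function checkLatency(toolName: string, durationMs: number): void {
  const baseline = baselines.get(toolName);
  if (baseline !== undefined && durationMs > baseline * 2) {
    logger.warn(
      { tool: toolName, durationMs, baselineMs: Math.round(baseline) },
      `Latency degradation: ${toolName} exceeded 2x its baseline`
    );
  }
  // Exponentially weighted moving average keeps the baseline adaptive.
  baselines.set(
    toolName,
    baseline === undefined ? durationMs : baseline * 0.9 + durationMs * 0.1
  );
}
```

Call checkLatency alongside recordToolCall in the withLogging wrapper if you want this alert on every invocation.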
Still stuck?
Copy one of these prompts to get a personalized, step-by-step explanation.
Set up structured logging for an MCP server in TypeScript. Use pino writing to stderr only (never stdout), create a withLogging wrapper for tool handlers that tracks timing and errors, add Sentry integration, and build a metrics summary tool.
Add logging and monitoring to my MCP server. Configure pino to write structured JSON to stderr, create a tool call logging wrapper with timing, integrate Sentry for error tracking, and add a get_server_metrics tool.
Frequently asked questions
Why can't I use console.log in MCP servers?
MCP uses stdout for the JSON-RPC protocol between client and server. Any data written to stdout by console.log corrupts the protocol stream, causing parse errors and disconnections. Always use console.error or a logger configured to write to stderr.
Which logging library is best for MCP servers?
Pino is recommended for its performance and structured JSON output. Winston is a good alternative if you need more transport options. Both must be configured to write to stderr, not stdout.
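If you pick winston, the stderr rule is easy to get wrong because its Console transport writes most levels to stdout by default. A minimal sketch, assuming winston 3.x, that routes everything through a Stream transport bound to process.stderr:

```typescript
import winston from "winston";

// Bind a Stream transport to stderr so nothing ever touches stdout,
// which is reserved for the MCP JSON-RPC stream.
export const logger = winston.createLogger({
  level: process.env.LOG_LEVEL || "info",
  format: winston.format.json(),
  transports: [
    new winston.transports.Stream({ stream: process.stderr }),
  ],
});
```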
How do I view MCP server logs in real time?
For stdio servers, stderr output goes to the client's error stream. To capture it in a file, run the server through a shell in your MCP config, since most clients spawn the command directly and a bare redirect in args will not work: for example, sh -c "node dist/index.js 2>>/tmp/mcp-server.log". Then tail -f the file.
Should I log tool parameters?
Log parameters for debugging, but always redact sensitive fields like passwords, tokens, and API keys. In high-security environments, log only the parameter names without values.
How do I send MCP server metrics to Datadog?
Export metrics in StatsD or Prometheus format. For StatsD, use the hot-shots npm package to send metrics to the local Datadog agent. For Prometheus, expose a /metrics HTTP endpoint on a separate port from the MCP server.
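A minimal StatsD sketch with hot-shots, assuming a Datadog agent listening on the default 127.0.0.1:8125; the metric and tag names (and the reportToolCall helper) are illustrative, not a fixed convention:

```typescript
import { StatsD } from "hot-shots";

// UDP client for the local Datadog agent; "mcp." prefixes every metric name.
const dogstatsd = new StatsD({ prefix: "mcp." });

export function reportToolCall(toolName: string, durationMs: number, isError: boolean): void {
  const tags = [`tool:${toolName}`];
  dogstatsd.increment("tool.calls", 1, 1, tags);           // call counter
  if (isError) dogstatsd.increment("tool.errors", 1, 1, tags); // error counter
  dogstatsd.timing("tool.duration", durationMs, 1, tags);  // latency distribution
}
```

You could call reportToolCall from the withLogging wrapper next to recordToolCall so Datadog receives the same data as the in-memory collector.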
Can I use OpenTelemetry with MCP servers?
Yes. OpenTelemetry works well for distributed tracing across MCP clients and servers. Instrument tool handlers as spans and export to Jaeger, Zipkin, or your preferred backend.
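A minimal sketch using @opentelemetry/api, assuming an OpenTelemetry SDK with an exporter is already configured at process startup; the tracer name, span naming scheme, and withSpan helper are illustrative assumptions:

```typescript
import { trace, SpanStatusCode } from "@opentelemetry/api";

const tracer = trace.getTracer("mcp-server");

// Wrap a tool handler so each invocation becomes a span with error status.
export function withSpan<T>(
  toolName: string,
  handler: (params: T) => Promise<any>
): (params: T) => Promise<any> {
  return (params: T) =>
    tracer.startActiveSpan(`tool.${toolName}`, async (span) => {
      try {
        const result = await handler(params);
        span.setStatus({ code: SpanStatusCode.OK });
        return result;
      } catch (error) {
        if (error instanceof Error) span.recordException(error);
        span.setStatus({ code: SpanStatusCode.ERROR, message: String(error) });
        throw error;
      } finally {
        span.end();
      }
    });
}
```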