
How to build an MCP server that queries a database

What you'll learn

  • How to connect an MCP server to PostgreSQL with connection pooling
  • How to create safe database query tools with parameterized queries
  • How to expose database schema as MCP resources
  • How to implement read-only safeguards for AI-accessible databases
  • How to handle database errors in MCP tool results
Advanced · 9 min read · 30-45 min to complete · MCP TypeScript SDK 1.x, PostgreSQL 13+, pg (node-postgres) · March 2026 · RapidDev Engineering Team
TL;DR

Build an MCP server that gives AI clients safe access to a PostgreSQL database. Use connection pooling with pg, expose tools for running parameterized queries, and register resources for schema inspection. Always use parameterized queries to prevent SQL injection and restrict to read-only operations unless write access is explicitly needed.

Building an MCP Server with Database Access

Connecting an MCP server to a database lets AI clients query data, inspect schemas, and perform data operations through natural language. This is one of the most powerful MCP patterns — the AI can answer questions about your data without you writing custom queries.

However, database access requires careful security design. AI models can generate unexpected queries, so you need parameterized queries to prevent SQL injection, read-only restrictions to prevent accidental data modification, and connection pooling to prevent resource exhaustion. This tutorial builds a production-ready PostgreSQL MCP server step by step.

Prerequisites

  • PostgreSQL 13+ running locally or remotely
  • MCP TypeScript SDK installed (@modelcontextprotocol/sdk)
  • pg (node-postgres) package installed
  • A database with tables and data to query
  • Basic SQL knowledge

Step-by-step guide

Step 1: Set up connection pooling with pg

Create a PostgreSQL connection pool that manages connections efficiently. Connection pooling is critical for MCP servers because each tool call may execute a query, and creating a new connection per call would quickly exhaust database resources. Configure the pool with reasonable limits and load the connection string from an environment variable.

typescript
import { Pool } from "pg";

const pool = new Pool({
  connectionString: process.env.DATABASE_URL,
  max: 10,                       // max connections in pool
  idleTimeoutMillis: 30000,      // close idle connections after 30s
  connectionTimeoutMillis: 5000, // fail if a connection can't be made in 5s
});

// Test the connection on startup (log to stderr; stdout carries MCP traffic)
pool.query("SELECT 1").then(() => {
  console.error("Database connected");
}).catch((err) => {
  console.error("Database connection failed:", err.message);
  process.exit(1);
});

Expected result: A connection pool ready to handle concurrent queries from MCP tool calls.

Step 2: Create a read-only query tool with parameterized queries

Register a tool that accepts SQL queries from the AI but restricts them to SELECT statements. Validate that the query starts with SELECT before executing. Always use parameterized queries when the tool accepts user-provided values to prevent SQL injection. Return results as formatted JSON.

typescript
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { z } from "zod";

const server = new McpServer({ name: "postgres-server", version: "1.0.0" });

server.registerTool("query", {
  description: "Run a read-only SQL query against the database. Only SELECT statements are allowed.",
  inputSchema: {
    sql: z.string().describe("SQL SELECT query to execute"),
  },
}, async ({ sql }) => {
  const trimmed = sql.trim();

  // Enforce read-only
  if (!/^SELECT\b/i.test(trimmed)) {
    return {
      content: [{ type: "text", text: "Error: Only SELECT queries are allowed." }],
      isError: true,
    };
  }

  // Block multi-statement queries that smuggle in write operations
  if (/;\s*(DROP|DELETE|UPDATE|INSERT|ALTER|CREATE|TRUNCATE)/i.test(trimmed)) {
    return {
      content: [{ type: "text", text: "Error: Multi-statement queries with write operations are not allowed." }],
      isError: true,
    };
  }

  try {
    const result = await pool.query(trimmed);
    return {
      content: [{
        type: "text",
        text: JSON.stringify({
          rowCount: result.rowCount,
          rows: result.rows.slice(0, 100), // limit response size
        }, null, 2),
      }],
    };
  } catch (error) {
    console.error("Query error:", error);
    return {
      content: [{ type: "text", text: `Error: ${(error as Error).message}` }],
      isError: true,
    };
  }
});

Expected result: The AI can run SELECT queries and receive JSON-formatted results, but cannot execute write operations.

Step 3: Expose database schema as a resource

Register an MCP resource that returns your database schema. This gives the AI the context it needs to write correct queries without you listing every table and column in tool descriptions. The schema resource below includes table names, column names, types, nullability, and defaults.

typescript
server.registerResource("database-schema", "db://schema", {
  description: "Complete database schema with tables, columns, and types",
  mimeType: "text/plain",
}, async (uri) => {
  const result = await pool.query(`
    SELECT
      t.table_name,
      c.column_name,
      c.data_type,
      c.is_nullable,
      c.column_default
    FROM information_schema.tables t
    JOIN information_schema.columns c
      ON t.table_name = c.table_name AND t.table_schema = c.table_schema
    WHERE t.table_schema = 'public'
    ORDER BY t.table_name, c.ordinal_position
  `);

  // Group columns by table and format as readable text
  const tables = new Map<string, string[]>();
  for (const row of result.rows) {
    const cols = tables.get(row.table_name) || [];
    cols.push(
      `  ${row.column_name} ${row.data_type}` +
      `${row.is_nullable === "NO" ? " NOT NULL" : ""}` +
      `${row.column_default ? ` DEFAULT ${row.column_default}` : ""}`
    );
    tables.set(row.table_name, cols);
  }

  const schema = Array.from(tables.entries())
    .map(([table, cols]) => `TABLE ${table}:\n${cols.join("\n")}`)
    .join("\n\n");

  return {
    contents: [{ uri: uri.href, text: schema }],
  };
});

Expected result: The AI reads the schema resource to understand your database structure before writing queries.

Step 4: Add a table listing tool for quick discovery

A lightweight tool that lists all tables with approximate row counts helps the AI quickly understand the database without reading the full schema. This is faster for initial exploration and costs less context than the full schema resource.

typescript
server.registerTool("list-tables", {
  description: "List all tables in the database with approximate row counts",
  inputSchema: {},
}, async () => {
  try {
    const result = await pool.query(`
      SELECT
        schemaname,
        relname AS table_name,
        n_live_tup AS approximate_row_count
      FROM pg_stat_user_tables
      ORDER BY n_live_tup DESC
    `);
    return {
      content: [{ type: "text", text: JSON.stringify(result.rows, null, 2) }],
    };
  } catch (error) {
    return {
      content: [{ type: "text", text: `Error: ${(error as Error).message}` }],
      isError: true,
    };
  }
});

Expected result: The AI gets a quick overview of all tables and their sizes before diving into specific queries.

Step 5: Add a describe-table tool for column details

Give the AI a tool to inspect a specific table's columns, types, constraints, and indexes. This is more efficient than loading the entire schema when the AI only needs details about one table. For teams building database MCP servers for production analytics, RapidDev can help design the query layer with proper indexing and access control.

typescript
server.registerTool("describe-table", {
  description: "Get detailed column information for a specific table",
  inputSchema: {
    tableName: z.string().regex(/^[a-zA-Z_][a-zA-Z0-9_]*$/).describe("Table name (alphanumeric and underscores only)"),
  },
}, async ({ tableName }) => {
  try {
    const columns = await pool.query(
      `SELECT column_name, data_type, is_nullable, column_default
       FROM information_schema.columns
       WHERE table_schema = 'public' AND table_name = $1
       ORDER BY ordinal_position`,
      [tableName]
    );

    if (columns.rowCount === 0) {
      return {
        content: [{ type: "text", text: `Error: Table '${tableName}' not found` }],
        isError: true,
      };
    }

    const indexes = await pool.query(
      `SELECT indexname, indexdef FROM pg_indexes
       WHERE schemaname = 'public' AND tablename = $1`,
      [tableName]
    );

    return {
      content: [{
        type: "text",
        text: JSON.stringify({ columns: columns.rows, indexes: indexes.rows }, null, 2),
      }],
    };
  } catch (error) {
    return {
      content: [{ type: "text", text: `Error: ${(error as Error).message}` }],
      isError: true,
    };
  }
});

Expected result: The AI gets column details and indexes for a specific table before constructing targeted queries.

Complete working example

src/index.ts
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { z } from "zod";
import { Pool } from "pg";

const pool = new Pool({
  connectionString: process.env.DATABASE_URL,
  max: 10,
  idleTimeoutMillis: 30000,
});

const server = new McpServer({ name: "postgres-mcp", version: "1.0.0" });

// Resource: full schema
server.registerResource("schema", "db://schema", {
  description: "Database schema",
  mimeType: "text/plain",
}, async (uri) => {
  const r = await pool.query(
    `SELECT table_name, column_name, data_type, is_nullable
     FROM information_schema.columns WHERE table_schema = 'public'
     ORDER BY table_name, ordinal_position`
  );
  const tables = new Map<string, string[]>();
  for (const row of r.rows) {
    const c = tables.get(row.table_name) || [];
    c.push(`  ${row.column_name} ${row.data_type}${row.is_nullable === "NO" ? " NOT NULL" : ""}`);
    tables.set(row.table_name, c);
  }
  const text = [...tables].map(([t, c]) => `TABLE ${t}:\n${c.join("\n")}`).join("\n\n");
  return { contents: [{ uri: uri.href, text }] };
});

// Tool: list tables
server.registerTool("list-tables", {
  description: "List all database tables with approximate row counts",
  inputSchema: {},
}, async () => {
  const r = await pool.query(
    `SELECT relname AS table_name, n_live_tup AS row_count
     FROM pg_stat_user_tables ORDER BY n_live_tup DESC`
  );
  return { content: [{ type: "text", text: JSON.stringify(r.rows, null, 2) }] };
});

// Tool: describe table
server.registerTool("describe-table", {
  description: "Get columns for a table",
  inputSchema: {
    table: z.string().regex(/^[a-zA-Z_]\w*$/).describe("Table name"),
  },
}, async ({ table }) => {
  const cols = await pool.query(
    `SELECT column_name, data_type, is_nullable, column_default
     FROM information_schema.columns
     WHERE table_schema = 'public' AND table_name = $1
     ORDER BY ordinal_position`,
    [table]
  );
  if (!cols.rowCount) {
    return { content: [{ type: "text", text: `Table '${table}' not found` }], isError: true };
  }
  return { content: [{ type: "text", text: JSON.stringify(cols.rows, null, 2) }] };
});

// Tool: read-only query
server.registerTool("query", {
  description: "Run a read-only SELECT query",
  inputSchema: { sql: z.string().describe("SQL SELECT query") },
}, async ({ sql }) => {
  if (!/^SELECT\b/i.test(sql.trim())) {
    return { content: [{ type: "text", text: "Error: Only SELECT allowed" }], isError: true };
  }
  try {
    const r = await pool.query(sql);
    return {
      content: [{
        type: "text",
        text: JSON.stringify({ rowCount: r.rowCount, rows: r.rows.slice(0, 100) }, null, 2),
      }],
    };
  } catch (e) {
    return { content: [{ type: "text", text: `Error: ${(e as Error).message}` }], isError: true };
  }
});

const transport = new StdioServerTransport();
await server.connect(transport);
console.error("PostgreSQL MCP server running");

Common mistakes when building an MCP server that queries a database

Mistake: Not using connection pooling

How to avoid: Create a single Pool instance and reuse it across all tool calls. Opening a new connection per query quickly exhausts database resources.

Mistake: Allowing arbitrary SQL without validation

How to avoid: Validate that queries start with SELECT. Block multi-statement queries containing write operations. Consider using a read-only database user.

Mistake: Not limiting result size

How to avoid: Slice results to a reasonable maximum (e.g., 100 rows). Large result sets overflow the AI's context window and waste tokens.
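A small helper makes the truncation explicit to the model instead of silently dropping rows. This is a sketch; `truncateRows` is a name invented here, not part of the pg or MCP APIs:

```typescript
// Cap result rows and tell the model when truncation happened,
// so it knows the data is partial and can refine its query.
const MAX_ROWS = 100;

function truncateRows<T>(rows: T[], max: number = MAX_ROWS) {
  const truncated = rows.length > max;
  return {
    rows: rows.slice(0, max),
    truncated,
    note: truncated ? `Showing first ${max} of ${rows.length} rows` : undefined,
  };
}
```

Inside the `query` tool, wrap `result.rows` with this helper before `JSON.stringify` instead of calling `slice` directly.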

Mistake: Hardcoding the database connection string

How to avoid: Load DATABASE_URL from an environment variable. Never commit connection strings to source control.

Mistake: Using string concatenation for query parameters

How to avoid: Always use parameterized queries ($1, $2) with the values array. String concatenation enables SQL injection.
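The difference is easiest to see side by side. In this sketch, `PoolLike` is a stand-in type for pg's Pool so the example stays self-contained; the table and column names are illustrative:

```typescript
// UNSAFE (illustration only): concatenation lets a crafted value rewrite the query.
const userInput = "x'; DROP TABLE users; --";
const unsafeSql = `SELECT * FROM users WHERE name = '${userInput}'`;
// unsafeSql now contains a second, destructive statement after the closing quote.

// SAFE: placeholders keep values out of the SQL text; the driver
// sends query and values separately, so the input stays data.
type PoolLike = { query: (sql: string, values?: unknown[]) => Promise<unknown> };

async function findUser(pool: PoolLike, name: string) {
  return pool.query("SELECT * FROM users WHERE name = $1", [name]);
}
```

With `findUser`, even the injection payload above is treated as a literal name to match, never as SQL.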

Best practices

  • Use a dedicated read-only database user for the MCP server connection
  • Always use parameterized queries ($1, $2) — never concatenate values into SQL strings
  • Expose the database schema as a resource so the AI can write correct queries
  • Limit query results to 100 rows to prevent context window overflow
  • Set query timeouts (statement_timeout) to prevent runaway queries
  • Log all queries to stderr for audit and debugging purposes
  • Use connection pooling with reasonable limits (10-20 connections)
  • Validate table names with regex when used in query identifiers

Still stuck?

Copy one of these prompts to get a personalized, step-by-step explanation.

ChatGPT Prompt

I'm building an MCP server that connects to PostgreSQL. Show me how to create a read-only query tool with connection pooling, parameterized queries, SQL injection prevention, and result size limiting using the MCP TypeScript SDK and pg.

MCP Prompt

Add a database query tool to my MCP server. Use the pg Pool for connection management, validate that only SELECT queries are allowed, limit results to 100 rows, and return errors with isError: true. Also add a schema resource.

Frequently asked questions

Should I use a read-only database user for the MCP server?

Yes. Create a PostgreSQL user with SELECT-only grants and use that for the MCP connection. This provides defense in depth — even if SQL validation is bypassed, the database itself blocks writes.
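The grants for such a user can be sketched as a one-time bootstrap script. The role name `mcp_readonly`, the password placeholder, and the database name `mydb` below are all illustrative:

```typescript
// One-time setup statements for a SELECT-only role.
// Run each with an admin connection (e.g. psql, or an admin pg Pool).
const READONLY_SETUP = [
  `CREATE ROLE mcp_readonly LOGIN PASSWORD 'change-me'`,
  `GRANT CONNECT ON DATABASE mydb TO mcp_readonly`,
  `GRANT USAGE ON SCHEMA public TO mcp_readonly`,
  `GRANT SELECT ON ALL TABLES IN SCHEMA public TO mcp_readonly`,
  // Cover tables created after this script runs as well
  `ALTER DEFAULT PRIVILEGES IN SCHEMA public GRANT SELECT ON TABLES TO mcp_readonly`,
];
```

Point the MCP server's DATABASE_URL at this role so the database itself rejects writes, regardless of what the SQL validation layer allows through.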

How do I prevent the AI from running expensive queries?

Set statement_timeout on the database connection (e.g., SET statement_timeout = '5000' for 5 seconds). Also limit result rows and block queries without WHERE clauses on large tables.
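With node-postgres, the timeout can also be set once in the pool config instead of per session: `statement_timeout` and `query_timeout` are documented pg client config options that the Pool passes through to each connection. The values here are examples:

```typescript
import { Pool } from "pg";

// Cap every query at 5 seconds. pg applies statement_timeout on the
// server side for each connection; query_timeout is a client-side
// safety net set slightly longer.
const pool = new Pool({
  connectionString: process.env.DATABASE_URL,
  max: 10,
  statement_timeout: 5000, // server cancels statements after 5s
  query_timeout: 6000,     // client gives up shortly after
});
```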

Can I expose multiple databases from one MCP server?

Yes. Create multiple Pool instances and use tool names or parameters to route queries to the correct database. For example: query-analytics, query-users for different databases.
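One way to wire that routing, sketched with a `PoolLike` stand-in type so the example is independent of a live pg install (the database keys are illustrative):

```typescript
// Route tool calls to the right pool by a `database` parameter.
type PoolLike = { query: (sql: string, values?: unknown[]) => Promise<unknown> };

function makeRouter(pools: Record<string, PoolLike>) {
  return (database: string): PoolLike => {
    const pool = pools[database];
    if (!pool) {
      throw new Error(`Unknown database '${database}'. Valid: ${Object.keys(pools).join(", ")}`);
    }
    return pool;
  };
}
```

In practice each entry would be a real `new Pool({ connectionString: ... })` with its own connection string, and a single `query` tool could take a `database` parameter constrained with `z.enum(["analytics", "users"])` rather than separate tools per database.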

How do I handle database migrations with an MCP server?

Run migrations through your normal workflow (Prisma, Drizzle, raw SQL). After schema changes, the AI will see the updated schema next time it reads the schema resource. No MCP server restart needed.

Is it safe to let the AI run arbitrary SELECT queries?

With proper safeguards (read-only user, statement timeout, result limits), SELECT queries are safe. The main risk is performance — a complex JOIN on large tables can be slow. Use statement_timeout to prevent this.

Can I use this pattern with MySQL or SQLite?

Yes. Replace the pg Pool with mysql2 or better-sqlite3. The MCP tool pattern is the same — validate SQL, execute with parameterized queries, return JSON results. For multi-database MCP architectures, the RapidDev team can help design the right abstraction layer.
