Prompts in MCP servers are reusable message templates with optional arguments that AI clients can invoke to start structured interactions. Register prompts with server.registerPrompt() in TypeScript or @mcp.prompt() in Python, define argument schemas, and return pre-formatted message arrays that guide the AI's behavior.
Defining Prompts in Your MCP Server
Prompts are the third core primitive in MCP, alongside tools and resources. While tools let the AI perform actions and resources provide data, prompts are server-defined message templates that guide how the AI approaches specific tasks. They are user-controlled — the AI client presents available prompts to the user, who selects one and fills in arguments.
Prompts are ideal for code review workflows, debugging templates, report generators, and any interaction pattern you want to standardize. This tutorial covers prompt registration in both TypeScript and Python, including argument schemas, multi-message sequences, and embedded resource references.
Prerequisites
- MCP TypeScript SDK or Python SDK installed and configured
- A running MCP server with transport set up
- Familiarity with MCP tools and resources (prompts build on these concepts)
- Basic understanding of Zod schema validation (for TypeScript)
Step-by-step guide
Register a basic prompt with arguments
Use server.registerPrompt() to define a prompt template. The first argument is the prompt name, the second is a config object with a description and argsSchema using Zod, and the third is a handler that receives the argument values and returns a messages array. Each message has a role (user or assistant) and content.
```typescript
// TypeScript
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { z } from "zod";

const server = new McpServer({ name: "prompt-server", version: "1.0.0" });

server.registerPrompt("review-code", {
  description: "Generate a code review for the given code snippet",
  argsSchema: {
    code: z.string().describe("The code to review"),
    language: z.string().default("typescript").describe("Programming language"),
  },
}, ({ code, language }) => ({
  messages: [{
    role: "user",
    content: {
      type: "text",
      text: `Review this ${language} code for bugs, security issues, and style improvements:\n\n\`\`\`${language}\n${code}\n\`\`\`\n\nProvide specific line-by-line feedback.`,
    },
  }],
}));
```

Expected result: The prompt appears in the prompts/list response and can be invoked by MCP clients.
Create multi-message prompt sequences
Prompts can return multiple messages to set up a conversation flow. Since MCP prompt messages only support the user and assistant roles, you can approximate a system prompt with an initial assistant message that establishes the approach, followed by the user's actual request. This is useful for establishing context before the conversation begins.
```typescript
// TypeScript
server.registerPrompt("debug-error", {
  description: "Help debug an error with structured analysis",
  argsSchema: {
    error: z.string().describe("The error message or stack trace"),
    context: z.string().optional().describe("Additional context about when the error occurs"),
  },
}, ({ error, context }) => ({
  messages: [
    {
      role: "assistant",
      content: {
        type: "text",
        text: "I debug systematically: first the root cause, then concrete fixes, then prevention.",
      },
    },
    {
      role: "user",
      content: {
        type: "text",
        text: `I need help debugging this error:\n\n${error}${context ? `\n\nContext: ${context}` : ""}\n\nPlease analyze:\n1. What is the root cause?\n2. What are the likely fixes?\n3. How can I prevent this in the future?`,
      },
    },
  ],
}));
```

Expected result: The client receives a structured conversation starter that guides the AI to provide systematic debug analysis.
Embed resource references in prompts
Prompts can include embedded resource content, combining the prompt primitive with the resource primitive. Use type: 'resource' in the message content to reference a resource URI. The client resolves the resource and includes its content in the conversation, giving the AI additional context alongside the prompt template.
```typescript
// TypeScript
server.registerPrompt("analyze-with-schema", {
  description: "Analyze code against the project schema",
  argsSchema: {
    code: z.string().describe("Code to analyze"),
  },
}, ({ code }) => ({
  messages: [
    {
      role: "user",
      content: {
        type: "resource",
        resource: {
          uri: "db://schema",
          text: "(Schema will be loaded from resource)",
          mimeType: "text/plain",
        },
      },
    },
    {
      role: "user",
      content: {
        type: "text",
        text: `Given the database schema above, review this code for correctness:\n\n${code}`,
      },
    },
  ],
}));
```

Expected result: The AI receives both the database schema and the code to analyze in a single structured conversation.
Define prompts in Python with FastMCP
In Python, use the @mcp.prompt() decorator. The function's docstring becomes the prompt description. Arguments are defined as function parameters with type hints. Return a string for a simple single-message prompt, or return a list of message dictionaries for multi-message sequences.
````python
# Python
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("prompt-server")

@mcp.prompt()
async def review_code(code: str, language: str = "python") -> str:
    """Generate a code review for the given code snippet."""
    return f"""Review this {language} code for bugs, security issues, and style:

```{language}
{code}
```

Provide specific, actionable feedback."""

@mcp.prompt()
async def generate_tests(code: str, framework: str = "pytest") -> str:
    """Generate unit tests for the given code."""
    return f"""Write comprehensive {framework} tests for this code:

```python
{code}
```

Cover edge cases, error conditions, and happy paths."""
````

Expected result: Both prompts are registered and available to clients connecting to the Python MCP server.
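For a multi-message sequence in Python, a handler can return a list of messages instead of a string. The sketch below is shown undecorated so the message-building logic runs standalone; with FastMCP you would add @mcp.prompt(), and it assumes the SDK accepts message dicts shaped like MCP's PromptMessage (role plus content). The prompt name and wording are illustrative.

```python
# Sketch: a multi-message prompt handler. With FastMCP you would decorate this
# with @mcp.prompt(); shown undecorated here so the logic is self-contained.
# Dict shape mirrors MCP's PromptMessage structure (role + content).
def explain_code(code: str) -> list[dict]:
    """Set up a short two-turn exchange before the real request."""
    return [
        {
            "role": "user",
            "content": {"type": "text", "text": "Act as a patient code explainer. Keep answers concise."},
        },
        {
            "role": "assistant",
            "content": {"type": "text", "text": "Understood. Share the code and I'll walk through it step by step."},
        },
        {
            "role": "user",
            "content": {"type": "text", "text": f"Explain what this code does:\n\n{code}"},
        },
    ]
```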
Test prompts with the MCP Inspector
Use the MCP Inspector to verify your prompts are registered correctly and return the expected messages. Start your server, connect the Inspector, navigate to the Prompts tab, and invoke each prompt with test arguments. Verify the returned messages contain the correct structure and interpolated values. For teams building production MCP servers, RapidDev can help establish testing pipelines that cover all prompt variations.
```shell
# Start your server for testing
npx tsc && node dist/index.js

# In another terminal, use the Inspector
npx @modelcontextprotocol/inspector
```

Expected result: The Inspector shows all registered prompts, lets you fill in arguments, and displays the generated message array.
Complete working example
```typescript
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { z } from "zod";

const server = new McpServer({
  name: "prompts-demo-server",
  version: "1.0.0",
});

// Code review prompt
server.registerPrompt("review-code", {
  description: "Generate a structured code review",
  argsSchema: {
    code: z.string().describe("The code to review"),
    language: z.string().default("typescript").describe("Programming language"),
    focus: z.enum(["bugs", "security", "performance", "all"]).default("all")
      .describe("Review focus area"),
  },
}, ({ code, language, focus }) => ({
  messages: [{
    role: "user",
    content: {
      type: "text",
      text: `Review this ${language} code with focus on ${focus}:\n\n\`\`\`${language}\n${code}\n\`\`\`\n\nFor each issue found, provide:\n- Line number\n- Severity (critical/warning/info)\n- Description\n- Suggested fix`,
    },
  }],
}));

// Error debugging prompt
server.registerPrompt("debug-error", {
  description: "Structured error debugging assistant",
  argsSchema: {
    error: z.string().describe("Error message or stack trace"),
    context: z.string().optional().describe("When/where the error occurs"),
  },
}, ({ error, context }) => ({
  messages: [{
    role: "user",
    content: {
      type: "text",
      text: `Debug this error:\n\n${error}${context ? `\n\nContext: ${context}` : ""}\n\nAnalyze:\n1. Root cause\n2. Step-by-step fix\n3. Prevention strategy`,
    },
  }],
}));

// Test generation prompt
server.registerPrompt("generate-tests", {
  description: "Generate unit tests for code",
  argsSchema: {
    code: z.string().describe("Code to test"),
    framework: z.enum(["jest", "vitest", "mocha"]).default("jest")
      .describe("Test framework"),
  },
}, ({ code, framework }) => ({
  messages: [{
    role: "user",
    content: {
      type: "text",
      text: `Write comprehensive ${framework} unit tests for:\n\n\`\`\`typescript\n${code}\n\`\`\`\n\nInclude:\n- Happy path tests\n- Edge cases\n- Error handling tests\n- Mock setup for external dependencies`,
    },
  }],
}));

const transport = new StdioServerTransport();
await server.connect(transport);
console.error("Prompts demo server running on stdio");
```

Common mistakes when defining prompt templates in an MCP server
Mistake: Confusing prompts with tools
How to avoid: Prompts are message templates invoked by users through the client UI; tools are functions invoked by the AI model. Use prompts for structured interaction patterns and tools for actions.
Mistake: Referencing non-existent resource URIs in prompts
How to avoid: If your prompt embeds a resource reference, make sure that resource is registered on the same server. The client resolves resource URIs before sending the conversation to the AI.
Mistake: Making prompts too generic
How to avoid: Specific prompts with clear structure produce better AI responses. Instead of a single "help with code" prompt, create separate "review-code", "debug-error", and "generate-tests" prompts.
Mistake: Forgetting to describe prompt arguments
How to avoid: Add .describe() to every Zod field. Clients display these descriptions to the users who fill in the arguments.
Best practices
- Name prompts as action verbs: review-code, debug-error, generate-tests, explain-architecture
- Write clear descriptions for both the prompt and each argument — users see these in the client UI
- Use structured output instructions in your prompt text (numbered lists, specific sections to fill)
- Embed resource references when the prompt needs project context like schemas or configuration
- Keep prompts focused on one task — create multiple prompts instead of one mega-prompt
- Test all prompts with the MCP Inspector using various argument combinations
- Use default values for optional arguments to reduce friction for users
- Version your prompts by updating the prompt text when you improve the template
Still stuck?
Copy one of these prompts to get a personalized, step-by-step explanation.
I'm building an MCP server in TypeScript. Show me how to register prompts with server.registerPrompt() that take arguments via Zod schemas and return structured message arrays. Include a multi-message example with embedded resource references.
Add an MCP prompt called [prompt-name] to my server. It should accept [describe arguments] and return a message that instructs the AI to [describe task]. Use server.registerPrompt() with a Zod argsSchema.
Frequently asked questions
What is the difference between prompts and tools in MCP?
Prompts are user-initiated message templates — the user picks a prompt from a menu and fills in arguments. Tools are AI-initiated function calls — the AI decides to call a tool based on the conversation. Prompts guide the conversation structure; tools perform actions.
Can prompts call tools or read resources?
Prompts themselves do not call tools or read resources. However, prompts can embed resource references (type: 'resource') that the client resolves before sending. The AI response to a prompt may then decide to use tools based on the conversation.
How does the user select which prompt to use?
The MCP client (like Claude Desktop) calls prompts/list to get available prompts and displays them to the user, often as a menu or slash-command list. The user selects a prompt, fills in required arguments, and the client calls prompts/get to retrieve the messages.
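Under the hood, this exchange is a pair of JSON-RPC messages. A sketch, reusing the review-code prompt from earlier (method names come from the MCP specification; the id is arbitrary and the response text is truncated for brevity):

```jsonc
// Client request: render the prompt with the user's arguments
{
  "jsonrpc": "2.0",
  "id": 7,
  "method": "prompts/get",
  "params": {
    "name": "review-code",
    "arguments": { "code": "def add(a, b): return a + b", "language": "python" }
  }
}

// Server response: the generated message array
{
  "jsonrpc": "2.0",
  "id": 7,
  "result": {
    "description": "Generate a code review for the given code snippet",
    "messages": [
      {
        "role": "user",
        "content": { "type": "text", "text": "Review this python code for bugs, security issues, and style improvements: ..." }
      }
    ]
  }
}
```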
Can I update prompts without restarting the server?
You can register new prompts at runtime and send a notifications/prompts/list_changed notification. However, existing prompt handlers cannot be modified without re-registering them with the same name.
Should I use prompts or just tell users to type specific messages?
Prompts are better because they provide a structured interface with validated arguments, consistent formatting, and discoverability through the client UI. They reduce errors compared to asking users to remember exact message formats.
Can prompts return assistant messages?
Yes. The messages array can include both user and assistant role messages. This is useful for few-shot prompting where you provide example assistant responses to guide the AI's output format. For complex prompt engineering strategies, the RapidDev team can help design optimal prompt structures.
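A minimal sketch of that few-shot pattern, shown undecorated so the logic is self-contained (with FastMCP you would register it via @mcp.prompt(); the function name, example diff, and wording are all illustrative):

```python
# Sketch: a few-shot prompt handler that pairs an example input with an example
# assistant reply, pinning down the output format before the real request.
def summarize_diff(diff: str) -> list[dict]:
    """One worked example, then the user's actual diff."""
    example_diff = "- const x = 1\n+ const x = 2"
    example_summary = "- Changed x from 1 to 2"
    return [
        {"role": "user", "content": {"type": "text", "text": f"Summarize this diff as bullet points:\n\n{example_diff}"}},
        {"role": "assistant", "content": {"type": "text", "text": example_summary}},
        {"role": "user", "content": {"type": "text", "text": f"Summarize this diff as bullet points:\n\n{diff}"}},
    ]
```

The example assistant turn shows the AI exactly what shape of answer is expected, which is often more reliable than describing the format in prose.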
Talk to an Expert
Our team has built 600+ apps. Get personalized help with your project.
Book a free consultation