The 'Missing required parameter' error from the OpenAI node in n8n means the request body is incomplete — a required field like 'model', 'messages', or 'input' is missing or null. Fix it by ensuring all required parameters are set in the node configuration, verifying that dynamic expressions resolve to non-empty values, and adding a Code node that validates the payload before sending.
Why OpenAI Returns 'Missing Required Parameter' in n8n
OpenAI's API requires specific parameters for each endpoint. The chat completions endpoint requires 'model' and 'messages'. The embeddings endpoint requires 'model' and 'input'. When any of these are missing, empty, or null, the API returns a 400 Bad Request with a message like 'Missing required parameter: messages'. In n8n, this commonly happens when an expression like {{ $json.user_input }} resolves to undefined because the upstream node did not produce that field, when the model name is left blank, or when the node configuration has been partially filled.
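To make the requirement concrete, here is a minimal sketch in plain JavaScript (outside n8n) of the check the API effectively performs. The endpoint paths and field names follow OpenAI's API reference; `missingParams` itself is a hypothetical helper, not part of any library:

```javascript
// Required fields per OpenAI endpoint (chat completions and embeddings).
const REQUIRED = {
  '/v1/chat/completions': ['model', 'messages'],
  '/v1/embeddings': ['model', 'input'],
};

// Returns the parameters that are missing, null, or empty —
// the same conditions that trigger a 400 'Missing required parameter'.
function missingParams(endpoint, body) {
  return (REQUIRED[endpoint] || []).filter((key) => {
    const value = body[key];
    return value === undefined || value === null || value === ''
      || (Array.isArray(value) && value.length === 0);
  });
}

// A payload where {{ $json.user_input }} resolved to undefined:
const badBody = { model: 'gpt-4o-mini', messages: undefined };
console.log(missingParams('/v1/chat/completions', badBody)); // [ 'messages' ]
```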
Prerequisites
- A running n8n instance with OpenAI credentials configured
- A workflow using the OpenAI node that produces 'missing required parameter' errors
- Basic understanding of n8n expressions ({{ }} syntax)
Step-by-step guide
Identify the Missing Parameter from the Error Message
The error message from OpenAI always names the missing parameter. Enable 'Continue On Fail' on the OpenAI node and run the workflow. Check the output for the exact error. Common messages include: 'Missing required parameter: messages', 'Missing required parameter: model', 'Missing required parameter: input', or 'you must provide a model parameter'. Note the exact field name — this tells you what to fix. If the error says 'is not valid under any of the given schemas', it means a parameter is present but has the wrong type or format.
Expected result: You have identified the exact parameter that is missing or malformed.
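The parameter name also appears in the API's error envelope, which is what the failed item carries when 'Continue On Fail' is enabled. As a sketch (the envelope shape follows OpenAI's documented error format; `missingParamName` is a hypothetical helper):

```javascript
// OpenAI's 400 responses use a standard error envelope like this:
const errorResponse = {
  error: {
    message: "Missing required parameter: 'messages'.",
    type: 'invalid_request_error',
    param: 'messages',
  },
};

// Hypothetical helper: extract the field name you need to fix,
// from the message text or the 'param' field as a fallback.
function missingParamName(response) {
  const match = /Missing required parameter: '([^']+)'/.exec(response.error?.message || '');
  return match ? match[1] : response.error?.param || null;
}

console.log(missingParamName(errorResponse)); // messages
```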
Verify Static Configuration Fields in the OpenAI Node
Open the OpenAI node and check every required field. For the 'Message a Model' operation, ensure: (1) the Model field has a valid model selected (gpt-4o, gpt-4o-mini, etc.), (2) the Messages section has at least one message with a Role (user or system) and Content filled in. For the 'Classify Text for Moderation' operation, ensure the Input field is not empty. For the 'Generate an Image' operation, ensure the Prompt field is filled. If any field uses an expression, click into it to verify it resolves correctly.
Expected result: All required static fields in the OpenAI node configuration are filled in correctly.
Debug Expressions That Resolve to Empty Values
The most common cause of missing parameters is expressions like {{ $json.user_message }} that resolve to undefined. This happens when the upstream node does not produce a field with that exact name. To debug: (1) Open the node with the expression. (2) In the expression editor, check the 'Result' preview — if it shows 'undefined' or is empty, the expression is the problem. (3) Check the upstream node's output by running it first and examining the Output panel. (4) Ensure the field name in the expression exactly matches the upstream output, including case sensitivity.
```javascript
// Common expression issues and fixes:

// WRONG: Field name mismatch
// Expression: {{ $json.userMessage }}
// Actual field: $json.user_message (underscore, not camelCase)

// WRONG: Nested field without correct path
// Expression: {{ $json.message }}
// Actual structure: $json.body.message
// Fix: {{ $json.body.message }}

// WRONG: Referencing wrong node
// Expression: {{ $json.text }}
// This references the PREVIOUS node. To reference a specific node:
// {{ $('Webhook').item.json.body.text }}

// SAFE: With fallback value
// {{ $json.user_message || 'No message provided' }}
```
Expected result: All expressions in the OpenAI node resolve to non-empty values.
Add a Pre-Flight Validation Code Node
Add a Code node before the OpenAI node to validate that all required data exists. This catches missing fields before they reach the API, giving you clearer error messages and the ability to provide fallback values. Set the Code node to 'Run Once for Each Item'. Check each field your OpenAI node needs and either provide defaults or throw descriptive errors.
```javascript
const item = $input.item;
const json = item.json;

// Validate required fields for chat completion
const userMessage = json.user_message || json.text || json.body?.message || json.query;
const systemPrompt = json.system_prompt || 'You are a helpful assistant.';
const model = json.model || 'gpt-4o-mini';

if (!userMessage || userMessage.trim() === '') {
  throw new Error(
    'Missing user message. Expected field: user_message, text, body.message, or query. ' +
    'Available fields: ' + Object.keys(json).join(', ')
  );
}

// Pass validated data to the OpenAI node
return [{
  json: {
    validated_message: userMessage.trim(),
    system_prompt: systemPrompt,
    model: model
  }
}];
```
Expected result: The Code node catches missing data before it reaches the OpenAI node, with clear error messages showing which field is missing.
Handle the Messages Array Format Correctly
If you are using the HTTP Request node to call OpenAI directly (instead of the built-in OpenAI node), you must construct the messages array correctly. Each message must have a 'role' (system, user, or assistant) and 'content' (non-empty string). A common mistake is passing the messages as a string instead of an array, or omitting the content field. Use a Code node to build the payload.
```javascript
// Code node: Build OpenAI API payload
const userMessage = $json.validated_message;
const systemPrompt = $json.system_prompt || 'You are a helpful assistant.';

const payload = {
  model: 'gpt-4o',
  messages: [
    {
      role: 'system',
      content: systemPrompt
    },
    {
      role: 'user',
      content: userMessage
    }
  ],
  max_tokens: 2048,
  temperature: 0.7
};

// Validate the payload
if (!payload.messages.every(m => m.role && m.content)) {
  throw new Error('Invalid messages array: every message must have role and content');
}

return [{ json: payload }];
```
Expected result: The HTTP Request node receives a correctly formatted payload with all required parameters.
Complete working example
```javascript
// Code node: Run Once for Each Item
// Place BEFORE the OpenAI node or HTTP Request node
// Validates and constructs the complete API payload

const item = $input.item;
const json = item.json;

// Extract user message from various possible field names
const userMessage = json.user_message
  || json.text
  || json.message
  || json.query
  || json.prompt
  || json.body?.message
  || json.body?.text
  || '';

if (!userMessage || userMessage.trim() === '') {
  throw new Error(
    'No user message found. Checked fields: user_message, text, message, query, prompt, body.message, body.text. ' +
    'Received keys: ' + Object.keys(json).join(', ')
  );
}

// Build system prompt
const systemPrompt = json.system_prompt
  || json.systemPrompt
  || json.instructions
  || 'You are a helpful assistant.';

// Determine model
const model = json.model || 'gpt-4o-mini';
const validModels = ['gpt-4o', 'gpt-4o-mini', 'gpt-4-turbo', 'gpt-3.5-turbo'];
if (!validModels.includes(model) && !model.startsWith('ft:')) {
  throw new Error('Invalid model: ' + model + '. Valid options: ' + validModels.join(', '));
}

// Build conversation history if provided
const messages = [];
messages.push({ role: 'system', content: systemPrompt });

if (json.conversation_history && Array.isArray(json.conversation_history)) {
  for (const msg of json.conversation_history) {
    if (msg.role && msg.content) {
      messages.push({ role: msg.role, content: msg.content });
    }
  }
}

messages.push({ role: 'user', content: userMessage.trim() });

// Return validated payload
return [{
  json: {
    validated_message: userMessage.trim(),
    system_prompt: systemPrompt,
    model: model,
    messages: messages,
    max_tokens: json.max_tokens || 2048
  }
}];
```
Common mistakes when fixing 'Missing required parameter' errors from the OpenAI node in n8n
Mistake: Using $json.field when the data is nested inside body or data objects from a webhook.
How to avoid: Webhook payloads typically arrive as $json.body.field or $json.body.data.field. Check the Webhook node output to see the exact structure and adjust your expression path.
Mistake: Leaving the Model field empty in the OpenAI node when a dynamic expression fails to resolve.
How to avoid: Set a static model as the default and only use expressions if you genuinely need to switch models dynamically. If using an expression, add a fallback: {{ $json.model || 'gpt-4o-mini' }}.
Mistake: Passing messages as a JSON string instead of an array when using the HTTP Request node.
How to avoid: Ensure the messages field in your HTTP Request body is an actual JSON array, not a stringified version. Use a Code node to construct the payload as a JavaScript object.
Mistake: Not accounting for items with different field structures when processing multiple items in a batch.
How to avoid: Different items from a SplitInBatches node or another source may have different field names. Validate each item individually in a Code node set to 'Run Once for Each Item'.
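The stringified-array mistake above can also be guarded against programmatically. A minimal sketch in plain JavaScript (`normalizeMessages` is a hypothetical helper, not an n8n or OpenAI API):

```javascript
// Accepts messages as either a real array or an accidentally
// stringified one, and rejects everything else.
function normalizeMessages(messages) {
  if (typeof messages === 'string') {
    try {
      messages = JSON.parse(messages);
    } catch {
      throw new Error('messages is a string and not valid JSON: ' + messages);
    }
  }
  if (!Array.isArray(messages)) {
    throw new Error('messages must be an array, got ' + typeof messages);
  }
  return messages;
}

// A stringified array — a common result of building JSON by hand:
const raw = '[{"role":"user","content":"Hello"}]';
const fixed = normalizeMessages(raw);
console.log(fixed[0].role); // user
```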
Best practices
- Always test expressions in the expression editor before saving — check the Result preview for undefined values
- Use fallback values in expressions: {{ $json.field || 'default' }} to prevent empty parameters
- Add a validation Code node before LLM nodes to catch missing data with clear error messages
- Reference specific upstream nodes by name: {{ $('Webhook').item.json.body.text }} instead of {{ $json.text }}
- Use the 'Execute step' button on individual nodes during development to see exact inputs and outputs
- Keep the OpenAI node model field set to a static value rather than a dynamic expression unless you genuinely need model switching
- When using HTTP Request to call OpenAI directly, always validate the messages array structure in a Code node first
Still stuck?
Copy one of these prompts to get a personalized, step-by-step explanation.
I'm getting a 'Missing required parameter' error from the OpenAI node in n8n. The error says 'messages' or 'model' is missing even though I have them configured. How do I debug expressions that resolve to empty values and add validation before the OpenAI node?
Fix my n8n workflow: the OpenAI node returns 'Missing required parameter: messages'. I'm using an expression {{ $json.user_message }} that might be undefined. Add a Code node before the OpenAI node that validates all required fields exist and provides clear error messages if they don't.
Frequently asked questions
Which parameters are required for the OpenAI chat completions endpoint?
The required parameters are 'model' (e.g., 'gpt-4o') and 'messages' (an array of message objects, each with a 'role' and 'content' field). All other parameters like temperature, max_tokens, and top_p are optional.
Why does my expression show a value in the editor but fail at runtime?
The expression editor shows results based on the most recent execution data. If upstream nodes have not been executed yet or if the input data changes between runs, the expression may resolve differently. Always test by running the full workflow, not just previewing expressions.
Can I pass an empty system prompt to OpenAI?
Technically yes — the system message is optional. However, if your n8n node is configured to include a system message, the content field must be a non-empty string. Either provide content or remove the system message entirely.
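One way to apply this rule in a Code node is to drop empty messages before sending, so an unconfigured system prompt never reaches the API. A sketch (plain JavaScript; checks assume plain-text `content` fields, and `pruneEmptyMessages` is a hypothetical helper):

```javascript
// Remove messages whose content is empty or whitespace-only,
// so an unconfigured system prompt is dropped rather than sent.
function pruneEmptyMessages(messages) {
  return messages.filter((m) => typeof m.content === 'string' && m.content.trim() !== '');
}

const messages = [
  { role: 'system', content: '' },             // would trigger a 400
  { role: 'user', content: 'Summarize this' },
];
console.log(pruneEmptyMessages(messages).length); // 1
```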
What does 'is not valid under any of the given schemas' mean?
This OpenAI error means a parameter is present but has the wrong type or format. For example, passing a number where a string is expected, or sending a messages array where items are missing the required 'role' field. Check that all values match the expected types.
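A quick pre-flight type check catches these mismatches before the request is sent. A sketch in plain JavaScript (the checks mirror the chat completions schema for plain-text messages; `schemaProblems` is a hypothetical helper):

```javascript
// Reject payloads that would fail OpenAI's schema validation:
// wrong types are as fatal as missing parameters.
function schemaProblems(payload) {
  const problems = [];
  if (typeof payload.model !== 'string') problems.push('model must be a string');
  if (!Array.isArray(payload.messages)) {
    problems.push('messages must be an array');
  } else {
    payload.messages.forEach((m, i) => {
      if (typeof m.role !== 'string') problems.push(`messages[${i}].role missing or not a string`);
      if (typeof m.content !== 'string') problems.push(`messages[${i}].content missing or not a string`);
    });
  }
  return problems;
}

// A message object missing 'role' — exactly the case that triggers
// 'is not valid under any of the given schemas':
console.log(schemaProblems({ model: 'gpt-4o', messages: [{ content: 'hi' }] }));
```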
How do I pass conversation history as the messages array in n8n?
Use a Code node to construct the messages array. Start with a system message, then add historical messages from your database or memory node, and finally add the current user message. Pass the complete array to the OpenAI node or HTTP Request node.
Can RapidDev help build reliable OpenAI integrations in n8n?
Yes. RapidDev's team can architect n8n workflows with proper input validation, error handling, and payload construction for OpenAI and other LLM APIs, ensuring your AI features work reliably in production.