How to Fix Missing Required Parameter Errors from the OpenAI Node in n8n

What you'll learn

  • Which parameters are required for each OpenAI API endpoint used by n8n
  • How to debug expressions that resolve to empty values
  • How to add pre-flight validation before sending requests to OpenAI
  • How to handle optional parameters that OpenAI may require in specific scenarios
Difficulty: Advanced · 9 min read · 15-20 minutes to complete · Requires: n8n 1.20+ with the OpenAI node or an HTTP Request node calling the OpenAI API · March 2026 · RapidDev Engineering Team
TL;DR

The 'Missing required parameter' error from the OpenAI node in n8n means the request body is incomplete — a required field like 'model', 'messages', or 'input' is missing or null. Fix this by ensuring all required parameters are filled in the node configuration, validating dynamic expressions resolve to non-empty values, and adding a Code node to verify the payload before sending.

Why OpenAI Returns 'Missing Required Parameter' in n8n

OpenAI's API requires specific parameters for each endpoint. The chat completions endpoint requires 'model' and 'messages'. The embeddings endpoint requires 'model' and 'input'. When any of these are missing, empty, or null, the API returns a 400 Bad Request with a message like 'Missing required parameter: messages'. In n8n, this commonly happens when an expression like {{ $json.user_input }} resolves to undefined because the upstream node did not produce that field, when the model name is left blank, or when the node configuration has been partially filled.
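The per-endpoint requirements above can be sketched as a small lookup. This is a minimal illustration, not the n8n node's internal logic: `missingParams` is a hypothetical helper that reports which required fields are missing, null, or empty, which is exactly the situation that triggers the 400 error.

```javascript
// Required fields OpenAI checks first for the two endpoints discussed here
// (illustrative subset, not an exhaustive schema).
const REQUIRED_FIELDS = {
  '/v1/chat/completions': ['model', 'messages'],
  '/v1/embeddings': ['model', 'input'],
};

// Returns the names of required fields that are missing, null, or empty.
function missingParams(endpoint, payload) {
  return (REQUIRED_FIELDS[endpoint] || []).filter((field) => {
    const value = payload[field];
    return value === undefined || value === null || value === '' ||
      (Array.isArray(value) && value.length === 0);
  });
}

// A payload whose expression resolved to undefined reproduces the error:
console.log(missingParams('/v1/chat/completions', { model: 'gpt-4o' }));
// -> [ 'messages' ]
```

Note that an empty array counts as missing too: `messages: []` fails just like an absent `messages` field.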

Prerequisites

  • A running n8n instance with OpenAI credentials configured
  • A workflow using the OpenAI node that produces 'missing required parameter' errors
  • Basic understanding of n8n expressions ({{ }} syntax)

Step-by-step guide

1

Identify the Missing Parameter from the Error Message

The error message from OpenAI always names the missing parameter. Enable 'Continue On Fail' on the OpenAI node and run the workflow. Check the output for the exact error. Common messages include: 'Missing required parameter: messages', 'Missing required parameter: model', 'Missing required parameter: input', or 'you must provide a model parameter'. Note the exact field name — this tells you what to fix. If the error says 'is not valid under any of the given schemas', it means a parameter is present but has the wrong type or format.

Expected result: You have identified the exact parameter that is missing or malformed.
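If you want to branch on the error programmatically (for example, in an error-handling Code node after 'Continue On Fail'), a small parser can pull the field name out of the message. `extractMissingParam` is a hypothetical helper sketched around the error strings quoted above:

```javascript
// Hypothetical helper: extract the offending field name from an OpenAI
// 400 error message so a workflow can log or branch on it.
function extractMissingParam(errorMessage) {
  const patterns = [
    /Missing required parameter:?\s*'?([\w.]+)'?/i,
    /you must provide a (\w+) parameter/i,
  ];
  for (const pattern of patterns) {
    const match = errorMessage.match(pattern);
    if (match) return match[1];
  }
  return null; // not a missing-parameter error (e.g. a schema/type error)
}

console.log(extractMissingParam('Missing required parameter: messages')); // -> messages
console.log(extractMissingParam('you must provide a model parameter'));   // -> model
```

A `null` result is the signal to look at types and formats instead of presence, as described for the 'is not valid under any of the given schemas' case.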

2

Verify Static Configuration Fields in the OpenAI Node

Open the OpenAI node and check every required field. For the 'Message a Model' operation, ensure: (1) the Model field has a valid model selected (gpt-4o, gpt-4o-mini, etc.), (2) the Messages section has at least one message with a Role (user or system) and Content filled in. For the 'Classify Text for Moderation' operation, ensure the Input field is not empty. For the 'Generate an Image' operation, ensure the Prompt field is filled. If any field uses an expression, click into it to verify it resolves correctly.

Expected result: All required static fields in the OpenAI node configuration are filled in correctly.

3

Debug Expressions That Resolve to Empty Values

The most common cause of missing parameters is expressions like {{ $json.user_message }} that resolve to undefined. This happens when the upstream node does not produce a field with that exact name. To debug: (1) Open the node with the expression. (2) In the expression editor, check the 'Result' preview — if it shows 'undefined' or is empty, the expression is the problem. (3) Check the upstream node's output by running it first and examining the Output panel. (4) Ensure the field name in the expression exactly matches the upstream output, including case sensitivity.

```javascript
// Common expression issues and fixes:

// WRONG: field name mismatch
// Expression:   {{ $json.userMessage }}
// Actual field: $json.user_message (underscore, not camelCase)

// WRONG: nested field without the correct path
// Expression:       {{ $json.message }}
// Actual structure: $json.body.message
// Fix:              {{ $json.body.message }}

// WRONG: referencing the wrong node
// Expression: {{ $json.text }}
// This references the PREVIOUS node. To reference a specific node:
// {{ $('Webhook').item.json.body.text }}

// SAFE: with a fallback value
// {{ $json.user_message || 'No message provided' }}
```

Expected result: All expressions in the OpenAI node resolve to non-empty values.

4

Add a Pre-Flight Validation Code Node

Add a Code node before the OpenAI node to validate that all required data exists. This catches missing fields before they reach the API, giving you clearer error messages and the ability to provide fallback values. Set the Code node to 'Run Once for Each Item'. Check each field your OpenAI node needs and either provide defaults or throw descriptive errors.

```javascript
const item = $input.item;
const json = item.json;

// Validate required fields for a chat completion
const userMessage = json.user_message || json.text || json.body?.message || json.query;
const systemPrompt = json.system_prompt || 'You are a helpful assistant.';
const model = json.model || 'gpt-4o-mini';

if (!userMessage || userMessage.trim() === '') {
  throw new Error(
    'Missing user message. Expected field: user_message, text, body.message, or query. ' +
    'Available fields: ' + Object.keys(json).join(', ')
  );
}

// Pass validated data to the OpenAI node
return [{
  json: {
    validated_message: userMessage.trim(),
    system_prompt: systemPrompt,
    model: model
  }
}];
```

Expected result: The Code node catches missing data before it reaches the OpenAI node, with clear error messages showing which field is missing.

5

Handle the Messages Array Format Correctly

If you are using the HTTP Request node to call OpenAI directly (instead of the built-in OpenAI node), you must construct the messages array correctly. Each message must have a 'role' (system, user, or assistant) and 'content' (non-empty string). A common mistake is passing the messages as a string instead of an array, or omitting the content field. Use a Code node to build the payload.

```javascript
// Code node: build the OpenAI API payload
const userMessage = $json.validated_message;
const systemPrompt = $json.system_prompt || 'You are a helpful assistant.';

const payload = {
  model: 'gpt-4o',
  messages: [
    {
      role: 'system',
      content: systemPrompt
    },
    {
      role: 'user',
      content: userMessage
    }
  ],
  max_tokens: 2048,
  temperature: 0.7
};

// Validate the payload
if (!payload.messages.every(m => m.role && m.content)) {
  throw new Error('Invalid messages array: every message must have role and content');
}

return [{ json: payload }];
```

Expected result: The HTTP Request node receives a correctly formatted payload with all required parameters.

Complete working example

openai-payload-validator.js

```javascript
// Code node: Run Once for Each Item
// Place BEFORE the OpenAI node or HTTP Request node
// Validates and constructs the complete API payload

const item = $input.item;
const json = item.json;

// Extract the user message from various possible field names
const userMessage = json.user_message
  || json.text
  || json.message
  || json.query
  || json.prompt
  || json.body?.message
  || json.body?.text
  || '';

if (!userMessage || userMessage.trim() === '') {
  throw new Error(
    'No user message found. Checked fields: user_message, text, message, query, prompt, body.message, body.text. ' +
    'Received keys: ' + Object.keys(json).join(', ')
  );
}

// Build the system prompt
const systemPrompt = json.system_prompt
  || json.systemPrompt
  || json.instructions
  || 'You are a helpful assistant.';

// Determine the model
const model = json.model || 'gpt-4o-mini';
const validModels = ['gpt-4o', 'gpt-4o-mini', 'gpt-4-turbo', 'gpt-3.5-turbo'];
if (!validModels.includes(model) && !model.startsWith('ft:')) {
  throw new Error('Invalid model: ' + model + '. Valid options: ' + validModels.join(', '));
}

// Build conversation history if provided
const messages = [];
messages.push({ role: 'system', content: systemPrompt });

if (json.conversation_history && Array.isArray(json.conversation_history)) {
  for (const msg of json.conversation_history) {
    if (msg.role && msg.content) {
      messages.push({ role: msg.role, content: msg.content });
    }
  }
}

messages.push({ role: 'user', content: userMessage.trim() });

// Return the validated payload
return [{
  json: {
    validated_message: userMessage.trim(),
    system_prompt: systemPrompt,
    model: model,
    messages: messages,
    max_tokens: json.max_tokens || 2048
  }
}];
```

Common mistakes when fixing Missing Required Parameter Errors from the OpenAI Node in n8n

Mistake: Using $json.field when the data is nested inside body or data objects from a webhook

How to avoid: Webhook payloads typically arrive as $json.body.field or $json.body.data.field. Check the Webhook node output to see the exact structure and adjust your expression path.

Mistake: Leaving the Model field empty in the OpenAI node when using a dynamic expression that fails

How to avoid: Set a static model as the default and only use expressions if you genuinely need to switch models dynamically. If using an expression, add a fallback: {{ $json.model || 'gpt-4o-mini' }}.

Mistake: Passing messages as a JSON string instead of an array when using the HTTP Request node

How to avoid: Ensure the messages field in your HTTP Request body is an actual JSON array, not a stringified version. Use a Code node to construct the payload as a JavaScript object.

Mistake: Not accounting for items with different field structures when processing multiple items in a batch

How to avoid: Different items from a SplitInBatches node or another source may have different field names. Validate each item individually in a Code node set to 'Run Once for Each Item'.
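The last point, validating heterogeneous batch items one at a time, can be sketched as a per-item normalizer. The candidate field names below are illustrative; substitute whatever your upstream nodes actually produce:

```javascript
// Sketch: normalize items whose message lives under different field
// names into one consistent shape (field names here are examples).
function normalizeItem(json) {
  const candidates = ['user_message', 'text', 'message', 'query'];
  const key = candidates.find(
    (k) => typeof json[k] === 'string' && json[k].trim() !== ''
  );
  if (!key) {
    throw new Error(
      'No usable message field. Available keys: ' + Object.keys(json).join(', ')
    );
  }
  return { validated_message: json[key].trim() };
}

// Items from different sources normalize to the same shape:
const batch = [{ text: ' hello ' }, { query: 'status?' }];
const normalized = batch.map(normalizeItem);
// normalized -> [{ validated_message: 'hello' }, { validated_message: 'status?' }]
```

In n8n, with the Code node set to 'Run Once for Each Item', the body reduces to `return { json: normalizeItem($input.item.json) };`.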

Best practices

  • Always test expressions in the expression editor before saving — check the Result preview for undefined values
  • Use fallback values in expressions: {{ $json.field || 'default' }} to prevent empty parameters
  • Add a validation Code node before LLM nodes to catch missing data with clear error messages
  • Reference specific upstream nodes by name: {{ $('Webhook').item.json.body.text }} instead of {{ $json.text }}
  • Use the 'Execute step' button on individual nodes during development to see exact inputs and outputs
  • Keep the OpenAI node model field set to a static value rather than a dynamic expression unless you genuinely need model switching
  • When using HTTP Request to call OpenAI directly, always validate the messages array structure in a Code node first

Still stuck?

Copy one of these prompts to get a personalized, step-by-step explanation.

ChatGPT Prompt

I'm getting a 'Missing required parameter' error from the OpenAI node in n8n. The error says 'messages' or 'model' is missing even though I have them configured. How do I debug expressions that resolve to empty values and add validation before the OpenAI node?

n8n Prompt

Fix my n8n workflow: the OpenAI node returns 'Missing required parameter: messages'. I'm using an expression {{ $json.user_message }} that might be undefined. Add a Code node before the OpenAI node that validates all required fields exist and provides clear error messages if they don't.

Frequently asked questions

Which parameters are required for the OpenAI chat completions endpoint?

The required parameters are 'model' (e.g., 'gpt-4o') and 'messages' (an array of message objects, each with a 'role' and 'content' field). All other parameters like temperature, max_tokens, and top_p are optional.

Why does my expression show a value in the editor but fail at runtime?

The expression editor shows results based on the most recent execution data. If upstream nodes have not been executed yet or if the input data changes between runs, the expression may resolve differently. Always test by running the full workflow, not just previewing expressions.

Can I pass an empty system prompt to OpenAI?

Technically yes — the system message is optional. However, if your n8n node is configured to include a system message, the content field must be a non-empty string. Either provide content or remove the system message entirely.
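One way to honor that rule is to include the system message only when it actually has content. This is a minimal sketch of that guard, not n8n's built-in behavior:

```javascript
// Sketch: add the system message only when the prompt is a non-empty
// string, so an empty system message never reaches the API.
function buildMessages(systemPrompt, userMessage) {
  const messages = [];
  if (typeof systemPrompt === 'string' && systemPrompt.trim() !== '') {
    messages.push({ role: 'system', content: systemPrompt.trim() });
  }
  messages.push({ role: 'user', content: userMessage });
  return messages;
}

buildMessages('', 'hi');          // -> only the user message
buildMessages('Be brief.', 'hi'); // -> system message + user message
```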

What does 'is not valid under any of the given schemas' mean?

This OpenAI error means a parameter is present but has the wrong type or format. For example, passing a number where a string is expected, or sending a messages array where items are missing the required 'role' field. Check that all values match the expected types.
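A simplified type check illustrates the difference between a missing parameter and a schema violation. This sketch assumes plain string content (the real API also accepts structured content parts), and `firstSchemaProblem` is a hypothetical helper name:

```javascript
// Sketch: detect the kind of type problem behind a schema error.
// A messages value that is present but malformed fails validation
// even though nothing is "missing".
function firstSchemaProblem(messages) {
  if (!Array.isArray(messages)) return 'messages must be an array';
  for (const [i, m] of messages.entries()) {
    if (typeof m.role !== 'string') return `messages[${i}] is missing role`;
    if (typeof m.content !== 'string') return `messages[${i}].content must be a string`;
  }
  return null; // shape is valid
}

console.log(firstSchemaProblem([{ role: 'user', content: 42 }]));
// -> messages[0].content must be a string
```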

How do I pass conversation history as the messages array in n8n?

Use a Code node to construct the messages array. Start with a system message, then add historical messages from your database or memory node, and finally add the current user message. Pass the complete array to the OpenAI node or HTTP Request node.

Can RapidDev help build reliable OpenAI integrations in n8n?

Yes. RapidDev's team can architect n8n workflows with proper input validation, error handling, and payload construction for OpenAI and other LLM APIs, ensuring your AI features work reliably in production.

Talk to an Expert

Our team has built 600+ apps. Get personalized help with your project.

Book a free consultation