The 'Cannot read property choices of undefined' error in the n8n OpenAI node means the API returned an empty or malformed response instead of the expected completion object. Fix this by validating the API key and model name, handling empty responses in a Code node, checking for content filter blocks, and adding retry logic for transient API failures.
Why the OpenAI Node Returns 'Cannot Read Property choices of Undefined'
This error occurs when n8n tries to access response.choices[0].message.content but the response object from OpenAI is either undefined, null, or missing the choices array entirely. This typically happens when the API returns an error response that n8n does not parse correctly, when the model name is invalid, when content filters block the output, or when the request times out and returns an incomplete response. The OpenAI node expects a specific response structure, and any deviation triggers this JavaScript TypeError.
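The failure mode is easy to reproduce in plain JavaScript. A minimal sketch (the `response` variable stands in for whatever the node received; it is not the node's actual internal variable name):

```javascript
// Simulate what the node receives when the API call fails:
// undefined instead of a completion object.
const response = undefined;

// Direct property access throws the exact TypeError n8n reports.
let directError = null;
try {
  const text = response.choices[0].message.content;
} catch (e) {
  directError = e.message; // e.g. "Cannot read properties of undefined (reading 'choices')"
}

// Optional chaining returns undefined instead of throwing,
// so the workflow can branch on a missing value instead of crashing.
const safeText = response?.choices?.[0]?.message?.content ?? '';

console.log(directError);
console.log(JSON.stringify(safeText)); // ""
```

This is why every defensive snippet later in this guide reads the response with `json?.choices?.[0]?.message?.content` rather than direct property access.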
Prerequisites
- A running n8n instance (self-hosted or n8n Cloud)
- OpenAI API credentials configured in n8n
- A workflow using the OpenAI node or OpenAI Chat Model sub-node
- Basic understanding of n8n expressions and the Code node
Step-by-step guide
Verify Your OpenAI API Key and Model Name
The most common cause of an empty response is an invalid or expired API key, or a model name that does not exist. Go to the OpenAI node in your workflow, click on the credential, and verify the API key is correct. Then check the model field — if you are using a model like 'gpt-4' but your API account only has GPT-3.5 access, the API may return an error that n8n interprets as an empty response. Test the key by making a simple request via the OpenAI playground or a curl command.
Expected result: The API key is confirmed valid and the model name matches one available in your OpenAI account.
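If you would rather script the check than open the playground, here is one possible sketch in Node.js. It queries the public `/v1/models` endpoint; `checkModelAccess` and its injectable `fetchImpl` parameter are illustrative names, not part of n8n or the OpenAI SDK:

```javascript
// Query the OpenAI models endpoint and report whether a given
// model is available on the account. fetchImpl is injectable so
// the function can be exercised without a live API key.
async function checkModelAccess(apiKey, modelName, fetchImpl = fetch) {
  const res = await fetchImpl('https://api.openai.com/v1/models', {
    headers: { Authorization: `Bearer ${apiKey}` },
  });
  if (res.status === 401) return { ok: false, reason: 'invalid_api_key' };
  if (!res.ok) return { ok: false, reason: `http_${res.status}` };
  const body = await res.json();
  const available = (body.data || []).some((m) => m.id === modelName);
  return available
    ? { ok: true, reason: 'model_available' }
    : { ok: false, reason: 'model_not_found' };
}
```

Run it as `await checkModelAccess(process.env.OPENAI_API_KEY, 'gpt-4o')`; an `invalid_api_key` or `model_not_found` result here is the same condition that surfaces as the 'choices' TypeError inside n8n.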
Enable Continue On Fail and Inspect the Raw Error
Click the OpenAI node, go to Settings, and enable 'Continue On Fail'. Run the workflow again. Instead of crashing, the node will output the error object, which contains the actual API error message. Look at the output JSON for fields like error.message, error.code, or error.type. Common values include 'model_not_found', 'insufficient_quota', 'content_policy_violation', and 'server_error'. This raw error tells you exactly why the choices array was missing.
Expected result: The workflow completes and the OpenAI node outputs the error details instead of crashing.
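Once the raw error is visible, its type or code can drive routing decisions downstream. A sketch of a classifier you might run in a Code node after the OpenAI node; the error codes handled are the common ones named above, not an exhaustive list, and `classifyOpenAIError` is an illustrative name:

```javascript
// Map an OpenAI error object (as exposed by Continue On Fail)
// to a coarse action a downstream IF or Switch node can branch on.
function classifyOpenAIError(error) {
  const code = error?.code || error?.type || 'unknown';
  switch (code) {
    case 'model_not_found':
      return { action: 'fix_config', detail: 'Model name invalid or not on this plan' };
    case 'insufficient_quota':
      return { action: 'fix_billing', detail: 'Quota exceeded; check the billing dashboard' };
    case 'content_policy_violation':
      return { action: 'rewrite_prompt', detail: 'Blocked by content policy' };
    case 'server_error':
    case 'rate_limit_exceeded':
      return { action: 'retry', detail: 'Transient failure; safe to retry' };
    default:
      return { action: 'inspect', detail: `Unrecognized error: ${code}` };
  }
}
```

Feeding the returned `action` into a Switch node lets you retry transient failures while routing configuration errors to an alert.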
Add a Code Node to Handle Empty or Malformed Responses
After the OpenAI node, add a Code node that validates the response structure before downstream nodes try to use it. This prevents the TypeError from propagating through your workflow. The Code node checks whether the expected fields exist and provides a fallback value if they do not. Set the mode to 'Run Once for All Items', since the snippet below iterates over every incoming item via $input.all().
```javascript
const items = $input.all();
const results = [];

for (const item of items) {
  const json = item.json;

  // Check if this is an error from Continue On Fail
  if (json.error) {
    results.push({
      json: {
        text: '',
        status: 'error',
        error_message: json.error.message || 'Unknown OpenAI error',
        error_code: json.error.code || 'unknown'
      }
    });
    continue;
  }

  // Safely extract the response text
  const text = json?.message?.content
    || json?.text
    || json?.choices?.[0]?.message?.content
    || '';

  results.push({
    json: {
      text: text,
      status: text ? 'success' : 'empty_response',
      model: json.model || 'unknown'
    }
  });
}

return results;
```

Expected result: The Code node safely extracts the response text or provides a clear error status, preventing downstream TypeError crashes.
Check for Content Filter Blocks
OpenAI's content moderation can block responses entirely, returning a response object with choices[0].finish_reason set to 'content_filter' and no message content. If your prompts might trigger content filters (medical, legal, or sensitive topics), you need to detect this case. Add logic in your Code node to check the finish_reason field and handle filtered responses gracefully by returning a user-friendly message instead of an empty string.
```javascript
const json = $json;

// Check for content filter
const finishReason = json?.choices?.[0]?.finish_reason || '';
if (finishReason === 'content_filter') {
  return [{
    json: {
      text: 'The AI model could not generate a response for this input due to content policy restrictions.',
      status: 'content_filtered',
      original_finish_reason: finishReason
    }
  }];
}

return [$input.item];
```

Expected result: Content-filtered responses return a meaningful fallback message instead of crashing with an undefined error.
Add Retry Logic for Transient API Failures
The OpenAI API occasionally returns 500 or 503 errors during high load, which can cause empty response objects. Configure the OpenAI node's retry settings: click the node, go to Settings, and enable 'Retry On Fail' with a maximum of 3 retries and a wait time of 2000ms between retries. This handles transient failures automatically without custom code. For more complex retry patterns, wrap the OpenAI node in a loop using the SplitInBatches node with a batch size of 1.
Expected result: Transient API errors are automatically retried up to 3 times before the node reports a failure.
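The built-in Retry On Fail covers most cases. If you need custom behavior, such as exponential backoff inside a Code node, one possible sketch follows; `withRetry` is an illustrative helper, and the function you pass in would be your own API call:

```javascript
// Retry an async call up to maxRetries times, doubling the wait
// between attempts. Retries only on errors whose status marks
// them as transient (429 rate limits, 5xx server errors).
async function withRetry(fn, { maxRetries = 3, baseDelayMs = 2000 } = {}) {
  let lastError;
  for (let attempt = 0; attempt <= maxRetries; attempt++) {
    try {
      return await fn();
    } catch (err) {
      lastError = err;
      const status = err.status || 0;
      const retryable = status === 429 || status >= 500;
      if (!retryable || attempt === maxRetries) throw err;
      await new Promise((r) => setTimeout(r, baseDelayMs * 2 ** attempt));
    }
  }
  throw lastError;
}
```

Non-retryable errors (an invalid API key returning 401, for example) propagate immediately rather than wasting three retries on a request that can never succeed.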
Complete working example
```javascript
// Code node: Run Once for Each Item
// Place AFTER the OpenAI node (with Continue On Fail enabled on the OpenAI node)

const item = $input.item;
const json = item.json;

// 1. Handle error responses from Continue On Fail
if (json.error) {
  return [{
    json: {
      text: '',
      status: 'api_error',
      error_message: json.error.message || 'OpenAI returned an error',
      error_code: json.error.code || 'unknown',
      error_type: json.error.type || 'unknown',
      should_retry: ['server_error', 'rate_limit_exceeded'].includes(json.error.type),
      timestamp: new Date().toISOString()
    }
  }];
}

// 2. Handle content filter blocks
const finishReason = json?.choices?.[0]?.finish_reason || json?.finish_reason || '';
if (finishReason === 'content_filter') {
  return [{
    json: {
      text: 'Response blocked by content policy.',
      status: 'content_filtered',
      should_retry: false,
      timestamp: new Date().toISOString()
    }
  }];
}

// 3. Safely extract response text from various response formats
const text = json?.message?.content
  || json?.text
  || json?.output
  || json?.choices?.[0]?.message?.content
  || json?.choices?.[0]?.text
  || '';

// 4. Handle empty responses
if (!text || text.trim() === '') {
  return [{
    json: {
      text: '',
      status: 'empty_response',
      raw_response_keys: Object.keys(json),
      should_retry: true,
      timestamp: new Date().toISOString()
    }
  }];
}

// 5. Return successful response
return [{
  json: {
    text: text,
    status: 'success',
    model: json.model || 'unknown',
    finish_reason: finishReason || 'stop',
    usage: json.usage || null,
    should_retry: false,
    timestamp: new Date().toISOString()
  }
}];
```

Common mistakes when fixing 'Cannot read property choices of undefined' in the OpenAI node
Mistake: Using a model name that does not exist or has been deprecated (e.g., 'gpt-4-32k' without access)
How to avoid: Check the OpenAI models endpoint or your dashboard to see which models are available on your plan. Use gpt-4o or gpt-4o-mini for most use cases.
Mistake: Not enabling 'Continue On Fail', so the entire workflow crashes with no debug info
How to avoid: Enable 'Continue On Fail' on the OpenAI node in Settings, then add an IF node to check for errors in the output.
Mistake: Assuming the OpenAI response always has the same JSON structure
How to avoid: The OpenAI node response format varies between node versions and API versions. Always use optional chaining (json?.choices?.[0]) and check multiple possible field names.
Mistake: Ignoring the finish_reason field, which may indicate the response was incomplete or filtered
How to avoid: Check finish_reason in your validation Code node. A value of 'length' means the response was cut off; 'content_filter' means it was blocked.
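That last check can be captured in a small helper that turns finish_reason into a status your workflow can branch on. A sketch; 'stop', 'length', and 'content_filter' are documented finish reasons, and anything else is surfaced rather than silently swallowed:

```javascript
// Translate OpenAI's finish_reason into a workflow status string.
function statusFromFinishReason(finishReason) {
  switch (finishReason) {
    case 'stop':
      return 'success';          // model completed normally
    case 'length':
      return 'truncated';        // cut off by max_tokens or context limit
    case 'content_filter':
      return 'content_filtered'; // blocked by moderation
    default:
      return finishReason ? `unknown_${finishReason}` : 'missing';
  }
}
```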
Best practices
- Always enable 'Continue On Fail' on OpenAI nodes in production workflows to capture error details instead of crashing
- Add a Code node after every OpenAI node to validate the response structure before passing data downstream
- Set 'Retry On Fail' with 3 retries and 2-second delays to handle transient 500/503 API errors
- Use specific model names (gpt-4o, gpt-4o-mini) instead of aliases that may be deprecated
- Check your OpenAI billing dashboard to ensure you have not exceeded your usage quota
- Log all empty or error responses to a database or sheet for pattern analysis
- Test with edge-case prompts (very long, multilingual, special characters) during development
- Set a reasonable max_tokens value to prevent incomplete responses due to context limits
Still stuck?
Copy one of these prompts to get a personalized, step-by-step explanation.
I'm getting a 'Cannot read property choices of undefined' error in the n8n OpenAI node. The workflow crashes when OpenAI returns an empty or error response. How can I add defensive error handling, check for content filter blocks, and set up retry logic in n8n?
Fix my n8n workflow: the OpenAI node throws 'Cannot read property choices of undefined'. Add a Code node after the OpenAI node that safely extracts the response text, handles error responses when Continue On Fail is enabled, and detects content filter blocks.
Frequently asked questions
What exactly causes 'Cannot read property choices of undefined'?
This JavaScript TypeError occurs when the OpenAI node tries to read response.choices[0].message.content, but the response object is undefined or null. It means the API did not return the expected JSON structure — usually due to an error response, timeout, or content filter block.
Does this error mean my OpenAI API key is wrong?
Not necessarily. While an invalid key is one cause, this error can also occur with a valid key if you exceed your quota, use an unavailable model, trigger content filters, or experience a transient API outage. Enable Continue On Fail to see the actual error details.
Can I prevent this error by using the HTTP Request node instead of the OpenAI node?
Yes. The HTTP Request node gives you full control over the request and response handling. You can call the OpenAI API directly and parse the response in a Code node with proper null checks. However, you lose the convenience of the OpenAI node's built-in configuration.
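With the HTTP Request node, the raw body lands in your hands, so the null checks are yours to write. One possible sketch for a Code node that follows it; the field names match the Chat Completions response shape, and `parseChatCompletion` is an illustrative name:

```javascript
// Parse a raw Chat Completions response body with full null
// checks, returning a uniform shape regardless of outcome.
function parseChatCompletion(rawBody) {
  let parsed;
  try {
    parsed = typeof rawBody === 'string' ? JSON.parse(rawBody) : rawBody;
  } catch {
    return { status: 'invalid_json', text: '' };
  }
  if (parsed?.error) {
    return { status: 'api_error', text: '', error: parsed.error.message || 'unknown' };
  }
  const text = parsed?.choices?.[0]?.message?.content ?? '';
  return { status: text ? 'success' : 'empty_response', text };
}
```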
Why does this error happen intermittently rather than every time?
Intermittent occurrences usually indicate transient API issues (500/503 errors from OpenAI during high load), rate limiting (429 errors when too many requests hit the API), or specific prompts that trigger content filters while others do not.
How do I know if the error is caused by a content filter block?
Enable Continue On Fail on the OpenAI node and check the output for finish_reason equal to 'content_filter', or an error object with type 'invalid_request_error' and a message mentioning content policy. You can also check the OpenAI Usage dashboard for flagged requests.
Can RapidDev help build robust error handling for my n8n AI workflows?
Yes. RapidDev specializes in building production-grade n8n workflows with comprehensive error handling, retry logic, and monitoring. Their team can audit your existing workflows and add defensive patterns to prevent crashes from API failures.