RapidDev - Software Development Agency

How to Debug Why Your OpenAI Response Is Empty in n8n

An empty OpenAI response in n8n usually means the output field path is wrong, the API returned an error silently swallowed by the node, or the model returned an empty string due to a content filter or malformed prompt. Debug by checking execution data for the raw API response, verifying the output expression path, and testing with a hardcoded prompt to isolate the issue.

What you'll learn

  • How to inspect raw API responses using n8n execution data
  • How to identify content filter blocks that return empty responses instead of errors
  • How to verify expression paths that reference OpenAI node output
  • How to add diagnostic logging to trace data flow through the workflow
Advanced · 10 min read · 20-30 minutes · n8n 1.20+ (self-hosted and Cloud) · March 2026 · RapidDev Engineering Team
Tracing Empty OpenAI Responses Through Your n8n Workflow

You set up an OpenAI node in n8n, the workflow runs without errors, but the output is blank. This is one of the most frustrating debugging scenarios because n8n shows a successful execution with no error message. The problem can originate at the API level (content filters, empty choices array), the node configuration level (wrong model, missing messages), or the expression level (referencing the wrong output field). This tutorial walks you through a systematic debugging process to pinpoint and fix the root cause.

Prerequisites

  • A running n8n instance with an OpenAI credential configured
  • A workflow that includes at least one OpenAI Chat Model node or HTTP Request node calling OpenAI
  • Basic familiarity with n8n expressions and the {{ }} syntax
  • Access to the n8n execution history (Settings > Executions)

Step-by-step guide

1

Inspect the raw execution data for the OpenAI node

Open the execution that produced the empty response by going to Executions in the left sidebar and clicking on the relevant run. Click on the OpenAI node in the execution view to see its input and output data. Look at the output panel on the right side. If the output shows an empty JSON object or an empty string in the message.content field, the API did return a response but it was empty. If you see no output data at all, the node may have errored silently. Check the raw response by switching to the JSON view (click the {} icon in the output panel). Look specifically for the choices array: if it is empty or the first choice has an empty content string, the issue is on the OpenAI side.

Expected result: You can see the full API response object and identify whether the content field is empty, missing, or contains an error message.
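To know what you are looking for in the JSON view, it helps to compare a healthy response against an "empty but successful" one. The shapes below are illustrative sketches: the field names follow the OpenAI Chat Completions API, but the exact wrapper around them depends on your n8n node and version.

```javascript
// Illustrative only: field names per the OpenAI Chat Completions API;
// the wrapper object may differ depending on your n8n node version.

// A healthy response: choices[0].message.content holds the text.
const healthy = {
  choices: [
    { message: { role: 'assistant', content: 'Hello world' }, finish_reason: 'stop' }
  ]
};

// An "empty" response: the call succeeded (HTTP 200) but content is blank.
const empty = {
  choices: [
    { message: { role: 'assistant', content: '' }, finish_reason: 'content_filter' }
  ]
};

// A quick check you can paste into a Code node to classify what you see:
function isEmptyResponse(resp) {
  const content = resp.choices?.[0]?.message?.content ?? '';
  return content.trim() === '';
}
```

If `isEmptyResponse` returns true while the workflow shows a green checkmark, you are in the "success with no content" case this tutorial targets.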

2

Test with a hardcoded simple prompt to isolate the issue

Replace the dynamic expression in your OpenAI node's prompt field with a simple hardcoded string like 'Say hello world'. Run the workflow manually. If this returns a valid response, the problem is with your dynamic input, not the OpenAI configuration. If even the hardcoded prompt returns empty, the issue is with your API key, model selection, or account status. This test eliminates the entire upstream workflow from the debugging process and focuses on the OpenAI node itself. After confirming the hardcoded prompt works, gradually reintroduce dynamic elements to identify which expression causes the empty response.

Expected result: The hardcoded prompt returns 'Hello world' or similar, confirming the OpenAI node itself works correctly.

3

Verify the expression path used to read the OpenAI output

In the node that reads the OpenAI response, check the expression you are using. Common mistakes include referencing $json.text when the actual field is $json.message.content, or using $('OpenAI').first().json.text when the output structure uses a different key. Click on the downstream node, open the expression editor, and use the variable selector on the left to browse the actual output structure of the OpenAI node. The correct path for the OpenAI Chat Model sub-node output is typically $json.output or $json.text depending on how it is connected. For the standalone OpenAI node, it is usually $json.message.content.

typescript

// Common correct expressions for OpenAI output:

// If using OpenAI Chat Model sub-node with AI Agent:
{{ $json.output }}

// If using standalone OpenAI node (HTTP Request):
{{ $json.choices[0].message.content }}

// If referencing from a non-adjacent node:
{{ $('OpenAI Chat Model').first().json.output }}

// WRONG (common mistakes):
// {{ $json.text }} - field doesn't exist
// {{ $json.response }} - field doesn't exist
// {{ $json.data.choices[0] }} - wrong nesting

Expected result: The expression correctly points to the field containing the OpenAI response text.

4

Check for content filter or safety blocks

OpenAI's content moderation can silently return empty responses when the prompt or expected output triggers safety filters. In the raw API response, look for a finish_reason field. If it says 'content_filter' instead of 'stop', OpenAI blocked the response. Also check for a content_filter_results object in the response. To fix this, review your system prompt and user prompt for content that might trigger filters. Rephrase the prompt to avoid flagged topics. If your use case is legitimate, you can apply for modified content policy access through OpenAI's platform. Add a Code node after the OpenAI node that checks the finish_reason and routes filtered responses to an alternative handling branch.

typescript

// Code node: Check for content filter
const items = $input.all();
const results = [];

for (const item of items) {
  const finishReason = item.json.finish_reason || item.json.finishReason || '';
  const content = item.json.message?.content || item.json.output || '';

  results.push({
    json: {
      ...item.json,
      wasFiltered: finishReason === 'content_filter',
      wasEmpty: content.trim() === '',
      finishReason: finishReason
    }
  });
}

return results;

Expected result: You can identify whether the empty response was caused by content filtering, token limit truncation, or another issue.

5

Add diagnostic logging with a Code node

Insert a Code node between your data source and the OpenAI node that logs the exact input being sent. This lets you see the complete prompt including all dynamic expressions after they have been resolved. Set the Code node to Run Once for All Items. Log the input data and then pass it through unchanged. After running the workflow, check this node's output in the execution data to see exactly what the OpenAI node received. Often you will find that a variable resolved to undefined or null, resulting in a prompt like 'Summarize the following: undefined' which may cause the model to return empty content.

typescript

// Code node: Diagnostic Logger
// Place between data source and OpenAI node
const items = $input.all();

for (const item of items) {
  console.log('OpenAI input payload:', JSON.stringify(item.json, null, 2));

  // Flag suspicious values
  const prompt = item.json.prompt || item.json.text || item.json.message || '';
  if (!prompt || prompt.includes('undefined') || prompt.includes('null')) {
    console.log('WARNING: Prompt contains empty or undefined values');
  }
}

// Pass through unchanged
return items;

Expected result: You can see the exact resolved prompt text in the Code node output, revealing any undefined variables or malformed content.

6

Verify API key permissions and account status

An expired, revoked, or quota-exceeded API key can sometimes produce empty responses instead of clear error messages, especially when using older n8n node versions. Go to the OpenAI platform dashboard at platform.openai.com and check your API key status, billing, and usage limits. Verify that the key has access to the model you selected in the n8n node. If you recently rotated keys, update the credential in n8n by going to Credentials in the left sidebar, finding your OpenAI credential, and entering the new key. Test the updated credential by running the workflow with the hardcoded prompt from step 2.

Expected result: The API key is confirmed active with sufficient quota and correct model access permissions.
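If you want to test the key outside of n8n entirely, a direct call to the OpenAI API separates credential problems from content problems. The sketch below assumes an environment with `fetch` available (recent n8n Code nodes or Node.js 18+); the helper maps common HTTP statuses to a diagnosis, per OpenAI's documented status codes.

```javascript
// Sketch: map the HTTP status of a direct OpenAI API call to a diagnosis.
// Status meanings follow OpenAI's public API documentation.
function classifyStatus(status) {
  if (status === 200) return 'key ok';
  if (status === 401) return 'invalid or revoked API key';
  if (status === 403) return 'key lacks access to this model or endpoint';
  if (status === 429) return 'rate limit or quota exceeded';
  return `unexpected status ${status}`;
}

// Usage (yourApiKey is a placeholder for your actual key):
// const res = await fetch('https://api.openai.com/v1/models', {
//   headers: { Authorization: `Bearer ${yourApiKey}` }
// });
// console.log(classifyStatus(res.status));
```

A 200 here with an empty response in your workflow points back at the prompt, model parameters, or expression paths rather than the credential.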

Complete working example

openai-response-debugger.js
// Code node: OpenAI Response Debugger
// Place AFTER the OpenAI node to diagnose empty responses
// Mode: Run Once for All Items

const items = $input.all();
const diagnostics = [];

for (const item of items) {
  const data = item.json;
  const diagnostic = {
    timestamp: new Date().toISOString(),
    executionId: $execution.id,
    hasOutput: false,
    outputLength: 0,
    finishReason: 'unknown',
    issue: 'none',
    rawKeys: Object.keys(data)
  };

  // Check common output field paths
  const outputPaths = [
    data.output,
    data.text,
    data.message?.content,
    data.choices?.[0]?.message?.content,
    data.response
  ];

  const foundOutput = outputPaths.find(p => p !== undefined && p !== null);

  if (foundOutput) {
    diagnostic.hasOutput = true;
    diagnostic.outputLength = String(foundOutput).length;
    diagnostic.outputPreview = String(foundOutput).substring(0, 200);
  }

  // Check finish reason
  const fr = data.finish_reason || data.finishReason ||
    data.choices?.[0]?.finish_reason;
  if (fr) diagnostic.finishReason = fr;

  // Identify the issue
  if (!foundOutput || String(foundOutput).trim() === '') {
    if (fr === 'content_filter') {
      diagnostic.issue = 'CONTENT_FILTER: Response blocked by safety filter';
    } else if (fr === 'length') {
      diagnostic.issue = 'MAX_TOKENS: Response cut off, increase max_tokens';
    } else if (data.error) {
      diagnostic.issue = 'API_ERROR: ' + JSON.stringify(data.error);
    } else {
      diagnostic.issue = 'EMPTY_RESPONSE: No content returned, check prompt';
    }
  }

  diagnostics.push({
    json: {
      ...data,
      _diagnostic: diagnostic
    }
  });
}

return diagnostics;

Common mistakes when debugging empty OpenAI responses in n8n

Mistake: Referencing $json.text when the OpenAI node outputs $json.output or $json.message.content.

How to avoid: Use the expression editor's variable browser to find the exact field name. Click the field; do not type it manually.

Mistake: Assuming an empty response means the API call failed when it actually succeeded with a content filter block.

How to avoid: Check the finish_reason field in the raw response. 'content_filter' indicates a safety block, not an error.

Mistake: Using a deprecated or incorrect model name in the OpenAI node.

How to avoid: Check the OpenAI models documentation for current model IDs. Use the dropdown in the n8n node rather than typing a model name manually.

Mistake: Not saving execution data, making it impossible to debug past runs.

How to avoid: Enable Save Execution Progress and set a reasonable execution data retention period in n8n settings.

Mistake: Passing undefined variables in the prompt due to incorrect upstream node references.

How to avoid: Add a diagnostic Code node before the OpenAI node that validates all dynamic inputs are defined and non-empty.

Best practices

  • Always check execution data before assuming the API call failed; empty responses often mean success with no content
  • Use the expression editor variable selector instead of typing paths manually to avoid typos
  • Set Save Execution Progress to true in workflow settings to preserve intermediate node outputs for debugging
  • Add a validation Code node after every LLM call that checks for empty or undefined output before proceeding
  • Test OpenAI nodes with hardcoded prompts before connecting dynamic inputs
  • Monitor your OpenAI API usage dashboard alongside n8n executions to correlate empty responses with quota limits
  • Use the On Error: Continue setting on OpenAI nodes so you can inspect error details in the error output branch
  • Keep n8n updated to the latest version as OpenAI node improvements frequently fix silent failure modes
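The validation-and-fallback pattern from the list above can be sketched as a small Code node placed after the OpenAI node (with On Error set to Continue). The field paths here are assumptions; match them to your node's actual output as found in step 3, and FALLBACK is a placeholder message of your choosing.

```javascript
// Code node sketch: guarantee downstream nodes always receive usable content.
// Field paths (message.content, output) are assumptions; adjust to your node.
const FALLBACK = 'Sorry, no answer could be generated. Please try again.';

function withFallback(json) {
  const content = json.message?.content ?? json.output ?? '';
  if (String(content).trim() === '') {
    // Empty or filtered response: substitute the default and flag it.
    return { ...json, content: FALLBACK, usedFallback: true };
  }
  return { ...json, content, usedFallback: false };
}

// In the Code node (Run Once for All Items):
// return $input.all().map(item => ({ json: withFallback(item.json) }));
```

Downstream, an IF node on `usedFallback` can route flagged items to a retry or notification branch while normal responses continue unchanged.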

Still stuck?

Copy one of these prompts to get a personalized, step-by-step explanation.

ChatGPT Prompt

My n8n workflow calls OpenAI but the response comes back empty with no error. The workflow shows successful execution. How do I systematically debug this? I need to check the raw API response, verify my expression paths, and identify if content filters are blocking the output.

n8n Prompt

My OpenAI Chat Model node in n8n returns empty output. Add a Code node after it that checks if the response is empty, logs the finish_reason, and routes to an error branch if no content was returned. Show me the correct expression to read the OpenAI output.

Frequently asked questions

Why does my OpenAI node show a successful execution but return empty output?

The API call succeeded at the HTTP level (200 status), but the response content is empty. This happens when content filters block the output, the prompt is malformed, or the max_tokens is set to 0. Check the finish_reason field in the execution data.

How do I see the raw API response from OpenAI in n8n?

Click on the OpenAI node in the execution view, then switch to the JSON view using the {} icon in the output panel. This shows the complete response object including choices, finish_reason, and usage fields.

What does finish_reason content_filter mean?

It means OpenAI's safety system blocked the response. The model generated output but it was removed before being returned. Rephrase your prompt to avoid triggering content moderation, or check if your system prompt inadvertently requests disallowed content.

Can a rate limit cause an empty response instead of an error?

Typically, rate limits return a 429 error, not an empty response. However, if the n8n node has retry logic and the retry succeeds with a degraded response, you might see empty content. Check the execution timing for unusually long durations.

Why does the OpenAI response work in the Playground but not in n8n?

The most common cause is different model versions or parameters. The Playground may default to a different model, temperature, or max_tokens. Ensure the settings in your n8n node exactly match your Playground configuration.

How do I handle empty responses without stopping my workflow?

Set the OpenAI node's On Error to Continue, then add an IF node after it that checks if the output is empty. Route empty responses to a fallback branch that either retries with a different prompt or returns a default message.

Does n8n cache OpenAI responses?

No, n8n does not cache API responses by default. Each execution makes a fresh API call. If you see stale data, it might be from data pinning in the editor. Unpin data and run the workflow again to get a fresh response.

Can RapidDev help debug complex n8n + OpenAI integration issues?

Yes, RapidDev's engineering team specializes in n8n workflow debugging and can trace data flow issues, API integration problems, and expression errors to get your workflows running reliably.
