
How to Debug Why Model Responses Are Not Reaching the Next Node in n8n

What you'll learn

  • How to inspect the output of an LLM node in the n8n editor
  • How to identify the correct JSON path for model responses
  • How to fix expression mismatches between nodes
  • How to handle empty or error responses from LLM nodes
Beginner · 8 min read · 10-20 minutes to complete · n8n 1.0+ (self-hosted and Cloud) · March 2026 · RapidDev Engineering Team
TL;DR

When LLM model responses are not reaching the next node in n8n, the most common causes are incorrect output field mapping, empty responses from the model, the node's On Error setting silently swallowing errors, or expression paths that reference the wrong property name. Check the LLM node's output panel first, then verify that downstream nodes reference the correct JSON path using {{ $json.message.content }} or the field your specific LLM node outputs.

Tracing Data Flow from LLM Nodes to Downstream Nodes in n8n

One of the most frustrating n8n issues is when a language model returns a response but the next node in the chain acts as if it received nothing. The data disappears somewhere between the LLM node and the downstream node, and finding exactly where requires systematic debugging. This tutorial teaches you how to inspect node outputs, verify expression paths, and fix the most common data flow breaks.

Prerequisites

  • A running n8n instance with a workflow that includes an LLM node
  • An active API credential for your LLM provider (OpenAI, Anthropic, etc.)
  • Basic understanding of n8n expressions ({{ }} syntax)

Step-by-step guide

1

Inspect the LLM node's output panel

The first step is confirming that the LLM node actually produced output. Click on the LLM node after a test execution and look at the Output tab. Switch to JSON view (not Table view) to see the raw data structure. If the output panel is empty, the LLM call itself failed or returned nothing. If the output panel shows data, the issue is in how downstream nodes reference it. Pay attention to the exact property names and nesting — the response text might be under .text, .content, .message.content, or .output depending on which LLM node you use.
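
For reference, here is a hypothetical sketch of what the JSON view might show for an AI Agent node; the exact fields vary by node type, version, and settings, so always confirm against your own output panel:

typescript
// Hypothetical JSON view for an AI Agent node (fields vary by
// node type and version; check your actual output panel):
[
  {
    "output": "The model's response text appears here."
  }
]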

Expected result: You can see the full JSON output from the LLM node, including the response text and any metadata.

2

Verify the JSON path in downstream expressions

The most common cause of missing data is a mismatched JSON path. Different LLM nodes in n8n output data under different property names. The OpenAI Chat Model node outputs under a different structure than the HTTP Request node calling OpenAI directly. Click on the downstream node that appears to receive no data, find the expression field that should reference the LLM output, and check that the path matches exactly. Use the expression editor's autocomplete feature — click the curly brace icon and browse the available fields from the previous node.

typescript
// Common output paths for different LLM setups:

// AI Agent node or LLM Chain node:
{{ $json.output }}
// or
{{ $json.text }}

// OpenAI Chat Model connected via AI Agent:
{{ $json.output }}

// HTTP Request node calling OpenAI API directly:
{{ $json.choices[0].message.content }}

// HTTP Request node calling Anthropic API directly:
{{ $json.content[0].text }}

// Basic LLM Chain node:
{{ $json.response.text }}

Expected result: The expression in the downstream node references the correct JSON path and shows the LLM response in the preview.

3

Add a debug Code node between LLM and downstream nodes

Insert a Code node between the LLM node and the downstream node to log the exact data structure being passed. This node outputs the full item structure so you can see every available property. After running the workflow, inspect this debug node's output to find the correct path to the response text. Once you identify the right path, update the downstream node's expression and remove the debug node.

typescript
// Debug Code node, mode: Run Once for All Items
const items = $input.all();
const debugOutput = [];

for (const item of items) {
  debugOutput.push({
    json: {
      _debug_keys: Object.keys(item.json),
      _debug_full_data: item.json,
      _debug_type: typeof item.json,
      _debug_has_output: 'output' in item.json,
      _debug_has_text: 'text' in item.json,
      _debug_has_content: 'content' in item.json,
      _debug_has_choices: 'choices' in item.json,
      _debug_has_message: 'message' in item.json
    }
  });
}

return debugOutput;

Expected result: The debug node output shows all available property names, making it clear which JSON path contains the LLM response.

4

Check for empty responses and error swallowing

If the LLM node shows output but the response text is empty, the model may have returned a blank response due to content filtering, token limit exhaustion, or an invalid prompt. Check the node output for any error fields or status indicators. Also check the node's On Error setting — if it is set to Continue (Using Error Output), errors go to a separate branch and the main output receives nothing. If On Error is set to Continue (Regular Output), the error data might overwrite the expected response structure.
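
To surface these cases while you debug, you can drop a small guard Code node directly after the LLM node. This is a minimal sketch, assuming the response text lives under output (as with the AI Agent node); swap in the path your node actually uses:

typescript
// Guard Code node: throw on empty LLM responses so they surface
// in the execution log instead of passing silently downstream.
// Assumes the response is under `output`; adjust for your node type.
const items = $input.all();

for (const item of items) {
  const text = (item.json.output ?? '').toString().trim();
  if (text.length === 0) {
    throw new Error(
      `Empty LLM response. Available keys: ${Object.keys(item.json).join(', ')}`
    );
  }
}

return items;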

Expected result: You identify whether the response is genuinely empty (model issue) or being routed to an error output branch (configuration issue).

5

Use the $('NodeName') reference for explicit data sourcing

If your workflow has branches or multiple paths, the downstream node might be pulling data from the wrong upstream node. Instead of relying on implicit data flow (which uses the directly connected node), use the explicit $('NodeName') syntax to reference the exact LLM node by name. This eliminates ambiguity about which node's output is being used, especially in workflows with multiple merge points or branches.

typescript
// Instead of:
{{ $json.output }}

// Use explicit node reference:
{{ $('OpenAI Chat Model').item.json.output }}

// Or for the AI Agent node:
{{ $('AI Agent').item.json.output }}

// Access all items from a node:
{{ $('OpenAI Chat Model').all()[0].json.output }}

Expected result: The downstream node explicitly references the LLM node by name and reliably receives the correct data regardless of workflow structure.

Complete working example

llm-output-validator.js
// Code node: Validate and normalize LLM output
// Place after any LLM node to ensure consistent data shape
// Handles different output formats from various LLM nodes

const items = $input.all();
const results = [];

for (const item of items) {
  const data = item.json;
  let responseText = null;
  let source = 'unknown';

  // Try all known output paths
  if (data.output && typeof data.output === 'string') {
    responseText = data.output;
    source = 'AI Agent / LLM Chain';
  } else if (data.text && typeof data.text === 'string') {
    responseText = data.text;
    source = 'Basic LLM Chain';
  } else if (data.choices && data.choices[0]?.message?.content) {
    responseText = data.choices[0].message.content;
    source = 'OpenAI HTTP Request';
  } else if (data.content && Array.isArray(data.content)) {
    responseText = data.content
      .filter(block => block.type === 'text')
      .map(block => block.text)
      .join('\n');
    source = 'Anthropic HTTP Request';
  } else if (data.message?.content) {
    responseText = data.message.content;
    source = 'Generic chat format';
  } else if (data.response?.text) {
    responseText = data.response.text;
    source = 'Response wrapper';
  }

  results.push({
    json: {
      responseText: responseText || '',
      hasResponse: responseText !== null && responseText.length > 0,
      responseLength: (responseText || '').length,
      detectedSource: source,
      _originalKeys: Object.keys(data)
    }
  });
}

return results;

Common mistakes when debugging why model responses are not reaching the next node in n8n

Mistake: Using {{ $json.text }} when the LLM node outputs under {{ $json.output }}

How to avoid: Check the LLM node's actual output in JSON view. Different node types use different property names. AI Agent and LLM Chain nodes typically use .output.

Mistake: Not realizing the On Error setting routes errors to a separate branch, leaving the main output empty

How to avoid: Check the On Error setting on the LLM node. Set it to Stop Workflow during debugging to make errors visible.

Mistake: Using Table view to inspect output, which hides nested properties

How to avoid: Switch to JSON view in the node output panel to see the complete data structure, including nested objects and arrays.

Mistake: Referencing the wrong upstream node in workflows with multiple branches

How to avoid: Use $('NodeName').item.json.property to explicitly reference the correct source node by name.

Best practices

  • Always inspect the LLM node's output in JSON view before assuming data is missing
  • Use the expression editor's autocomplete to browse available fields from connected nodes
  • Add a normalizer Code node after LLM nodes to standardize the output format across different providers
  • Use explicit $('NodeName') references instead of implicit $json when workflows have complex branching
  • Set On Error to Stop Workflow during debugging so errors surface immediately
  • Pin test data on the LLM node to debug downstream nodes without making API calls
  • Check for empty string responses, not just null or undefined — models can return '' on content filter blocks (see the expression sketch below)
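
As a minimal sketch of that last check, an IF node condition or inline expression that treats whitespace-only responses as empty might look like this (assuming the response is under output; adjust the path for your node):

typescript
// Hypothetical emptiness check: true only when the response contains
// non-whitespace text. Assumes the response is under `output`.
{{ ($json.output ?? '').toString().trim().length > 0 }}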

Still stuck?

Copy one of these prompts to get a personalized, step-by-step explanation.

ChatGPT Prompt

In my n8n workflow, the OpenAI Chat Model node returns a response, but the next Set node shows empty values. Help me identify the correct JSON path for the response text and fix the expression so the data flows correctly to downstream nodes.

n8n Prompt

Create a workflow with a Webhook trigger, an AI Agent node using OpenAI, and a Code node that validates the AI response is not empty before sending it back via Respond to Webhook. Include error handling for empty responses.

Frequently asked questions

Why does my downstream node show undefined for the LLM response?

The expression path does not match the actual output structure. Open the LLM node output in JSON view, find the exact property name, and update the expression. Common paths are $json.output (AI Agent), $json.choices[0].message.content (OpenAI HTTP), and $json.content[0].text (Anthropic HTTP).

The LLM node output is completely empty. What went wrong?

Check the execution log for errors. The API call may have failed due to invalid credentials, rate limits, or content filters. Also check that the LLM node has a model selected and a prompt configured. An empty prompt produces an empty response.

How do I access the response from a node that is not directly connected?

Use the $('NodeName') syntax. For example, {{ $('My AI Agent').item.json.output }} references the output of a node named My AI Agent regardless of the connection path.

Does the AI Agent node output differently from the LLM Chain node?

Yes. The AI Agent node typically outputs under .output, while the Basic LLM Chain node may output under .text or .response.text. Always check the actual output in JSON view.

Why does the Table view show data but my expression returns empty?

Table view can display nested data under simplified column names that do not match the actual JSON path. Switch to JSON view and use the exact property path shown there.

Can I set a default value if the LLM response is empty?

Yes, use an expression with a fallback: {{ $json.output || 'No response received from the model.' }}. This provides a default string when the output property is empty, null, or undefined.

Can RapidDev help troubleshoot data flow issues in my n8n AI workflows?

Yes, RapidDev specializes in debugging and optimizing n8n workflows involving LLM nodes. Their engineering team can trace data flow issues, normalize output formats, and build robust error handling for AI-powered automations.
