
How to debug why model responses are not reaching the next node in n8n?

Learn how to debug why model responses fail to reach the next node in n8n with clear steps to fix workflow issues and ensure smooth automation.

Matt Graham, CEO of Rapid Developers



If a model node (OpenAI, Anthropic, a local LLM, etc.) runs successfully but its output never reaches the next node, the fastest way to debug is to open the node’s Execution Data panel and confirm whether the node is actually returning items[]. n8n only passes data forward if the node outputs at least one item. If the output is empty, if the text is nested deeper than you expect, or if the next node references the wrong field, the downstream node sees “no data” and either does not run at all or runs with empty input. So the first thing to check is what the model actually returned inside the node’s JSON, and then confirm that your expressions (like {{$json["text"]}}) match that structure exactly.
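
To make that contract concrete, here is a minimal sketch of the shape n8n expects, written as a Code node in “Run Once for All Items” mode (the "text" field name is an assumption; use whatever field your model node actually returns):
// Code node sketch: downstream nodes only ever see what you return here.
// Each element must be an object with a "json" property; an empty array stops the flow.
const incoming = $input.all(); // items produced by the previous node

return incoming.map(item => ({
  json: {
    text: item.json.text ?? '', // assumed field name, for illustration only
  },
}));

If this node returned [] instead, the next node would show exactly the “no data” behaviour described above.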

 

Why Responses Fail to Reach the Next Node

 

The usual cause is that the model node technically “succeeded”, but its output isn’t in the format the next node expects. n8n passes data as an array called items, and each item has a json object. If the model’s text ends up inside a field you’re not referencing — for example data.completion instead of text — the next node receives nothing meaningful. Another common issue is that the node outputs an empty array, which stops the data flow entirely.

  • Nodes only receive what the previous node outputs. No output → no input.
  • Expressions must match exact paths. Even one wrong key means undefined.
  • Errors inside the node are swallowed if “Continue on Fail” is turned on.
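
That last point is worth guarding against explicitly. If you need to keep “Continue on Fail” enabled in production, a small Code node placed right after the model can fail loudly instead of silently passing an error object downstream. This is only a sketch; the "error" and "text" field names are assumptions about what your particular model node emits:
// Guard Code node (illustrative), placed directly after the model node.
// Throwing here marks the execution as failed instead of continuing with bad data.
const items = $input.all();

for (const item of items) {
  if (item.json.error) {
    throw new Error('Model node failed: ' + item.json.error);
  }
  if (!item.json.text) {
    throw new Error('Model node returned no "text" field: ' + JSON.stringify(item.json));
  }
}

return items; // shape looks fine, pass everything through unchanged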

 

Step-by-Step Debugging That Actually Works in Production

 

Below is the practical, real‑world method we use when debugging production n8n workflows involving LLM nodes.

  • Open the model node’s Execution Data. Look at “Output Data” → “JSON”. Confirm you see something like:
[
  {
    "json": {
      "text": "This is the model response." // Real example shape from Text Generation node
    }
  }
]
  • If this array is empty, the next node will not run. That means the model returned nothing or the node didn’t produce structured output.
  • Check if the model threw an error but “Continue on Fail” is enabled. In that case the node still “succeeds” but outputs:
[
  {
    "json": {
      "error": "Model request failed" // Example
    }
  }
]
  • The next node gets this instead of the actual model result. Turn “Continue on Fail” off temporarily while debugging so real failures surface as errors.
  • Confirm the correct JSON field in your expressions. For example, if the next node uses:
{{$json["response"]}}

…but the model node output is…

{"text": "hello"}

…then the expression returns undefined. Fix it to:

{{$json["text"]}}
  • Check for nested response shapes in custom API model calls. Many LLM APIs return something like:
{
  "choices": [
    {
      "message": {
        "content": "hi there"
      }
    }
  ]
}

Your expression must follow that structure exactly:

{{$json["choices"][0]["message"]["content"]}}
  • Verify the node did not switch to “Binary” output. If a node outputs binary data, downstream nodes expecting JSON will see nothing useful. LLM nodes should always output JSON.
  • Use a temporary “Set” node after the model node. This is one of the most effective debugging tricks. Add a Set node, switch it to “Keep Only Set”, and set a field like:
model_output = {{$json}}

This forces you to inspect exactly what JSON survives to the next step. A Code node variant of the same trick is sketched after this list.

  • Check Run Data for the execution path. If the model node did not execute, verify that the previous node produced output items.
  • Check whether “Execute Once” is enabled in the node settings of your trigger or other nodes. Sometimes a node only fires once, so it looks as though the model is not passing data when in fact the node simply never ran again.
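
When the Set node trick is not enough, the same idea can be taken one step further with a temporary Code node that logs what actually arrives; the console output typically appears in the browser console while testing from the editor. A sketch:
// Temporary "debug tap" Code node (illustrative), placed between the model node
// and the node that never seems to receive data.
const items = $input.all();

console.log('Items received from previous node:', items.length);

items.forEach((item, i) => {
  // Listing the top-level keys makes it obvious which expression paths exist.
  console.log('Item ' + i + ' keys:', Object.keys(item.json));
});

return items; // pass everything through untouched

Delete this node once the path is confirmed; it adds nothing in normal operation.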

 

Practical Example Fix

 

Imagine the OpenAI node returns:

{
  "text": "User summary generated."
}

But your next node uses:

{{$json["data"]["content"]}}

That expression resolves to undefined, so the next node receives nothing useful. The correct expression is:

{{$json["text"]}}

After correction, the next node receives the response and executes normally.
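
If the field name can differ between model nodes (for example when you swap providers), a fallback expression is a reasonable middle ground. This is a sketch; "text" and "response" are assumed candidate field names:
{{ $json["text"] || $json["response"] || "NO MODEL OUTPUT" }}

The literal fallback at the end also makes an empty result visible in Run Data instead of silently passing undefined onwards.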

 

Most Common Root Causes in Production

 

  • Wrong JSON path in expressions. By far the #1 issue.
  • Node output is empty because the API returned no content.
  • Continue on Fail hides an error.
  • Model node returned a structure that doesn’t match your assumption.
  • The next node expects JSON but receives binary data (or vice versa); see the check sketched after this list.
  • Previous node produced 0 items, so downstream nodes never run.
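
For the binary-versus-JSON case in particular, a quick inspection node makes the mismatch visible immediately. A sketch that only annotates each item and passes the data through unchanged:
// Inspection Code node (illustrative): flags items that arrived as binary
// when the downstream node expects JSON fields.
return $input.all().map((item, i) => {
  const binaryKeys = item.binary ? Object.keys(item.binary) : [];
  const jsonKeys = item.json ? Object.keys(item.json) : [];

  return {
    json: {
      ...item.json,
      _debug: 'item ' + i + ': binary keys [' + binaryKeys.join(', ') + '], json keys [' + jsonKeys.join(', ') + ']',
    },
    binary: item.binary, // keep any binary data attached
  };
});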

 

The rule of thumb: Always trust what Run Data shows, not what you think the node should output. n8n will only pass exactly what appears as items[i].json.
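
One related detail: if another node sits between the model node and the node that needs its output, an expression can reference an earlier node by name instead of relying on the data being passed through manually. This is a sketch; “OpenAI” is an assumed node name and "text" an assumed field:
{{ $('OpenAI').first().json.text }}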
