
How to stop n8n from cutting off long language model responses?

Learn how to prevent n8n from cutting off long language model responses with simple settings tweaks for smooth, complete AI outputs.

Matt Graham, CEO of Rapid Developers


The most reliable way to stop n8n from cutting off long language-model responses is to increase its internal payload limit (especially N8N_PAYLOAD_SIZE_MAX) and make sure the AI node or HTTP Request node calling the model is not running into its own timeout or token limits. In real workflows, the cutoff almost always happens because n8n hits a payload or timeout ceiling, not because the model stops on its own.

 

Why n8n cuts long LLM responses

 

n8n enforces limits to protect itself from huge JSON blobs. Two things matter most:

  • Payload size limit: n8n will silently truncate the incoming data if it exceeds N8N_PAYLOAD_SIZE_MAX. This is the most common reason long LLM outputs get cut.
  • Timeout limits: the node calling the model (AI node or HTTP Request node) may time out before the model finishes generating, causing early termination.

These are n8n server limits, not limits inside the workflow.

 

Fix Step 1: Increase N8N_PAYLOAD_SIZE_MAX

 

In production (Docker or a server install), set a higher payload limit. The value is a plain number of megabytes; for large LLM outputs, 16–64 MB is typical.

Example for Docker Compose:

environment:
  - N8N_PAYLOAD_SIZE_MAX=64   # value in MB; allows large JSON responses

Example for a plain environment variable:

export N8N_PAYLOAD_SIZE_MAX=64

Restart n8n after changing this. Without this step, n8n continues to cut responses no matter what you do inside the workflow.
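For context, here is a minimal docker-compose.yml sketch with the variable in place. The service name, image, port, and volume are illustrative assumptions about a typical self-hosted setup, not taken from your deployment:

services:
  n8n:
    image: n8nio/n8n              # assumed image; pin a specific version in production
    ports:
      - "5678:5678"               # n8n's default port
    environment:
      - N8N_PAYLOAD_SIZE_MAX=64   # max payload size in MB
    volumes:
      - n8n_data:/home/node/.n8n  # persist workflows and credentials

volumes:
  n8n_data: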

 

Fix Step 2: Increase timeout for the node generating the response

 

If you use:

  • HTTP Request node: Increase the Timeout field (under Options). If it is left too low (for example 30 or 60 seconds), long LLM outputs fail mid-stream.
  • OpenAI node / other AI nodes: These generally follow the workflow's execution timeout, set in the workflow settings or via the EXECUTIONS_TIMEOUT environment variable. If you call the model through a proxy endpoint via the HTTP Request node, the HTTP timeout is the actual limit.

Example of setting HTTP timeout to 180 seconds:

// in the node UI:
// Timeout: 180000  (milliseconds)
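If you would rather raise the global execution timeout than a single node's timeout, here is a sketch of the relevant environment variables for a self-hosted instance (values are in seconds; the numbers below are arbitrary examples):

export EXECUTIONS_TIMEOUT=600       # default timeout applied to a whole workflow execution
export EXECUTIONS_TIMEOUT_MAX=3600  # upper bound that individual workflows may raise it to

Restart n8n after changing these, just as with the payload limit.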

 

Fix Step 3: Make sure your model itself is allowed to produce long outputs

 

Even if n8n can accept large responses, the model may be capped by:

  • max_tokens setting
  • provider‑level request size limits

For example, if you call a model via HTTP Request node:

{
  "model": "gpt-4.1",
  "max_tokens": 8000,
  "messages": [
    { "role": "user", "content": "Write a long report..." }
  ]
}

If max_tokens is too small, the model stops early regardless of n8n settings.
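A quick way to tell a model-side cap apart from an n8n truncation: OpenAI-style responses report why generation stopped in choices[0].finish_reason, and the value "length" means the model hit max_tokens. A minimal sketch for an n8n Code node ("Run Once for All Items" mode), assuming the previous node returned the raw API response:

// n8n Code node: detect whether the model itself stopped because of max_tokens
const items = $input.all();
const response = items[0].json;   // assumes the raw OpenAI-style response from the previous node
const finishReason = response.choices?.[0]?.finish_reason;

if (finishReason === 'length') {
  // 'length' means the output was cut by max_tokens; raise max_tokens, this is not an n8n limit
  throw new Error('Model output truncated by max_tokens (finish_reason = length)');
}

return items;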

 

Fix Step 4: Use streaming carefully (or avoid it)

 

n8n does not stream LLM responses incrementally inside normal workflow nodes. When you turn on streaming with some APIs, the server sends the output as partial chunks and may close the connection early; n8n often treats whatever has arrived as the complete response, which looks like a cutoff.
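For illustration, a streamed OpenAI-style response arrives as many small server-sent-event chunks rather than one JSON document, roughly like this (content abbreviated):

data: {"choices":[{"delta":{"content":"First part of"}}]}
data: {"choices":[{"delta":{"content":" the answer"}}]}
data: [DONE]

If the connection drops partway through, the text simply ends at the last chunk received and nothing flags it as incomplete.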

If you need truly long responses:

  • Set stream to false for OpenAI-like APIs, so the full response arrives as one JSON document (see the request sketch below).
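A request-body sketch for the HTTP Request node, reusing the earlier example with streaming explicitly disabled (model name and max_tokens are placeholders):

{
  "model": "gpt-4.1",
  "max_tokens": 8000,
  "stream": false,
  "messages": [
    { "role": "user", "content": "Write a long report..." }
  ]
}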

 

Fix Step 5: If response becomes extremely large, store outside n8n

 

Once an LLM output gets bigger than roughly 30–50 MB, it is safer to store the raw text in S3, a database, or the filesystem instead of flowing it through the n8n UI. n8n will accept the payload, but the UI may struggle to display it.

Pattern for this:

  • LLM response arrives in n8n.
  • Immediately save content to an external store via HTTP Request, S3 node, or database node.
  • Pass only a link or an ID between steps (a sketch of this trimming step follows below).
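As a sketch of that last step, assuming an earlier node (for example the S3 node) has already uploaded the full text and returned a key, a Code node can strip the large field and hand only a small reference to the rest of the workflow. The field names text and s3Key are hypothetical; adjust them to whatever your upload step actually returns:

// n8n Code node: keep a reference and a short preview instead of the full LLM output
const items = $input.all();

return items.map((item) => ({
  json: {
    s3Key: item.json.s3Key,                               // hypothetical key returned by the upload step
    preview: String(item.json.text ?? '').slice(0, 500),  // short preview for the n8n UI
    length: String(item.json.text ?? '').length,          // size of the original output, for logging
  },
}));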

 

Summary

 

If n8n is cutting off long LLM responses, increase N8N_PAYLOAD_SIZE_MAX, increase node timeouts, disable streaming, and ensure your model max_tokens is high enough. These are the real production‑level fixes. Once these limits are raised, n8n will reliably pass the full response through the workflow without truncation.
