
How to fix “ETIMEDOUT” errors when calling a large language model in n8n?

Learn practical steps to fix ETIMEDOUT errors in n8n when calling large language models and keep your workflows running smoothly.

Matt Graham, CEO of Rapid Developers


A practical fix for ETIMEDOUT errors when calling a large language model in n8n is to give n8n more time to wait for the model's reply, break the request into smaller chunks, or move the long‑running call outside the regular HTTP Request node (for example, by using a webhook‑callback style flow). In production, ETIMEDOUT almost always means one of three things: the model is taking longer to answer than the node's timeout allows, the payload is too big, or the LLM provider is rate‑limiting you. The fastest reliable fix is usually a combination of three steps: increase the HTTP Request timeout, reduce payload size, and implement retries with backoff.

 

Why ETIMEDOUT happens in n8n

 

ETIMEDOUT means n8n opened a connection to the LLM API and didn't get a response before the configured timeout. It is not an n8n bug; the network or the API is simply taking too long. LLMs often need 20–60 seconds or more to answer when prompts are large or models are slow.

  • The HTTP Request node's timeout can be shorter than a slow LLM needs, so the node gives up before the model answers.
  • Payloads too large (big prompts or documents) increase response time.
  • Rate limits on OpenAI/Anthropic/etc. cause delayed responses or queued requests.
  • Self‑hosted n8n behind proxies (NGINX/Cloudflare/etc.) may have their own request timeouts.
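
When you catch these failures in a Code node or an Error Workflow, it helps to check the error code before deciding what to do next. A minimal sketch (the codes are standard Node.js networking error codes; the helper name `isTimeoutError` is our own, not an n8n API):

```javascript
// Classify a caught error as a network timeout vs. something else.
// ETIMEDOUT comes from Node's networking layer; ESOCKETTIMEDOUT
// appears in some older HTTP client libraries.
function isTimeoutError(err) {
  return err.code === 'ETIMEDOUT' || err.code === 'ESOCKETTIMEDOUT';
}

// A timeout is usually worth retrying; an auth error is not.
console.log(isTimeoutError({ code: 'ETIMEDOUT' }));      // true
console.log(isTimeoutError({ code: 'ECONNREFUSED' }));   // false
```

Only the timeout-class errors should feed into the retry logic described below; other codes point at different problems (DNS, credentials, refused connections).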

 

Fixes that work in production

 

The actions below are safe and commonly used in real n8n deployments.

  • Increase the timeout inside the HTTP Request node. This is the quickest fix and usually solves the problem.
  • Reduce the size of the LLM prompt/input. Send summaries instead of full text, and use chunking before calling the model.
  • Add retry logic. Use the node's built‑in "Retry On Fail" option or wrap the call in an Error Workflow.
  • If you are self‑hosting, increase reverse‑proxy timeouts (NGINX, Traefik, Cloudflare). Proxies often cut connections before n8n does.
  • Consider asynchronous patterns. Some LLM providers offer async endpoints, and n8n can receive results via a Webhook node instead of waiting synchronously.
  • Use the official n8n LLM nodes when possible. They usually handle streaming and long execution times better than a raw HTTP Request node.

 

How to increase timeout in the HTTP Request node

 

This is the most common and most reliable fix.

  • Open the HTTP Request node.
  • Go to "Settings".
  • Find "Timeout".
  • Set it to something like 60000 ms (60 seconds) or higher if needed.

You can also set the value via an expression if you want environment‑based configuration, for example:

 

{{ $env.LLM_TIMEOUT_MS || 60000 }}

Here LLM_TIMEOUT_MS is an environment variable you define yourself, and 60000 ms (60 seconds) is the fallback.

 

Chunking large prompts before calling LLM

 

If your input text is very large, split it into smaller parts and process each part in a loop (Split In Batches + Item Lists). This not only avoids timeouts but also avoids max‑token issues.

 

// Example inside a Code (or legacy Function) node to chunk text
const text = $json.input || '';
const size = 4000; // characters per chunk
const chunks = [];

for (let i = 0; i < text.length; i += size) {
  // n8n expects each returned item to be wrapped in a `json` key
  chunks.push({ json: { chunk: text.slice(i, i + size) } });
}

return chunks;

 

If you self‑host: increase proxy timeouts

 

NGINX example:

proxy_read_timeout 300s;   # allow the LLM to respond slowly
proxy_connect_timeout 300s;
proxy_send_timeout 300s;

Without this, NGINX may drop the connection before n8n finishes.
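
For context, here is a minimal sketch of where those directives live, assuming n8n listens locally on port 5678 and n8n.example.com is a placeholder hostname:

```nginx
# Hypothetical reverse-proxy config for a self-hosted n8n instance
server {
    listen 80;
    server_name n8n.example.com;

    location / {
        proxy_pass http://127.0.0.1:5678;

        # Give slow LLM calls time to finish before the proxy gives up
        proxy_read_timeout 300s;
        proxy_connect_timeout 300s;
        proxy_send_timeout 300s;

        # n8n's editor UI needs WebSocket upgrades
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $host;
    }
}
```

After editing, reload the proxy (for example, nginx -s reload) so the new timeouts take effect.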

 

Implement retry/backoff

 

When LLM providers throttle you, retries help.

  • Open the HTTP Request node.
  • Enable "Retry On Fail".
  • Set "Max Attempts" (e.g., 3–5).
  • Set "Retry Delay" (e.g., 2000 ms).
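
The built‑in "Retry On Fail" uses a fixed delay between attempts. When a provider throttles you heavily, exponential backoff tends to work better; a sketch of computing such a schedule in a Code node (backoffDelays is our own helper, not an n8n API):

```javascript
// Compute an exponential backoff schedule: wait base ms, then 2x, then 4x...
// Feed these delays into a Wait node between retry attempts.
function backoffDelays(maxAttempts, baseMs) {
  const delays = [];
  for (let attempt = 1; attempt < maxAttempts; attempt++) {
    delays.push(baseMs * 2 ** (attempt - 1));
  }
  return delays;
}

console.log(backoffDelays(4, 2000)); // [ 2000, 4000, 8000 ]
```

With 4 max attempts and a 2000 ms base, the workflow waits 2 s, 4 s, then 8 s between tries, which gives a rate limiter time to recover.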

 

Use async patterns when the request is too long

 

Some LLMs offer asynchronous job creation. The idea:

  • First workflow sends the job → LLM returns a job ID quickly.
  • LLM calls your n8n Webhook when the result is ready.
  • Webhook triggers a new workflow to process the result.

This avoids timeouts entirely because n8n is never waiting synchronously.
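
A sketch of the submit step. The webhook_url field name and endpoint shape are hypothetical and vary by provider; check your provider's docs for its actual async API:

```javascript
// Build the request body for a hypothetical async LLM job submission.
// The provider stores the job and later POSTs the result to callbackUrl,
// which is the production URL of an n8n Webhook node.
function buildAsyncJob(prompt, callbackUrl) {
  return {
    prompt: prompt,
    webhook_url: callbackUrl, // field name varies by provider
  };
}

const job = buildAsyncJob(
  'Summarize this quarterly report',
  'https://n8n.example.com/webhook/llm-result'
);
```

The HTTP Request node sends this body, stores the returned job ID, and the Webhook workflow matches incoming results back to that ID.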

 

Summary

 

You fix ETIMEDOUT in n8n by increasing the HTTP Request timeout, reducing prompt size, adding retries, adjusting proxy timeouts if self‑hosted, and using async patterns for very long operations. In practice, raising timeout + reducing payload size solves 80% of cases immediately.
