Learn practical steps to fix ETIMEDOUT errors in n8n when calling large language models and keep your workflows running smoothly.

A practical fix for ETIMEDOUT errors when calling a large language model in n8n is to increase how long n8n waits for the model to reply, break the request into smaller chunks, or move the long‑running call outside the regular HTTP Request node (for example, with a webhook‑callback flow). In production, ETIMEDOUT almost always means the model is taking longer to answer than the node's timeout allows, the payload is too big, or the LLM provider is rate‑limiting you. The fastest reliable fix is usually to increase the HTTP Request timeout, reduce payload size, and implement retries with backoff.
ETIMEDOUT means n8n opened a connection to the LLM API and didn't get a response before the configured timeout expired. It is not an n8n bug; it's the network or the API taking too long. LLMs often take 20–60+ seconds to answer when prompts are large or models are slow.
The actions below are safe, real, and commonly used in actual n8n deployments.
Increasing the HTTP Request node's timeout is the most common and most reliable fix.
You can also set it via expression if you want environment‑based values.
```json
{
  "timeout": 60000
}
```

The value is in milliseconds, so 60000 means 60 seconds.
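If you want the value to differ per environment, the Timeout field also accepts an expression. A minimal sketch, assuming an `LLM_TIMEOUT_MS` environment variable is set on the n8n host (and that environment access is not blocked via `N8N_BLOCK_ENV_ACCESS_IN_NODE`):

```
{{ $env.LLM_TIMEOUT_MS || 60000 }}
```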
If your input text is very large, split it into smaller parts and process each part in a loop (Split In Batches + Item Lists). This not only avoids timeouts but also avoids max‑token issues.
```javascript
// Example inside a Function node: split the input text into chunks
const text = $json.input || '';
const size = 4000; // characters per chunk
const chunks = [];
for (let i = 0; i < text.length; i += size) {
  // n8n expects each returned item to be wrapped in { json: ... }
  chunks.push({ json: { chunk: text.slice(i, i + size) } });
}
return chunks;
```
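Once each chunk has been sent to the model, the per‑chunk answers can be stitched back together in a second node. A minimal sketch, assuming each incoming item carries the model's answer under `json.response` (adjust the field name to match your LLM node's output):

```javascript
// Merge per-chunk LLM answers back into one text.
function mergeChunkResponses(items) {
  return items
    .map((item) => item.json.response)
    .join('\n\n'); // separate chunk answers with a blank line
}

// In an n8n Code node set to "Run Once for All Items", this would be:
// return [{ json: { merged: mergeChunkResponses($input.all()) } }];
```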
NGINX example:

```nginx
proxy_read_timeout 300s;     # allow the LLM to respond slowly
proxy_connect_timeout 300s;
proxy_send_timeout 300s;
```
Without this, NGINX may drop the connection before n8n finishes.
When LLM providers throttle you (typically with HTTP 429 responses), retries help. In n8n you can enable "Retry On Fail" in the node's settings and configure the wait between tries, or implement backoff yourself in code.
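For finer control than the built-in retry setting, exponential backoff can be sketched in plain JavaScript; `fn` stands in for whatever function actually makes the HTTP call to your provider:

```javascript
// Retry a request with exponential backoff (sketch).
// fn is any async function that makes the actual LLM call.
async function withRetries(fn, maxRetries = 3, baseDelayMs = 1000) {
  let lastError;
  for (let attempt = 0; attempt <= maxRetries; attempt++) {
    try {
      return await fn();
    } catch (err) {
      lastError = err;
      if (attempt === maxRetries) break;
      // Backoff doubles each time: 1s, 2s, 4s, ...
      const delay = baseDelayMs * 2 ** attempt;
      await new Promise((resolve) => setTimeout(resolve, delay));
    }
  }
  throw lastError;
}
```

Doubling the delay between attempts gives a rate‑limited provider time to recover instead of hammering it with immediate retries.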
Some LLM providers offer asynchronous job creation. The idea: submit the request and immediately get a job ID back, then either poll for the result on an interval or expose an n8n Webhook node that the provider calls when the job finishes. This avoids timeouts entirely because n8n is never waiting synchronously.
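The submit-then-poll half of that pattern looks roughly like this; `createJob` and `getJob` are hypothetical stand-ins for whatever endpoints your provider actually exposes:

```javascript
// Sketch of a submit-and-poll flow for an asynchronous LLM job.
// createJob/getJob are hypothetical stand-ins for the provider's real endpoints.
async function runAsyncJob(createJob, getJob, pollIntervalMs = 2000, maxPolls = 150) {
  const { jobId } = await createJob(); // returns immediately with an ID
  for (let i = 0; i < maxPolls; i++) {
    const job = await getJob(jobId);
    if (job.status === 'completed') return job.result;
    if (job.status === 'failed') throw new Error(`Job ${jobId} failed`);
    // Wait before polling again instead of holding a request open.
    await new Promise((resolve) => setTimeout(resolve, pollIntervalMs));
  }
  throw new Error(`Job ${jobId} did not finish within the polling window`);
}
```

If your provider supports callbacks, replace the polling loop with a Webhook node and let the provider push the result to n8n instead.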
You fix ETIMEDOUT in n8n by increasing the HTTP Request timeout, reducing prompt size, adding retries, adjusting proxy timeouts if self‑hosted, and using async patterns for very long operations. In practice, raising timeout + reducing payload size solves 80% of cases immediately.