Learn how to track and improve success rates of language model calls in n8n with easy monitoring tips for reliable workflows.

A simple and reliable way to monitor the success rate of your language‑model calls in n8n is to log every call’s result (success or failure) into a persistent store — usually a database table or a Google Sheet — and then visualize or calculate the success rate from that data. You add a small “logging branch” after your LLM node: one path for successful outputs and one for errors (using an Error Trigger workflow or the “Continue On Fail” option). That gives you long‑term, queryable monitoring instead of just checking the n8n UI.
Every workflow execution in n8n is isolated. Once it finishes, the only record is the execution log in the n8n UI, and those logs are eventually pruned, especially if you're on n8n Cloud or if your server rotates logs. So if you want true monitoring, you need to write a record somewhere external each time a language‑model node runs. That external data store becomes your "analytics layer," and n8n simply contributes events to it.
You typically do this inside the same workflow that calls the model.
This Function node runs immediately after your LLM node:
// We assume the previous node is something like OpenAI, Google Gemini, HuggingFace, etc.
// n8n passes the node's raw output in "items".
return items.map(item => {
  const didError = !!item.error; // n8n sets item.error if the node ran under "Continue On Fail"
  const success = !didError;

  return {
    json: {
      timestamp: new Date().toISOString(),
      status: success ? "success" : "error",
      errorMessage: didError ? item.error.message : null,
      modelResponse: success ? item.json : null // keep or strip; depends on privacy needs
    }
  };
});
Once the Function node produces structured monitoring data, connect it to a database node (Postgres/MySQL). Insert a record for every call:
INSERT INTO llm_monitoring (timestamp, status, error_message)
VALUES ('{{$json.timestamp}}', '{{$json.status}}', '{{$json.errorMessage}}');
In n8n this goes inside a Postgres node using the Execute Query operation, with expressions filling in the values. Note the single quotes around the interpolated values: without them the query fails on the string and timestamp columns. If the values can themselves contain quotes, prefer the node's query-parameters option (where available) over raw interpolation.
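If the table doesn't exist yet, a minimal schema that matches this insert is enough. This is only a sketch; add whatever extra columns you want to report on later (model name, latency, token counts):

CREATE TABLE IF NOT EXISTS llm_monitoring (
  id            SERIAL PRIMARY KEY,
  timestamp     TIMESTAMPTZ NOT NULL,
  status        TEXT NOT NULL,   -- "success" or "error"
  error_message TEXT             -- null on success
);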
You can also capture failures through a separate "Error Trigger" workflow. n8n runs that workflow automatically whenever a workflow that has it set as its error workflow fails (any failing node, not just LLM calls). This only works if you allow the LLM node to throw errors normally (meaning Continue On Fail = false).
The downside: you cannot easily log successes in this pattern, so you usually still add a success logger in the main workflow.
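As a sketch, a Function node placed right after the Error Trigger can pull out the fields worth logging. The exact payload shape can differ between n8n versions, so treat the property paths below as assumptions and check them against your own error data:

// The Error Trigger emits one item describing the failed execution.
// Property paths (execution.*, workflow.*) are assumptions; verify against your n8n version.
return items.map(item => {
  const data = item.json;
  return {
    json: {
      timestamp: new Date().toISOString(),
      status: "error",
      workflowName: data.workflow?.name ?? null,
      failedNode: data.execution?.lastNodeExecuted ?? null,
      errorMessage: data.execution?.error?.message ?? null
    }
  };
});

Feed that into the same llm_monitoring table so errors caught here and successes logged in the main workflow end up in one place.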
Once you’re logging results, create a cron‑triggered workflow that queries your database and computes success rate for the last 24 hours. You can then email or Slack the metrics. It’s usually one Postgres node plus one Function node doing:
// rows: [{status: "success"}, {status: "error"}, ...]
let successes = 0;
let failures = 0;

for (const row of items.map(i => i.json)) {
  if (row.status === "success") successes++;
  else failures++;
}

const total = successes + failures;

return [
  {
    json: {
      successes,
      failures,
      successRate: total > 0 ? successes / total : null // avoid dividing by zero when there were no calls
    }
  }
];
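The Postgres node that feeds this Function node only needs to return the status of each call in the window you care about. Assuming the llm_monitoring table sketched earlier, a query like this is enough:

SELECT status
FROM llm_monitoring
WHERE timestamp > NOW() - INTERVAL '24 hours';

The successRate (and the raw counts) coming out of the Function node can then go straight into a Slack or email node as your daily report.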
The most reliable way to monitor language‑model success rates in n8n is to log every model call into a stable external data store, capturing both successes and failures, and then calculate your metrics from that data. Use Continue On Fail or an Error Trigger to guarantee that you always capture both sides. This pattern is simple, production‑safe, and used in real n8n deployments.