
How to monitor success rates of language model calls in n8n?

Learn how to track and improve success rates of language model calls in n8n with easy monitoring tips for reliable workflows.

Matt Graham, CEO of Rapid Developers



A simple and reliable way to monitor the success rate of your language‑model calls in n8n is to log every call’s result (success or failure) into a persistent store — usually a database table or a Google Sheet — and then visualize or calculate the success rate from that data. You add a small “logging branch” after your LLM node: one path for successful outputs and one for errors (using an Error Trigger workflow or the “Continue On Fail” option). That gives you long‑term, queryable monitoring instead of just checking the n8n UI.

 

Why this works

 

Every workflow execution in n8n is isolated. Once it finishes, the only record is the execution log in the n8n UI, and that data is temporary: it can be auto‑pruned after a retention period on n8n Cloud, and self‑hosted instances usually prune old executions as well. So if you want true monitoring, you need to write a record somewhere external each time a language‑model node runs. That external data store becomes your “analytics layer,” and n8n simply contributes events to it.

  • Success case: Log that the node returned normally.
  • Error case: Log that it failed, including the error message.
  • Later: Compute success rate = successes / (successes + failures).

 

Practical production setup (recommended)

 

You typically do this inside the same workflow that calls the model.

  • Enable Continue On Fail on the language-model node (in recent n8n versions this is the node setting On Error → Continue). This prevents the workflow from stopping when the model errors, and lets you inspect both success and error payloads.
  • Immediately after the model node, add a Function node to classify the result into status: "success" or status: "error".
  • Send that result to a persistent store: Postgres/MySQL, Supabase, Google Sheets, Airtable — anything stable.
  • Optionally, use your BI tool (Metabase, Google Looker Studio, Grafana) to build a success‑rate chart.

 

Example: classify success vs failure

 

This Function node (the Code node in current n8n versions) runs immediately after your LLM node:

// We assume the previous node is an LLM node such as OpenAI, Google Gemini, Hugging Face, etc.
// n8n passes that node's output in "items".
// Depending on the n8n version, an item that failed under "Continue On Fail" carries
// the error either as item.error or inside item.json.error, so we check both.

return items.map(item => {
  const rawError = item.error || (item.json && item.json.error);
  const didError = !!rawError;

  return {
    json: {
      timestamp: new Date().toISOString(),
      status: didError ? "error" : "success",
      errorMessage: didError ? (rawError.message || String(rawError)) : null,
      modelResponse: didError ? null : item.json   // Keep or strip; depends on privacy needs
    }
  };
});

 

Logging to your database

 

Once the Function node produces structured monitoring data, connect it to a database node (Postgres/MySQL). Insert a record for every call:

INSERT INTO llm_monitoring (timestamp, status, error_message)
VALUES ('{{$json.timestamp}}', '{{$json.status}}', '{{$json.errorMessage}}');

In n8n this goes inside a Postgres node using the Execute Query operation, with the values filled in via expressions. For production use, the node's query parameters option ($1, $2, ...) is safer than string interpolation, since it handles quoting, NULLs, and SQL injection for you.
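
If the table doesn't exist yet, a minimal schema along these lines is enough (the table and column names here are simply the ones assumed by the insert above; adjust the types to your database):

CREATE TABLE llm_monitoring (
  id            SERIAL PRIMARY KEY,
  timestamp     TIMESTAMPTZ NOT NULL,   -- when the LLM call finished
  status        TEXT NOT NULL,          -- "success" or "error"
  error_message TEXT NULL
);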

  • Pros: Reliable, queryable, historical metrics.
  • Production note: Databases handle this far better than n8n’s built‑in execution log.
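
For the BI‑tool chart mentioned above, a daily aggregation over the same table is usually all you need. Something like this (Postgres syntax, assuming the llm_monitoring table sketched earlier) returns one success‑rate value per day:

SELECT
  date_trunc('day', timestamp) AS day,
  COUNT(*) FILTER (WHERE status = 'success')::float / COUNT(*) AS success_rate,
  COUNT(*) AS total_calls
FROM llm_monitoring
GROUP BY 1
ORDER BY 1;

Point Metabase, Looker Studio, or Grafana at that query and you have your trend line.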

 

Alternative: use an Error Workflow

 

You can also capture failures through a separate workflow that starts with an Error Trigger node. Assign it as the Error Workflow in your main workflow's settings, and n8n runs it automatically whenever that workflow fails (not just on LLM calls). This only works if you allow the LLM node to throw errors normally (meaning Continue On Fail = false).

  • Main workflow: LLM node throws error → execution stops.
  • Error workflow: Receives error → log it to DB or send alert.

The downside: you cannot easily log successes in this pattern, so you usually still add a success logger in the main workflow.
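
As a sketch, the logging side of the Error Workflow can be a single Function/Code node that pulls the essentials out of the Error Trigger payload before a database node writes them to the same llm_monitoring table. The field names below follow the standard Error Trigger output (execution.error, workflow.name, etc.); verify them against your n8n version:

// Input: the single item emitted by the Error Trigger node.
// Field names follow the standard Error Trigger payload (check your n8n version).
const data = items[0].json;

return [
  {
    json: {
      timestamp: new Date().toISOString(),
      status: "error",
      errorMessage: data.execution && data.execution.error
        ? data.execution.error.message
        : "Unknown error",
      workflowName: data.workflow ? data.workflow.name : null,
      failedNode: data.execution ? data.execution.lastNodeExecuted : null
    }
  }
];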

 

Simple daily success‑rate calculation workflow

 

Once you’re logging results, create a scheduled workflow (Schedule Trigger or Cron node) that queries your database and computes the success rate for the last 24 hours. You can then email or Slack the metrics. It’s usually one Postgres node plus one Function node doing something like this:

// rows: [{status: "success"}, {status: "error"}, ...]
let successes = 0;
let failures = 0;

for (const row of items.map(i => i.json)) {
  if (row.status === "success") successes++;
  else failures++;
}

const total = successes + failures;

return [
  {
    json: {
      successes,
      failures,
      // Guard against an empty window (no calls logged in the last 24 hours)
      successRate: total > 0 ? successes / total : null
    }
  }
];
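
The Postgres node feeding that Function node only has to return the status of each call in the window. Assuming the llm_monitoring table from earlier, a query like this is enough:

SELECT status
FROM llm_monitoring
WHERE timestamp >= NOW() - INTERVAL '24 hours';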

 

What to avoid

 

  • Do not rely on n8n’s execution list as analytics. It’s not meant for KPIs and can be auto‑purged.
  • Do not store metrics only inside the workflow execution data itself. That data disappears as soon as executions are pruned.
  • Do not use overly complex branching or Switch nodes. LLM failures are straightforward: success vs error.

 

In short

 

The most reliable way to monitor language‑model success rates in n8n is to log every model call into a stable external data store, capturing both successes and failures, and then calculate your metrics from that data. Use Continue On Fail or an Error Trigger to guarantee that you always capture both sides. This pattern is simple, production‑safe, and used in real n8n deployments.
