
How to fix a language model refusing to answer with system prompts in n8n?

Learn how to fix language models ignoring system prompts in n8n with easy steps to ensure accurate responses in your automation workflows.

Matt Graham, CEO of Rapid Developers



The fix is to send the system prompt explicitly as part of the model’s messages array, not as a separate field: most modern chat APIs (the ones behind n8n’s OpenAI and OpenAI‑Compatible nodes) ignore “system” instructions placed anywhere outside the messages array. In n8n that means passing your system instructions as a message with role = system. If you're feeding your messages through expressions or other nodes, make sure the JSON you send to the LLM node includes a proper system-role message and that nothing overwrites it later in the workflow.
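
For reference, this is roughly the shape of the request body the model should end up receiving; the model name below is only a placeholder, and the important part is the system entry inside messages:

{
  "model": "gpt-4o",
  "messages": [
    {
      "role": "system",
      "content": "You are a helpful assistant. Follow these rules strictly."
    },
    {
      "role": "user",
      "content": "Explain how to fix the issue."
    }
  ]
}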

 

Why the model refuses system prompts in n8n (and how to fix it)

 

Modern chat-style LLM APIs (OpenAI, OpenAI‑compatible, Cohere, etc.) accept instructions only through a messages array. n8n’s “OpenAI” node and most “Chat Model” nodes will not magically inject system instructions if you put them somewhere else (for example, in a header, a config field, or a variable called systemPrompt). If the model doesn’t see a message with role: system, it simply behaves as if the system instruction never existed.

In production workflows, this usually breaks when:

  • You build the message array in a previous Function node and accidentally drop the system message.
  • You pass only the user message into an LLM node, thinking n8n will merge your system prompt automatically – it won’t.
  • Some providers (including OpenAI’s newer models) ignore deprecated top-level “system” fields, so the instruction only counts if it is placed in the messages list itself.
  • You modify messages with an expression like {{$json}} and overwrite the entire structure (a defensive rebuild pattern that avoids this is sketched right after this list).
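
A defensive way to avoid the first and last failure modes is to rebuild the messages array in one place, in a Function node, instead of patching it with expressions. This is a minimal sketch; userText is a hypothetical field name, so use whatever field your webhook or previous node actually provides:

// Function node: always rebuild messages from scratch so the system
// instruction can never be dropped or overwritten by an upstream node.
const systemPrompt = "You are a helpful assistant. Follow these rules strictly.";

return items.map(item => ({
  json: {
    messages: [
      { role: "system", content: systemPrompt },
      // "userText" is a hypothetical field name for the incoming user input.
      { role: "user", content: item.json.userText || "" }
    ]
  }
}));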

 

How to fix it reliably

 

You must explicitly send the system instruction as a message object inside the messages array. The model will obey it only if it’s formatted exactly like this.

Here is a safe pattern using a Function node (or, in current n8n versions, a Code node set to “Run Once for All Items”) before the LLM node:

 

// This Function node builds clean LLM messages
return [
  {
    json: {
      messages: [
        {
          role: "system",
          content: "You are a helpful assistant. Follow these rules strictly."
        },
        {
          role: "user",
          content: "Explain how to fix the issue."
        }
      ]
    }
  }
];

 

Then, in your OpenAI / OpenAI-Chat / OpenAI-Compatible node, set:

  • Resource: Chat
  • Operation: Create Chat Completion
  • Messages: use an expression such as {{$json["messages"]}}

This forces the node to send the system message exactly as the model expects.

 

Extra production‑grade notes

 

  • If you build messages dynamically (from a database, webhook, or long workflow), always log the final JSON before it hits the model; the “lost system prompt” is the most common cause of unexpected model behavior. A small logging-and-validation sketch follows this list.
  • Don’t put system instructions in n8n’s “Prompt” field of the old OpenAI node if you’re using chat-completion mode. That field becomes a user message and the system instruction is ignored.
  • Some models enforce strict roles. If you send something like role: "instruction" or role: "assistant" with system content, the system message will be ignored.
  • If your system prompt is extremely long, consider storing it in an environment variable or reading it from a file to avoid mistakes when editing workflows.
  • After fixing, run a test execution and check the “RAW API request” section to confirm the message is being sent with role: system. This is the best way to debug LLM behavior in n8n.
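
A minimal logging-and-validation sketch, assuming the messages were built as in the Function node example above, placed in one more Function node right before the LLM call:

// Function node: log and sanity-check the final messages before the LLM call.
const messages = items[0].json.messages || [];

// Log the exact payload so a silently dropped system prompt is easy to spot
// in the execution log or console.
console.log(JSON.stringify(messages, null, 2));

// Fail loudly if the first entry is not a proper system message.
if (!messages.length || messages[0].role !== "system") {
  throw new Error("No system message found - check the node that builds the messages array.");
}

return items;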

 

Bottom line

 

The model isn’t refusing system prompts — it never received them. In n8n, the only reliable fix is to include the system prompt as a normal { role: "system", content: "..." } object inside the messages array. Once you construct the messages cleanly and pass them to the LLM node without overwriting them, the model will follow the instructions consistently.
