Learn how to fix language models that ignore system prompts in n8n, with simple steps to ensure accurate responses in your automation workflows.

The fix is to send the system prompt explicitly as part of the model's messages array, not as a separate field. Most modern language models, including those behind n8n's OpenAI and OpenAI-Compatible nodes, ignore any "system" value that sits outside the messages array, so in n8n you must pass your system instructions as a message with role = system. If you build your messages through expressions or other nodes, make sure the JSON you send to the LLM node includes a proper system-role message and that nothing later in the workflow overwrites it.
Modern LLMs (OpenAI, OpenAI-compatible, Cohere, etc.) accept instructions only through a messages array. n8n's OpenAI node and most Chat Model nodes will not magically force system instructions through if you put them somewhere else, for example in a header, a config field, or a variable called systemPrompt. If the model doesn't see a JSON message with role: system, it simply behaves as if the system instruction never existed.
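To make the difference concrete, here is a sketch of what an OpenAI-compatible chat request body looks like with the system prompt in the right place versus the wrong place. The model name and field values are illustrative, not specific to any one workflow:

```javascript
// Correct: the instruction is a message object inside messages.
const goodPayload = {
  model: "gpt-4o-mini",
  messages: [
    { role: "system", content: "You are a helpful assistant. Follow these rules strictly." },
    { role: "user", content: "Explain how to fix the issue." },
  ],
};

// Common mistake: the instruction lives in a custom top-level field.
// The chat API schema has no "systemPrompt" key, so the model never sees it.
const badPayload = {
  model: "gpt-4o-mini",
  systemPrompt: "You are a helpful assistant.", // silently ignored
  messages: [{ role: "user", content: "Explain how to fix the issue." }],
};

// A quick check you can run on any payload before sending it:
const hasSystem = (p) => p.messages.some((m) => m.role === "system");
console.log(hasSystem(goodPayload)); // true
console.log(hasSystem(badPayload)); // false
```

The `hasSystem` helper is a handy one-liner to drop into any node that assembles payloads dynamically.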
In production workflows, this usually breaks when the system prompt is placed in a separate field or variable instead of the messages array, when expressions or intermediate nodes rebuild the messages and drop the system message, or when a later node overwrites the messages you constructed earlier.
You must explicitly send the system instruction as a message object inside the messages array. The model will obey it only if it’s formatted exactly like this.
Here is a safe pattern using a Function node before the LLM node:
```javascript
// This Function node builds clean LLM messages
return [
  {
    json: {
      messages: [
        {
          role: "system",
          content: "You are a helpful assistant. Follow these rules strictly."
        },
        {
          role: "user",
          content: "Explain how to fix the issue."
        }
      ]
    }
  }
];
```
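If the user message comes from upstream data rather than a hard-coded string, the same pattern still applies: always prepend the system message when you build the array. A minimal sketch, assuming the incoming item carries the user's text in a field called userText (adapt the field name to your workflow):

```javascript
// Keep the instruction in one place so every item gets the same system message.
const SYSTEM_PROMPT = "You are a helpful assistant. Follow these rules strictly.";

function buildMessages(userText) {
  return [
    { role: "system", content: SYSTEM_PROMPT },
    { role: "user", content: userText },
  ];
}

// Inside the n8n Function node you would map over the incoming items:
// return items.map((item) => ({
//   json: { messages: buildMessages(item.json.userText) },
// }));

console.log(buildMessages("Explain how to fix the issue.")[0].role); // "system"
```

Centralizing the array construction in one helper makes it much harder for a later expression to accidentally drop the system message.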
Then, in your OpenAI / OpenAI-Chat / OpenAI-Compatible node, map the node's messages input to the Function node's output (for example with the expression {{ $json.messages }}) rather than typing the system prompt into a separate field.
This forces the node to send the system message exactly as the model expects.
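As an optional safety net, you can add a small guard in a Code node right before the LLM node that fails loudly if an upstream step dropped or overwrote the system message. A sketch, assuming the system message is kept first in the array (the usual convention):

```javascript
// Throw early if messages[0] is not a non-empty system message,
// so a broken workflow fails at this node instead of producing
// an LLM call that silently ignores your instructions.
function assertSystemMessage(messages) {
  const ok =
    Array.isArray(messages) &&
    messages.length > 0 &&
    messages[0].role === "system" &&
    typeof messages[0].content === "string" &&
    messages[0].content.length > 0;
  if (!ok) {
    throw new Error("messages[0] must be a non-empty system message");
  }
  return messages;
}

// Passes through untouched when the array is well-formed:
assertSystemMessage([
  { role: "system", content: "Follow these rules strictly." },
  { role: "user", content: "Explain how to fix the issue." },
]);
```

Failing fast here is much easier to debug than chasing inconsistent model behavior downstream.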
The model isn't refusing system prompts: it never received them. In n8n, the only reliable fix is to include the system prompt as a normal { role: "system", content: "..." } object inside the messages array. Once you construct the messages cleanly and pass them to the LLM node without overwriting them, the model will follow the instructions consistently.