
How to Pass Conversation History to Anthropic Claude Reliably in n8n

Passing conversation history to Claude in n8n fails when messages are in the wrong format, roles alternate incorrectly, or the memory node loses context between executions. Fix this by using the Postgres Chat Memory node with proper session IDs, enforcing strict role alternation in a Code node, and formatting messages to match Claude's required user/assistant pattern before each API call.

What you'll learn

  • Claude's exact message format requirements and common violations
  • How to configure Postgres Chat Memory for persistent conversation storage
  • How to enforce role alternation and merge consecutive same-role messages
  • How to handle edge cases like system messages, empty messages, and long histories
Difficulty: Advanced · Reading time: 9 min · Time to complete: 30-40 minutes · Requirements: n8n 1.30+, Anthropic Claude 3/3.5 models, Postgres Chat Memory or Simple Memory node · Updated: March 2026 · Author: RapidDev Engineering Team
Why Conversation History Breaks with Claude in n8n

Claude's Messages API requires a strict message format: messages must alternate between user and assistant roles, the first message must always be from the user, and system instructions go in a separate system parameter. n8n's memory nodes store messages in a generic format that does not enforce these rules, leading to 400 errors like 'messages must alternate between user and assistant roles' or silent context loss where Claude ignores previous messages. This tutorial shows how to build a reliable conversation pipeline that transforms memory output into Claude's exact format.
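For reference, a minimal valid Messages API request body has this shape — system instructions outside the messages array, roles strictly alternating, and a user message first (the model name and content here are illustrative):

```json
{
  "model": "claude-3-5-sonnet-20241022",
  "max_tokens": 1024,
  "system": "You are a helpful assistant.",
  "messages": [
    { "role": "user", "content": "What is n8n?" },
    { "role": "assistant", "content": "n8n is a workflow automation tool." },
    { "role": "user", "content": "How do I call Claude from it?" }
  ]
}
```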

Prerequisites

  • A running n8n instance (self-hosted or cloud) on version 1.30 or later
  • An Anthropic API credential configured in n8n
  • A PostgreSQL database for persistent conversation storage
  • Basic understanding of n8n memory nodes and the Anthropic Messages API
  • A workflow with a Webhook or Chat Trigger for user input

Step-by-step guide

1

Configure the Postgres Chat Memory node

Add a Postgres Chat Memory node to your workflow. Set the PostgreSQL credential, the Session ID Key to a dynamic expression that identifies each unique conversation (e.g., {{ $json.sessionId }} from the webhook payload), and the Context Window Length to 30 messages. The memory node creates a table in PostgreSQL to store messages with their roles, content, and session IDs. This gives you persistent memory that survives n8n restarts.
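As a sketch, the webhook payload that the {{ $json.sessionId }} expression assumes might look like the following — the field names are whatever your caller actually sends; sessionId and message simply match the expressions used throughout this tutorial:

```json
{
  "sessionId": "user-42",
  "message": "What did we decide yesterday?"
}
```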

Expected result: The Postgres Chat Memory node is configured and connected, ready to store and retrieve conversation messages per session.

2

Retrieve and validate conversation history

After retrieving messages from memory, add a Code node to validate and fix the conversation history before sending to Claude. Claude requires: (1) messages must alternate user/assistant, (2) the first message must be role 'user', (3) there must be no consecutive messages with the same role, and (4) system messages are not part of the messages array. This Code node enforces all four rules by merging consecutive same-role messages and removing system messages.

javascript
// Code node — JavaScript
// Validate and fix conversation history for Claude

const items = $input.all();
const rawMessages = items[0].json.memory
  || items[0].json.messages
  || items[0].json.chatHistory
  || [];

const currentMessage = items[0].json.message
  || items[0].json.text
  || items[0].json.chatInput
  || '';

// Step 1: Extract system messages for the system parameter
const systemMessages = rawMessages
  .filter(m => m.role === 'system')
  .map(m => m.content || m.text)
  .join('\n');

// Step 2: Filter to only user/assistant messages and normalize role names
let messages = rawMessages
  .filter(m => m.role === 'user' || m.role === 'assistant' || m.role === 'human' || m.role === 'ai')
  .map(m => ({
    role: (m.role === 'human' || m.role === 'user') ? 'user' : 'assistant',
    content: m.content || m.text || m.message || ''
  }))
  .filter(m => m.content.trim() !== '');

// Step 3: Merge consecutive same-role messages
const merged = [];
for (const msg of messages) {
  if (merged.length > 0 && merged[merged.length - 1].role === msg.role) {
    merged[merged.length - 1].content += '\n' + msg.content;
  } else {
    merged.push({ ...msg });
  }
}

// Step 4: Ensure the first message is from the user
// (after merging, at most one leading assistant message can remain)
if (merged.length > 0 && merged[0].role !== 'user') {
  merged.shift(); // Remove leading assistant message
}

// Step 5: Append the current user message
if (currentMessage.trim()) {
  if (merged.length > 0 && merged[merged.length - 1].role === 'user') {
    merged[merged.length - 1].content += '\n' + currentMessage;
  } else {
    merged.push({ role: 'user', content: currentMessage });
  }
}

return [{
  json: {
    messages: merged,
    system: systemMessages || 'You are a helpful assistant.',
    messageCount: merged.length,
    sessionId: items[0].json.sessionId || 'default'
  }
}];

Expected result: The output contains a messages array with strictly alternating user/assistant roles, a system field for system instructions, and no empty or duplicate messages.

3

Send the formatted history to Claude via the Anthropic node

If using the built-in Anthropic node with the AI Agent, connect the Postgres Chat Memory node directly to the memory input. The AI Agent + Anthropic node combination handles message formatting automatically. If using the HTTP Request node to call Claude's API directly, use the pre-formatted messages from the previous Code node. Set the system parameter separately from the messages array — this is Claude's key difference from OpenAI's API.

json
// HTTP Request node body for a direct Claude API call
// Use JSON.stringify for both values so quotes and newlines are escaped safely
{
  "model": "claude-3-5-sonnet-20241022",
  "max_tokens": 2048,
  "system": {{ JSON.stringify($json.system) }},
  "messages": {{ JSON.stringify($json.messages) }}
}
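The body is only half of the HTTP Request node setup: Claude's API also requires authentication and version headers. A minimal header set looks like this (fill in your own key, or better, use a predefined Anthropic credential in n8n instead of hardcoding it):

```json
{
  "x-api-key": "YOUR_ANTHROPIC_API_KEY",
  "anthropic-version": "2023-06-01",
  "content-type": "application/json"
}
```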

Expected result: Claude receives properly formatted conversation history and responds with full context awareness of previous messages.

4

Store the new exchange in memory

After receiving Claude's response, store both the user message and Claude's response in the Postgres Chat Memory for the next turn. If using the AI Agent node, this happens automatically. If using HTTP Request, add a Code node to prepare the messages for storage and connect it back to the memory node or insert directly into the PostgreSQL table.

javascript
// Code node — JavaScript
// Store the conversation turn in memory

const claudeResponse = $input.first().json;
const userMessage = $('Format History').first().json.messages.slice(-1)[0]?.content || '';
const assistantMessage = claudeResponse.content?.[0]?.text || '';

return [{
  json: {
    sessionId: $('Format History').first().json.sessionId,
    userMessage,
    assistantMessage,
    // For a direct Postgres node insert:
    role_user: 'user',
    content_user: userMessage,
    role_assistant: 'assistant',
    content_assistant: assistantMessage
  }
}];

Expected result: Both the user message and Claude's response are persisted in PostgreSQL, correctly tagged with the session ID for future retrieval.

5

Handle context window limits with summarization

Long conversations will eventually exceed Claude's context window (200K tokens for Claude 3.5 Sonnet). Instead of hard-truncating old messages (which loses context), implement a summarization strategy: when the message count exceeds 40, send everything except the 10 most recent messages to Claude with a prompt to summarize the conversation so far, replace those old messages with the summary as a single user message, and keep the 10 most recent messages intact.

javascript
// Add to the Format History Code node
const MAX_MESSAGES = 40;
const KEEP_RECENT = 10;

if (merged.length > MAX_MESSAGES) {
  const oldMessages = merged.slice(0, merged.length - KEEP_RECENT);
  const recentMessages = merged.slice(-KEEP_RECENT);

  // Flag for summarization (handle in a separate branch)
  return [{
    json: {
      needsSummarization: true,
      oldMessages,
      recentMessages,
      system: systemMessages || 'You are a helpful assistant.',
      sessionId: items[0].json.sessionId || 'default'
    }
  }];
}

// Normal flow continues if under the limit
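The separate branch then has to turn oldMessages into a single summarization call. A minimal sketch of building that request — the prompt wording and max_tokens value are illustrative assumptions, not fixed requirements:

```javascript
// Build a Claude request that summarizes the old portion of the conversation.
// The summary Claude returns replaces oldMessages as a single user message.
function buildSummarizationRequest(oldMessages, system) {
  const transcript = oldMessages
    .map(m => `${m.role}: ${m.content}`)
    .join('\n');

  return {
    model: 'claude-3-5-sonnet-20241022',
    max_tokens: 1024,
    system,
    messages: [{
      role: 'user',
      content: 'Summarize the conversation below, preserving names, decisions, '
        + 'and open questions:\n\n' + transcript
    }]
  };
}
```

Store the returned summary back in memory (or prepend it to the recent messages) so the next turn starts from the condensed history.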

Expected result: Long conversations are automatically summarized, keeping the context window manageable while preserving important context.

Complete working example

claude-conversation-formatter.js
// Complete Code node: Format conversation history for the Claude API
// Place between memory retrieval and the Anthropic/HTTP Request node

const MAX_MESSAGES = 40;
const KEEP_RECENT = 10;

const items = $input.all();
const rawMessages = items[0].json.memory
  || items[0].json.messages
  || items[0].json.chatHistory
  || [];

const currentMessage = items[0].json.message
  || items[0].json.text
  || items[0].json.chatInput
  || '';

const sessionId = items[0].json.sessionId || 'default';

// Extract system messages
const systemParts = rawMessages
  .filter(m => m.role === 'system')
  .map(m => m.content || m.text);
const system = systemParts.length > 0
  ? systemParts.join('\n')
  : 'You are a helpful assistant.';

// Filter and normalize to user/assistant only
let messages = rawMessages
  .filter(m => ['user', 'assistant', 'human', 'ai'].includes(m.role))
  .map(m => ({
    role: ['human', 'user'].includes(m.role) ? 'user' : 'assistant',
    content: (m.content || m.text || m.message || '').trim()
  }))
  .filter(m => m.content !== '');

// Merge consecutive same-role messages
const merged = [];
for (const msg of messages) {
  if (merged.length > 0 && merged[merged.length - 1].role === msg.role) {
    merged[merged.length - 1].content += '\n\n' + msg.content;
  } else {
    merged.push({ ...msg });
  }
}

// Ensure the history starts with a user message
while (merged.length > 0 && merged[0].role !== 'user') {
  merged.shift();
}

// Add the current message
if (currentMessage.trim()) {
  if (merged.length > 0 && merged[merged.length - 1].role === 'user') {
    merged[merged.length - 1].content += '\n\n' + currentMessage.trim();
  } else {
    merged.push({ role: 'user', content: currentMessage.trim() });
  }
}

// End with a user message so Claude has something to respond to
// (a trailing assistant message is treated as a prefill for Claude to continue)
if (merged.length > 0 && merged[merged.length - 1].role !== 'user') {
  merged.push({ role: 'user', content: 'Please continue.' });
}

// Check whether summarization is needed
const needsSummarization = merged.length > MAX_MESSAGES;

return [{
  json: {
    model: 'claude-3-5-sonnet-20241022',
    max_tokens: 2048,
    system,
    messages: needsSummarization
      ? merged.slice(-KEEP_RECENT)
      : merged,
    needsSummarization,
    messageCount: merged.length,
    sessionId
  }
}];

Common mistakes when passing conversation history to Claude in n8n

Mistake: Putting system instructions in the messages array instead of the system parameter

How to avoid: Claude's API uses a separate 'system' parameter — extract system messages from history and pass them there

Mistake: Sending consecutive messages with the same role (two user messages in a row)

How to avoid: Merge consecutive same-role messages into a single message with content joined by newlines

Mistake: Starting the messages array with an assistant message

How to avoid: Claude requires the first message to be from the user — remove or reorder leading assistant messages

Mistake: Using Simple Memory in production, losing all history on n8n restart

How to avoid: Use Postgres Chat Memory or Redis Chat Memory in production — they persist across restarts

Mistake: Not isolating conversations by session ID, causing users to see each other's messages

How to avoid: Set the Session ID Key in the memory node to a per-user or per-conversation identifier from the webhook

Best practices

  • Always validate that messages alternate between user and assistant roles before sending to Claude
  • Use the system parameter for system instructions — never put them in the messages array
  • Merge consecutive same-role messages instead of sending them separately
  • Ensure the first message is always from the user role
  • Use Postgres Chat Memory for production and Simple Memory for development
  • Set a context window limit and implement summarization for long conversations
  • Include the session ID in every webhook request to isolate user conversations
  • Use the AI Agent node with Anthropic model for automatic message formatting when possible
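The first few practices can be enforced mechanically with a small pre-flight check. A sketch you could drop into a Code node right before the API call (the error wording is illustrative):

```javascript
// Return a list of violations of Claude's message-format rules (empty = OK)
function validateClaudeMessages(messages) {
  const errors = [];
  if (!Array.isArray(messages) || messages.length === 0) {
    errors.push('messages array is empty');
    return errors;
  }
  if (messages[0].role !== 'user') {
    errors.push('first message must have role "user"');
  }
  for (let i = 0; i < messages.length; i++) {
    const m = messages[i];
    if (m.role === 'system') {
      errors.push(`system message at index ${i} belongs in the system parameter`);
    }
    if (typeof m.content !== 'string' || m.content.trim() === '') {
      errors.push(`empty content at index ${i}`);
    }
    if (i > 0 && m.role === messages[i - 1].role) {
      errors.push(`consecutive "${m.role}" messages at index ${i}`);
    }
  }
  return errors;
}
```

Failing the workflow with these errors in the output is far easier to debug than a raw 400 response from the API.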

Still stuck?

Copy one of these prompts to get a personalized, step-by-step explanation.

ChatGPT Prompt

My n8n workflow sends conversation history to Claude but gets '400 messages must alternate between user and assistant roles'. How do I transform n8n memory node output to match Claude's exact message format requirements?

n8n Prompt

I'm using the Postgres Chat Memory node in n8n with an HTTP Request node to call Claude's Messages API. How do I format the memory output into Claude's required alternating user/assistant message format?

Frequently asked questions

Does the n8n AI Agent node with Anthropic model handle message formatting automatically?

Yes, when you connect a memory node to the AI Agent node's memory input and select an Anthropic model, n8n handles role alternation and system message separation automatically. Manual formatting is only needed when using the HTTP Request node.

Can I use the same conversation history format for both Claude and OpenAI?

No. Claude uses a separate 'system' parameter and requires strict user/assistant alternation. OpenAI allows system messages in the messages array and is more lenient about role ordering. Transform the history per provider.
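As a sketch of the difference, converting an OpenAI-style history (system messages inline) to Claude's shape is mostly a matter of splitting out the system messages — this assumes role alternation is already correct; run the merging logic from this tutorial first if it is not:

```javascript
// Split an OpenAI-style message list into Claude's { system, messages } shape
function toClaudeFormat(openaiMessages) {
  const system = openaiMessages
    .filter(m => m.role === 'system')
    .map(m => m.content)
    .join('\n');
  const messages = openaiMessages.filter(m => m.role !== 'system');
  return { system, messages };
}
```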

What happens if I send an empty messages array to Claude?

Claude returns a 400 error. Always ensure at least one user message is in the array. Add a fallback user message like 'Hello' if the array would otherwise be empty.
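A small guard in the formatting Code node covers this case (the fallback text is an arbitrary choice):

```javascript
// Guard against sending an empty messages array to Claude
function ensureNonEmpty(messages) {
  if (messages.length === 0) {
    messages.push({ role: 'user', content: 'Hello' });
  }
  return messages;
}
```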

How many messages can I send in Claude's messages array?

There is no hard limit on message count, but the total token count (all messages + system + response) must fit within the model's context window: 200K tokens for Claude 3.5 Sonnet. Implement summarization for conversations approaching this limit.

Why does Claude seem to forget context even though I am sending history?

Check three things: (1) verify the messages array is not empty in the execution output, (2) confirm the session ID is consistent across turns, and (3) ensure you are storing the assistant response after each turn.

Can RapidDev help build multi-turn Claude chatbots with persistent memory in n8n?

Yes, RapidDev builds production chatbot workflows in n8n with persistent memory, proper Claude message formatting, session management, and conversation summarization. Their team handles the edge cases around role alternation, context window management, and multi-user isolation.
