RapidDev - Software Development Agency

How to Handle Multi-Turn Conversations Failing with Cohere in n8n

What you'll learn

  • How Cohere's chat_history format differs from OpenAI's messages format
  • How to transform n8n memory node output into Cohere-compatible history
  • How to configure Postgres Chat Memory for persistent multi-turn Cohere conversations
  • How to handle session IDs so each user gets their own conversation thread
Difficulty: Advanced · Reading time: 8 min · Time to complete: 30-40 minutes · Requirements: n8n 1.30+, Cohere Chat API v1, Postgres Chat Memory or Redis Chat Memory node · Updated: March 2026 · Author: RapidDev Engineering Team
TL;DR

Multi-turn conversations with Cohere in n8n fail because the Cohere Chat API expects a specific message format with chat_history arrays, and n8n memory nodes do not automatically map to this structure. Fix this by using a Code node to transform memory output into Cohere's chat_history format, and configure the Postgres or Redis Chat Memory node to store role-tagged messages.

Why Multi-Turn Conversations Fail with Cohere in n8n

Cohere's Chat API uses a chat_history parameter that expects an array of objects with role and message fields — different from OpenAI's messages array format. When you connect an n8n memory node (Simple Memory, Postgres Chat Memory, or Redis Chat Memory) to a Cohere HTTP Request node, the conversation history is not automatically formatted correctly. Messages arrive as a flat array or as the wrong field names, causing Cohere to treat every request as a new conversation. This tutorial shows how to bridge n8n's memory system with Cohere's expected format.
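To make the difference concrete, here is a minimal sketch (the sample messages are made up) that converts an OpenAI-style messages array into the shape Cohere's v1 Chat API expects:

```javascript
// Hypothetical two-turn exchange in OpenAI's messages format
const openAiStyle = [
  { role: 'user', content: 'What is n8n?' },
  { role: 'assistant', content: 'n8n is a workflow automation tool.' }
];

// Cohere v1 uses uppercase roles and a `message` field instead of `content`
const cohereStyle = openAiStyle.map(msg => ({
  role: msg.role === 'assistant' ? 'CHATBOT' : 'USER',
  message: msg.content
}));

console.log(cohereStyle);
// [ { role: 'USER', message: 'What is n8n?' },
//   { role: 'CHATBOT', message: 'n8n is a workflow automation tool.' } ]
```

The rest of this tutorial builds a more complete version of this mapping that also handles n8n's various memory field names.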

Prerequisites

  • A running n8n instance (self-hosted or cloud) on version 1.30 or later
  • A Cohere API key with access to the Chat endpoint
  • PostgreSQL database accessible from n8n (for persistent memory)
  • Basic understanding of n8n Code nodes and HTTP Request nodes
  • Familiarity with REST API request/response formats

Step-by-step guide

1

Set up the Postgres Chat Memory node for session storage

Add a Postgres Chat Memory node to your workflow. This node stores conversation messages in a PostgreSQL table with session IDs, allowing you to retrieve history per user. Configure it with your PostgreSQL credentials, set the Session ID to a dynamic value from your Webhook (e.g., {{ $json.userId }} or {{ $json.sessionId }}), and set the Context Window Length to 20 messages. The memory node stores messages in a generic format — we will transform them for Cohere in the next step.

Expected result: The Postgres Chat Memory node is connected and stores messages with the correct session ID for each user.

2

Retrieve and transform conversation history for Cohere

Add a Code node after your memory retrieval step. The n8n memory nodes return messages in a format like [{role: 'user', content: 'hello'}, {role: 'assistant', content: 'hi'}]. Cohere expects chat_history as [{role: 'USER', message: 'hello'}, {role: 'CHATBOT', message: 'hi'}]. The Code node maps between these formats, converting role names and field names to match Cohere's API specification.

javascript
// Code node — JavaScript
// Transform n8n memory format to Cohere chat_history format

const items = $input.all();
const memoryMessages = items[0].json.memory || items[0].json.messages || [];

// Map n8n roles to Cohere roles
const roleMap = {
  user: 'USER',
  human: 'USER',
  assistant: 'CHATBOT',
  ai: 'CHATBOT',
  system: 'SYSTEM'
};

const chatHistory = memoryMessages
  .filter(msg => msg.role !== 'system') // Cohere uses preamble, not system messages in history
  .map(msg => ({
    role: roleMap[msg.role?.toLowerCase()] || 'USER',
    message: msg.content || msg.text || msg.message || ''
  }));

// The current user message is sent separately, not in chat_history
const currentMessage = items[0].json.currentMessage
  || items[0].json.message
  || items[0].json.text
  || '';

return [{
  json: {
    chatHistory,
    currentMessage,
    sessionId: items[0].json.sessionId || 'default'
  }
}];

Expected result: The output contains a chatHistory array in Cohere's format and a separate currentMessage field for the latest user input.

3

Configure the HTTP Request node to call Cohere Chat API

Add an HTTP Request node and configure it to call Cohere's Chat endpoint. Set the method to POST, the URL to https://api.cohere.ai/v1/chat, and add headers for Authorization (Bearer your-api-key) and Content-Type (application/json). In the body, use the transformed chatHistory as the chat_history field, the currentMessage as the message field, and optionally set a preamble for system instructions. The preamble acts as Cohere's equivalent of a system message.

json
// HTTP Request node body — set "Body Content Type" to JSON with expressions enabled
{
  "model": "command-r-plus",
  "message": "{{ $json.currentMessage }}",
  "chat_history": {{ JSON.stringify($json.chatHistory) }},
  "preamble": "You are a helpful assistant. Answer questions clearly and concisely.",
  "temperature": 0.3,
  "max_tokens": 1024,
  "conversation_id": "{{ $json.sessionId }}"
}

Expected result: Cohere returns a response that is context-aware of previous messages in the conversation.
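On the v1 endpoint, the reply text lives in the top-level text field of the response, while error responses typically carry a message field instead. Here is a hedged sketch of a Code node helper that extracts the reply defensively (the error shape is an assumption; verify against your actual responses):

```javascript
// Code node sketch: extract the reply from Cohere's v1 chat response.
// Assumes a top-level `text` field on success and a `message` field on error.
function extractReply(response) {
  if (typeof response.text === 'string' && response.text.length > 0) {
    return { ok: true, text: response.text };
  }
  return {
    ok: false,
    text: '',
    error: response.message || 'Empty response from Cohere'
  };
}

console.log(extractReply({ text: 'Hello again!' }));
// { ok: true, text: 'Hello again!' }
```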

4

Store the assistant response back into memory

After receiving Cohere's response, you need to store both the user message and the assistant response in your memory node for the next turn. Add a Code node that formats the new messages for the Postgres Chat Memory node. The memory node expects messages with role and content fields. Extract the response text from Cohere's response (located at $json.text in the API response) and pass both messages to the memory node's input.

javascript
// Code node — JavaScript
// Prepare messages to store in memory

const cohereResponse = $input.all()[0].json;
const userMessage = $('Transform History').first().json.currentMessage;
const assistantMessage = cohereResponse.text || '';

return [
  {
    json: {
      action: 'store',
      messages: [
        { role: 'user', content: userMessage },
        { role: 'assistant', content: assistantMessage }
      ],
      sessionId: $('Transform History').first().json.sessionId,
      responseText: assistantMessage
    }
  }
];

Expected result: Both the user message and Cohere's response are stored in PostgreSQL, ready for retrieval on the next turn.

5

Handle conversation context window limits

Cohere's chat_history can grow large over long conversations, eventually exceeding token limits. Add logic to your transformation Code node to trim the history to the most recent N messages while always preserving the first message (which often contains important context). A good default is 20 messages (10 turns). This prevents 'context length exceeded' errors and keeps API costs predictable.

javascript
// Add to your Transform History Code node
const MAX_HISTORY = 20;

let trimmedHistory = chatHistory;
if (chatHistory.length > MAX_HISTORY) {
  // Keep the first message plus the most recent messages
  const first = chatHistory[0];
  const recent = chatHistory.slice(-(MAX_HISTORY - 1));
  trimmedHistory = [first, ...recent];
}

// Use trimmedHistory instead of chatHistory in the output

Expected result: Conversations longer than 20 messages are trimmed to stay within Cohere's context window while preserving the opening context.
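If you would rather budget by size than by message count, you can approximate tokens from character length. This is a sketch under a rough 4-characters-per-token heuristic (not Cohere's real tokenizer), and unlike the step above it keeps only the most recent messages, so re-add the first message afterwards if you rely on its context:

```javascript
// Sketch: trim chat_history to an approximate token budget.
// The 4-chars-per-token ratio is a rough heuristic, not Cohere's tokenizer.
function trimByTokenBudget(history, maxTokens = 3000) {
  const estimateTokens = msg => Math.ceil(msg.message.length / 4);
  const kept = [];
  let used = 0;
  // Walk backwards so the most recent messages are kept first
  for (let i = history.length - 1; i >= 0; i--) {
    const cost = estimateTokens(history[i]);
    if (used + cost > maxTokens) break;
    kept.unshift(history[i]);
    used += cost;
  }
  return kept;
}
```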

6

Test the full multi-turn flow

Test the complete workflow by sending multiple messages with the same sessionId through your Webhook. Send at least three messages in sequence and verify that Cohere's responses reference information from earlier in the conversation. Check the Postgres Chat Memory table directly to confirm messages are stored with correct roles and session IDs. Use n8n's execution history to inspect the chatHistory array at each turn and verify it grows correctly.

Expected result: Cohere responds with full context awareness across multiple turns, and the PostgreSQL table shows a clean conversation history with correct session partitioning.
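To make the test repeatable, you can generate the message sequence in a small script and POST each payload to your webhook in order with any HTTP client. The payload shape ({ sessionId, message }) and the test questions are assumptions; match whatever fields your Webhook node expects:

```javascript
// Sketch: build three sequential test payloads that share one session ID.
// The { sessionId, message } shape is an assumption about your webhook.
const sessionId = `test-${Date.now()}`;

const turns = [
  'My name is Alice and I live in Berlin.',
  'What city do I live in?',
  'And what is my name?'
].map(message => ({ sessionId, message }));

// POST each payload to your webhook in order, waiting for each response;
// turns 2 and 3 should be answered with "Berlin" and "Alice" respectively
// if the conversation history is flowing through correctly.
console.log(turns);
```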

Complete working example

cohere-multi-turn-transformer.js
// Complete Code node: Transform n8n memory to Cohere chat_history
// Place between Memory Retrieval and HTTP Request (Cohere) nodes

const MAX_HISTORY = 20;

const items = $input.all();
const rawMessages = items[0].json.memory || items[0].json.messages || [];
const currentMessage = items[0].json.currentMessage
  || items[0].json.message
  || items[0].json.text
  || '';
const sessionId = items[0].json.sessionId || 'default';

// Role mapping: n8n memory format → Cohere format
const roleMap = {
  user: 'USER',
  human: 'USER',
  assistant: 'CHATBOT',
  ai: 'CHATBOT',
  system: 'SYSTEM'
};

// Transform messages
let chatHistory = rawMessages
  .filter(msg => msg.role !== 'system')
  .map(msg => ({
    role: roleMap[msg.role?.toLowerCase()] || 'USER',
    message: msg.content || msg.text || msg.message || ''
  }))
  .filter(msg => msg.message.trim() !== '');

// Trim to context window (record whether trimming happened before mutating)
const wasTrimmed = chatHistory.length > MAX_HISTORY;
if (wasTrimmed) {
  const first = chatHistory[0];
  const recent = chatHistory.slice(-(MAX_HISTORY - 1));
  chatHistory = [first, ...recent];
}

// Build Cohere request body
const cohereBody = {
  model: 'command-r-plus',
  message: currentMessage,
  chat_history: chatHistory,
  preamble: 'You are a helpful assistant. Maintain context across the conversation.',
  temperature: 0.3,
  max_tokens: 1024,
  conversation_id: sessionId
};

return [{
  json: {
    ...cohereBody,
    _meta: {
      historyLength: chatHistory.length,
      sessionId,
      trimmed: wasTrimmed
    }
  }
}];

Common mistakes when handling multi-turn conversations with Cohere in n8n

Mistake: Sending the current user message inside chat_history instead of the separate message field

How to avoid: Put the current message in the 'message' field and only prior messages in 'chat_history'

Mistake: Using 'assistant' as the role in chat_history when Cohere expects 'CHATBOT'

How to avoid: Map n8n's 'assistant' role to 'CHATBOT' and 'user' to 'USER' (uppercase) in the transformation

Mistake: Leaving system messages in chat_history, which Cohere rejects

How to avoid: Filter messages with role 'system' and put system instructions in the preamble field instead

Mistake: Using the 'content' field in Cohere history, where Cohere expects 'message'

How to avoid: Map msg.content from n8n memory to msg.message for Cohere's chat_history format

Mistake: Sharing a single session ID across all users, mixing conversation histories

How to avoid: Generate a unique session ID per user (from webhook payload, auth token, or UUID) and pass it through the workflow
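A small pre-flight check in your transformation Code node can catch most of these mistakes before the HTTP call goes out. Here is a sketch (the error messages are illustrative):

```javascript
// Sketch: validate a chat_history array before sending it to Cohere's
// v1 Chat API, catching the common mistakes listed above.
function validateChatHistory(chatHistory, currentMessage) {
  const errors = [];
  const validRoles = new Set(['USER', 'CHATBOT']);
  chatHistory.forEach((msg, i) => {
    if (!validRoles.has(msg.role)) {
      errors.push(`Entry ${i}: role "${msg.role}" is invalid, expected USER or CHATBOT`);
    }
    if (typeof msg.message !== 'string') {
      errors.push(`Entry ${i}: missing "message" field (did you send "content" instead?)`);
    }
  });
  if (currentMessage && chatHistory.some(msg => msg.message === currentMessage)) {
    errors.push('Current message appears inside chat_history; send it only in "message"');
  }
  return errors;
}

// A valid history produces no errors; a malformed one explains what is wrong
console.log(validateChatHistory([{ role: 'assistant', content: 'hi' }], 'next'));
```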

Best practices

  • Always map n8n role names (user/assistant) to Cohere role names (USER/CHATBOT) explicitly
  • Use conversation_id in Cohere API calls to enable server-side conversation tracking as a fallback
  • Set a context window limit (20 messages) to prevent token limit errors on long conversations
  • Store system instructions in the preamble field, not in chat_history
  • Use Postgres Chat Memory over Simple Memory for production — it survives n8n restarts
  • Include the session ID in webhook requests so each user gets an isolated conversation
  • Pin test data with multi-turn conversations to iterate on the transformation logic without API costs
  • Log the transformed chat_history length in each execution for monitoring conversation growth

Still stuck?

Copy one of these prompts to get a personalized, step-by-step explanation.

ChatGPT Prompt

I'm using n8n to build a chatbot with Cohere's Chat API. My multi-turn conversations don't work — Cohere treats every message as a new conversation. How do I transform n8n memory node output into Cohere's chat_history format with the correct role names?

n8n Prompt

In my n8n workflow, the Postgres Chat Memory node stores conversation history but when I send it to Cohere via HTTP Request, the chat_history format is wrong. How do I transform the memory output to use USER/CHATBOT roles and the message field?

Frequently asked questions

Does n8n have a built-in Cohere node that handles multi-turn conversations automatically?

n8n does not have a dedicated Cohere chat node as of version 1.30. You need to use the HTTP Request node with manual chat_history formatting, or use the AI Agent node with a custom LangChain Cohere integration.

Can I use the Simple Memory node instead of Postgres Chat Memory for Cohere conversations?

Yes, but Simple Memory stores data in n8n's process memory, so all conversation history is lost when n8n restarts. Use it only for development and testing.

What is the maximum chat_history length Cohere supports?

Cohere's context window depends on the model — Command R+ supports up to 128K tokens. However, very long histories increase latency and cost. Limit to 20-30 messages and summarize older context if needed.

How do I handle concurrent users with different conversation threads?

Use a unique session ID per user (from authentication, cookies, or a generated UUID). Pass this session ID to both the Postgres Chat Memory node and the Cohere conversation_id parameter.

Why does Cohere respond as if it has no context even though I am sending chat_history?

Check three things: (1) roles must be uppercase USER/CHATBOT, (2) the field name must be 'message' not 'content', and (3) the current message must be in the 'message' parameter, not inside chat_history.

Can RapidDev help build production Cohere chatbot workflows in n8n?

Yes, RapidDev specializes in building production-grade n8n workflows including multi-turn chatbots with persistent memory, rate limiting, and error handling. Their team can help architect the conversation management layer for Cohere or any other LLM provider.
