Multi-turn conversations with Gemini in n8n fail when thread history is not passed correctly between webhook triggers. Fix this by using a memory node (Postgres Chat Memory or Redis Chat Memory) with a consistent session ID derived from the user. Each new message loads the full conversation history from memory, sends it to Gemini, and saves the response back. This creates seamless follow-up conversations across separate webhook calls.
Why Multi-Turn Conversations Break with Gemini in n8n
Each webhook trigger in n8n starts a fresh execution with no memory of previous calls. When a user sends a follow-up question like 'What about the second option?', the Gemini node has no context about what 'the second option' refers to. The conversation appears new every time. To fix this, you need to persist conversation history between executions using n8n's memory nodes and pass a session ID that identifies the user's conversation thread. This tutorial builds a complete multi-turn chat system with Gemini that maintains context across any number of follow-ups.
Prerequisites
- A running n8n instance (v1.30 or later)
- A Google Gemini API credential configured in n8n
- A PostgreSQL or Redis instance accessible from n8n (for chat memory)
- Basic understanding of n8n AI Agent node and memory concepts
Step-by-step guide
Set up the Webhook to receive messages with session IDs
Create a new workflow with a Webhook node. Set HTTP Method to POST and Path to /gemini-chat. Set Response Mode to 'Using Respond to Webhook Node'. The incoming payload should include a 'message' field and a 'sessionId' field. If the caller does not provide a sessionId, you will generate one from the user's identifier (IP, userId, or a UUID) in the next step.
```
// Expected payload:
// POST /webhook/gemini-chat
// {
//   "message": "What are the best restaurants in Paris?",
//   "sessionId": "user_42_session_abc123",
//   "userId": "user_42"
// }
```
Expected result: Webhook accepts POST requests with message and optional sessionId fields
Add a Code node to ensure a consistent session ID
Add a Code node after the Webhook to normalize the session ID. If the caller provides a sessionId, use it. If not, generate one from the userId or create a new UUID. The session ID must be consistent across multiple requests from the same user so the memory node can load the correct conversation history.
```javascript
const input = $input.first().json;

// Use provided sessionId, or derive from userId, or generate new
let sessionId = input.sessionId;
if (!sessionId) {
  if (input.userId) {
    sessionId = `session_${input.userId}_${new Date().toISOString().split('T')[0]}`;
  } else {
    sessionId = `session_${crypto.randomUUID()}`;
  }
}

return [{
  json: {
    message: input.message,
    sessionId,
    userId: input.userId || 'anonymous'
  }
}];
```
Expected result: Every request has a consistent sessionId that identifies the conversation thread
Configure the AI Agent node with Gemini and Chat Memory
Add an AI Agent node. Set the model to Google Gemini (gemini-2.0-flash or gemini-1.5-pro). In the Memory section, add a Postgres Chat Memory node (or Redis Chat Memory). Set the Session ID field to the expression {{ $json.sessionId }}. The memory node automatically loads all previous messages for that session and appends them to the Gemini request as conversation history. Set Context Window Length to a reasonable number (e.g., 20 messages) to prevent token limit issues on long conversations.
```
// AI Agent node configuration:
// Model: Google Gemini (gemini-2.0-flash)
// System Message: "You are a helpful travel assistant."
// Memory: Postgres Chat Memory
//   - Session ID: {{ $json.sessionId }}
//   - Context Window Length: 20
//   - Connection: Your PostgreSQL credential
```
Expected result: The AI Agent node loads conversation history from the database and includes it in the Gemini request
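n8n's AI Agent and memory nodes assemble the model request internally, so you never build it by hand. Purely as an illustrative sketch, here is how a stored history could map onto the shape of Gemini's generateContent payload (roles "user"/"model"), with the slice mirroring the Context Window Length setting:

```javascript
// Illustrative sketch only — n8n handles this internally. Maps a stored
// chat history plus a new message onto Gemini's generateContent shape.
function buildGeminiRequest(systemMessage, history, newMessage, windowLength) {
  // Mirror the memory node's Context Window Length: keep recent turns only
  const windowed = history.slice(-windowLength);
  const contents = windowed.map((m) => ({
    role: m.from === 'ai' ? 'model' : 'user',
    parts: [{ text: m.text }],
  }));
  contents.push({ role: 'user', parts: [{ text: newMessage }] });
  return {
    systemInstruction: { parts: [{ text: systemMessage }] },
    contents,
  };
}

// Two earlier turns plus the follow-up from this tutorial's example
const history = [
  { from: 'user', text: 'What are the best restaurants in Paris?' },
  { from: 'ai', text: 'Some well-known options are Septime and Le Cinq.' },
];
const req = buildGeminiRequest(
  'You are a helpful travel assistant.',
  history,
  'Which one has the best desserts?',
  20
);
```

The key point: without the memory node supplying `history`, the `contents` array would hold only the new message, and Gemini would have nothing to ground a follow-up against.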
Add the Respond to Webhook node to return Gemini's answer
After the AI Agent node, add a Respond to Webhook node. Set the Response Body to the AI Agent's output text. Also include the sessionId in the response so the caller can use it for subsequent requests. This completes the request-response cycle for each turn in the conversation.
```
// Respond to Webhook body expression:
// {
//   "response": "{{ $json.output }}",
//   "sessionId": "{{ $('Ensure Session ID').first().json.sessionId }}"
// }
```
Expected result: Each webhook call returns Gemini's response along with the sessionId for use in follow-up requests
Test multi-turn conversation flow
Test the complete flow by sending a series of messages with the same sessionId. First send 'What are the best restaurants in Paris?' with sessionId 'test_session_1'. Then send 'Which one has the best desserts?' with the same sessionId. The second response should reference the restaurants from the first answer, proving that conversation history is being loaded correctly. Then send a message with a different sessionId to verify conversation isolation.
Expected result: Follow-up questions reference context from earlier messages in the same session, and different sessions are isolated
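The property this test verifies can be sketched with a tiny in-memory stand-in for the chat-memory table (illustrative only; in the real workflow the Postgres or Redis memory node plays this role):

```javascript
// In-memory stand-in for the chat-memory table: turns with the same
// sessionId accumulate, while a different sessionId starts a clean thread.
const memory = new Map();

function recordTurn(sessionId, userMessage, aiResponse) {
  const history = memory.get(sessionId) || [];
  history.push({ role: 'user', text: userMessage });
  history.push({ role: 'model', text: aiResponse });
  memory.set(sessionId, history);
  return history;
}

recordTurn('test_session_1', 'What are the best restaurants in Paris?', 'Septime, Le Cinq, ...');
recordTurn('test_session_1', 'Which one has the best desserts?', 'For desserts, Le Cinq stands out.');
recordTurn('test_session_2', 'Hello', 'Hi there!');
```

After these calls, `test_session_1` holds four messages (two full turns) and `test_session_2` holds two, which is exactly the isolation your webhook test should demonstrate.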
Add session cleanup for expired conversations
Create a scheduled workflow that runs daily (using a Schedule Trigger node) to clean up old conversation sessions. Add a Code node or PostgreSQL node that deletes memory records older than your retention period (e.g., 7 days). This prevents the memory table from growing indefinitely and helps with privacy compliance.
```sql
-- PostgreSQL node query for cleanup:
DELETE FROM n8n_chat_histories
WHERE created_at < NOW() - INTERVAL '7 days';
```
Expected result: Old conversation sessions are automatically deleted, keeping the memory table manageable
Complete working example
```javascript
// ====== Session ID Manager — Code Node ======
// Place after Webhook, before AI Agent node

const input = $input.first().json;

// Normalize session ID
let sessionId = input.sessionId;
if (!sessionId) {
  const userId = input.userId || input.email || 'anonymous';
  // Create a daily session by default
  const dateKey = new Date().toISOString().split('T')[0];
  sessionId = `session_${userId}_${dateKey}`;
}

// Validate session ID format (alphanumeric, underscores, hyphens)
const sanitizedSessionId = sessionId.replace(/[^a-zA-Z0-9_-]/g, '_').substring(0, 128);

return [{
  json: {
    message: input.message || '',
    sessionId: sanitizedSessionId,
    userId: input.userId || 'anonymous',
    timestamp: new Date().toISOString(),
    metadata: {
      source: input.source || 'webhook',
      platform: input.platform || 'unknown'
    }
  }
}];
```

```javascript
// ====== Session Cleanup — Scheduled Workflow Code Node ======
// Run via Schedule Trigger (daily at 2 AM)

const retentionDays = 7;
const cutoffDate = new Date();
cutoffDate.setDate(cutoffDate.getDate() - retentionDays);

return [{
  json: {
    query: `DELETE FROM n8n_chat_histories WHERE created_at < '${cutoffDate.toISOString()}'`,
    retentionDays,
    cutoffDate: cutoffDate.toISOString()
  }
}];
```
Common mistakes when sending Follow-Up Questions to Gemini with Correct Thread History in n8n
Mistake: Not passing a sessionId with follow-up messages, so each message starts a new conversation
How to avoid: Return the sessionId in every response and require callers to include it in subsequent requests
Mistake: Using Simple Memory instead of Postgres/Redis Chat Memory, losing all history when n8n restarts
How to avoid: Use Postgres Chat Memory or Redis Chat Memory for production workflows that need persistence
Mistake: Setting Context Window Length too high, causing Gemini token limit errors on long conversations
How to avoid: Set Context Window Length to 15-20 messages for gemini-2.0-flash, or 30-40 for gemini-1.5-pro with its larger context
Mistake: Using the same session ID for all users, mixing everyone's conversation history into one thread
How to avoid: Derive session IDs from unique user identifiers (userId, email, IP) to ensure conversation isolation
Best practices
- Always sanitize session IDs to prevent SQL injection — strip special characters and limit length
- Set a Context Window Length on the memory node to prevent token limit errors on long conversations
- Include the sessionId in your webhook response so callers can use it for follow-ups
- Use Postgres Chat Memory for production (persistent) and Simple Memory for development (in-memory, lost on restart)
- Create daily session IDs by including the date to automatically limit conversation length
- Add a scheduled cleanup workflow to delete old sessions and comply with data retention policies
- Test with concurrent users to ensure session IDs correctly isolate conversations
- Log the session ID and message count per session for monitoring conversation patterns
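The sanitization rule from the first best practice can be isolated into a small helper, using the same regex and length cap as the Session ID Manager code in the complete example above:

```javascript
// Allow only alphanumerics, underscores, and hyphens; cap at 128 chars.
// Same rule as the Session ID Manager Code node above.
function sanitizeSessionId(raw) {
  return String(raw).replace(/[^a-zA-Z0-9_-]/g, '_').substring(0, 128);
}

// e.g. sanitizeSessionId('user@example.com/2024') -> 'user_example_com_2024'
```

Because the memory node interpolates the session ID into database queries, stripping special characters and bounding the length is cheap insurance against both injection attempts and malformed keys.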
Still stuck?
Copy one of these prompts to get a personalized, step-by-step explanation.
I need to build a multi-turn conversation system with Google Gemini in n8n where follow-up questions reference earlier messages. How do I use Postgres Chat Memory with session IDs to persist conversation history between webhook triggers?
Add a Webhook → Code node (generate sessionId from userId) → AI Agent (Gemini model + Postgres Chat Memory with sessionId expression {{ $json.sessionId }}) → Respond to Webhook (include sessionId in response). Set Context Window Length to 20 in the memory node.
Frequently asked questions
Which memory node should I use for Gemini conversations in n8n?
Use Postgres Chat Memory for production workflows because it persists data to a PostgreSQL database that survives n8n restarts. Use Redis Chat Memory if you need faster reads and can tolerate data loss on Redis restarts. Use Simple Memory only for testing.
How many messages can I store in a session before hitting Gemini's token limit?
Gemini 2.0 Flash has a 1 million token context window, but practical limits are lower due to cost and latency. Set Context Window Length to 15-20 messages for typical conversations. For long-form sessions, consider summarizing older messages instead of passing raw history.
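The "summarize older messages" approach can be sketched as a small helper. Note the `summarize` callback here is a placeholder, not a real API; in practice you would implement it with an extra LLM call (e.g. a cheap Gemini Flash request) before writing back to memory:

```javascript
// Keep the last `keepLast` messages verbatim and collapse everything
// older into one synthetic summary message. `summarize` is a placeholder
// callback — in practice, an LLM call that condenses the older turns.
function compressHistory(history, keepLast, summarize) {
  if (history.length <= keepLast) return history;
  const older = history.slice(0, history.length - keepLast);
  const recent = history.slice(-keepLast);
  return [
    { role: 'user', text: `Summary of earlier conversation: ${summarize(older)}` },
    ...recent,
  ];
}
```

This trades fidelity on old turns for a bounded token budget: the request size stays roughly constant no matter how long the session runs.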
Can I use the same memory table for both Gemini and Claude conversations?
Yes. n8n's memory nodes store messages in a provider-agnostic format. Different session IDs isolate conversations, so you can use one memory table across multiple LLM providers.
How do I handle users who do not provide a session ID?
Generate one automatically from available identifiers: userId, email, IP address, or a UUID. Return the generated sessionId in the response so the caller can include it in future requests.
What happens if two users share the same session ID?
Their messages will be interleaved in the same conversation history, causing confused responses. Always derive session IDs from unique user identifiers and validate uniqueness in your Code node.
Can RapidDev help build a production multi-turn chatbot with Gemini in n8n?
Yes. RapidDev builds conversational AI systems in n8n with persistent memory, session management, and multi-model support for teams that need reliable, scalable chatbot infrastructure.
Talk to an Expert
Our team has built 600+ apps. Get personalized help with your project.
Book a free consultation