AI chatbots triggered by webhooks in n8n lose conversation context between requests because each webhook trigger starts a fresh execution. Fix this by using a session ID from the incoming request, storing conversation history in PostgreSQL or Redis via dedicated n8n nodes, and loading it back at the start of each execution to maintain multi-turn conversations.
Why Webhook-Triggered AI Agents Lose Conversation State
Each webhook trigger in n8n creates an independent execution with no memory of previous ones. When building a chatbot that receives messages via webhook, the AI agent has no access to earlier messages unless you explicitly load them. This tutorial walks you through building a stateful chatbot pattern: extracting a session ID from the webhook payload, loading conversation history from a database, appending the new message, calling the LLM with full context, and saving the updated history back.
Prerequisites
- A running n8n instance (v1.30 or later)
- A PostgreSQL database accessible from n8n
- PostgreSQL credentials configured in n8n
- An OpenAI or Anthropic API key configured as a credential
- Basic understanding of n8n webhook nodes and expressions
Step-by-step guide
Create the conversation history table in PostgreSQL
Connect to your PostgreSQL database and create a table to store conversation messages. Each row represents one message in a conversation, identified by a session_id. The table stores the role (user or assistant), the message content, and a timestamp. Adding an index on session_id ensures fast lookups when loading history.
```sql
CREATE TABLE IF NOT EXISTS conversation_history (
  id SERIAL PRIMARY KEY,
  session_id VARCHAR(255) NOT NULL,
  role VARCHAR(20) NOT NULL,
  content TEXT NOT NULL,
  created_at TIMESTAMP DEFAULT NOW()
);

CREATE INDEX idx_session_id ON conversation_history(session_id);

-- Optional: auto-delete conversations older than 24 hours
-- CREATE OR REPLACE FUNCTION cleanup_old_conversations()
-- RETURNS void AS $$
-- DELETE FROM conversation_history WHERE created_at < NOW() - INTERVAL '24 hours';
-- $$ LANGUAGE sql;
```

Expected result: The conversation_history table exists in your PostgreSQL database with an index on session_id.
Set up the Webhook node to receive messages
Add a Webhook node as the trigger for your workflow. Set the HTTP Method to POST. Configure the path (for example, /chat). The webhook expects a JSON body with at least two fields: sessionId (a unique string identifying the conversation) and message (the user's text). Set the Response Mode to 'When Last Node Finishes' so the caller receives the AI response synchronously.
Expected result: The Webhook node accepts POST requests and extracts sessionId and message from the request body.
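You can exercise the webhook with a quick test call. This is a sketch: the host and path are placeholders for your own n8n production webhook URL, and the sessionId value is just an example naming scheme.

```shell
# Hypothetical test call -- replace the URL with your n8n webhook URL
curl -X POST "https://your-n8n-host/webhook/chat" \
  -H "Content-Type: application/json" \
  -d '{
    "sessionId": "user-42-session-1",
    "message": "What did I ask you earlier?"
  }'
```

Any client that can send a POST request with a JSON body works the same way; the only contract is the sessionId and message fields.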
Load existing conversation history from PostgreSQL
Add a Postgres node after the Webhook node. Set the operation to Execute Query. Write a SELECT query that fetches this session's messages, ordered by creation time. Use the expression {{ $json.body.sessionId }} to inject the session ID into the query. To stay within token limits, keep only the most recent 20 messages: note that a plain ORDER BY created_at ASC with LIMIT 20 would return the oldest 20 rows, so select the newest rows in a descending subquery and re-sort them ascending for the LLM.
```sql
SELECT role, content FROM (
  SELECT role, content, created_at
  FROM conversation_history
  WHERE session_id = '{{ $json.body.sessionId }}'
  ORDER BY created_at DESC
  LIMIT 20
) recent
ORDER BY created_at ASC;
```

Expected result: The Postgres node returns an array of the most recent messages for this session in chronological order, or an empty array for new sessions.
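Interpolating the session ID directly into the SQL string works, but it exposes you to SQL injection if the client controls sessionId. A safer variant uses the Postgres node's Query Parameters option with a $1 placeholder; this sketch assumes your node version supports positional parameters, and it also sorts descending in a subquery so only the newest 20 messages are kept:

```sql
-- Query Parameters field (n8n Options): {{ $json.body.sessionId }}
SELECT role, content FROM (
  SELECT role, content, created_at
  FROM conversation_history
  WHERE session_id = $1
  ORDER BY created_at DESC
  LIMIT 20
) recent
ORDER BY created_at ASC;
```

With parameters, the database driver handles quoting and escaping, so a message or session ID containing apostrophes cannot break the query.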
Format the conversation history for the LLM
Add a Code node to transform the database rows into the messages array format expected by the LLM. The Code node reads all items from the Postgres node and builds an array of { role, content } objects. It then appends the new user message from the webhook payload. This complete messages array is passed to the LLM node.
```javascript
const history = $input.all().map(item => ({
  role: item.json.role,
  content: item.json.content
}));

// Get the new user message from the webhook
const webhookData = $('Webhook').first().json.body;
history.push({ role: 'user', content: webhookData.message });

return [{ json: { messages: history, sessionId: webhookData.sessionId } }];
```

Expected result: The Code node outputs a single item with a messages array containing the full conversation history plus the new user message.
Call the LLM with the full conversation context
Add an OpenAI node (or HTTP Request node for other providers) after the Code node. Configure it to use the Chat Completions endpoint. Set the Messages parameter to the expression {{ $json.messages }}. Add a system message defining the assistant's behavior. The LLM now receives the full conversation history and can respond contextually.
Expected result: The LLM responds with awareness of the full conversation history, producing contextually relevant answers.
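If you use an HTTP Request node instead of the OpenAI node, the request body can be sketched like this. The model name and temperature are illustrative, and the body assumes n8n resolves the embedded expression; JSON.stringify turns the messages array into valid JSON inside the body.

```json
{
  "model": "gpt-4o",
  "messages": {{ JSON.stringify($json.messages) }},
  "temperature": 0.7
}
```

If your Code node already prepends the system message (as the complete example below this tutorial does), skip adding a second system message in the LLM node to avoid duplicate instructions.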
Save both the user message and the assistant response to PostgreSQL
Add two Postgres nodes after the LLM node (or one with a multi-row insert). Insert the user's message and the assistant's response into the conversation_history table. Use the sessionId from the earlier Code node and the response content from the LLM node. This ensures the next webhook trigger has access to the complete conversation.
```sql
INSERT INTO conversation_history (session_id, role, content)
VALUES
  ('{{ $('Code').first().json.sessionId }}', 'user', '{{ $('Webhook').first().json.body.message }}'),
  ('{{ $('Code').first().json.sessionId }}', 'assistant', '{{ $json.message.content }}');
```

Expected result: Both the user message and the AI response are stored in PostgreSQL, ready for the next execution.
Return the response to the webhook caller
Add a Respond to Webhook node at the end of the workflow. Set the response body to include the assistant's message. Since the Webhook node's Response Mode is set to 'When Last Node Finishes', the Respond to Webhook node sends the AI response back to the caller as the HTTP response. Include the session ID in the response so the client can use it for follow-up messages.
```json
{
  "sessionId": "{{ $('Code').first().json.sessionId }}",
  "response": "{{ $json.message.content }}"
}
```

Expected result: The webhook caller receives the AI response as a JSON object with the session ID and response text.
Complete working example
```javascript
// Code node: Format conversation history for LLM
// Input: Postgres node output (previous messages) + Webhook data
// Output: messages array ready for OpenAI / Anthropic

const MAX_HISTORY = 20;

// Load previous messages from Postgres node output
const dbRows = $input.all();
const history = dbRows
  .filter(item => item.json.role && item.json.content)
  .map(item => ({
    role: item.json.role,
    content: item.json.content
  }));

// Trim to last MAX_HISTORY messages to stay within token limits
const trimmedHistory = history.slice(-MAX_HISTORY);

// Get the new user message from the Webhook node
const webhookData = $('Webhook').first().json.body;
const sessionId = webhookData.sessionId || `session_${Date.now()}`;
const userMessage = webhookData.message;

if (!userMessage || userMessage.trim() === '') {
  return [{
    json: {
      error: 'No message provided',
      sessionId
    }
  }];
}

// Append the new user message
trimmedHistory.push({ role: 'user', content: userMessage });

// Build the system message
const systemMessage = {
  role: 'system',
  content: 'You are a helpful assistant. Be concise and accurate.'
};

// Combine system message with conversation history
const messages = [systemMessage, ...trimmedHistory];

return [{
  json: {
    messages,
    sessionId,
    messageCount: trimmedHistory.length
  }
}];
```

Common mistakes when storing conversation state between webhook triggers for AI agents in n8n
Using $getWorkflowStaticData to store conversation history
Why it's a problem: Static data is lost on restart and is shared across all users.
How to avoid: Use PostgreSQL or Redis for production conversation storage.

Not limiting the number of messages loaded from the database
Why it's a problem: Unbounded history will eventually exceed the model's token limit.
How to avoid: Limit your SELECT query to the most recent 20 messages (sort descending with LIMIT in a subquery, then re-sort ascending for the LLM).

Storing the system message in the database alongside user and assistant messages
Why it's a problem: Updating the assistant's instructions would require rewriting stored history.
How to avoid: Keep the system message in the Code node and prepend it at runtime.

Setting the Webhook Response Mode to 'Immediately' and expecting the AI response to be returned
Why it's a problem: The HTTP response is sent before the LLM has run, so the caller only receives an acknowledgement.
How to avoid: Set the Response Mode to 'When Last Node Finishes' or use a Respond to Webhook node at the end of the workflow.
Best practices
- Always generate a session ID on the client side and include it in every webhook request
- Limit conversation history to the most recent 20-30 messages to avoid exceeding token limits
- Use PostgreSQL for production conversation storage — workflow static data does not survive restarts
- Add a TTL mechanism to delete conversations older than 24 hours to manage database size
- Use parameterized queries instead of string interpolation to prevent SQL injection
- Include the system message at the beginning of the messages array, not in the stored history
- Return the session ID in the webhook response so the client can maintain the conversation
- Add error handling for database connection failures with a fallback response
Still stuck?
Copy one of these prompts to get a personalized, step-by-step explanation.
I'm building a chatbot in n8n that receives messages via webhook. Each webhook trigger is a new execution so the AI has no memory of previous messages. How do I store conversation history in PostgreSQL and load it before each LLM call?
Create an n8n workflow with a Webhook trigger that receives a sessionId and message. Load previous messages from a PostgreSQL conversation_history table, format them as a messages array, call OpenAI with the full context, save the response back to PostgreSQL, and return the answer via Respond to Webhook node.
Frequently asked questions
Can I use Redis instead of PostgreSQL for conversation history in n8n?
Yes. Use the Redis node to store conversation history as a JSON string with the session ID as the key. Set a TTL on the key to auto-expire old conversations. Redis is faster for read-heavy workloads but less durable than PostgreSQL.
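In a Code node, the value for the Redis node can be prepared along these lines. The key format and the 24-hour TTL are assumptions for illustration, not n8n requirements:

```javascript
// Sketch: preparing conversation history for storage via the n8n Redis node.
// Pass TTL_SECONDS to the Redis node's expire option so old sessions vanish.
const TTL_SECONDS = 24 * 60 * 60;
const MAX_MESSAGES = 20;

// Build the Redis key from the session ID (hypothetical naming scheme)
function historyKey(sessionId) {
  return `chat:history:${sessionId}`;
}

// Serialize the trimmed history to a JSON string for a Redis SET
function serializeHistory(history, maxMessages = MAX_MESSAGES) {
  return JSON.stringify(history.slice(-maxMessages));
}

// Parse the value loaded back on the next webhook trigger;
// a missing key (new session) becomes an empty history
function deserializeHistory(raw) {
  return raw ? JSON.parse(raw) : [];
}
```

Storing the whole history as one JSON value makes reads a single GET, at the cost of rewriting the full value on every turn; for 20-message histories that trade-off is usually fine.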
How do I handle multiple users sending messages at the same time?
Each user should have a unique session ID. Since each webhook trigger creates a separate execution, n8n handles concurrent requests naturally. The session ID ensures each user's history is loaded independently.
What happens if the PostgreSQL connection fails mid-conversation?
Add an Error Trigger workflow or use the IF node after the Postgres node to check for errors. Return a friendly fallback message to the user and log the error for investigation.
How many messages can I store before hitting LLM token limits?
This depends on the model. GPT-4o supports 128K tokens, Claude 3.5 supports 200K tokens. A safe default is 20-30 messages. For longer conversations, summarize older messages into a single context block.
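If you want a guard that is tighter than a fixed message count, a rough character-based estimate can decide when to trim. The 4-characters-per-token ratio below is a crude heuristic, not a real tokenizer, so treat the budget as approximate:

```javascript
// Rough token estimate: ~4 characters per token (heuristic, not exact)
function estimateTokens(messages) {
  const chars = messages.reduce((sum, m) => sum + m.content.length, 0);
  return Math.ceil(chars / 4);
}

// Keep the most recent messages that fit within a token budget,
// preserving chronological order in the result
function trimToBudget(messages, maxTokens) {
  const kept = [];
  let total = 0;
  for (let i = messages.length - 1; i >= 0; i--) {
    const cost = Math.ceil(messages[i].content.length / 4);
    if (total + cost > maxTokens) break;
    total += cost;
    kept.unshift(messages[i]);
  }
  return kept;
}
```

This slots into the Code node in place of the fixed slice(-MAX_HISTORY) trim if your messages vary widely in length.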
Can I use the n8n Memory nodes instead of building my own storage?
Yes, n8n's built-in Memory nodes (Window Buffer Memory, Postgres Chat Memory) work with AI Agent nodes. This tutorial covers the manual approach for maximum control, but Memory nodes are a simpler alternative for standard chatbot patterns.
Can RapidDev help me build a production-grade stateful chatbot on n8n?
Yes. RapidDev specializes in building production n8n workflows with proper state management, error handling, and scalability patterns for AI-powered chatbots.
Talk to an Expert
Our team has built 600+ apps. Get personalized help with your project.
Book a free consultation