AI agent sessions in n8n are stateless by default — each webhook trigger starts a fresh execution with no memory of previous interactions. Persist user session data by using the Postgres Chat Memory node for conversation history, the static data feature for lightweight per-workflow state, and a dedicated PostgreSQL session table for structured user metadata like preferences, names, and interaction counts.
Why AI Agent Sessions Lose State in n8n
Every n8n workflow execution is independent — when a webhook fires, it starts a completely new execution with no access to previous executions' data. For AI agents, this means the agent forgets the user's name, preferences, conversation history, and any custom context between interactions. n8n provides several mechanisms to bridge this gap: memory nodes for conversation history, static data for simple key-value storage, and database nodes for structured session data. This tutorial shows how to combine these mechanisms into a complete session persistence layer for AI agents.
Prerequisites
- A running n8n instance (self-hosted or cloud) on version 1.30 or later
- A PostgreSQL database accessible from n8n
- An AI Agent workflow with a Webhook or Chat Trigger
- An LLM credential (OpenAI, Anthropic, or Mistral)
- Basic understanding of SQL and n8n Code nodes
Step-by-step guide
Design the session data model
Before building, plan what data needs to persist across executions. There are three categories: (1) Conversation history — the actual messages exchanged between user and agent, handled by memory nodes. (2) User metadata — name, preferences, language, timezone, stored in a database table. (3) Workflow state — counters, flags, last-seen timestamps, stored in static data or database. Each category uses a different persistence mechanism because they have different access patterns and lifetimes.
Expected result: A clear plan for what data goes in memory nodes (conversation), database tables (user metadata), and static data (workflow state).
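The three-layer split can be sketched as a simple mapping. The layer names and example fields below are illustrative assumptions for planning purposes, not fixed n8n settings:

```javascript
// Sketch: which persistence layer owns which data (illustrative only)
const sessionDataModel = {
  conversationHistory: {
    mechanism: 'Postgres Chat Memory node',        // message log, auto-managed
    examples: ['user messages', 'agent replies'],
  },
  userMetadata: {
    mechanism: 'agent_sessions table',             // queried and updated explicitly
    examples: ['user_name', 'language', 'preferences', 'interaction_count'],
  },
  workflowState: {
    mechanism: "$getWorkflowStaticData('global')", // shared by ALL users
    examples: ['totalRequests', 'lastErrorTime'],
  },
};

// Quick check: route a field name to the layer that should store it
function layerFor(field) {
  for (const [layer, spec] of Object.entries(sessionDataModel)) {
    if (spec.examples.includes(field)) return layer;
  }
  return 'unknown';
}
```

Keeping a mapping like this next to the workflow makes it easy to decide, for each new piece of data, which mechanism should own it.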
Create the session database table
Create a PostgreSQL table to store structured user session data. This table holds metadata that the AI agent needs across conversations — things like the user's name, preferences, interaction count, and custom fields. This is separate from conversation history (handled by the memory node) because it is structured data that you query and update, not a message log.
```sql
-- PostgreSQL: Create session data table
CREATE TABLE IF NOT EXISTS agent_sessions (
  session_id VARCHAR(255) PRIMARY KEY,
  user_name VARCHAR(255),
  user_email VARCHAR(255),
  language VARCHAR(10) DEFAULT 'en',
  timezone VARCHAR(50),
  preferences JSONB DEFAULT '{}',
  interaction_count INTEGER DEFAULT 0,
  last_topic TEXT,
  custom_context TEXT,
  first_seen TIMESTAMP DEFAULT NOW(),
  last_seen TIMESTAMP DEFAULT NOW(),
  updated_at TIMESTAMP DEFAULT NOW()
);

CREATE INDEX idx_sessions_last_seen ON agent_sessions(last_seen);
```
Expected result: The agent_sessions table exists in PostgreSQL, ready to store structured user data per session.
Load session data at the start of each execution
At the beginning of your workflow (right after the Webhook or Chat Trigger), add a Postgres node that queries the session table for the current user's session ID. If the session exists, load the user's name, preferences, and other metadata. If not, create a new session record. This data is then available to all downstream nodes including the AI Agent's system prompt.
```sql
-- Postgres node — Query: Load or create session
-- Operation: Execute Query
INSERT INTO agent_sessions (session_id, last_seen, interaction_count)
VALUES ('{{ $json.sessionId }}', NOW(), 1)
ON CONFLICT (session_id)
DO UPDATE SET
  last_seen = NOW(),
  interaction_count = agent_sessions.interaction_count + 1
RETURNING *;
```
Expected result: The session record is loaded (or created) and available as $json with all user metadata fields.
Inject session context into the AI Agent's system prompt
Use the loaded session data to personalize the AI agent's behavior. In the AI Agent node's system message (or in a Set node that feeds the system prompt), include the user's name, preferences, and any custom context. This gives the agent continuity even though it is a fresh execution — the agent 'remembers' the user by reading from the database.
```javascript
// Code node — JavaScript
// Build personalized system prompt from session data

const session = $('Load Session').first().json;

let systemPrompt = 'You are a helpful AI assistant.';

if (session.user_name) {
  systemPrompt += ` The user's name is ${session.user_name}.`;
}

if (session.language && session.language !== 'en') {
  systemPrompt += ` Respond in ${session.language}.`;
}

if (session.last_topic) {
  systemPrompt += ` In the last conversation, you were discussing: ${session.last_topic}.`;
}

if (session.preferences && Object.keys(session.preferences).length > 0) {
  systemPrompt += ` User preferences: ${JSON.stringify(session.preferences)}.`;
}

if (session.interaction_count > 1) {
  systemPrompt += ` This is interaction #${session.interaction_count} with this user.`;
}

return [{ json: { systemPrompt, sessionId: session.session_id } }];
```
Expected result: The system prompt includes personalized context from the session database, giving the agent continuity across executions.
Connect the Postgres Chat Memory node for conversation history
Add the Postgres Chat Memory node and connect it to the AI Agent node's memory input. Set the Session ID Key to the same session ID used in the session table ({{ $json.sessionId }}). Set the Context Window Length to 30 messages. This node automatically stores and retrieves conversation messages, providing the AI agent with full conversational context. The combination of structured session data (from the table) and conversation history (from the memory node) gives the agent complete user context.
Expected result: The AI Agent has both structured session data in its system prompt and full conversation history from the memory node.
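To make the Context Window Length setting concrete, here is a minimal in-memory mock of what a chat memory node does: store messages keyed by session ID and return only the most recent N on each run. This is a sketch of the behavior only; the real Postgres Chat Memory node persists messages to a PostgreSQL table:

```javascript
// Minimal mock of chat-memory behavior: a per-session message log with a
// context window, mirroring what Postgres Chat Memory does against a table.
class ChatMemoryMock {
  constructor(contextWindowLength = 30) {
    this.contextWindowLength = contextWindowLength;
    this.store = new Map(); // sessionId -> array of { role, content }
  }

  addMessage(sessionId, role, content) {
    if (!this.store.has(sessionId)) this.store.set(sessionId, []);
    this.store.get(sessionId).push({ role, content });
  }

  // Returns only the most recent N messages, like Context Window Length
  getContext(sessionId) {
    const messages = this.store.get(sessionId) || [];
    return messages.slice(-this.contextWindowLength);
  }
}

const memory = new ChatMemoryMock(3);
for (let i = 1; i <= 5; i++) {
  memory.addMessage('user-42', 'user', `message ${i}`);
}
const context = memory.getContext('user-42');
// context now holds only messages 3, 4, 5
```

Older messages stay in storage but are not sent to the LLM, which is why a window of 30-50 messages balances context against token cost.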
Update session data after each interaction
After the AI agent responds, update the session table with any new information learned during the interaction. Use a Code node to extract relevant metadata from the conversation (like the user's name if they mentioned it, or the topic they discussed) and update the session record. This ensures that the next execution has the latest context.
```javascript
// Code node — JavaScript
// Extract and update session metadata after AI response

const agentResponse = $input.first().json;
const sessionId = $('Build System Prompt').first().json.sessionId;
const userMessage = $('Webhook').first().json.message || '';
const responseText = agentResponse.output || agentResponse.text || '';

// Simple topic extraction (first 200 characters of the user message)
const lastTopic = userMessage.substring(0, 200);

// Check if the user mentioned their name
const nameMatch = userMessage.match(/(?:my name is|I'm|I am)\s+([A-Z][a-z]+)/i);
const detectedName = nameMatch ? nameMatch[1] : null;

const updateFields = {
  last_topic: lastTopic,
  sessionId
};

if (detectedName) {
  updateFields.user_name = detectedName;
}

return [{ json: updateFields }];
```
Expected result: The session table is updated with the conversation topic and any detected user metadata after each interaction.
Use static data for lightweight workflow state
For simple counters and flags that apply to the entire workflow (not per-user), use n8n's static data feature. Static data persists across executions within the same workflow without needing a database. Access it with $getWorkflowStaticData('global') in Code nodes. Use it for things like total request counters, last-error timestamps, or feature flags. Do not use it for per-user data — it is shared across all executions of the workflow.
```javascript
// Code node — JavaScript
// Use static data for workflow-level state

const staticData = $getWorkflowStaticData('global');

// Initialize counters on first run
if (!staticData.totalRequests) {
  staticData.totalRequests = 0;
  staticData.lastErrorTime = null;
}

staticData.totalRequests += 1;
staticData.lastExecution = new Date().toISOString();

// Check if we should rate limit
const shouldThrottle = staticData.totalRequests > 1000;

return [{
  json: {
    totalRequests: staticData.totalRequests,
    shouldThrottle,
    lastExecution: staticData.lastExecution
  }
}];
```
Expected result: Workflow-level state (counters, flags, timestamps) persists across executions without database queries.
Complete working example
```javascript
// Complete Code node: Session manager for AI agents
// Place at workflow start, after Webhook/Chat Trigger

const items = $input.all();
const input = items[0].json;

// Get or generate session ID
const sessionId = input.sessionId
  || input.userId
  || input.headers?.['x-session-id']
  || `anon_${Date.now()}`;

// Get workflow-level static data
const staticData = $getWorkflowStaticData('global');
if (!staticData.activeSessions) {
  staticData.activeSessions = {};
}

// Track active sessions in static data (lightweight).
// Capture returning-user status BEFORE overwriting the timestamp,
// otherwise the check below would always be false.
const now = Date.now();
const isReturningUser = sessionId in staticData.activeSessions;
staticData.activeSessions[sessionId] = now;

// Clean up stale sessions (older than 24 hours)
for (const [sid, timestamp] of Object.entries(staticData.activeSessions)) {
  if (now - timestamp > 86400000) {
    delete staticData.activeSessions[sid];
  }
}

// Prepare session load query.
// Note: interpolating sessionId into SQL is only safe if it cannot contain
// quotes; prefer the Postgres node's query parameters where possible.
const sessionQuery = `
  INSERT INTO agent_sessions (session_id, last_seen, interaction_count)
  VALUES ('${sessionId}', NOW(), 1)
  ON CONFLICT (session_id)
  DO UPDATE SET
    last_seen = NOW(),
    interaction_count = agent_sessions.interaction_count + 1
  RETURNING *;
`;

return [{
  json: {
    sessionId,
    userMessage: input.message || input.text || input.chatInput || '',
    sessionQuery,
    activeSessionCount: Object.keys(staticData.activeSessions).length,
    isReturningUser
  }
}];
```
Common mistakes when persisting user session data for an AI agent across executions in n8n
Mistake: Using static data to store per-user session data. It is shared across all users.
How to avoid: Use a database table (PostgreSQL) for per-user data. Static data is global to the workflow, not per-session.
Mistake: Not generating consistent session IDs, causing the agent to forget users between requests.
How to avoid: Require a session ID in the webhook payload or generate one from a stable user identifier (email, auth token).
Mistake: Storing everything in the conversation history instead of structured session data.
How to avoid: Conversation history is for messages. User metadata (name, preferences) belongs in a database table for reliable retrieval.
Mistake: Not updating the session table after each interaction, losing newly learned context.
How to avoid: Add a session update step after the AI agent responds to capture the topic, detected entities, and updated preferences.
Mistake: Using the Simple Memory node in production, losing all conversation history on n8n restart.
How to avoid: Use Postgres Chat Memory or Redis Chat Memory for production deployments.
Best practices
- Use three persistence layers: memory nodes for conversation, database for user metadata, static data for workflow state
- Always use the same session ID across memory nodes and database queries for consistency
- Use PostgreSQL UPSERT (ON CONFLICT) for atomic session creation and updates
- Clean up stale sessions periodically to prevent database bloat
- Inject session data into the system prompt so the AI agent has context without manual retrieval
- Store structured preferences as JSONB in PostgreSQL for flexible schema evolution
- Use static data only for workflow-level (not user-level) state like counters and flags
- Set a reasonable Context Window Length (30-50 messages) on the memory node to balance context and token usage
Still stuck?
Copy one of these prompts to get a personalized, step-by-step explanation.
My n8n AI agent chatbot forgets user context between webhook calls. How do I persist conversation history and user metadata (name, preferences) across executions using PostgreSQL and memory nodes?
How do I use the Postgres Chat Memory node together with a custom session table to give my n8n AI Agent persistent memory and user-specific context across webhook executions?
Frequently asked questions
What is the difference between the Postgres Chat Memory node and a custom session table?
The Postgres Chat Memory node stores conversation messages (the actual chat). A custom session table stores structured user metadata (name, preferences, interaction count). Use both together for a complete session persistence layer.
Can I use Redis instead of PostgreSQL for session data?
Yes, n8n has a Redis Chat Memory node for conversation history. For structured session data, use the Redis node with GET/SET operations. Redis is faster but has no built-in query language for complex aggregations.
How do I clean up old sessions to prevent database bloat?
Create a scheduled workflow that runs weekly and deletes sessions with last_seen older than 30 days: DELETE FROM agent_sessions WHERE last_seen < NOW() - INTERVAL '30 days'.
Does n8n's static data survive n8n restarts?
Yes, static data is persisted to n8n's database (not just memory). It survives restarts. However, it is not designed for large volumes of data — use it for simple counters and flags only.
How do I handle concurrent requests from the same user?
PostgreSQL's UPSERT with ON CONFLICT handles concurrent writes safely. For conversation history, the memory node manages concurrent access. Avoid using static data for per-user state as it has no concurrency control.
Can RapidDev help build a session management system for n8n AI agents?
Yes, RapidDev builds complete AI agent session systems in n8n including persistent memory, user profiling, preference learning, and multi-channel session continuity. Their team handles the database design, caching strategy, and edge cases that arise in production.
Talk to an Expert
Our team has built 600+ apps. Get personalized help with your project.
Book a free consultation