RapidDev - Software Development Agency

How to Persist User Session Data for an AI Agent Across Executions in n8n

AI agent sessions in n8n are stateless by default — each webhook trigger starts a fresh execution with no memory of previous interactions. Persist user session data by using the Postgres Chat Memory node for conversation history, the static data feature for lightweight per-workflow state, and a dedicated PostgreSQL session table for structured user metadata like preferences, names, and interaction counts.

What you'll learn

  • How to use Postgres Chat Memory for persistent conversation history
  • How to use n8n's static data feature for lightweight per-workflow state
  • How to build a structured session table for user metadata and preferences
  • How to combine all three approaches into a complete session management system
Level: Advanced · 9 min read · 35-45 minutes to complete · Requires: n8n 1.30+, PostgreSQL, AI Agent node, Postgres Chat Memory node, Redis Chat Memory node (optional) · March 2026 · RapidDev Engineering Team
Why AI Agent Sessions Lose State in n8n

Every n8n workflow execution is independent — when a webhook fires, it starts a completely new execution with no access to previous executions' data. For AI agents, this means the agent forgets the user's name, preferences, conversation history, and any custom context between interactions. n8n provides several mechanisms to bridge this gap: memory nodes for conversation history, static data for simple key-value storage, and database nodes for structured session data. This tutorial shows how to combine these mechanisms into a complete session persistence layer for AI agents.

Prerequisites

  • A running n8n instance (self-hosted or cloud) on version 1.30 or later
  • A PostgreSQL database accessible from n8n
  • An AI Agent workflow with a Webhook or Chat Trigger
  • An LLM credential (OpenAI, Anthropic, or Mistral)
  • Basic understanding of SQL and n8n Code nodes

Step-by-step guide

1

Design the session data model

Before building, plan what data needs to persist across executions. There are three categories: (1) Conversation history — the actual messages exchanged between user and agent, handled by memory nodes. (2) User metadata — name, preferences, language, timezone, stored in a database table. (3) Workflow state — counters, flags, last-seen timestamps, stored in static data or database. Each category uses a different persistence mechanism because they have different access patterns and lifetimes.

Expected result: A clear plan for what data goes in memory nodes (conversation), database tables (user metadata), and static data (workflow state).
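As a planning aid, the split can be sketched as a simple map of what lives where (the field names below are illustrative examples, not anything n8n requires):

```javascript
// Illustrative sketch of the three persistence layers and what belongs in each.
// Field names are planning examples, not an n8n requirement.
const sessionPlan = {
  // 1. Conversation history — handled by the Postgres Chat Memory node
  conversation: ['user messages', 'agent replies'],
  // 2. User metadata — stored in the agent_sessions table in PostgreSQL
  userMetadata: ['user_name', 'language', 'timezone', 'preferences', 'interaction_count'],
  // 3. Workflow state — stored via $getWorkflowStaticData('global')
  workflowState: ['totalRequests', 'lastErrorTime', 'featureFlags'],
};
```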

2

Create the session database table

Create a PostgreSQL table to store structured user session data. This table holds metadata that the AI agent needs across conversations — things like the user's name, preferences, interaction count, and custom fields. This is separate from conversation history (handled by the memory node) because it is structured data that you query and update, not a message log.

sql
-- PostgreSQL: Create session data table
CREATE TABLE IF NOT EXISTS agent_sessions (
  session_id VARCHAR(255) PRIMARY KEY,
  user_name VARCHAR(255),
  user_email VARCHAR(255),
  language VARCHAR(10) DEFAULT 'en',
  timezone VARCHAR(50),
  preferences JSONB DEFAULT '{}',
  interaction_count INTEGER DEFAULT 0,
  last_topic TEXT,
  custom_context TEXT,
  first_seen TIMESTAMP DEFAULT NOW(),
  last_seen TIMESTAMP DEFAULT NOW(),
  updated_at TIMESTAMP DEFAULT NOW()
);

CREATE INDEX IF NOT EXISTS idx_sessions_last_seen ON agent_sessions(last_seen);

Expected result: The agent_sessions table exists in PostgreSQL, ready to store structured user data per session.

3

Load session data at the start of each execution

At the beginning of your workflow (right after the Webhook or Chat Trigger), add a Postgres node that queries the session table for the current user's session ID. If the session exists, load the user's name, preferences, and other metadata. If not, create a new session record. This data is then available to all downstream nodes including the AI Agent's system prompt.

sql
-- Postgres node — Operation: Execute Query
-- Load or create the session row for the incoming session ID
INSERT INTO agent_sessions (session_id, last_seen, interaction_count)
VALUES ('{{ $json.sessionId }}', NOW(), 1)
ON CONFLICT (session_id)
DO UPDATE SET
  last_seen = NOW(),
  interaction_count = agent_sessions.interaction_count + 1
RETURNING *;

Expected result: The session record is loaded (or created) and available as $json with all user metadata fields.

4

Inject session context into the AI Agent's system prompt

Use the loaded session data to personalize the AI agent's behavior. In the AI Agent node's system message (or in a Set node that feeds the system prompt), include the user's name, preferences, and any custom context. This gives the agent continuity even though it is a fresh execution — the agent 'remembers' the user by reading from the database.

javascript
// Code node — JavaScript
// Build personalized system prompt from session data

const session = $('Load Session').first().json;

let systemPrompt = 'You are a helpful AI assistant.';

if (session.user_name) {
  systemPrompt += ` The user's name is ${session.user_name}.`;
}

if (session.language && session.language !== 'en') {
  systemPrompt += ` Respond in ${session.language}.`;
}

if (session.last_topic) {
  systemPrompt += ` In the last conversation, you were discussing: ${session.last_topic}.`;
}

if (session.preferences && Object.keys(session.preferences).length > 0) {
  systemPrompt += ` User preferences: ${JSON.stringify(session.preferences)}.`;
}

if (session.interaction_count > 1) {
  systemPrompt += ` This is interaction #${session.interaction_count} with this user.`;
}

return [{ json: { systemPrompt, sessionId: session.session_id } }];

Expected result: The system prompt includes personalized context from the session database, giving the agent continuity across executions.

5

Connect the Postgres Chat Memory node for conversation history

Add the Postgres Chat Memory node and connect it to the AI Agent node's memory input. Set the Session ID Key to the same session ID used in the session table ({{ $json.sessionId }}). Set the Context Window Length to 30 messages. This node automatically stores and retrieves conversation messages, providing the AI agent with full conversational context. The combination of structured session data (from the table) and conversation history (from the memory node) gives the agent complete user context.

Expected result: The AI Agent has both structured session data in its system prompt and full conversation history from the memory node.
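As a reference for this step's configuration, the two settings can be sketched as follows (the property names are descriptive labels for the node's UI fields, not code you paste anywhere):

```javascript
// Sketch of the Postgres Chat Memory node configuration used in this step.
// Set these values in the node's UI; the keys here are descriptive labels only.
const memorySettings = {
  sessionIdKey: "={{ $json.sessionId }}", // must match the agent_sessions session_id
  contextWindowLength: 30,                // messages kept in the agent's context
};
```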

6

Update session data after each interaction

After the AI agent responds, update the session table with any new information learned during the interaction. Use a Code node to extract relevant metadata from the conversation (like the user's name if they mentioned it, or the topic they discussed) and update the session record. This ensures that the next execution has the latest context.

javascript
// Code node — JavaScript
// Extract and update session metadata after the AI response

const agentResponse = $input.first().json;
const sessionId = $('Build System Prompt').first().json.sessionId;
const userMessage = $('Webhook').first().json.message || '';
// The agent's reply, available if you also want to mine it for metadata
const responseText = agentResponse.output || agentResponse.text || '';

// Simple topic extraction (first 200 characters of the user message)
const lastTopic = userMessage.substring(0, 200);

// Check if the user mentioned their name
const nameMatch = userMessage.match(/(?:my name is|I'm|I am)\s+([A-Z][a-z]+)/i);
const detectedName = nameMatch ? nameMatch[1] : null;

const updateFields = {
  last_topic: lastTopic,
  sessionId
};

if (detectedName) {
  updateFields.user_name = detectedName;
}

return [{ json: updateFields }];

Expected result: The session table is updated with the conversation topic and any detected user metadata after each interaction.
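The fields returned by this step can feed a follow-up Postgres node. Below is a minimal sketch of building the corresponding parameterized UPDATE (the helper name is illustrative; the $1-style placeholders assume the Postgres node's query-parameters support in your n8n version):

```javascript
// Build a parameterized UPDATE for the agent_sessions table.
// Placeholders ($1, $2, ...) keep user-supplied text out of the SQL string itself.
function buildSessionUpdate(fields) {
  const sets = ['last_topic = $1', 'updated_at = NOW()'];
  const params = [fields.last_topic];

  // Only touch user_name when a name was actually detected
  if (fields.user_name) {
    params.push(fields.user_name);
    sets.push(`user_name = $${params.length}`);
  }

  params.push(fields.sessionId);
  const query = `UPDATE agent_sessions SET ${sets.join(', ')} WHERE session_id = $${params.length};`;
  return { query, params };
}

// Example:
const { query, params } = buildSessionUpdate({
  last_topic: 'billing question',
  user_name: 'Alice',
  sessionId: 'sess_123',
});
// query: "UPDATE agent_sessions SET last_topic = $1, updated_at = NOW(), user_name = $2 WHERE session_id = $3;"
// params: ['billing question', 'Alice', 'sess_123']
```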

7

Use static data for lightweight workflow state

For simple counters and flags that apply to the entire workflow (not per-user), use n8n's static data feature. Static data persists across executions within the same workflow without needing a database. Access it with $getWorkflowStaticData('global') in Code nodes. Use it for things like total request counters, last-error timestamps, or feature flags. Do not use it for per-user data — it is shared across all executions of the workflow.

javascript
// Code node — JavaScript
// Use static data for workflow-level state

const staticData = $getWorkflowStaticData('global');

// Initialize counters on first run
if (!staticData.totalRequests) {
  staticData.totalRequests = 0;
  staticData.lastErrorTime = null;
}

staticData.totalRequests += 1;
staticData.lastExecution = new Date().toISOString();

// Check if we should rate limit
const shouldThrottle = staticData.totalRequests > 1000;

return [{
  json: {
    totalRequests: staticData.totalRequests,
    shouldThrottle,
    lastExecution: staticData.lastExecution
  }
}];

Expected result: Workflow-level state (counters, flags, timestamps) persists across executions without database queries.

Complete working example

session-manager.js
// Complete Code node: Session manager for AI agents
// Place at workflow start, after the Webhook/Chat Trigger

const items = $input.all();
const input = items[0].json;

// Get or generate session ID
const sessionId = input.sessionId
  || input.userId
  || input.headers?.['x-session-id']
  || `anon_${Date.now()}`;

// Get workflow-level static data
const staticData = $getWorkflowStaticData('global');
if (!staticData.activeSessions) {
  staticData.activeSessions = {};
}

// Track active sessions in static data (lightweight).
// Check whether this session was seen before overwriting its timestamp.
const now = Date.now();
const isReturningUser = staticData.activeSessions[sessionId] !== undefined;
staticData.activeSessions[sessionId] = now;

// Clean up stale sessions (older than 24 hours)
for (const [sid, timestamp] of Object.entries(staticData.activeSessions)) {
  if (now - timestamp > 86400000) {
    delete staticData.activeSessions[sid];
  }
}

// Prepare session load query.
// Note: sessionId is interpolated directly into the SQL string here; if it can
// contain user-controlled text, use the Postgres node's query parameters instead.
const sessionQuery = `
  INSERT INTO agent_sessions (session_id, last_seen, interaction_count)
  VALUES ('${sessionId}', NOW(), 1)
  ON CONFLICT (session_id)
  DO UPDATE SET
    last_seen = NOW(),
    interaction_count = agent_sessions.interaction_count + 1
  RETURNING *;
`;

return [{
  json: {
    sessionId,
    userMessage: input.message || input.text || input.chatInput || '',
    sessionQuery,
    activeSessionCount: Object.keys(staticData.activeSessions).length,
    isReturningUser
  }
}];

Common mistakes when persisting user session data for an AI agent across executions in n8n

Mistake: Using static data to store per-user session data — it is shared across all users.

How to avoid: Use a database table (PostgreSQL) for per-user data. Static data is global to the workflow, not per-session.

Mistake: Not generating consistent session IDs, causing the agent to forget users between requests.

How to avoid: Require a session ID in the webhook payload or generate one from a stable user identifier (email, auth token).

Mistake: Storing everything in the conversation history instead of structured session data.

How to avoid: Conversation history is for messages. User metadata (name, preferences) belongs in a database table for reliable retrieval.

Mistake: Not updating the session table after each interaction, losing newly learned context.

How to avoid: Add a session update step after the AI agent responds to capture the topic, detected entities, and updated preferences.

Mistake: Using the Simple Memory node in production, losing all conversation history on n8n restart.

How to avoid: Use Postgres Chat Memory or Redis Chat Memory for production deployments.

Best practices

  • Use three persistence layers: memory nodes for conversation, database for user metadata, static data for workflow state
  • Always use the same session ID across memory nodes and database queries for consistency
  • Use PostgreSQL UPSERT (ON CONFLICT) for atomic session creation and updates
  • Clean up stale sessions periodically to prevent database bloat
  • Inject session data into the system prompt so the AI agent has context without manual retrieval
  • Store structured preferences as JSONB in PostgreSQL for flexible schema evolution
  • Use static data only for workflow-level (not user-level) state like counters and flags
  • Set a reasonable Context Window Length (30-50 messages) on the memory node to balance context and token usage
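On the JSONB point above: Postgres's || operator performs a shallow key merge on jsonb values, so new preference keys can be added without clobbering existing ones. A minimal helper sketch (the function name and parameter style are illustrative; wire the query and params into a Postgres node set to Execute Query):

```javascript
// Merge new preference keys into the JSONB preferences column.
// "preferences || $1::jsonb" adds/overwrites only the keys present in newPrefs,
// leaving all other stored preferences untouched (shallow merge).
function buildPreferencesMerge(sessionId, newPrefs) {
  return {
    query:
      'UPDATE agent_sessions SET preferences = preferences || $1::jsonb, ' +
      'updated_at = NOW() WHERE session_id = $2;',
    params: [JSON.stringify(newPrefs), sessionId],
  };
}

// Example:
const merge = buildPreferencesMerge('sess_123', { tone: 'formal' });
// merge.params[0] is '{"tone":"formal"}'
```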

Still stuck?

Copy one of these prompts to get a personalized, step-by-step explanation.

ChatGPT Prompt

My n8n AI agent chatbot forgets user context between webhook calls. How do I persist conversation history and user metadata (name, preferences) across executions using PostgreSQL and memory nodes?

n8n Prompt

How do I use the Postgres Chat Memory node together with a custom session table to give my n8n AI Agent persistent memory and user-specific context across webhook executions?

Frequently asked questions

What is the difference between the Postgres Chat Memory node and a custom session table?

The Postgres Chat Memory node stores conversation messages (the actual chat). A custom session table stores structured user metadata (name, preferences, interaction count). Use both together for a complete session persistence layer.

Can I use Redis instead of PostgreSQL for session data?

Yes, n8n has a Redis Chat Memory node for conversation history. For structured session data, use the Redis node with GET/SET operations. Redis is faster but has no built-in query language for complex aggregations.

How do I clean up old sessions to prevent database bloat?

Create a scheduled workflow that runs weekly and deletes sessions with last_seen older than 30 days: DELETE FROM agent_sessions WHERE last_seen < NOW() - INTERVAL '30 days'.

Does n8n's static data survive n8n restarts?

Yes, static data is persisted to n8n's database (not just memory). It survives restarts. However, it is not designed for large volumes of data — use it for simple counters and flags only.

How do I handle concurrent requests from the same user?

PostgreSQL's UPSERT with ON CONFLICT handles concurrent writes safely. For conversation history, the memory node manages concurrent access. Avoid using static data for per-user state as it has no concurrency control.

Can RapidDev help build a session management system for n8n AI agents?

Yes, RapidDev builds complete AI agent session systems in n8n including persistent memory, user profiling, preference learning, and multi-channel session continuity. Their team handles the database design, caching strategy, and edge cases that arise in production.
