
How to Enable Queue Mode in n8n

What you'll learn

  • How to configure n8n for queue mode with the required environment variables
  • How to set up Redis as the message broker
  • How to run separate main and worker instances
  • How to set concurrency limits for workers
Intermediate · 7 min read · 30-40 minutes setup · n8n 1.0+, PostgreSQL 12+, Redis 6+ · March 2026 · RapidDev Engineering Team
TL;DR

Enable queue mode in n8n to scale workflow execution across multiple worker processes. Set EXECUTIONS_MODE=queue, configure Redis as the message broker, use PostgreSQL as the database, and start separate main and worker instances. Queue mode prevents the main instance from being blocked by long-running workflows and lets you scale workers independently.

What Queue Mode Does and When to Use It

By default, n8n runs in 'regular' mode where the main process handles both the UI/API and workflow execution. This works for low-volume setups, but becomes a bottleneck when running many workflows or workflows with long-running nodes. Queue mode separates these concerns: the main instance handles the UI, API, webhooks, and scheduling, while worker instances pull and execute workflows from a Redis-backed queue. This architecture lets you scale workers independently, prevents the UI from becoming unresponsive during heavy execution, and enables horizontal scaling.

Prerequisites

  • A running n8n instance with PostgreSQL as the database (SQLite is not supported for queue mode)
  • Redis 6+ server accessible from both main and worker instances
  • Docker or npm installation of n8n
  • Basic understanding of n8n environment variables

Step-by-step guide

Step 1: Ensure PostgreSQL is configured as n8n's database

Queue mode requires PostgreSQL — it does not work with SQLite. If you are still using SQLite, migrate to PostgreSQL first. Verify that your DB_TYPE is set to postgresdb and the DB_POSTGRESDB_* variables point to a working PostgreSQL instance. Both the main instance and all workers must connect to the same PostgreSQL database.

bash
# Verify these environment variables are set
echo $DB_TYPE            # Should be: postgresdb
echo $DB_POSTGRESDB_HOST
echo $DB_POSTGRESDB_DATABASE

Expected result: DB_TYPE is postgresdb and n8n connects to PostgreSQL successfully.
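Before starting any instance, it can help to fail fast if a required variable is unset. The helper below is a small illustrative sketch (not part of n8n) that uses bash indirect expansion, so run it with bash:

```shell
# Hypothetical helper: abort early when a required variable is unset or empty.
check_env() {
  local v
  for v in "$@"; do
    if [ -z "${!v:-}" ]; then
      echo "missing required variable: $v" >&2
      return 1
    fi
  done
}

# Example: validate the database settings from this step
# check_env DB_TYPE DB_POSTGRESDB_HOST DB_POSTGRESDB_DATABASE DB_POSTGRESDB_USER
```

Run the same check on every worker host; queue mode fails in confusing ways when one instance is missing a variable the others have.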

Step 2: Set up Redis for the execution queue

Install and start Redis. The simplest approach is Docker. Redis acts as the message broker between the main instance and workers. When a workflow needs to execute, the main instance pushes a job to Redis, and the first available worker picks it up. Use a dedicated Redis instance for n8n to avoid interference with other applications.

bash
# Run Redis with Docker
docker run -d --name n8n-redis \
  -p 6379:6379 \
  redis:7-alpine

# Verify Redis is running
docker exec n8n-redis redis-cli ping
# Expected output: PONG

Expected result: Redis is running and responds to PING with PONG.

Step 3: Configure environment variables for queue mode

Set the required environment variables on both the main instance and all workers. EXECUTIONS_MODE=queue enables queue mode. QUEUE_BULL_REDIS_HOST and QUEUE_BULL_REDIS_PORT point to your Redis instance. All instances must share the same N8N_ENCRYPTION_KEY and database configuration. The main instance and workers use the same environment variables — they differ only in how they are started.

bash
# Queue mode configuration (set on ALL instances)
export EXECUTIONS_MODE=queue
export QUEUE_BULL_REDIS_HOST=localhost
export QUEUE_BULL_REDIS_PORT=6379
# export QUEUE_BULL_REDIS_PASSWORD=your_redis_password  # if auth is enabled

# Database (must be PostgreSQL)
export DB_TYPE=postgresdb
export DB_POSTGRESDB_HOST=localhost
export DB_POSTGRESDB_PORT=5432
export DB_POSTGRESDB_DATABASE=n8n_db
export DB_POSTGRESDB_USER=n8n_user
export DB_POSTGRESDB_PASSWORD=your_secure_password

# Shared encryption key (MUST be identical on all instances)
export N8N_ENCRYPTION_KEY=your_encryption_key_here

# Webhook URL (main instance URL)
export WEBHOOK_URL=https://n8n.yourdomain.com

Expected result: All environment variables are set and consistent across main and worker instances.

Step 4: Start the main instance

Start n8n normally. In queue mode, the main instance handles the editor UI, REST API, webhooks, and scheduling. It pushes execution jobs to Redis rather than executing them itself (depending on your n8n version, manual test runs from the editor may still execute on the main instance). Start the main instance before the workers so it can initialize the database schema.

bash
# Start the main instance
n8n start

# Or with Docker
docker run -d --name n8n-main \
  -p 5678:5678 \
  -e EXECUTIONS_MODE=queue \
  -e QUEUE_BULL_REDIS_HOST=n8n-redis \
  -e QUEUE_BULL_REDIS_PORT=6379 \
  -e DB_TYPE=postgresdb \
  -e DB_POSTGRESDB_HOST=postgres \
  -e DB_POSTGRESDB_DATABASE=n8n_db \
  -e DB_POSTGRESDB_USER=n8n_user \
  -e DB_POSTGRESDB_PASSWORD=your_secure_password \
  -e N8N_ENCRYPTION_KEY=your_encryption_key_here \
  -e WEBHOOK_URL=https://n8n.yourdomain.com \
  docker.n8n.io/n8nio/n8n

Expected result: The main instance starts, connects to PostgreSQL and Redis, and serves the editor at port 5678.

Step 5: Start one or more worker instances

Start worker instances using the n8n worker command. Workers connect to the same PostgreSQL database and Redis instance. Each worker pulls jobs from the queue and executes them. Start as many workers as your server can handle. Each worker runs a limited number of executions in parallel, configurable with N8N_CONCURRENCY_PRODUCTION_LIMIT (recent n8n versions default to 10 concurrent executions per worker).

bash
# Start a worker (npm installation)
n8n worker

# Start a worker with a concurrency limit
N8N_CONCURRENCY_PRODUCTION_LIMIT=5 n8n worker

# Or with Docker
docker run -d --name n8n-worker-1 \
  -e EXECUTIONS_MODE=queue \
  -e QUEUE_BULL_REDIS_HOST=n8n-redis \
  -e QUEUE_BULL_REDIS_PORT=6379 \
  -e DB_TYPE=postgresdb \
  -e DB_POSTGRESDB_HOST=postgres \
  -e DB_POSTGRESDB_DATABASE=n8n_db \
  -e DB_POSTGRESDB_USER=n8n_user \
  -e DB_POSTGRESDB_PASSWORD=your_secure_password \
  -e N8N_ENCRYPTION_KEY=your_encryption_key_here \
  -e N8N_CONCURRENCY_PRODUCTION_LIMIT=5 \
  docker.n8n.io/n8nio/n8n worker

Expected result: The worker connects to Redis and PostgreSQL, then waits for jobs. The console shows 'Worker is ready to process executions'.
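On a bare-metal or VM install, running each worker under a process supervisor means a crashed worker restarts automatically. Below is a minimal systemd template unit as a sketch; the paths are assumptions (an environment file at /etc/n8n/queue.env holding the exports from step 3, and n8n installed at /usr/local/bin/n8n), so adjust them to your install:

```ini
# /etc/systemd/system/n8n-worker@.service  (paths are illustrative)
[Unit]
Description=n8n queue-mode worker %i
After=network-online.target

[Service]
Type=simple
User=n8n
# Environment file containing the queue-mode exports from step 3
EnvironmentFile=/etc/n8n/queue.env
ExecStart=/usr/local/bin/n8n worker
Restart=always
RestartSec=5

[Install]
WantedBy=multi-user.target
```

With a template unit you can start two workers with `systemctl enable --now n8n-worker@1 n8n-worker@2` and add more instances the same way.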

Step 6: Verify queue mode is working

Open the n8n editor and trigger a workflow manually or via webhook. Check the execution history to verify the execution was processed by a worker. The execution details show which instance processed the job. Also verify that the UI remains responsive during execution — this confirms the main instance is offloading work to workers.

Expected result: Workflow executions are processed by workers and the main instance UI remains responsive during heavy workloads.
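Two command-line checks complement the UI check, sketched here against the Docker containers from the earlier steps (they require the running stack). Recent n8n versions expose a /healthz endpoint on the main instance, and the Redis key names below assume n8n's default Bull prefix and queue name (bull, jobs); list the keys first if your setup differs:

```shell
# Main instance health (recent n8n versions expose /healthz)
curl -fsS http://localhost:5678/healthz

# List the Bull queue keys n8n created in Redis
docker exec n8n-redis redis-cli keys 'bull:*'

# Jobs waiting for a worker; a steadily growing number means workers are saturated
docker exec n8n-redis redis-cli llen bull:jobs:wait
```

The waiting-jobs count is also the simplest signal for deciding when to add workers.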

Complete working example

docker-compose-queue-mode.yml
version: '3.8'

services:
  postgres:
    image: postgres:16-alpine
    restart: always
    environment:
      POSTGRES_DB: n8n_db
      POSTGRES_USER: n8n_user
      POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}
    volumes:
      - postgres_data:/var/lib/postgresql/data
    healthcheck:
      test: ['CMD-SHELL', 'pg_isready -U n8n_user -d n8n_db']
      interval: 5s
      timeout: 5s
      retries: 5

  redis:
    image: redis:7-alpine
    restart: always
    volumes:
      - redis_data:/data
    healthcheck:
      test: ['CMD', 'redis-cli', 'ping']
      interval: 5s
      timeout: 5s
      retries: 5

  n8n-main:
    image: docker.n8n.io/n8nio/n8n
    restart: always
    ports:
      - '5678:5678'
    environment:
      - EXECUTIONS_MODE=queue
      - QUEUE_BULL_REDIS_HOST=redis
      - QUEUE_BULL_REDIS_PORT=6379
      - DB_TYPE=postgresdb
      - DB_POSTGRESDB_HOST=postgres
      - DB_POSTGRESDB_PORT=5432
      - DB_POSTGRESDB_DATABASE=n8n_db
      - DB_POSTGRESDB_USER=n8n_user
      - DB_POSTGRESDB_PASSWORD=${POSTGRES_PASSWORD}
      - N8N_ENCRYPTION_KEY=${N8N_ENCRYPTION_KEY}
      - WEBHOOK_URL=${WEBHOOK_URL}
    volumes:
      - n8n_data:/home/node/.n8n
    depends_on:
      postgres:
        condition: service_healthy
      redis:
        condition: service_healthy

  n8n-worker:
    image: docker.n8n.io/n8nio/n8n
    restart: always
    command: worker
    deploy:
      replicas: 2
    environment:
      - EXECUTIONS_MODE=queue
      - QUEUE_BULL_REDIS_HOST=redis
      - QUEUE_BULL_REDIS_PORT=6379
      - DB_TYPE=postgresdb
      - DB_POSTGRESDB_HOST=postgres
      - DB_POSTGRESDB_PORT=5432
      - DB_POSTGRESDB_DATABASE=n8n_db
      - DB_POSTGRESDB_USER=n8n_user
      - DB_POSTGRESDB_PASSWORD=${POSTGRES_PASSWORD}
      - N8N_ENCRYPTION_KEY=${N8N_ENCRYPTION_KEY}
      - N8N_CONCURRENCY_PRODUCTION_LIMIT=5
    depends_on:
      postgres:
        condition: service_healthy
      redis:
        condition: service_healthy

volumes:
  postgres_data:
  redis_data:
  n8n_data:

Common mistakes when enabling Queue Mode in n8n

Mistake: Trying to use queue mode with SQLite

How to avoid: Queue mode requires PostgreSQL. Migrate to PostgreSQL by setting DB_TYPE=postgresdb and the DB_POSTGRESDB_* variables.

Mistake: Using different N8N_ENCRYPTION_KEY values on main and worker instances

How to avoid: All instances must share the exact same encryption key. Generate one with openssl rand -hex 32 and use it everywhere.
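The key is an ordinary hex string; generate it once and copy the same value to every instance:

```shell
# Generate a shared encryption key once, then reuse it on all instances
N8N_ENCRYPTION_KEY=$(openssl rand -hex 32)
echo "$N8N_ENCRYPTION_KEY"
echo "${#N8N_ENCRYPTION_KEY}"   # 32 random bytes -> 64 hex characters
```

Store it in a secrets manager or a shared environment file rather than regenerating it per host; regenerating produces a different key and breaks credential decryption.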

Mistake: Starting workers before the main instance on first setup

How to avoid: Start the main instance first so it creates the database schema. Workers can be started afterward.

Mistake: Not setting WEBHOOK_URL on the main instance, causing incorrect webhook URLs

How to avoid: Set WEBHOOK_URL to your public URL so n8n generates correct webhook paths. Workers do not need this variable.

Best practices

  • Always use PostgreSQL with queue mode — SQLite does not support concurrent access from multiple processes
  • Use the same N8N_ENCRYPTION_KEY on all instances — mismatched keys cause credential decryption failures
  • Start with 2-3 workers and scale based on queue depth monitoring
  • Set N8N_CONCURRENCY_PRODUCTION_LIMIT to 3-5 per worker to allow parallel execution within each worker
  • Use Redis persistence (AOF or RDB snapshots) so pending jobs survive a Redis restart
  • Monitor Redis memory usage — large workflow payloads in the queue can consume significant memory
  • Set EXECUTIONS_DATA_PRUNE=true and EXECUTIONS_DATA_MAX_AGE=168 to auto-prune old execution data
  • Use Docker Compose replicas to scale workers easily: deploy.replicas: N
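The Redis persistence and memory advice above maps to a few redis.conf directives. The values here are illustrative and should be sized to your workload:

```ini
# redis.conf fragment (illustrative values)
appendonly yes               # AOF persistence: queued jobs survive a restart
appendfsync everysec         # fsync once per second balances durability and throughput
maxmemory 512mb
maxmemory-policy noeviction  # never silently evict queue keys; fail loudly instead
```

noeviction matters for queues: under memory pressure Redis will reject writes instead of silently dropping jobs. With Docker, mount the file and start Redis with redis-server pointing at it.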

Still stuck?

Copy one of these prompts to get a personalized, step-by-step explanation.

ChatGPT Prompt

How do I enable queue mode in n8n so workflow executions are processed by separate worker instances? I need the full setup including Redis, PostgreSQL, and Docker Compose configuration.

n8n Prompt

Create a docker-compose.yml for n8n in queue mode with PostgreSQL, Redis, one main instance, and two worker instances. Include health checks and all required EXECUTIONS_MODE and QUEUE_BULL_REDIS_* environment variables.

Frequently asked questions

Does queue mode require a paid n8n license?

No. Queue mode is available in the open-source community edition of n8n. It is a configuration option, not a paid feature.

Can I run the main instance and workers on the same server?

Yes. For moderate workloads, running everything on one server is fine. For high-volume production, run workers on separate servers for better resource isolation.

How many workers should I run?

Start with 2-3 workers with N8N_CONCURRENCY_PRODUCTION_LIMIT=5 each. Monitor queue depth via Redis or n8n metrics. Add more workers if jobs wait longer than acceptable.

What happens if a worker crashes during execution?

The job remains in Redis. When the worker reconnects or another worker picks it up, the execution is retried based on n8n's retry settings. Data is not lost because execution state is in PostgreSQL.

Can I use Redis Cluster or Redis Sentinel with n8n queue mode?

n8n uses the Bull queue library, which supports Redis Sentinel for high availability. Point the QUEUE_BULL_REDIS_* variables at your Sentinel setup. Redis Cluster is not supported by Bull.

Can RapidDev help me scale n8n with queue mode for production?

Yes. RapidDev can architect and deploy a production n8n setup with queue mode, including Redis configuration, worker scaling, monitoring with Prometheus/Grafana, and high-availability patterns.
