Scaling n8n with Queue Mode — A Complete Deployment Guide

When your n8n workflows start to grow—more triggers, heavier jobs, and higher execution volumes—the default single-process mode can quickly become a bottleneck. Long-running workflows may slow down the editor, block new executions, or even crash under heavy load.

That’s where Queue Mode comes in. Instead of letting one container handle everything, n8n distributes workflow executions to dedicated worker processes. These workers pull tasks from a central Redis queue, run them independently, and write results back to PostgreSQL. Meanwhile, the main n8n instance stays responsive for editing, scheduling, and webhook handling.

In this guide, you’ll learn:

  • ✅ What Queue Mode is and why it matters for production setups.
  • ✅ How the architecture works with Docker Compose, Traefik, Redis, and PostgreSQL.
  • ✅ Step-by-step configuration to deploy Queue Mode on Ubuntu.
  • ✅ How to verify health checks, scale workers, and troubleshoot common issues.
  • ✅ Best practices to keep your automation stack secure and reliable.

By the end, you’ll have a production-ready n8n setup that can scale smoothly as your automation needs grow.

Why Queue Mode?

Running n8n in its default mode works fine for small setups—but as soon as workflows get heavier, you’ll notice slowdowns. That’s because the main process handles everything: the editor UI, webhooks, and all workflow executions.

Queue Mode solves this by separating execution from the main instance:

  • Scalable Execution → Offload workflow processing to dedicated worker containers.
  • Responsive UI → Keep the n8n editor fast and stable, even under load.
  • Reliability → If one worker fails, others keep running—no downtime for the main service.
  • Flexible Deployment → Add or remove workers based on demand, scaling horizontally as your workload grows.

Think of it like a busy restaurant: instead of one person taking orders and cooking, the manager (main) handles orders while multiple chefs (workers) do the cooking.

Architecture Overview

In Queue Mode, n8n is split into multiple services that work together. Instead of one container doing everything, each component has a dedicated role:

  • n8n Main (UI/API) → Runs the editor, handles webhooks, schedules, and queues jobs into Redis.
  • Redis (Message Broker) → Manages the job queue and distributes tasks to available workers.
  • n8n Workers → Pull tasks from Redis, execute workflows, and save results.
  • PostgreSQL → Stores workflows, credentials, execution history, and logs.
  • Traefik (Reverse Proxy) → Routes traffic to n8n Main and manages HTTPS certificates via Let’s Encrypt.
(Figure: n8n Queue Mode architecture)

Task Processing Flow (Queue Mode)

Understanding how jobs move through the system helps clarify why Queue Mode is so powerful. Here’s how it works:

Case 1: Single Worker

  • The main n8n instance enqueues workflow executions into Redis (Job Queue).
  • A single worker polls Redis, picks up tasks, runs them, and stores results in PostgreSQL.
  • Concurrency (e.g., N8N_WORKER_CONCURRENCY=5) controls how many tasks the worker can run at once.

Case 2: Multiple Workers

  • With 2+ workers, Redis distributes tasks between them (first-come, first-served).
  • Throughput scales roughly linearly with the number of workers (until Redis, PostgreSQL, or CPU becomes the bottleneck).
  • If one worker fails, others continue processing — improving resilience.
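As a toy illustration of the pull model described above (a temp file stands in for Redis and a shell loop stands in for a worker — real n8n workers pull jobs from Redis via BullMQ, this is only a sketch):

```shell
# Toy sketch only: a file plays the role of the Redis job queue
queue=$(mktemp)
printf 'job-%d\n' 1 2 3 4 > "$queue"   # the main instance enqueues 4 jobs

# A "worker" repeatedly pops the head of the queue until it is empty
ran=0
while job=$(head -n 1 "$queue") && [ -n "$job" ]; do
  tail -n +2 "$queue" > "$queue.tmp" && mv "$queue.tmp" "$queue"
  echo "worker ran $job"
  ran=$((ran + 1))
done
rm -f "$queue"
```

Adding a second worker in the real system simply means another process running this same pop-and-run loop against the same queue — nothing assigns jobs; workers take them as they become free.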

Why Use Queue Mode?

Running n8n in Queue Mode isn’t just about handling more jobs — it’s about making your automation stack stable, scalable, and production-ready. Here’s why:

1. Scalable Execution

  • Heavy workflows don’t block the main instance — instead, they’re distributed across multiple workers.
  • Add more workers anytime to process more jobs in parallel.

2. Responsive UI

  • The main service is dedicated to editor, API, and webhooks.
  • Even when dozens of workflows are running, the UI remains smooth and responsive.

3. Improved Reliability

  • If one worker crashes, others keep running — no single point of failure.
  • Redis ensures queued jobs aren’t lost; they’ll be picked up by the next available worker.

4. Flexible Scaling

  • Start small (1 worker, low concurrency).
  • Scale horizontally (add workers) or vertically (bigger servers) as workflows grow.
  • Works well for cloud VPS, Docker Swarm, or Kubernetes deployments.

5. Workload Isolation

  • UI, scheduling, and execution are separate processes.
  • Long-running or resource-intensive workflows don’t interfere with day-to-day operations.

Setting Up Queue Mode on Ubuntu with Docker

In this section, we’ll deploy n8n in Queue Mode on a single Ubuntu VPS using Docker Compose. The stack includes:

  • Traefik (reverse proxy) – HTTPS via Let’s Encrypt and clean routing to n8n.
  • PostgreSQL – persistent store for workflows, credentials, executions, and users.
  • Redis (Job Queue) – message broker that holds execution tasks for workers.
  • n8n Main – the UI/API, webhooks, and scheduler (no heavy execution here).
  • n8n Workers – one or more worker containers that pull jobs from Redis and run workflows.

Where we deploy:
For this guide, all services (main + workers + Redis + Postgres + Traefik) run on the same VPS for simplicity and cost efficiency. It’s a solid starting point for most small teams.

When to go beyond one VPS:
In higher‑load or mission‑critical environments, you can scale by:

  • Running multiple workers on additional VPS instances (pointing to the same Redis).
  • Moving Redis and PostgreSQL to managed services or dedicated servers for high availability.
  • Adding more workers or autoscaling (Kubernetes/Docker Swarm) as throughput needs grow.

This layout gives you a production‑ready baseline now, and a clear path to horizontal scaling later.

1. Recommended VPS Sizing & Worker Strategy

VPS (vCPU / RAM)       Setup Suggestion
1 vCPU / 2 GB          1 worker @ concurrency 3–5
2 vCPU / 4 GB          1–2 workers @ concurrency 5
4 vCPU / 8 GB          2 workers @ concurrency 8
8+ vCPU / 16+ GB       3–4 workers @ concurrency 8–10
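If you want to keep a busy worker from starving the rest of the stack, the sizing above can also be enforced with Compose resource limits — a sketch, with values illustrative for a 4 vCPU / 8 GB host (recent `docker compose` versions honor `deploy.resources.limits` outside Swarm):

```yaml
services:
  n8n-worker:
    deploy:
      resources:
        limits:
          cpus: "1.5"    # leave headroom for Traefik, Redis, and Postgres
          memory: 3G
```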

2. Prepare configuration

This setup requires two key files:

  • .env → Environment variables (domain, credentials, queue settings, Postgres, Redis, etc.)
  • docker-compose.yml → Defines services: Traefik, Postgres, Redis, n8n-main, and workers

Example Docker Compose file:

services:
  traefik:
    image: traefik:v2.11
    container_name: traefik
    restart: unless-stopped
    command:
      - "--api.dashboard=false"
      # EntryPoints
      - "--entrypoints.web.address=:80"
      - "--entrypoints.web.http.redirections.entrypoint.to=websecure"
      - "--entrypoints.web.http.redirections.entrypoint.scheme=https"
      - "--entrypoints.websecure.address=:443"
      # Providers
      - "--providers.docker=true"
      - "--providers.docker.exposedbydefault=false"
      # ACME (production)
      - "--certificatesresolvers.le.acme.email=${SSL_EMAIL}"
      - "--certificatesresolvers.le.acme.storage=/letsencrypt/acme.json"
      - "--certificatesresolvers.le.acme.tlschallenge=true"
      # Logs
      - "--log.level=INFO"
      - "--accesslog=true"
      # Health check
      - "--ping=true"
      - "--ping.entrypoint=traefikping"
      - "--entrypoints.traefikping.address=:8082"
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
      - letsencrypt:/letsencrypt
    networks: [n8n-network]
    healthcheck:
      test: ["CMD", "wget", "--spider", "-q", "http://localhost:8082/ping"]
      interval: 10s
      timeout: 5s
      start_period: 10s
      retries: 5

  postgres:
    image: postgres:14
    container_name: postgres
    restart: unless-stopped
    env_file:
      - .env
    volumes:
      - postgres-data:/var/lib/postgresql/data
    networks: [n8n-network]
    healthcheck:
      test: ["CMD", "pg_isready", "-U", "${POSTGRES_USER}"]
      interval: 10s
      timeout: 5s
      start_period: 10s
      retries: 5

  redis:
    image: redis:7
    container_name: redis
    restart: unless-stopped
    command: ["redis-server", "--requirepass", "${STRONG_PASSWORD}"]
    networks: [n8n-network]
    volumes:
      - redis-data:/data
    healthcheck:
      test: ["CMD", "redis-cli", "-a", "${STRONG_PASSWORD}", "ping"]
      interval: 10s
      timeout: 10s
      start_period: 10s
      retries: 5

  # Main (UI, schedules, webhooks)
  n8n-main:
    image: docker.n8n.io/n8nio/n8n:${N8N_IMAGE_TAG:-latest}
    container_name: n8n-main
    restart: unless-stopped
    env_file:
      - .env
    volumes:
      - n8n-data:/home/node/.n8n
      - ./local-files:/files
    depends_on:
      postgres:
        condition: service_healthy
      redis:
        condition: service_healthy
    healthcheck:
      test: ["CMD-SHELL", "wget --spider -q http://localhost:5678/healthz || exit 1"]
      interval: 10s
      timeout: 5s
      start_period: 20s
      retries: 5
    labels:
      - "traefik.enable=true"
      # Router & TLS
      - "traefik.http.routers.n8n.rule=Host(`${DOMAIN}`)"
      - "traefik.http.routers.n8n.entrypoints=websecure"
      - "traefik.http.routers.n8n.tls=true"
      - "traefik.http.routers.n8n.tls.certresolver=le"
      # Explicitly bind router -> this service
      - "traefik.http.routers.n8n.service=n8n-main"
      # Tell Traefik the internal port for THIS service
      - "traefik.http.services.n8n-main.loadbalancer.server.port=5678"
      # Middlewares
      - "traefik.http.routers.n8n.middlewares=n8n-headers,n8n-rate,n8n-retry,n8n-compress"
      - "traefik.http.middlewares.n8n-headers.headers.STSSeconds=315360000"
      - "traefik.http.middlewares.n8n-headers.headers.browserXSSFilter=true"
      - "traefik.http.middlewares.n8n-headers.headers.contentTypeNosniff=true"
      - "traefik.http.middlewares.n8n-headers.headers.forceSTSHeader=true"
      - "traefik.http.middlewares.n8n-headers.headers.STSIncludeSubdomains=true"
      - "traefik.http.middlewares.n8n-headers.headers.STSPreload=true"
      - "traefik.http.middlewares.n8n-rate.ratelimit.average=100"
      - "traefik.http.middlewares.n8n-rate.ratelimit.burst=50"
      - "traefik.http.middlewares.n8n-rate.ratelimit.period=1s"
      - "traefik.http.middlewares.n8n-retry.retry.attempts=3"
      - "traefik.http.middlewares.n8n-compress.compress=true"
    networks: [n8n-network]

  # External Task Runner for n8n-main
  n8n-runner-main:
    image: docker.n8n.io/n8nio/n8n:${N8N_IMAGE_TAG:-latest}
    restart: unless-stopped
    env_file:
      - .env
    entrypoint: ["/usr/local/bin/task-runner-launcher"]
    command: ["javascript"]
    depends_on:
      n8n-main:
        condition: service_started
    networks: [n8n-network]

  # Worker(s) – scale horizontally
  n8n-worker:
    image: docker.n8n.io/n8nio/n8n:${N8N_IMAGE_TAG:-latest}
    restart: unless-stopped
    env_file:
      - .env
    command: ["worker", "--concurrency=${N8N_WORKER_CONCURRENCY}"]
    volumes:
      - n8n-data:/home/node/.n8n
      - ./local-files:/files
    depends_on:
      postgres:
        condition: service_healthy
      redis:
        condition: service_healthy
    networks: [n8n-network]

networks:
  n8n-network:
    driver: bridge

volumes:
  n8n-data:
    external: true
  postgres-data:
    external: true
  redis-data:
    external: true
  letsencrypt:
    external: true

Copy the example .env file from my GitHub repository, then update the values for your own DOMAIN, SSL_EMAIL, GENERIC_TIMEZONE, STRONG_PASSWORD, and N8N_ENCRYPTION_KEY.

# === USER SETTINGS ===
DOMAIN=yourdomain.com
SSL_EMAIL=you@example.com
# Timezone
GENERIC_TIMEZONE=Asia/Ho_Chi_Minh

# === SECRETS (USE STRONG VALUES!) ===
STRONG_PASSWORD=CHANGE_ME_BASE64_16_BYTES
N8N_ENCRYPTION_KEY=CHANGE_ME_BASE64_32_BYTES
RUNNERS_AUTH_TOKEN=${STRONG_PASSWORD}

# === N8N GENERAL CONFIG ===
N8N_IMAGE_TAG=latest
NODE_ENV=production
N8N_LOG_LEVEL=info
N8N_DIAGNOSTICS_ENABLED=false

# Host / Traefik integration
N8N_PORT=5678
N8N_PROTOCOL=https
N8N_HOST=${DOMAIN}
WEBHOOK_URL=https://${DOMAIN}
N8N_EDITOR_BASE_URL=https://${DOMAIN}
N8N_PUBLIC_API_BASE_URL=https://${DOMAIN}
N8N_SECURE_COOKIE=true

# Security (UI login)
N8N_BASIC_AUTH_ACTIVE=true
N8N_BASIC_AUTH_USER=admin
N8N_BASIC_AUTH_PASSWORD=${STRONG_PASSWORD}

# File permission hardening
N8N_ENFORCE_SETTINGS_FILE_PERMISSIONS=true

# === DATABASE (Postgres) ===
DB_TYPE=postgresdb
DB_POSTGRESDB_HOST=postgres
DB_POSTGRESDB_PORT=5432
DB_POSTGRESDB_DATABASE=n8n
DB_POSTGRESDB_USER=n8n
DB_POSTGRESDB_PASSWORD=${STRONG_PASSWORD}

# For the PostgreSQL container itself
POSTGRES_USER=n8n
POSTGRES_PASSWORD=${STRONG_PASSWORD}
POSTGRES_DB=n8n

# === QUEUE MODE ===
EXECUTIONS_MODE=queue
# Redis (BullMQ queue)
QUEUE_BULL_REDIS_HOST=redis
QUEUE_BULL_REDIS_PORT=6379
QUEUE_BULL_REDIS_PASSWORD=${STRONG_PASSWORD}

# Workers
N8N_WORKER_CONCURRENCY=5

# Binary data storage (filesystem not supported in queue mode)
N8N_DEFAULT_BINARY_DATA_MODE=default

# === EXECUTION BEHAVIOR ===
EXECUTIONS_TIMEOUT=3600
EXECUTIONS_DATA_PRUNE=true
EXECUTIONS_DATA_MAX_AGE=336
EXECUTIONS_RETRY_MAX=3

# ===================================================================
# === OPTIONAL / ADVANCED ===
# ===================================================================
OFFLOAD_MANUAL_EXECUTIONS_TO_WORKERS=true
QUEUE_HEALTH_CHECK_ACTIVE=true

# ===================================================================
# === EXTERNAL TASK RUNNER ===
# ===================================================================
N8N_RUNNERS_ENABLED=true
N8N_RUNNERS_MODE=external
N8N_RUNNERS_BROKER_LISTEN_ADDRESS=0.0.0.0
N8N_RUNNERS_MAX_CONCURRENCY=5
N8N_RUNNERS_AUTH_TOKEN=${RUNNERS_AUTH_TOKEN}
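The two CHANGE_ME placeholders above can be generated with openssl — the byte sizes match the placeholder names:

```shell
# 16 random bytes, base64-encoded -> use for STRONG_PASSWORD
openssl rand -base64 16
# 32 random bytes, base64-encoded -> use for N8N_ENCRYPTION_KEY
openssl rand -base64 32
```

Keep N8N_ENCRYPTION_KEY fixed across restarts and back it up — if it changes, n8n can no longer decrypt stored credentials.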

3. Launch the n8n stack

# Navigate to project directory
cd /home/n8n

# Create a directory called local-files for sharing files between the n8n instance and the host system
mkdir -p ./local-files
# n8n runs as user 1000 inside the container, so give that UID ownership of the folder
chown -R 1000:1000 ./local-files
chmod 755 ./local-files

# Validate YAML & env expansion first
docker compose config

# Pull images (optional but recommended)
docker compose pull

# Manually create the external volumes
for v in n8n-data postgres-data redis-data letsencrypt; do docker volume create "$v"; done
docker volume ls | grep -E 'n8n-data|postgres-data|redis-data|letsencrypt'

# Start everything (Traefik, Postgres, Redis, n8n-main, 1 worker)
docker compose up -d

# Scale to 2 workers
docker compose up -d --scale n8n-worker=2

4. Health check

Run these commands after deployment to verify everything is working:

# 1. Check container health status
docker ps --format "table {{.Names}}\t{{.Status}}"

# NAMES                   STATUS
# n8n-n8n-runner-main-1   Up About a minute
# n8n-main                Up About a minute (healthy)
# n8n-n8n-worker-1        Up About a minute
# n8n-n8n-worker-2        Up About a minute
# traefik                 Up About a minute (healthy)
# postgres                Up About a minute (healthy)
# redis                   Up About a minute (healthy)

# 2. Check status of n8n-main
docker exec -it n8n-main sh -lc 'wget --spider -q http://127.0.0.1:5678/healthz && echo "n8n-main OK" || echo FAIL'
# Should return "n8n-main OK"

# 3. Queue mode confirmation:
docker exec -it n8n-main printenv | grep EXECUTIONS_MODE
# Should show:
# EXECUTIONS_MODE=queue

# 4. Redis (queue backend)
export QUEUE_BULL_REDIS_PASSWORD=PASTE_16B       # paste the STRONG_PASSWORD value from your .env (e.g. generated with: openssl rand -base64 16)
docker compose exec redis redis-cli -a "$QUEUE_BULL_REDIS_PASSWORD" ping

# Should return:
# PONG

# 5. Postgres (database):
# Test DB from postgres (verify POSTGRES_PASSWORD)
docker compose exec postgres bash -lc 'PGPASSWORD="$POSTGRES_PASSWORD" psql -h 127.0.0.1 -U n8n -d n8n -c "select 1"'

docker compose exec postgres psql -U n8n -d n8n -c "\dt"
# Should return a list of tables. If empty, that’s fine on first boot — tables will appear after you create workflows.

# Streams logs from all scaled worker containers in one view (very useful to see load balancing in action).
docker compose logs -f n8n-worker

# Check n8n-main container logs
docker compose logs -f n8n-main

# Check redis container logs
docker logs -f redis

# Check Postgres container logs
docker logs -f postgres

Best Practices, Monitoring, and Scaling

✅ Best Practices for n8n Development

  • Use PostgreSQL → reliable, supports scaling. Avoid SQLite in production.
  • Secure with HTTPS & firewall → run behind Traefik, enable Basic Auth, block unused ports.
  • Protect secrets → set a fixed N8N_ENCRYPTION_KEY, store API keys in env vars.
  • Automate backups → DB, .n8n folder, SSL certs, configs. Sync offsite (e.g., Google Drive).
  • Monitor health → use docker stats, health check endpoint, or Prometheus/Grafana.
  • Scale smart → add more workers instead of overloading one with high concurrency.
  • Prune executions → prevent DB bloat with EXECUTIONS_DATA_PRUNE=true.
  • Stay updated → keep n8n and Docker images current; test upgrades first.
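One way to automate the backup bullet is a nightly cron entry that dumps the database to a compressed file — a sketch, where the paths, schedule, and retention policy are assumptions to adapt:

```shell
# /etc/cron.d/n8n-backup (illustrative): nightly pg_dump at 02:00
# note: % must be escaped as \% inside crontab entries
0 2 * * * root docker compose -f /home/n8n/docker-compose.yml exec -T postgres pg_dump -U n8n n8n | gzip > /backups/n8n-$(date +\%F).sql.gz
```

Pair this with an archive of the n8n-data volume (it holds the .n8n folder and your encryption key) and sync both offsite.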

📊 Monitor and Scale

Server Metrics

  • Use htop or your VPS dashboard for CPU/RAM usage.
  • Run docker stats to check container resource consumption.

Redis Monitoring

  • Run redis-cli info memory or redis-cli info stats for queue stats.
  • Optionally, use Redis Commander for a visual dashboard.

Database Monitoring

  • Track PostgreSQL growth and execution times.
  • Watch for failed or slow workflows.

Application Monitoring

  • In n8n UI → Settings → Executions for active & past jobs.
  • For production-grade monitoring, integrate Prometheus + Grafana (track queue size, worker load, execution latency).

🚀 Scaling Queue Mode for Larger Deployments

  • Run multiple workers against the same Redis for higher throughput.
  • Use Redis clustering or a managed Redis (AWS ElastiCache, Azure Cache) for HA setups.
  • Scale PostgreSQL vertically (more CPU/RAM) or switch to a managed database.
  • Separate workflows into different queues (critical vs batch).
  • Use Kubernetes or Docker Swarm to auto-scale workers dynamically.
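For the Kubernetes route, worker autoscaling can be sketched with a HorizontalPodAutoscaler — here the n8n-worker Deployment name and the thresholds are assumptions, and the pods are assumed to run the `n8n worker` command against the shared Redis:

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: n8n-worker
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: n8n-worker
  minReplicas: 2
  maxReplicas: 8
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70   # add workers when average CPU exceeds 70%
```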

Redis and RabbitMQ in n8n Queue Mode

n8n can leverage both Redis and RabbitMQ, but they serve very different purposes in the ecosystem. Understanding this distinction is key when scaling n8n or connecting it to other event-driven systems.

Redis in n8n Queue Mode

Redis is the primary backend for n8n’s internal job queue when you run in Queue Mode.

  • Configuration
    Set EXECUTIONS_MODE=queue; the main n8n instance then uses BullMQ (a Redis-based queue library) to manage jobs.
  • Job Enqueuing
    When a workflow is triggered, the execution is packaged as a job and pushed into Redis.
  • Worker Processing
    Separate worker containers (or processes) pull jobs from Redis and run the workflows asynchronously. This separation of “dispatcher” (the main n8n) and “executor” (the workers) allows:
    • Horizontal scaling (add more workers to handle load).
    • Resilience (if one worker crashes, others keep running).
    • Faster editor response times (main instance isn’t blocked by heavy jobs).

👉 In short: Redis = n8n’s internal job queue for scaling and stability.

RabbitMQ in n8n Workflows

Unlike Redis, RabbitMQ is not used by n8n internally. Instead, it’s an integration point for event-driven architectures, letting n8n interact with external systems.

  • RabbitMQ Trigger Node
    Starts a workflow whenever a new message arrives in a RabbitMQ queue. This lets n8n react to external events.
  • RabbitMQ Node (Publish/Consume)
    Allows your workflows to publish new messages or consume existing ones from RabbitMQ. Perfect for chaining n8n automations with other microservices or applications.

Example Scenario

  • A logistics system publishes shipping updates into a RabbitMQ queue.
  • An n8n workflow, listening with a RabbitMQ Trigger, processes each update.
  • That workflow pushes heavier tasks into Redis for worker execution (e.g., database updates, report generation).
  • Another n8n workflow publishes processed results back to a different RabbitMQ queue, where a warehouse system consumes them.

Key Takeaway

  • Redis → powers n8n’s internal job queue in Queue Mode.
  • RabbitMQ → connects n8n to external event streams and services.

Think of Redis as the engine inside n8n that makes scaling possible, while RabbitMQ is the bridge that lets n8n talk with the rest of your ecosystem.

Conclusion

Scaling n8n with Queue Mode transforms your automation platform from a single-server setup into a scalable, reliable system. By separating the UI from workflow execution and distributing jobs across workers via Redis, you gain:

  • ⚡ Performance — responsive editor even under load.
  • 🔒 Reliability — one worker failure won’t block everything.
  • 📈 Scalability — add workers as your business grows.

With this guide, you now have the foundation to deploy, scale, and maintain n8n in production with confidence.
