Handle large JSON payloads in n8n by using the SplitInBatches node to process items in smaller groups, the Edit Fields node to strip unnecessary data before passing it downstream, and environment variables to increase the Node.js memory limit. These techniques prevent memory crashes and execution timeouts on large datasets.
Why Large JSON Causes Problems in n8n
n8n processes data in memory. When a workflow handles a large JSON payload — thousands of items from an API, a big database query result, or a large file — the Node.js process can run out of memory and crash. Even before crashing, large payloads slow down the editor UI, cause execution timeouts, and make debugging difficult because the execution data is too large to display. The key strategies are: break large datasets into batches, reduce payload size by removing unnecessary fields, and increase the memory limit when needed.
Prerequisites
- A running n8n instance
- A workflow that processes large JSON data (hundreds or thousands of items)
- Terminal access for memory configuration (self-hosted only)
Step-by-step guide
Use SplitInBatches to process items in groups
The SplitInBatches node divides a large array of items into smaller batches and processes each batch one at a time. This prevents n8n from loading the entire dataset into memory at once. Add a SplitInBatches node after the node that produces the large dataset, set the batch size (e.g., 50 or 100 items), then connect your processing nodes to the SplitInBatches output. The SplitInBatches node loops automatically until all items are processed.
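A configuration sketch for this step, shown as comments. In recent n8n versions SplitInBatches appears as Loop Over Items, and the node name used in the expressions below is an assumption — match it to your workflow:

```
// SplitInBatches / Loop Over Items settings:
//   Batch Size: 50
//
// Downstream nodes can read the loop's context to track progress:
//   {{ $node["Loop Over Items"].context["currentRunIndex"] }} // index of the current batch
//   {{ $node["Loop Over Items"].context["noItemsLeft"] }}     // true once the final batch is reached
```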
Expected result: The workflow processes items in batches of 50 instead of all at once. Memory usage stays stable throughout the execution.
Reduce payload size with Edit Fields
Large API responses often include fields you do not need. Use the Edit Fields (formerly Set) node to keep only the fields required by downstream nodes. This dramatically reduces memory usage when dealing with API responses that return hundreds of fields per item. Place the Edit Fields node right after the data source node, before any heavy processing.
```javascript
// Example: the API returns 50 fields per item, but you only need 3.
// In the Edit Fields node, set the mode to 'Manual Mapping' and add:
//   name  → {{ $json.name }}
//   email → {{ $json.email }}
//   id    → {{ $json.id }}
// Enable 'Keep Only Set' to discard all other fields.

// Alternatively, use a Code node:
const items = $input.all();
return items.map(item => ({
  json: {
    id: item.json.id,
    name: item.json.name,
    email: item.json.email
  }
}));
```

Expected result: Each item is reduced from dozens of fields to just the fields you need. The total payload size drops significantly.
Increase the Node.js memory limit
By default, Node.js caps its heap at roughly 1.5-2 GB, depending on the Node.js version and available memory. For workflows that genuinely need to handle large datasets in memory, you can increase this limit. Set the NODE_OPTIONS environment variable before starting n8n. For Docker, add it to your compose file or run command.
```bash
# Increase memory limit to 4 GB
export NODE_OPTIONS="--max-old-space-size=4096"
n8n start

# Docker run
docker run -d \
  --name n8n \
  -e NODE_OPTIONS="--max-old-space-size=4096" \
  -p 5678:5678 \
  docker.n8n.io/n8nio/n8n

# docker-compose.yml
# environment:
#   - NODE_OPTIONS=--max-old-space-size=4096
```

Expected result: n8n can use up to 4 GB of memory, allowing larger datasets to be processed without crashing.
Paginate API responses
Instead of requesting all data at once, use pagination to fetch data in pages. Most APIs support limit and offset parameters. Use a Loop node or the HTTP Request node's built-in pagination feature to fetch one page at a time, process it, and then fetch the next page. This keeps memory usage constant regardless of total dataset size.
```
// HTTP Request node pagination settings:
//   Pagination Mode: Response Contains Next URL
// or
//   Pagination Mode: Offset-Based
//     Limit Parameter: limit
//     Offset Parameter: offset
//     Page Size: 100
//     Max Pages: 50
```

Expected result: The HTTP Request node fetches data in pages of 100 items, processing each page before fetching the next. Total memory usage stays low.
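If an API's scheme does not fit the built-in pagination modes, you can paginate manually in a Code node. A minimal sketch, assuming a hypothetical endpoint https://api.example.com/records that accepts limit and offset query parameters and returns a JSON array:

```javascript
// Manual offset-based pagination in an n8n Code node
const PAGE_SIZE = 100;
const MAX_PAGES = 50; // safety cap so a misbehaving API cannot loop forever
const results = [];

for (let page = 0; page < MAX_PAGES; page++) {
  // this.helpers.httpRequest is available inside the Code node
  const records = await this.helpers.httpRequest({
    method: 'GET',
    url: 'https://api.example.com/records', // hypothetical endpoint
    qs: { limit: PAGE_SIZE, offset: page * PAGE_SIZE },
    json: true,
  });

  if (!Array.isArray(records) || records.length === 0) break;

  // Keep only the fields needed downstream to hold memory steady
  for (const r of records) {
    results.push({ json: { id: r.id, name: r.name, email: r.email } });
  }

  if (records.length < PAGE_SIZE) break; // a short page means the last page
}

return results;
```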
Disable saving execution data for large workflows
n8n saves the input and output of every node for each execution by default. For workflows that process large datasets, this execution data can be enormous and slow down the database. In the workflow settings, set Save successful production executions to Do not save, or configure execution pruning at the instance level so only recent executions are retained. This prevents the execution history from consuming disk space and memory.
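On self-hosted instances, the same policy can be applied instance-wide with environment variables. A sketch based on n8n's documented execution-data settings — verify the names against your n8n version:

```bash
# Skip saving data for successful runs, keep it for failures
export EXECUTIONS_DATA_SAVE_ON_SUCCESS=none
export EXECUTIONS_DATA_SAVE_ON_ERROR=all

# Prune old execution data automatically (max age in hours)
export EXECUTIONS_DATA_PRUNE=true
export EXECUTIONS_DATA_MAX_AGE=168
```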
Expected result: Execution data is not saved for successful runs, freeing up database space and reducing I/O overhead during large data processing.
Complete working example
```javascript
// n8n Code node — process a large API response in memory-efficient chunks
// Use this when you need custom logic beyond what SplitInBatches provides

const items = $input.all();
const BATCH_SIZE = 100;
const results = [];

// Process items in batches
for (let i = 0; i < items.length; i += BATCH_SIZE) {
  const batch = items.slice(i, i + BATCH_SIZE);

  for (const item of batch) {
    // Extract only the fields you need (reduces memory)
    const processed = {
      id: item.json.id,
      name: item.json.name,
      email: item.json.email,
      status: item.json.status,
      // Add your processing logic here
      processed_at: new Date().toISOString(),
      batch_number: Math.floor(i / BATCH_SIZE) + 1
    };

    results.push({ json: processed });
  }
}

// Return only the processed results with reduced fields
return results;
```

Common mistakes when handling large JSON in n8n
Mistake: Processing 10,000+ items in a single Code node without batching.
How to avoid: Use SplitInBatches before the Code node, or implement batching inside the Code node. Processing everything at once exhausts memory.

Mistake: Passing the full API response through every node in the workflow.
How to avoid: Add an Edit Fields node right after the API call to keep only the fields you need. This reduces the payload carried through the rest of the workflow.

Mistake: Setting NODE_OPTIONS higher than the server's available RAM.
How to avoid: Set max-old-space-size to no more than 75% of your server's total RAM (see the worked example after this list). Exceeding available memory causes the OS to swap, which is slower than reducing data size.

Mistake: Ignoring the HTTP Request node's built-in pagination feature.
How to avoid: Many APIs support pagination. Configure the HTTP Request node's Pagination settings instead of building a manual loop with a Loop node.
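A worked example of the sizing rule above. The 8 GB figure is illustrative — substitute your server's actual RAM:

```bash
# 8 GB total RAM × 0.75 ≈ 6 GB, i.e. 6144 MB
export NODE_OPTIONS="--max-old-space-size=6144"
n8n start
```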
Best practices
- Always use SplitInBatches when processing more than 500 items to prevent memory spikes
- Strip unnecessary fields with Edit Fields as early as possible in the workflow
- Set a reasonable batch size (50-200 items) based on the complexity of your processing
- Use API pagination instead of fetching all records at once
- Increase NODE_OPTIONS memory only as a last resort — optimize the workflow first
- Disable execution data saving for high-volume workflows to prevent database bloat
- Monitor memory usage with docker stats or top to identify when you are approaching limits (see the snippet after this list)
- Consider writing intermediate results to a database instead of keeping everything in memory
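For the monitoring point above, a quick sketch — the container name n8n is an assumption, so use your own:

```bash
# One-shot snapshot of the container's CPU and memory usage
docker stats n8n --no-stream

# Bare-metal install: watch the n8n process in top
top -p "$(pgrep -f n8n | head -n 1)"
```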
Still stuck?
Copy one of these prompts to get a personalized, step-by-step explanation.
My n8n workflow crashes when processing a large JSON response with thousands of items. How do I use SplitInBatches, reduce payload size, and increase the memory limit to handle large datasets without crashes?
My n8n workflow runs out of memory when processing a large API response. Show me how to use SplitInBatches and the Edit Fields node to process data in smaller chunks.
Frequently asked questions
What is the maximum JSON size n8n can handle?
There is no hard limit, but n8n processes data in memory. The practical limit depends on your server's RAM and the NODE_OPTIONS max-old-space-size setting. With 4 GB of memory allocated, n8n can typically handle JSON payloads up to about 500 MB.
Does SplitInBatches process batches in parallel?
No. SplitInBatches processes one batch at a time sequentially. This is by design to keep memory usage constant. If you need parallel processing, use separate workflows triggered by the Execute Workflow node.
Can I stream large JSON files through n8n?
n8n does not support true streaming. All data passes through nodes as in-memory JSON objects. For very large files, process them outside n8n (e.g., with a database query or a script) and have n8n orchestrate the process.
Why does the n8n editor UI freeze when I open a large execution?
The editor loads all execution data into the browser. Large executions with thousands of items can overwhelm the browser. Disable Save Execution Data for high-volume workflows, or use the API to inspect executions programmatically.
How do I know if my workflow is running out of memory?
Look for 'JavaScript heap out of memory' errors in the n8n logs, or sudden process crashes without error messages. Use docker stats or the top command to monitor real-time memory usage of the n8n process.
Can RapidDev help optimize n8n workflows for large data processing?
Yes. RapidDev can audit your workflows, implement batch processing patterns, configure memory limits, and design scalable data pipelines. Contact RapidDev for a free consultation.
Talk to an Expert
Our team has built 600+ apps. Get personalized help with your project.
Book a free consultation