The SplitInBatches node in n8n processes large datasets in configurable batch sizes to avoid memory issues and API rate limits. Connect the last node in your processing chain back to the SplitInBatches input to form the loop, use the batch size parameter to control how many items are processed per iteration, and reserve the done output for steps that should run only after every batch completes. This pattern is essential for workflows that handle hundreds or thousands of items.
How to Use the SplitInBatches Node in n8n
When your workflow processes large numbers of items, sending them all at once can cause memory issues, trigger API rate limits, or overwhelm downstream services. The SplitInBatches node solves this by dividing your data into smaller groups and processing each group sequentially. It creates a loop pattern where each batch is processed and then the next batch begins, continuing until all items are handled.
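Conceptually, the node's splitting step works like chunking an array. The sketch below is plain JavaScript that runs outside n8n and is only an illustration of the idea, not the node's actual implementation:

```javascript
// Split an array of items into groups of `batchSize` — the same
// grouping SplitInBatches applies to incoming workflow items.
function splitInBatches(items, batchSize) {
  const batches = [];
  for (let i = 0; i < items.length; i += batchSize) {
    batches.push(items.slice(i, i + batchSize));
  }
  return batches;
}

const items = Array.from({ length: 25 }, (_, i) => ({ id: i + 1 }));
const batches = splitInBatches(items, 10);
console.log(batches.length);    // 3 batches: 10, 10, and 5 items
console.log(batches[2].length); // 5
```

In the real node, each of these groups is emitted one at a time, and the next group is only emitted when the loop-back connection fires.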
Prerequisites
- A running n8n instance with the workflow editor open
- A workflow that produces multiple items (10+) that need sequential processing
- Basic understanding of n8n node connections and data flow
Step-by-step guide
Add the SplitInBatches node to your workflow
After the node that produces your large dataset, add a SplitInBatches node. Click the plus button, search for SplitInBatches, and add it. Connect it to the node that outputs the items you want to batch. The SplitInBatches node has two outputs: the main output sends one batch at a time, and the second output sends nothing until all batches are processed.
Expected result: The SplitInBatches node appears in your workflow connected to the data source node, with two output connectors visible.
Configure the batch size
Click on the SplitInBatches node to open its settings. Set the Batch Size parameter to the number of items you want to process in each iteration. The default is 10. Choose a batch size based on your downstream node requirements — if you are calling an API with a rate limit of 5 requests per second, set the batch size to 5 and add a Wait node between batches.
```
// SplitInBatches configuration
// Batch Size: 10 (processes 10 items per loop iteration)
// Options:
//   Reset: false (default - processes each item once)
//
// If you have 100 items and batch size is 10,
// the node will loop 10 times, processing 10 items each time.
```
Expected result: The batch size is configured in the node settings. The preview shows how many items will be in each batch based on your input data.
Build the processing loop
Connect the first output (batch output) of SplitInBatches to the nodes that process each batch. At the end of your processing chain, connect the last processing node back to the SplitInBatches node input. This creates a loop. The SplitInBatches node keeps track of which items have been processed and sends the next batch each time it is called, until all items are done.
```
// Loop structure:
//
// [Data Source] → [SplitInBatches] → [Process Node] → [API Call] ─┐
//                        ↑                                        │
//                        └────────────────────────────────────────┘
//                                (loop back connection)
//
// When all batches are processed, use the second output
// of SplitInBatches for the "done" path.
```
Expected result: The workflow canvas shows a visible loop from the last processing node back to the SplitInBatches node. The loop connection is typically shown as a curved line going backward.
Handle the completion output
The second output of the SplitInBatches node fires once after all batches have been processed. Connect this output to any node that should run after all items are done, such as a notification, summary report, or cleanup step. This output does not pass any item data by default, so use a Code node or reference other nodes if you need the processed results.
Expected result: After all batches are processed, the workflow continues from the second output to your completion nodes.
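If the completion step needs the processed items, a Code node on the done path can fetch them by referencing the processing node (in recent n8n versions, `$('Node Name').all()` inside a Code node returns that node's output items). The sketch below simulates that input with a hard-coded array so the summary logic can run standalone; inside n8n you would replace `processedItems` with something like `$('Process Batch').all()`, using whatever your processing node is named:

```javascript
// Simulated stand-in for $('Process Batch').all() — in n8n, items are
// wrapped as { json: {...} } objects.
const processedItems = [
  { json: { id: 1, status: 'processed' } },
  { json: { id: 2, status: 'processed' } },
  { json: { id: 3, status: 'failed' } },
];

// Build a single summary item for the completion path.
function buildSummary(items) {
  const succeeded = items.filter(i => i.json.status === 'processed').length;
  return [{
    json: {
      total: items.length,
      succeeded,
      failed: items.length - succeeded,
      completedAt: new Date().toISOString(),
    },
  }];
}

console.log(buildSummary(processedItems)[0].json);
```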
Add a Wait node for rate-limited APIs
If your processing loop calls a rate-limited API, add a Wait node between the API call and the loop-back connection to SplitInBatches. Set the wait time to match the rate limit window. For example, if the API allows 5 requests per second, set a 1-second wait after each batch of 5. This prevents 429 rate limit errors.
```
// Loop structure with Wait node:
//
// [SplitInBatches (size: 5)] → [HTTP Request] → [Wait (1 second)] ─┐
//              ↑                                                   │
//              └───────────────────────────────────────────────────┘
//
// Wait node settings:
//   Resume: After Time Interval
//   Amount: 1
//   Unit: Seconds
```
Expected result: The workflow pauses for the configured time between each batch, staying within the API rate limit and avoiding 429 errors.
Complete working example
```json
{
  "name": "SplitInBatches — Bulk API Update",
  "nodes": [
    {
      "parameters": {},
      "name": "Manual Trigger",
      "type": "n8n-nodes-base.manualTrigger",
      "typeVersion": 1,
      "position": [250, 300]
    },
    {
      "parameters": {
        "jsCode": "// Generate 50 sample items to process\nconst items = [];\nfor (let i = 1; i <= 50; i++) {\n  items.push({\n    json: {\n      id: i,\n      name: `User ${i}`,\n      email: `user${i}@example.com`,\n      status: 'pending'\n    }\n  });\n}\nreturn items;"
      },
      "name": "Generate 50 Items",
      "type": "n8n-nodes-base.code",
      "typeVersion": 2,
      "position": [450, 300]
    },
    {
      "parameters": {
        "batchSize": 10,
        "options": {}
      },
      "name": "SplitInBatches",
      "type": "n8n-nodes-base.splitInBatches",
      "typeVersion": 3,
      "position": [650, 300]
    },
    {
      "parameters": {
        "jsCode": "// Process each batch — update status\nconst items = $input.all();\nconst processed = items.map(item => ({\n  json: {\n    ...item.json,\n    status: 'processed',\n    processedAt: new Date().toISOString(),\n    batchNumber: Math.ceil(item.json.id / 10)\n  }\n}));\nreturn processed;"
      },
      "name": "Process Batch",
      "type": "n8n-nodes-base.code",
      "typeVersion": 2,
      "position": [850, 300]
    },
    {
      "parameters": {
        "amount": 1,
        "unit": "seconds"
      },
      "name": "Wait 1 Second",
      "type": "n8n-nodes-base.wait",
      "typeVersion": 1.1,
      "position": [1050, 300]
    },
    {
      "parameters": {
        "jsCode": "return [{\n  json: {\n    message: 'All batches processed successfully',\n    completedAt: new Date().toISOString()\n  }\n}];"
      },
      "name": "All Done",
      "type": "n8n-nodes-base.code",
      "typeVersion": 2,
      "position": [850, 500]
    }
  ],
  "connections": {
    "Manual Trigger": { "main": [[{ "node": "Generate 50 Items", "type": "main", "index": 0 }]] },
    "Generate 50 Items": { "main": [[{ "node": "SplitInBatches", "type": "main", "index": 0 }]] },
    "SplitInBatches": {
      "main": [
        [{ "node": "Process Batch", "type": "main", "index": 0 }],
        [{ "node": "All Done", "type": "main", "index": 0 }]
      ]
    },
    "Process Batch": { "main": [[{ "node": "Wait 1 Second", "type": "main", "index": 0 }]] },
    "Wait 1 Second": { "main": [[{ "node": "SplitInBatches", "type": "main", "index": 0 }]] }
  }
}
```
Common mistakes when using the SplitInBatches Node in n8n for Batch Processing
Mistake: Forgetting to connect the loop back from the last processing node to SplitInBatches
How to avoid: Draw a connection from the last node in your processing chain back to the SplitInBatches node input. Without this loop connection, only the first batch is processed.
Mistake: Connecting the done output to the processing chain instead of the main output
How to avoid: The first output (top) sends batch data for processing. The second output (bottom) fires only when all batches are complete. Connect processing nodes to the first output and completion nodes to the second.
Mistake: Not adding a Wait node when calling rate-limited APIs, causing 429 errors
How to avoid: Add a Wait node between the API call and the loop-back connection. Set the wait time based on the API rate limit documentation.
Mistake: Setting the batch size too large, defeating the purpose of batching
How to avoid: Use smaller batch sizes (5-20) for API calls. Larger sizes (50-100) are acceptable for in-memory processing where rate limits are not a concern.
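The rate-limit arithmetic behind these recommendations can be sketched in a few lines. This assumes one API request per item, which is the usual case for an HTTP Request node inside the loop:

```javascript
// If an API allows `limit` requests per `windowSeconds`, a batch of
// `batchSize` items consumes batchSize requests, so the wait after the
// batch should give back at least that share of the rate window.
function waitSecondsForBatch(batchSize, limit, windowSeconds) {
  return (batchSize / limit) * windowSeconds;
}

console.log(waitSecondsForBatch(5, 5, 1));    // 1  — 5 req/s limit, batches of 5 → 1 s wait
console.log(waitSecondsForBatch(10, 10, 60)); // 60 — 10 req/min limit, batches of 10 → 60 s wait
```

This treats the limit as an average rate; APIs with burst limits or per-endpoint quotas may need a more conservative wait than this formula gives.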
Best practices
- Choose batch sizes that align with API rate limits — if the limit is 10 requests per minute, use batch size 10 with a 60-second wait
- Always add a Wait node in the loop when calling external APIs to prevent rate limiting
- Use the done output (second output) for completion notifications or summary reports
- Monitor memory usage when processing very large datasets — even with batching, accumulating results can consume memory
- Store intermediate results using static data or an external database rather than accumulating them in the workflow
- Test your batch workflow with a small dataset first before running it against thousands of items
Still stuck?
Copy one of these prompts to get a personalized, step-by-step explanation.
I have an n8n workflow that needs to process 500 records through an API that has a rate limit of 10 requests per second. How should I configure the SplitInBatches node and add wait times to stay within the limit?
Create a workflow that reads 200 contacts from a Google Sheet, splits them into batches of 10, sends each batch to an HTTP API endpoint with a 2-second delay between batches, and sends a Slack notification when all batches are done.
Frequently asked questions
What is the default batch size in the SplitInBatches node?
The default batch size is 10 items per batch. You can change this to any positive integer in the node settings.
Does SplitInBatches process batches in parallel or sequentially?
SplitInBatches processes batches sequentially. Each batch completes before the next one begins. This is by design to prevent overwhelming APIs and to stay within rate limits.
What does the Reset option do in SplitInBatches?
The Reset option clears the internal counter that tracks which items have been processed. Enable it if you need to reprocess all items from the beginning, such as when the node is triggered multiple times in the same execution.
Can I use SplitInBatches with the IF node inside the loop?
Yes. You can add any nodes inside the loop including IF nodes, Switch nodes, and error handling. Just make sure all branches eventually connect back to the SplitInBatches node to continue the loop.
How do I collect results from all batches after processing?
The SplitInBatches node does not accumulate results automatically. Use $getWorkflowStaticData('global') to store results from each batch, or write results to an external database or spreadsheet during the loop.
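The static-data pattern looks roughly like the sketch below. A plain object stands in for `$getWorkflowStaticData('global')` so the logic can run outside n8n; inside a Code node in the loop, you would fetch the real static data object instead:

```javascript
// Stand-in for $getWorkflowStaticData('global') — in n8n this object
// persists across loop iterations within the execution.
const staticData = {};

// Append the current batch's data to the running results, then pass
// the items through unchanged so the loop can continue.
function accumulateBatch(staticData, batchItems) {
  if (!staticData.results) staticData.results = [];
  staticData.results.push(...batchItems.map(i => i.json));
  return batchItems;
}

accumulateBatch(staticData, [{ json: { id: 1 } }, { json: { id: 2 } }]);
accumulateBatch(staticData, [{ json: { id: 3 } }]);
console.log(staticData.results.length); // 3 — items from every batch collected
```

On the done path, a Code node can read the same static data object to emit the full result set.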
What happens if one batch fails in the middle of processing?
By default, the workflow stops on error. Add error handling nodes or enable the Continue on Error option on the failing node to skip errors and continue with the next batch. Check the execution history to identify which items failed.
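When the per-item work happens in a Code node, another option is to catch failures item by item so one bad record never aborts the batch. The sketch below uses a hypothetical `callApi` function with a simulated failure to show the shape of that pattern:

```javascript
// Hypothetical per-item operation; item 2 simulates an API failure.
function callApi(item) {
  if (item.id === 2) throw new Error('simulated API failure');
  return { ...item, status: 'processed' };
}

// Tag failed items instead of throwing, so the loop keeps running and
// failures can be reviewed (or retried) afterwards.
function processBatch(items) {
  return items.map(item => {
    try {
      return { json: callApi(item) };
    } catch (err) {
      return { json: { ...item, status: 'failed', error: err.message } };
    }
  });
}

const out = processBatch([{ id: 1 }, { id: 2 }, { id: 3 }]);
console.log(out.map(i => i.json.status)); // [ 'processed', 'failed', 'processed' ]
```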