Re-running an entire n8n workflow just to test how you handle an OpenAI response wastes time and API credits. Use n8n's data pinning feature to freeze the output of the OpenAI node, then iterate on downstream nodes without making new API calls. You can also use manual test data in Code nodes to simulate various response shapes.
Why You Should Avoid Re-Running Full Workflows to Test LLM Responses
Every time you execute a full workflow that includes an OpenAI or Anthropic call, you spend API credits and wait for the response. When you are building and debugging the nodes that come after the LLM call — parsing JSON, formatting messages, routing with IF nodes — you do not need a fresh API response. n8n's data pinning feature lets you freeze a node's output so downstream nodes use the pinned data instead of re-executing the node. This tutorial shows you how to pin LLM responses, create test fixtures, and iterate rapidly.
Prerequisites
- A running n8n instance (v1.20 or later)
- An existing workflow with at least one OpenAI or LLM node
- Basic familiarity with the n8n editor interface
Step-by-step guide
Run the workflow once to capture a real OpenAI response
Execute your full workflow at least once so the OpenAI node produces real output. Click the OpenAI node to open its output panel. You should see the response data including the message content, token usage, and finish reason. This real response will serve as your baseline test data. Verify it looks correct before pinning.
Expected result: The OpenAI node shows its output data with the full API response in the output panel.
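For reference, the output item of a simple OpenAI message node typically looks something like the object below. Treat this as an illustrative sketch only; the exact field names and nesting depend on the node version and whether you enable simplified output.

// Illustrative shape of an OpenAI node output item (field names and
// nesting vary by node version and output settings; values are placeholders)
const exampleResponseItem = {
  message: {
    role: 'assistant',
    content: '{"name": "Test User", "score": 85}'
  },
  finish_reason: 'stop',
  usage: { prompt_tokens: 150, completion_tokens: 25, total_tokens: 175 }
};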
Pin the OpenAI node output
With the OpenAI node's output panel open, click the pin icon (thumbtack) in the top-right corner of the output panel. The icon turns blue and a 'Pinned' badge appears on the node in the canvas. When data is pinned, executing the workflow skips this node entirely and uses the pinned data as its output. All downstream nodes receive the pinned data as if the API call just happened.
Expected result: The node displays a blue pin icon and a 'Pinned' badge. Subsequent workflow executions skip the API call.
Edit pinned data to test different response shapes
Click the pinned node, then click the pencil icon next to the pin icon to edit the pinned data. You can modify the JSON directly to simulate different scenarios: an empty response, a malformed JSON string, a response with finish_reason set to 'length' (indicating truncation), or a response with unexpected formatting. Save the edited data and re-run the workflow to test how downstream nodes handle each case.
1[2 {3 "message": {4 "role": "assistant",5 "content": "{\"name\": \"Test User\", \"score\": 85}"6 },7 "finish_reason": "stop",8 "usage": {9 "prompt_tokens": 150,10 "completion_tokens": 25,11 "total_tokens": 17512 }13 }14]Expected result: The workflow uses your edited test data, letting you verify how downstream nodes handle different response formats.
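As one possible edit, the same pinned item could be changed to simulate a truncated response: cut the content string short and set finish_reason to "length". This is an illustrative payload, not output captured from the API.

[
  {
    "message": {
      "role": "assistant",
      "content": "{\"name\": \"Test User\", \"sco"
    },
    "finish_reason": "length",
    "usage": {
      "prompt_tokens": 150,
      "completion_tokens": 25,
      "total_tokens": 175
    }
  }
]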
Create a Code node with test fixtures for edge cases
For more structured testing, add a Code node that outputs different test scenarios based on a test flag. Temporarily connect it in place of the OpenAI node. Define multiple response shapes: a normal response, an empty response, a truncated response, and a response with invalid JSON in the content field. Use an environment variable or static data flag to select which fixture to output.
const testCase = 'truncated'; // Change this to test different scenarios

const fixtures = {
  normal: {
    message: { role: 'assistant', content: 'This is a normal response with useful content.' },
    finish_reason: 'stop'
  },
  empty: {
    message: { role: 'assistant', content: '' },
    finish_reason: 'stop'
  },
  truncated: {
    message: { role: 'assistant', content: 'This response was cut off because the token lim' },
    finish_reason: 'length'
  },
  malformed_json: {
    message: { role: 'assistant', content: '{"name": "Test", "data":' },
    finish_reason: 'stop'
  }
};

return [{ json: fixtures[testCase] }];
Expected result: The Code node outputs the selected test fixture, allowing you to test downstream logic without any API calls.
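To exercise the downstream logic itself, the node that consumes these fixtures (or the pinned OpenAI output) could look like the following minimal sketch. It assumes a Code node running in "Run Once for All Items" mode, the message/finish_reason item shape used above, and a prompt that asks the model for JSON output, as in the pinned example from step 3.

// Minimal downstream parser sketch (assumes the item shape shown above).
// Flags empty and truncated responses and guards against malformed JSON content.
const results = [];

for (const item of $input.all()) {
  const content = item.json.message?.content ?? '';
  const finishReason = item.json.finish_reason;

  // Empty content: return a fallback instead of passing blank output downstream
  if (!content.trim()) {
    results.push({ json: { status: 'empty', fallback: 'No answer was generated. Please retry.' } });
    continue;
  }

  // Truncated content: finish_reason 'length' means the model hit the token limit
  if (finishReason === 'length') {
    results.push({ json: { status: 'truncated', partial: content } });
    continue;
  }

  // Content is expected to be a JSON string; catch malformed payloads explicitly
  try {
    results.push({ json: { status: 'ok', parsed: JSON.parse(content) } });
  } catch (err) {
    results.push({ json: { status: 'malformed_json', raw: content, error: err.message } });
  }
}

return results;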
Unpin data and reconnect real nodes for production testing
Once you are confident that downstream nodes handle all edge cases correctly, unpin the OpenAI node by clicking the pin icon again (it turns gray). Remove or disconnect any test fixture Code nodes. Run the full workflow once more with a real API call to verify everything works end-to-end. Save the workflow.
Expected result: The workflow executes the real OpenAI API call and all downstream nodes process the live response correctly.
Complete working example
// Code node: LLM response test fixtures
// Use this node to simulate different OpenAI response scenarios
// without making real API calls

// Set the test case to simulate:
// 'normal', 'empty', 'truncated', 'malformed_json', 'error', 'long'
const testCase = 'normal';

const fixtures = {
  normal: {
    message: {
      role: 'assistant',
      content: 'Based on the data provided, the top three recommendations are: 1) Increase the batch size to 50, 2) Enable caching on the API gateway, 3) Add retry logic for transient failures.'
    },
    finish_reason: 'stop',
    usage: { prompt_tokens: 200, completion_tokens: 45, total_tokens: 245 }
  },
  empty: {
    message: { role: 'assistant', content: '' },
    finish_reason: 'stop',
    usage: { prompt_tokens: 200, completion_tokens: 0, total_tokens: 200 }
  },
  truncated: {
    message: {
      role: 'assistant',
      content: 'The analysis shows that performance degrades significantly when the input exceeds'
    },
    finish_reason: 'length',
    usage: { prompt_tokens: 200, completion_tokens: 4096, total_tokens: 4296 }
  },
  malformed_json: {
    message: {
      role: 'assistant',
      content: '{"result": "success", "items": [{"id": 1, "name":'
    },
    finish_reason: 'stop',
    usage: { prompt_tokens: 200, completion_tokens: 30, total_tokens: 230 }
  },
  error: {
    error: { message: 'Rate limit exceeded', type: 'rate_limit_error', code: 429 }
  },
  long: {
    message: {
      role: 'assistant',
      content: 'A'.repeat(10000)
    },
    finish_reason: 'stop',
    usage: { prompt_tokens: 200, completion_tokens: 3000, total_tokens: 3200 }
  }
};

const output = fixtures[testCase];
if (!output) throw new Error(`Unknown test case: ${testCase}`);

return [{ json: output }];
Common mistakes when testing responses from OpenAI in n8n without re-running the whole workflow
Forgetting to unpin nodes before activating the workflow for production
Why it's a problem: Downstream nodes keep receiving the frozen test data instead of live OpenAI responses, so the activated workflow processes stale output.
How to avoid: Add a checklist step to your deployment process: verify no nodes have the blue pin badge before activation.
Pinning data on a trigger node (Webhook, Schedule) instead of the LLM node
Why it's a problem: The OpenAI node still executes on every run, so you keep spending credits and waiting on the API while you test.
How to avoid: Pin the specific node whose output you want to freeze. Trigger nodes need to fire to start the execution.
Not testing the empty response scenario
Why it's a problem: LLMs occasionally return an empty content string, and downstream nodes that assume content exists will fail or pass blank output along.
How to avoid: Always pin an empty content string to verify your downstream nodes handle it gracefully with a fallback message.
Editing pinned data but not saving before re-running
Why it's a problem: The workflow keeps executing against the previous pinned data, so your edits never reach the downstream nodes and the test results are misleading.
How to avoid: After editing pinned data, click the Save button in the data editor, then execute the workflow.
Best practices
- Always pin data after a successful execution so you have a known-good baseline
- Create test fixtures for at least four scenarios: normal, empty, truncated, and malformed
- Use the finish_reason field to test truncation handling in downstream nodes
- Label pinned nodes clearly in the canvas so team members know they are using test data
- Unpin all nodes before deploying a workflow to production
- Store reusable test fixtures in a separate test workflow that you can copy from
- Test with realistic token counts to verify your billing and usage tracking logic (see the usage-tracking sketch after this list)
- Use the execution history to compare pinned vs live results side by side
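As a minimal sketch of that usage-tracking check, a Code node placed after the pinned (or live) OpenAI node could total the token counts and estimate cost. It assumes "Run Once for All Items" mode and the usage field shape shown earlier; the per-1K-token prices are placeholders you would replace with your model's actual rates.

// Usage-tracking sketch: totals token counts from upstream items and estimates cost.
// PROMPT_PRICE_PER_1K and COMPLETION_PRICE_PER_1K are placeholder values, not real rates.
const PROMPT_PRICE_PER_1K = 0.0005;
const COMPLETION_PRICE_PER_1K = 0.0015;

let promptTokens = 0;
let completionTokens = 0;

for (const item of $input.all()) {
  const usage = item.json.usage ?? {};
  promptTokens += usage.prompt_tokens ?? 0;
  completionTokens += usage.completion_tokens ?? 0;
}

const estimatedCost =
  (promptTokens / 1000) * PROMPT_PRICE_PER_1K +
  (completionTokens / 1000) * COMPLETION_PRICE_PER_1K;

return [{
  json: {
    prompt_tokens: promptTokens,
    completion_tokens: completionTokens,
    total_tokens: promptTokens + completionTokens,
    estimated_cost_usd: Number(estimatedCost.toFixed(6))
  }
}];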
Still stuck?
Copy one of these prompts to get a personalized, step-by-step explanation.
I'm building an n8n workflow with OpenAI. Every time I test changes to the nodes after the OpenAI call, I have to re-run the whole workflow and burn API credits. How can I use data pinning to avoid this?
Show me how to pin the output of an OpenAI node in n8n so I can test downstream nodes without making API calls. Also create a Code node with test fixtures for normal, empty, truncated, and error responses.
Frequently asked questions
Does pinning data in n8n save API credits?
Yes. When a node's output is pinned, n8n skips executing that node entirely. No API call is made, so no credits or tokens are consumed. This is the primary benefit of pinning for LLM workflows.
Can I pin data on multiple nodes at once?
Yes. You can pin output on any number of nodes in a workflow. This is useful when you have multiple API calls and want to skip all of them during downstream testing.
Does pinned data persist after I close and reopen n8n?
Yes. Pinned data is saved as part of the workflow configuration and persists across sessions. It remains pinned until you explicitly unpin it.
Can I export pinned data to share with my team?
Pinned data is included when you export a workflow as JSON. When a team member imports the workflow, the pinned data is preserved.
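For orientation, in recent n8n versions the pinned data appears in the exported workflow JSON under a top-level pinData key, keyed by node name. The exact item structure can differ between versions, so treat this excerpt as illustrative rather than a reference.

{
  "name": "My LLM workflow",
  "nodes": ["..."],
  "connections": {},
  "pinData": {
    "OpenAI": [
      {
        "json": {
          "message": { "role": "assistant", "content": "..." },
          "finish_reason": "stop"
        }
      }
    ]
  }
}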
How do I test error scenarios if the OpenAI node did not produce an error?
Use a Code node with test fixtures to simulate error responses. You can also edit pinned data to match the structure of an error response from the API.
Can RapidDev help optimize my n8n testing workflow?
Yes. RapidDev can set up structured testing patterns for your n8n workflows, including reusable test fixtures, automated validation, and CI/CD integration for workflow testing.
Talk to an Expert
Our team has built 600+ apps. Get personalized help with your project.
Book a free consultation