
How to Fix n8n Webhook Not Delivering Messages to the LLM Node


What you'll learn

  • How to inspect the exact data structure output by the Webhook node
  • How to map webhook fields to the LLM node's expected input format
  • How to use a Code node to transform webhook data for LLM consumption
  • How to debug data flow using execution history and data pinning
Advanced · 11 min read · 20-30 minutes · n8n 1.20+ (self-hosted and Cloud) · March 2026 · RapidDev Engineering Team
TL;DR

When an n8n webhook receives data but the LLM node gets nothing, the problem is usually incorrect data mapping between nodes, the webhook body being nested inside a wrapper object, or the LLM node expecting a specific field name that does not match the webhook output. Fix it by adding a Code node to reshape the webhook data, verifying field paths with the expression editor, and using data pinning to test the pipeline step by step.

Debugging Data Flow From Webhook to LLM Nodes in n8n

A webhook fires and n8n shows a successful execution, but the LLM node either produces no output, throws an error about missing input, or generates a response that ignores the user's message entirely. The root cause is almost always a data mapping mismatch: the webhook output structure does not match what the LLM node expects as input. This tutorial shows you how to trace data through each node, identify where the mapping breaks, and fix it.

Prerequisites

  • A running n8n workflow with a Webhook trigger node and an LLM node (AI Agent, Basic LLM Chain, or similar)
  • A tool to send test webhook requests (curl, Postman, or your application)
  • Basic understanding of JSON data structures
  • Familiarity with n8n execution history and the expression editor

Step-by-step guide

Step 1: Inspect the Webhook node output to see the exact data structure

Send a test request to your webhook and open the execution in n8n. Click the Webhook node and examine the output panel, switching to JSON view to see the raw data structure. Pay attention to how the request body is nested: depending on your Webhook node configuration, the body might be at $json.body, directly at $json, or nested under $json.body.data. The HTTP Method and Response Mode settings also affect the output structure, and the output differs again when the webhook receives form data rather than JSON. Note the exact field names and paths that contain the user's message and any other data you need to pass to the LLM node.

typescript
// Common Webhook output structures:

// JSON body (Content-Type: application/json):
// $json.body.message, $json.body.userId, etc.

// When 'Raw Body' is disabled (default):
// $json = { headers: {...}, params: {...}, query: {...}, body: {...} }

// When 'Raw Body' is enabled:
// $json = { body: "raw string", headers: {...} }

// Form data (Content-Type: application/x-www-form-urlencoded):
// $json.body.field1, $json.body.field2

Expected result: You can see the exact structure of the webhook data including where the user's message is located in the JSON.
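If you prefer to send the test request from code rather than curl or Postman, the sketch below does the job; the URL and payload fields are placeholders, so swap in the Test URL shown on your Webhook node and whatever fields your workflow expects.

typescript
// Minimal test request sender (Node 18+ provides a global fetch).
// TEST_WEBHOOK_URL is a placeholder: copy the Test URL from your Webhook node.
const TEST_WEBHOOK_URL = 'https://your-n8n-host/webhook-test/your-path';

async function sendTestMessage() {
  const response = await fetch(TEST_WEBHOOK_URL, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({
      message: 'Hello from a test request',
      userId: 'test-user-1'
    })
  });
  console.log('Status:', response.status);
  console.log('Body:', await response.text());
}

sendTestMessage().catch(console.error);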

Step 2: Check the LLM node's input requirements

Click the LLM node (AI Agent, Basic LLM Chain, etc.) and examine which input fields it expects. Both the AI Agent and the Basic LLM Chain have a Prompt field that takes the user message. When the prompt is taken from the incoming data rather than defined manually, the AI Agent expects that data to contain a chatInput (or similar) field. Open the node settings and check which expression each input field uses. If the field contains an expression like {{ $json.chatInput }} but the webhook outputs the message at $json.body.message, there is a mismatch: the expression resolves to undefined, the LLM receives no prompt, and the output is empty or generic.
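As a concrete illustration using the paths from step 1, the first expression below resolves to undefined because the Webhook node never outputs a chatInput field, while the second matches a JSON body that contains a message field:

typescript
// Prompt field expression that resolves to undefined —
// the Webhook node output has no 'chatInput' field:
//   {{ $json.chatInput }}

// Prompt field expression that matches the webhook output
// when the JSON body contains a 'message' field:
//   {{ $json.body.message }}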

Expected result: You know exactly which field names and paths the LLM node expects, and can compare them against the webhook output.

Step 3: Add a Code node to reshape webhook data for the LLM node

Insert a Code node between the Webhook and the LLM node that transforms the webhook output into the format the LLM node expects. This is the most reliable fix because it explicitly maps fields regardless of how the webhook structures its output. Set the Code node to Run Once for All Items. Extract the user message from wherever it exists in the webhook data and output it as the field the LLM node expects. Also pass through any other fields needed downstream, like session IDs or metadata. This transformation layer decouples the webhook format from the LLM format, making the workflow resilient to changes on either side.

typescript
// Code node: Webhook to LLM Data Mapper
// Mode: Run Once for All Items

const items = $input.all();
const results = [];

for (const item of items) {
  const data = item.json;
  const body = data.body || data;

  // Extract user message from common locations
  const userMessage = body.message
    || body.text
    || body.content
    || body.prompt
    || body.query
    || body.input
    || '';

  if (!userMessage) {
    console.log('WARNING: No user message found. Available keys:', Object.keys(body));
  }

  // Output in the format the AI Agent expects
  results.push({
    json: {
      chatInput: userMessage,
      sessionId: body.sessionId || body.userId || 'default',
      metadata: {
        source: 'webhook',
        timestamp: new Date().toISOString(),
        ip: (data.headers || {})['x-forwarded-for'] || 'unknown'
      }
    }
  });
}

return results;

Expected result: The Code node outputs data with the correct field names that the LLM node expects, including chatInput for the AI Agent.

Step 4: Verify the connection between nodes carries data correctly

Check that nodes are connected with the correct output-input mapping. In n8n, each node connection carries all output items to the next node's input. However, some nodes have multiple outputs (like the IF node or Switch node), and connecting the wrong output port means no data flows through. Click on the connection line between nodes to verify it is the correct output. Also verify that the LLM node is not configured to receive input from a different source. In the AI Agent node, the main input connector expects the user prompt. If you accidentally connected to a tool input or memory input, the user message will not reach the agent's prompt. Right-click the connection and select Delete, then re-connect from the correct output to the correct input.

Expected result: The connection between the data transformation node and the LLM node uses the correct output and input ports.

Step 5: Test the pipeline with data pinning

Use n8n's data pinning feature to isolate and test each section of the pipeline. First, send a test webhook request and run the workflow. Then pin the output of the Webhook node by clicking the pin icon; now you can run the downstream nodes without sending another webhook request. Pin the Code node's output too, then test the LLM node in isolation. This step-by-step approach reveals exactly where data gets lost or transformed incorrectly. When you need fresh data again, click the pin icon a second time to unpin.

Expected result: You can test each node individually with pinned data and confirm that data flows correctly through the entire pipeline.

Step 6: Handle edge cases with empty or malformed webhook bodies

Not all webhook requests will have a properly formatted body. Add validation in your data mapping Code node to handle missing fields, empty bodies, wrong content types, and unexpected data formats. When validation fails, return a clear error message via the Respond to Webhook node instead of passing empty data to the LLM. This prevents wasted LLM API calls and gives the webhook caller actionable feedback about what went wrong with their request.

typescript
// Code node: Validate Webhook Input
const items = $input.all();

for (const item of items) {
  const body = item.json.body || item.json;
  const message = body.message || body.text || body.content || '';

  if (!message || message.trim() === '') {
    // Flag the item as invalid so it can be routed to an error response
    item.json.isValid = false;
    item.json.error = 'Missing required field: message. Send a JSON body with a "message" field.';
    item.json.chatInput = '';
  } else if (message.length > 10000) {
    item.json.isValid = false;
    item.json.error = 'Message too long. Maximum 10,000 characters.';
    item.json.chatInput = '';
  } else {
    item.json.isValid = true;
    item.json.error = null;
    item.json.chatInput = message.trim();
  }
}

return items;

// Follow with an IF node: {{ $json.isValid }} === true
// True  → AI Agent node
// False → Respond to Webhook with the error

Expected result: Invalid or empty webhook requests return helpful error messages instead of being sent to the LLM node.
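If you route the false branch to a Respond to Webhook node, you can shape the error payload the caller receives with a small Code node placed just before it (for example with Respond With set to First Incoming Item); the sketch below simply echoes the error field set by the validation node above.

typescript
// Optional Code node on the false branch, before Respond to Webhook.
// Builds the error payload returned to the webhook caller.
return $input.all().map(item => ({
  json: {
    success: false,
    error: item.json.error || 'Invalid request'
  }
}));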

Complete working example

webhook-to-llm-mapper.js
// Code node: Production Webhook-to-LLM Data Mapper
// Mode: Run Once for All Items
// Place between Webhook node and AI Agent node

const items = $input.all();
const results = [];

// Supported message field names from common platforms
const MESSAGE_FIELDS = [
  'message', 'text', 'content', 'prompt',
  'query', 'input', 'msg', 'question',
  'user_message', 'userMessage', 'chatInput'
];

const SESSION_FIELDS = [
  'sessionId', 'session_id', 'userId',
  'user_id', 'chat_id', 'chatId',
  'thread_id', 'threadId'
];

function findField(obj, fieldNames) {
  for (const field of fieldNames) {
    // Check top level
    if (obj[field] !== undefined && obj[field] !== null) {
      return { value: String(obj[field]), source: field };
    }
    // Check nested in body
    if (obj.body && obj.body[field] !== undefined) {
      return { value: String(obj.body[field]), source: 'body.' + field };
    }
    // Check nested in data
    if (obj.data && obj.data[field] !== undefined) {
      return { value: String(obj.data[field]), source: 'data.' + field };
    }
  }
  return null;
}

for (const item of items) {
  const data = item.json;

  // Find the message and session identifiers
  const messageResult = findField(data, MESSAGE_FIELDS);
  const sessionResult = findField(data, SESSION_FIELDS);

  const output = {
    chatInput: '',
    sessionId: 'default',
    isValid: false,
    validationError: null,
    debug: {
      availableKeys: Object.keys(data),
      bodyKeys: data.body ? Object.keys(data.body) : [],
      messageSource: null,
      sessionSource: null
    }
  };

  if (messageResult) {
    output.chatInput = messageResult.value.trim();
    output.debug.messageSource = messageResult.source;
    output.isValid = output.chatInput.length > 0;
  }

  if (sessionResult) {
    output.sessionId = sessionResult.value;
    output.debug.sessionSource = sessionResult.source;
  }

  if (!output.isValid) {
    output.validationError = messageResult
      ? 'Message field is empty'
      : 'No message field found. Send JSON with one of: '
        + MESSAGE_FIELDS.join(', ');
  }

  results.push({ json: output });
}

return results;
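As a quick sanity check, two differently shaped payloads normalize to the same output shape; the sample values below are illustrative only.

typescript
// Input A: { "body": { "message": "Hi", "userId": "u-42" } }
// Input B: { "text": "Hi", "session_id": "abc" }
//
// Both produce:
// {
//   "chatInput": "Hi",
//   "sessionId": "u-42",      // or "abc" for Input B
//   "isValid": true,
//   "validationError": null,
//   "debug": { ... }          // records which source fields were used
// }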

Common mistakes when fixing an n8n webhook that does not deliver messages to the LLM node

Mistake: Connecting the webhook output to the AI Agent's tool input instead of the main input

How to avoid: Delete the connection and reconnect from the upstream node's output to the AI Agent's main (top) input connector.

Mistake: Assuming the webhook body is at $json when it is actually nested at $json.body

How to avoid: Inspect the Webhook node output in JSON view to see the exact structure. Use a Code node to extract data from the correct path.

Mistake: Using the production webhook URL during testing without activating the workflow

How to avoid: Use the test URL (containing /webhook-test/) during development. It works without workflow activation.

Mistake: Not handling the case where the webhook caller sends an empty body or the wrong content type

How to avoid: Add validation in the Code node that checks for empty messages and returns an error via Respond to Webhook.

Mistake: Referencing $json.message when the AI Agent node expects chatInput as the field name

How to avoid: Map the webhook's message field to chatInput in the transformation Code node so the AI Agent receives it correctly.

Best practices

  • Always add a data transformation Code node between the Webhook and LLM nodes to decouple their data formats
  • Use the expression editor variable selector to verify available fields instead of guessing path names
  • Set the Webhook Response Mode to 'Using Respond to Webhook Node' for custom error handling
  • Validate webhook input before sending to the LLM to avoid wasted API calls on empty or malformed requests
  • Use data pinning during development to test each node in isolation without resending webhook requests
  • Log available JSON keys in the transformation Code node to quickly identify field names from new callers
  • Use the webhook test URL during development and switch to the production URL only when the workflow is activated
  • Document the expected webhook payload format so API consumers know which fields to send
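One lightweight way to document that payload is a small type definition shared with API consumers; the fields below mirror the mapper in this tutorial and are a suggested contract, not a fixed n8n requirement.

typescript
// Suggested webhook payload contract for callers (field names match the mapper above)
interface ChatWebhookPayload {
  message: string;     // required: the user's message for the LLM
  sessionId?: string;  // optional: conversation identifier for memory
  userId?: string;     // optional: used as a fallback session identifier
}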

Still stuck?

Copy one of these prompts to get a personalized, step-by-step explanation.

ChatGPT Prompt

My n8n webhook receives a JSON body with a 'message' field but the AI Agent node does not see the message. The execution shows the Webhook has data but the AI Agent output is generic. How do I map the webhook body to the AI Agent's expected input format?

n8n Prompt

Add a Code node between my Webhook and AI Agent that extracts the 'message' field from the webhook body, validates it, and outputs it as 'chatInput' for the AI Agent. Include error handling for missing or empty messages.

Frequently asked questions

Why does the Webhook node show data but the next node shows nothing?

Check the connection between nodes. Hover over the connectors to verify they are the correct ports. Some nodes like IF and Switch have multiple outputs, and connecting the wrong one means no data flows on your expected path.

What field name does the AI Agent node expect for the user message?

The AI Agent node typically reads from the chatInput field in the incoming data. If your upstream node outputs the message under a different field name like 'message' or 'text', add a Code node to rename it to chatInput.
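A minimal rename looks like the sketch below; it assumes the upstream node outputs the message under message or text and simply copies it to chatInput.

typescript
// Code node: rename the incoming message field to chatInput
// Mode: Run Once for All Items
return $input.all().map(item => ({
  json: {
    ...item.json,
    chatInput: item.json.message ?? item.json.text ?? ''
  }
}));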

Why does the LLM produce a response that ignores my webhook message?

The LLM received an empty or undefined prompt because the field mapping is wrong. It fell back to responding based only on the system prompt. Check that the expression in the LLM node's prompt field correctly references the webhook's message field.

How do I handle webhooks from different callers with different body structures?

Use the flexible field-finding pattern in the Code node that checks multiple possible field names. The complete code example above demonstrates checking common field names like message, text, content, and prompt.

Can I see the raw HTTP request body in n8n?

Enable the Raw Body option in the Webhook node settings. This adds a rawBody field to the output that contains the exact string received. Useful for debugging content type and encoding issues.

Why does my webhook work with curl but not from my application?

Check the Content-Type header. n8n's Webhook node expects application/json for JSON parsing. If your application sends a different content type, the body may not be parsed correctly. Also verify CORS settings if calling from a browser.

Does the Webhook node preserve the original data types?

JSON data types (strings, numbers, booleans, arrays, objects) are preserved. However, form-encoded data is always received as strings. If you need a number, parse it in a Code node with parseInt or parseFloat.
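For example, form-encoded quantity and price fields arrive as strings and can be converted in a Code node like this (the field names are illustrative):

typescript
// Code node: convert form-encoded string values to numbers
// Mode: Run Once for All Items
return $input.all().map(item => {
  const body = item.json.body || item.json;
  return {
    json: {
      ...body,
      quantity: parseInt(body.quantity, 10), // e.g. "3"    -> 3
      price: parseFloat(body.price)          // e.g. "9.99" -> 9.99
    }
  };
});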

Can RapidDev help build webhook-to-LLM pipelines in n8n?

Yes, RapidDev specializes in building robust n8n integrations with proper data mapping, validation, and error handling between webhook triggers and LLM processing nodes.
