Learn how to send user metadata with prompts to Gemini from n8n using a simple, secure workflow that enhances context and personalization.

You send user metadata to Gemini from n8n by adding it directly to the JSON body of the HTTP Request node that calls the Gemini API. You can attach metadata as an extra top-level field, inject it via system_instruction so the model can see it, or simply embed it in the contents text. n8n does not restrict what JSON you send, so you include the metadata in the request body alongside the prompt.
In real n8n workflows, you typically do one of three things: add the metadata as a custom top-level field, put it into system_instruction so the model can reason over it, or fold it into the prompt text itself.
In practice, Gemini tolerates extra fields as long as the required fields (contents) are valid. This is the easiest and most stable way to attach metadata for downstream logging, auditing, context control, or user-specific behavior.
Below is a valid request body you can use inside an n8n HTTP Request node (POST → https://generativelanguage.googleapis.com/v1beta/models/gemini-pro:generateContent). It embeds metadata in a custom userMetadata field alongside the standard Gemini contents block.
{
  "userMetadata": {
    "userId": "{{$json.userId}}",
    "plan": "{{$json.plan}}",
    "language": "{{$json.language}}"
  },
  "contents": [
    {
      "role": "user",
      "parts": [
        { "text": "{{$json.prompt}}" }
      ]
    }
  ],
  "generationConfig": {
    "temperature": 0.7
  }
}
This works because Gemini’s API has, in practice, ignored unknown top-level fields rather than rejecting them (this behavior is not formally documented, so verify it against the API version you use). At run time, n8n replaces expressions like {{$json.userId}} with actual data from previous nodes.
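If you prefer to assemble the body in an n8n Code node rather than with inline expressions, the same structure can be built programmatically. A minimal Python sketch (the field names userId, plan, language, and prompt are assumptions matching the example above; `item` stands in for the incoming n8n item's JSON):

```python
import json

def build_gemini_body(item: dict) -> str:
    """Assemble the Gemini request body with a custom userMetadata field."""
    body = {
        "userMetadata": {
            "userId": item["userId"],
            "plan": item["plan"],
            "language": item["language"],
        },
        "contents": [
            {"role": "user", "parts": [{"text": item["prompt"]}]}
        ],
        "generationConfig": {"temperature": 0.7},
    }
    # Serialize for use as a raw JSON body in the HTTP Request node
    return json.dumps(body)

# Example item as it might arrive from a previous node
payload = build_gemini_body({
    "userId": "u-123",
    "plan": "pro",
    "language": "en",
    "prompt": "Summarize my last order.",
})
```

Building the body in code avoids expression-escaping pitfalls when prompt text contains quotes or braces.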
If you want Gemini to actually use the metadata in its reasoning (for personalization, policy enforcement, etc.), put it in system_instruction. This field is part of the documented API and production-safe:
{
  "system_instruction": {
    "role": "system",
    "parts": [
      {
        "text": "User metadata:\nUser ID: {{$json.userId}}\nPlan: {{$json.plan}}\nLanguage: {{$json.language}}"
      }
    ]
  },
  "contents": [
    {
      "role": "user",
      "parts": [
        { "text": "{{$json.prompt}}" }
      ]
    }
  ]
}
This makes the metadata visible to the LLM itself, which is sometimes necessary (e.g., enforcing per‑plan rules or personalizing tone).
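Since the metadata inside system_instruction is just formatted text, it can also be rendered from the incoming item in a Code node. A small sketch (the helper name and field names are illustrative, matching the example above):

```python
def metadata_instruction(item: dict) -> dict:
    """Render user metadata as a system_instruction block for Gemini."""
    text = (
        "User metadata:\n"
        f"User ID: {item['userId']}\n"
        f"Plan: {item['plan']}\n"
        f"Language: {item['language']}"
    )
    return {"role": "system", "parts": [{"text": text}]}

instruction = metadata_instruction(
    {"userId": "u-123", "plan": "pro", "language": "en"}
)
```

The returned dict drops straight into the request body under the system_instruction key.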
The core idea is simple: you send metadata by including it in the JSON body of your Gemini API call from the HTTP Request node. n8n lets you structure the JSON freely, and Gemini has been tolerant of extra fields, though the documented system_instruction route is the safest place for metadata the model should actually see.