RapidDev - Software Development Agency

How to Enable Dynamic Content Generation Based on User Input in FlutterFlow

What you'll learn

  • How to build an input form with a prompt field and output-type selector
  • How to call an AI API from a Firebase Cloud Function triggered by FlutterFlow
  • How to store and reuse prompt templates in Firestore
  • How to display, edit, and save AI-generated content within the app
Beginner · 10 min read · 30-45 min build time · FlutterFlow Free+ · March 2026 · RapidDev Engineering Team
TL;DR

Connect a user input form in FlutterFlow to a Firebase Cloud Function that calls the OpenAI or Claude API with a structured system prompt. Store reusable prompt templates in Firestore, pass the user's input as the user message, display the generated content in an editable text field, and let users save the result back to their Firestore collection.

AI-Powered Content Generation Inside Your FlutterFlow App

FlutterFlow apps can call AI APIs through Firebase Cloud Functions, keeping API keys secure on the server and giving you full control over the system prompt. The most common mistake is forwarding the user's raw text directly to the API without a system prompt — this produces wildly inconsistent output quality and format. Instead, maintain a prompt_templates Firestore collection that maps output types (blog post, product description, email) to pre-written system prompts. The Cloud Function looks up the right template, injects user input as the user message, and streams or returns the AI response. The FlutterFlow UI then displays the result in an editable field so users can refine and save it.

Prerequisites

  • FlutterFlow project with Firebase connected
  • Firebase project with Cloud Functions enabled (Blaze plan required)
  • OpenAI or Anthropic API key stored in Cloud Functions environment config
  • Basic familiarity with FlutterFlow Action Flows and API calls
  • Node.js 18+ for writing the Cloud Function

Step-by-step guide

Step 1: Create the prompt_templates Firestore collection

Open the Firebase console and create a prompt_templates collection. Each document's ID is the output type slug (e.g., blog_post, product_description, support_email). Each document contains: display_name (String — shown in the UI dropdown), system_prompt (String — the full system prompt sent to the AI), max_tokens (Integer — caps response length), temperature (Number — 0.0 to 1.0). Example system prompt for blog_post: 'You are an expert content writer. Write a well-structured blog post with an engaging introduction, three supporting sections, and a conclusion. Use active voice and conversational tone. Output only the blog post text without any preamble.' Storing templates in Firestore means you can update AI behaviour for all users instantly without redeploying code.

Expected result: Firestore shows a prompt_templates collection with at least three template documents visible in the console.
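As a concrete reference, the document shape described above can be expressed in TypeScript. The seedTemplates data and validateTemplate helper below are illustrative only — you would create the actual documents in the Firebase console or via a firebase-admin seed script:

```typescript
// Sketch of the prompt_templates document shape described above.
// seedTemplates and validateTemplate are illustrative names, not part
// of any FlutterFlow or Firebase API.

interface PromptTemplate {
  display_name: string;  // shown in the UI dropdown
  system_prompt: string; // full system prompt sent to the AI
  max_tokens: number;    // caps response length
  temperature: number;   // 0.0 to 1.0
}

const seedTemplates: Record<string, PromptTemplate> = {
  blog_post: {
    display_name: 'Blog post',
    system_prompt:
      'You are an expert content writer. Write a well-structured blog post ' +
      'with an engaging introduction, three supporting sections, and a ' +
      'conclusion. Use active voice and conversational tone. Output only ' +
      'the blog post text without any preamble.',
    max_tokens: 800,
    temperature: 0.7,
  },
  product_description: {
    display_name: 'Product description',
    system_prompt:
      'You are a conversion copywriter. Write a concise product description ' +
      'with a hook, three benefit bullets, and a call to action. Output only ' +
      'the description text.',
    max_tokens: 400,
    temperature: 0.6,
  },
  support_email: {
    display_name: 'Support email',
    system_prompt:
      'You are a customer support specialist. Write a polite, clear support ' +
      'email that addresses the issue described. Output only the email body.',
    max_tokens: 400,
    temperature: 0.5,
  },
};

// Reject documents that would break the Cloud Function at runtime.
function validateTemplate(t: PromptTemplate): boolean {
  return (
    t.display_name.length > 0 &&
    t.system_prompt.length > 0 &&
    t.max_tokens > 0 &&
    t.temperature >= 0 && t.temperature <= 1
  );
}
```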

Step 2: Build the content generation form in FlutterFlow

In FlutterFlow, create a new page named GenerateContentPage. Add a TextField widget named promptInput with placeholder text 'Describe what you want to generate...' and set maxLines to 5. Below it add a DropdownButton widget bound to the prompt_templates Firestore collection — display display_name and store the document ID as the selected value in a page state variable selectedTemplateId. Add a Generate button and a separate output area: a multiline TextField named outputField that starts disabled, a CircularProgressIndicator shown while generation is in progress (controlled by a page state boolean isLoading), and a Save button that appears only when outputField has content. Use conditional visibility on all three output-area widgets tied to appropriate page state variables.

Expected result: The page renders with a prompt input, template dropdown, Generate button, and a placeholder output area that updates based on loading state.

Step 3: Deploy the Cloud Function to call the AI API

Create a Cloud Function named generateContent that accepts a POST request with body { templateId: string, userPrompt: string, userId: string }. The function reads the template document from Firestore, calls the OpenAI chat completions endpoint with the template's system_prompt as the system message and the user's userPrompt as the user message, and returns { content: string, tokens_used: number }. Set the OpenAI API key with firebase functions:config:set openai.key='sk-...' and read it inside the function with functions.config().openai.key. Deploy with firebase deploy --only functions. In FlutterFlow, add the function's URL as a Custom API Call under API Calls: POST method, a JSON body with templateId, userPrompt, and userId fields, and an output variable mapped to the response's content field (JSON path $.content).

functions/src/generateContent.ts
import * as functions from 'firebase-functions';
import * as admin from 'firebase-admin';
import OpenAI from 'openai';

admin.initializeApp();
const db = admin.firestore();

export const generateContent = functions.https.onRequest(async (req, res) => {
  res.set('Access-Control-Allow-Origin', '*');
  if (req.method === 'OPTIONS') { res.status(204).send(''); return; }

  const { templateId, userPrompt, userId } = req.body;
  if (!templateId || !userPrompt || !userId) {
    res.status(400).json({ error: 'Missing required fields' });
    return;
  }

  const templateSnap = await db.collection('prompt_templates').doc(templateId).get();
  if (!templateSnap.exists) {
    res.status(404).json({ error: 'Template not found' });
    return;
  }
  const template = templateSnap.data()!;

  const openai = new OpenAI({ apiKey: functions.config().openai.key });
  const completion = await openai.chat.completions.create({
    model: 'gpt-4o-mini',
    messages: [
      { role: 'system', content: template.system_prompt },
      { role: 'user', content: userPrompt },
    ],
    max_tokens: template.max_tokens ?? 800,
    temperature: template.temperature ?? 0.7,
  });

  const content = completion.choices[0].message.content ?? '';
  const tokens_used = completion.usage?.total_tokens ?? 0;

  // Log usage to Firestore for cost tracking
  await db.collection('generation_logs').add({
    user_id: userId,
    template_id: templateId,
    tokens_used,
    created_at: admin.firestore.FieldValue.serverTimestamp(),
  });

  res.json({ content, tokens_used });
});

Expected result: firebase deploy succeeds; calling the function URL with a test payload returns a generated content string.
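Before wiring up FlutterFlow, it can help to pin down the request/response contract in code. The helpers below mirror the JSON shapes the function above accepts and returns; buildGenerateRequest and parseGenerateResponse are hypothetical names used here for illustration:

```typescript
// Mirrors the JSON contract of the generateContent Cloud Function.
// buildGenerateRequest and parseGenerateResponse are illustrative helper
// names, not FlutterFlow or Firebase APIs.

interface GenerateRequest {
  templateId: string;
  userPrompt: string;
  userId: string;
}

interface GenerateResponse {
  content: string;
  tokens_used: number;
}

// Build the POST body exactly as the Cloud Function expects it.
function buildGenerateRequest(
  templateId: string,
  userPrompt: string,
  userId: string
): GenerateRequest {
  if (!templateId || !userPrompt || !userId) {
    throw new Error('templateId, userPrompt, and userId are required');
  }
  return { templateId, userPrompt, userId };
}

// Parse and sanity-check the function's JSON response.
function parseGenerateResponse(json: unknown): GenerateResponse {
  const obj = json as Partial<GenerateResponse>;
  if (typeof obj.content !== 'string' || typeof obj.tokens_used !== 'number') {
    throw new Error('Unexpected response shape from generateContent');
  }
  return { content: obj.content, tokens_used: obj.tokens_used };
}
```

On Node 18+ you could smoke-test the deployed function by POSTing JSON.stringify(buildGenerateRequest(...)) to its URL with fetch and running parseGenerateResponse on the JSON body before touching FlutterFlow at all.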

Step 4: Wire the Generate button to the Cloud Function in Action Flow

In FlutterFlow, select the Generate button and open its Action Flow. Add these actions in sequence: (1) Update page state isLoading to true. (2) Call the generateContent API Call, passing promptInput.text as userPrompt, selectedTemplateId as templateId, and Current User UID as userId. (3) If the API response code is 200, set outputField's initial value to response.content and set outputField enabled to true. (4) If the response code is not 200, show a SnackBar with the error message. (5) Update page state isLoading to false. The CircularProgressIndicator shown when isLoading is true gives the user feedback during the typically 2-5 second AI response time.

Expected result: Pressing Generate shows a spinner, then populates the output field with AI-generated text within a few seconds.

Step 5: Add Save and copy controls for generated content

Select the Save button and add an Action Flow: create a new document in a generated_content Firestore collection with fields user_id (Current User UID), template_id (selectedTemplateId), prompt (promptInput.text), content (outputField.text), created_at (server timestamp). After the create action, show a SnackBar confirming save success. Add a secondary IconButton with a copy icon next to the output field — its action calls the Clipboard Copy action on outputField.text. Finally, add a Saved Content page with a ListView bound to the generated_content collection filtered by user_id equals Current User UID, showing each item's template type and a truncated preview of the content.

Expected result: Tapping Save creates a Firestore document; the Saved Content page displays the saved item in the list.
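The truncated preview for the Saved Content list boils down to a small string helper. A sketch follows; truncatePreview is an illustrative name, and in FlutterFlow you would implement the same logic as a Dart Custom Function:

```typescript
// Truncate saved content to a short list preview without cutting a word
// in half. truncatePreview is an illustrative helper; FlutterFlow would
// host the equivalent logic as a Custom Function in Dart.
function truncatePreview(content: string, maxChars = 80): string {
  // Collapse newlines and repeated whitespace so the preview is one line.
  const collapsed = content.replace(/\s+/g, ' ').trim();
  if (collapsed.length <= maxChars) return collapsed;
  const cut = collapsed.slice(0, maxChars);
  const lastSpace = cut.lastIndexOf(' ');
  const head = lastSpace > 0 ? cut.slice(0, lastSpace) : cut;
  return head + '…';
}
```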

Complete working example

functions/src/generateContent.ts
import * as functions from 'firebase-functions';
import * as admin from 'firebase-admin';
import OpenAI from 'openai';

admin.initializeApp();
const db = admin.firestore();

const CORS_HEADERS = {
  'Access-Control-Allow-Origin': '*',
  'Access-Control-Allow-Methods': 'POST, OPTIONS',
  'Access-Control-Allow-Headers': 'Content-Type, Authorization',
};

export const generateContent = functions
  .runWith({ timeoutSeconds: 60, memory: '256MB' })
  .https.onRequest(async (req, res) => {
    Object.entries(CORS_HEADERS).forEach(([k, v]) => res.set(k, v));
    if (req.method === 'OPTIONS') { res.status(204).send(''); return; }
    if (req.method !== 'POST') { res.status(405).json({ error: 'Method not allowed' }); return; }

    const { templateId, userPrompt, userId } = req.body;
    if (!templateId || !userPrompt || !userId) {
      res.status(400).json({ error: 'templateId, userPrompt, and userId are required' });
      return;
    }
    if (userPrompt.length > 2000) {
      res.status(400).json({ error: 'Prompt exceeds 2000 character limit' });
      return;
    }

    const templateSnap = await db.collection('prompt_templates').doc(templateId).get();
    if (!templateSnap.exists) {
      res.status(404).json({ error: 'Template not found' });
      return;
    }
    const template = templateSnap.data()!;
    if (template.is_active === false) {
      res.status(400).json({ error: 'This template is currently unavailable' });
      return;
    }

    const openai = new OpenAI({ apiKey: functions.config().openai.key });

    let content = '';
    let tokens_used = 0;
    try {
      const completion = await openai.chat.completions.create({
        model: 'gpt-4o-mini',
        messages: [
          { role: 'system', content: template.system_prompt },
          { role: 'user', content: userPrompt },
        ],
        max_tokens: template.max_tokens ?? 800,
        temperature: template.temperature ?? 0.7,
      });
      content = completion.choices[0].message.content ?? '';
      tokens_used = completion.usage?.total_tokens ?? 0;
    } catch (err: any) {
      console.error('OpenAI error:', err.message);
      res.status(502).json({ error: 'AI service error. Please try again.' });
      return;
    }

    // Log usage to Firestore for cost tracking
    await db.collection('generation_logs').add({
      user_id: userId,
      template_id: templateId,
      tokens_used,
      prompt_length: userPrompt.length,
      created_at: admin.firestore.FieldValue.serverTimestamp(),
    });

    res.status(200).json({ content, tokens_used });
  });

Common mistakes when enabling Dynamic Content Generation Based on User Input in FlutterFlow

Mistake: Sending the user's raw input directly to the AI API without a system prompt.

How to avoid: Always include a detailed system prompt that specifies format, tone, length constraints, and what the AI should output. Store these in Firestore so they can be updated without redeploying.

Mistake: Calling the AI API directly from FlutterFlow's built-in API Calls using the raw API key.

How to avoid: Route all AI API calls through a Firebase Cloud Function. Store the API key in Firebase Functions environment config, not in FlutterFlow's API settings.

Mistake: Not capping the max_tokens parameter on AI responses.

How to avoid: Set max_tokens per template in Firestore (e.g., 400 for emails, 800 for blog posts) and pass it to the API call. Also validate prompt length on the server before calling the API.

Mistake: Showing a disabled text field for the output instead of an editable one.

How to avoid: Render the output in an enabled, multiline TextField from the start. Users can edit the content in-place and then save the final version.

Best practices

  • Store all system prompts in Firestore rather than hardcoding them — this lets you improve output quality without redeploying the app.
  • Always validate prompt length on the server (max 2,000 characters) to prevent abuse and unexpected API costs.
  • Log every generation to a Firestore generation_logs collection with user ID, template, token count, and timestamp for cost monitoring.
  • Show a loading indicator during generation — AI responses typically take 2-8 seconds and users will think the app is broken without feedback.
  • Make the output field editable so users can refine generated content before saving.
  • Use firebase functions:config for API keys — never put keys in FlutterFlow's API Call headers or in client-side code.
  • Rate-limit the Cloud Function per user (e.g., max 20 generations per day) by checking the generation_logs count before calling the AI.
  • Test all templates with edge-case prompts (very short, very long, non-English, offensive) before shipping to ensure the system prompt handles them gracefully.
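The per-user rate limit suggested above reduces to counting a user's recent log entries before calling the AI. A minimal sketch of that check, assuming you have already queried the user's generation_logs timestamps from Firestore; isUnderDailyLimit is an illustrative name:

```typescript
// Decide whether a user may generate again, given the timestamps
// (milliseconds since epoch) of their previous generations. In the Cloud
// Function you would obtain these via a Firestore query on generation_logs
// filtered by user_id and created_at; isUnderDailyLimit is an illustrative
// helper name, not part of any SDK.
const DAY_MS = 24 * 60 * 60 * 1000;

function isUnderDailyLimit(
  generationTimestamps: number[],
  now: number,
  dailyLimit = 20
): boolean {
  const recent = generationTimestamps.filter((t) => now - t < DAY_MS).length;
  return recent < dailyLimit;
}
```

In the Cloud Function you would run this check after request validation and return a 429 status when it fails, so FlutterFlow can show a "daily limit reached" SnackBar.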

Still stuck?

Copy one of these prompts to get a personalized, step-by-step explanation.

ChatGPT Prompt

I am building a dynamic content generation feature in a FlutterFlow app backed by Firebase Cloud Functions. The Cloud Function reads a system prompt from a Firestore prompt_templates collection and calls the OpenAI API. Write the complete Cloud Function in TypeScript that: reads the template by ID, validates the user prompt length (max 2,000 chars), calls gpt-4o-mini with the template's system_prompt and temperature, logs usage to a generation_logs collection, and returns the generated content with CORS headers for FlutterFlow.

FlutterFlow Prompt

In FlutterFlow, I have a Cloud Function API Call named generateContent that returns a JSON object with a content field. Build the Action Flow for a Generate button that: sets page state isLoading to true, calls the generateContent API with the prompt TextField's value and a Dropdown's selected value, on success sets an output TextField's text to the response content field and enables the TextField, on failure shows a SnackBar with the error, then sets isLoading to false.

Frequently asked questions

Can I use Claude (Anthropic) instead of OpenAI in the Cloud Function?

Yes. Replace the OpenAI SDK with @anthropic-ai/sdk. The call structure is similar: pass the system prompt as the system parameter and the user message in the messages array. Store the Anthropic API key in firebase functions:config:set anthropic.key='sk-ant-...' and update the function to read it from there.
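The parameter mapping between the two SDKs can be sketched as follows. The buildClaudeParams helper and the claude-3-haiku-20240307 model string are illustrative choices, not part of the tutorial's code:

```typescript
// Map a Firestore prompt template onto Anthropic's messages API shape.
// Unlike OpenAI, the system prompt is a top-level `system` parameter and
// max_tokens is required. buildClaudeParams is an illustrative helper name;
// the model string is one example choice.
interface Template {
  system_prompt: string;
  max_tokens?: number;
  temperature?: number;
}

function buildClaudeParams(template: Template, userPrompt: string) {
  return {
    model: 'claude-3-haiku-20240307',
    max_tokens: template.max_tokens ?? 800,
    temperature: template.temperature ?? 0.7,
    system: template.system_prompt,
    messages: [{ role: 'user' as const, content: userPrompt }],
  };
}
```

You would pass this object to anthropic.messages.create and read the generated text from the response's content blocks rather than from choices[0].message.content as with OpenAI.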

How do I prevent users from generating offensive or harmful content?

Add OpenAI's moderation endpoint as a pre-flight check inside the Cloud Function. Call POST https://api.openai.com/v1/moderations with the user prompt before passing it to the completion API. If the moderation result flags the content, return a 400 error to FlutterFlow and show an appropriate message.
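A sketch of that pre-flight check follows. The { results: [{ flagged }] } shape mirrors OpenAI's moderation response; buildModerationBody and isPromptFlagged are illustrative helper names:

```typescript
// Sketch of the moderation pre-flight check described above.
// buildModerationBody and isPromptFlagged are illustrative names; the
// response shape follows OpenAI's moderation endpoint.
function buildModerationBody(userPrompt: string) {
  return { input: userPrompt };
}

interface ModerationResponse {
  results: Array<{ flagged: boolean }>;
}

// True if any moderation result flags the prompt.
function isPromptFlagged(body: ModerationResponse): boolean {
  return body.results.some((r) => r.flagged);
}
```

Inside generateContent, you would POST buildModerationBody(userPrompt) to the moderation endpoint before the completion call and return res.status(400) when isPromptFlagged is true.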

Why does the generation sometimes time out in FlutterFlow?

FlutterFlow API calls have a default timeout of 10 seconds, but GPT-4 or Claude calls on long prompts can take 15-30 seconds. Set the Cloud Function's timeout to 60 seconds in functions.runWith({ timeoutSeconds: 60 }), and on the FlutterFlow side increase the API call timeout in the API Call settings to 30,000 ms.

Can I stream the AI response word-by-word like ChatGPT does?

FlutterFlow's standard API Calls do not support streaming responses. To stream, you need a Custom Action that opens an HTTP streaming connection using Dart's http package and updates a page state variable character by character. This requires code export or a Custom Action with full Dart access.

How do I add a content generation counter or paywall?

Store a generations_used integer on the user's Firestore profile document. Increment it in the Cloud Function after each successful generation. In FlutterFlow, read this value on app start and store it in App State. Check it before showing the Generate button — if it exceeds the free limit, redirect to an upgrade screen instead.

What is the cheapest AI model to use for this feature?

GPT-4o-mini is the best price-to-quality ratio for most content tasks at roughly $0.15 per million input tokens. Claude Haiku is similarly priced. Avoid GPT-4 or Claude Opus for simple content generation — they cost 20x more and the quality difference is minimal for structured output tasks.
