
How to Integrate OpenAI GPT with V0


What you'll learn

  • How to generate a chat or completion UI using V0 prompts
  • How to create a secure Next.js API route that calls the OpenAI API
  • How to stream GPT responses to the browser using the Vercel AI SDK
  • How to store your OPENAI_API_KEY safely in Vercel environment variables
  • How to deploy and test your AI-powered app on Vercel
Intermediate · 13 min read · 20 minutes to complete · AI/ML · March 2026 · RapidDev Engineering Team
TL;DR

To use OpenAI GPT with V0 by Vercel, generate your chat UI in V0, then add a Next.js API route at app/api/chat/route.ts built on the Vercel AI SDK (the ai package and its @ai-sdk/openai provider). Store your OPENAI_API_KEY in the Vercel Dashboard's environment variables. The AI SDK streams responses so users see output token by token, exactly like ChatGPT.

Adding OpenAI GPT to Your V0-Generated Next.js App

V0 by Vercel is one of the fastest ways to build a polished React UI for an AI-powered product. With a single prompt you can generate a chat interface, a content generator, or a Q&A widget — but V0 only generates the frontend. To actually call GPT, you need a server-side API route so your OpenAI secret key never appears in client-side JavaScript.

Next.js API routes (under app/api/) run as serverless functions on Vercel. This is the perfect place to call the OpenAI API: forward the user's messages, keep the secret key on the server, and return GPT's response. For chat interfaces you almost always want streaming — it makes the app feel instant and responsive. The Vercel AI SDK (the ai package) wraps OpenAI's streaming API and provides a React hook called useChat that wires everything together in a few lines.

This tutorial walks you from a blank V0 project to a fully deployed, streaming GPT chat app. You will generate the UI in V0, add the API route, configure your key in Vercel, and ship. The same pattern works for any GPT use case: chatbots, writing assistants, email generators, code explainers, and more.

Integration method

Next.js API Route

V0 generates the React chat UI component. You then add a Next.js API route that calls the OpenAI API server-side, keeping your API key secure. The Vercel AI SDK handles streaming so responses appear word-by-word in the browser.

Prerequisites

  • A V0 account at v0.dev — free tier works for this tutorial
  • An OpenAI platform account at platform.openai.com with a funded API key
  • A Vercel account (free) — needed to set environment variables and deploy
  • Basic familiarity with copy-pasting code into V0's editor panel
  • Your OPENAI_API_KEY copied from platform.openai.com/api-keys

Step-by-step guide

1

Generate the Chat UI in V0

Open V0 at v0.dev and start a new project. In the chat panel, describe the chat interface you want. Be specific about layout: a scrollable message list, distinct styling for user vs assistant messages, a text input that expands as you type, and a send button. V0 generates a React component using Tailwind CSS and shadcn/ui out of the box. Once you see the preview, use V0's Design Mode (Option+D) to tweak colors, spacing, and typography without spending any credits. If the generated component doesn't quite match your vision, iterate by sending follow-up prompts like 'make the assistant messages have a light blue background' or 'add a loading spinner while waiting for a response'. When you're happy with the UI, open the Code panel and verify the component is calling a fetch or using the useChat hook from the ai package. If it isn't wired up to an API yet, that's fine — you'll connect it in a later step. The key goal of this step is to nail the visual design before touching backend code.

V0 Prompt

Create a full-screen chat interface with a scrollable message list, user messages aligned right with a blue background, assistant messages aligned left with a gray background, a sticky text input at the bottom with a send button, and a 'Thinking...' indicator that shows while waiting for a response. The component should manage messages in local state.


Pro tip: Use V0's Design Mode (Option+D) to adjust colors and spacing for free — no credits consumed for visual tweaks.

Expected result: A polished chat UI renders in the V0 preview with styled message bubbles, a text input, and a send button. No API calls are wired up yet.

2

Install Dependencies and Create the API Route

Switch to V0's code editor (the Code panel on the right, or Dev Mode if you're on a paid plan) and open the package.json file. You need two packages: ai, the Vercel AI SDK, and @ai-sdk/openai, its OpenAI provider. Add both to your dependencies. Next, create a new file at app/api/chat/route.ts. This file will export a POST handler — a serverless function that Vercel runs every time the frontend sends a message. The handler receives the conversation history as a JSON array of messages, each with a role ('user' or 'assistant') and a content string. It passes those messages to the AI SDK's streamText helper along with the openai('gpt-4o') model, which reads process.env.OPENAI_API_KEY (you'll set that in Vercel shortly) and calls OpenAI's streaming chat completions API under the hood, then pipes the stream back to the browser. This streaming approach means the browser starts rendering tokens before GPT finishes generating the full response — exactly the experience users expect from modern AI apps. Include a model name (gpt-4o is recommended for quality, gpt-4o-mini for speed and cost), a system prompt describing your app's purpose, and the messages array from the request body.

V0 Prompt

Add two dependencies to package.json: ai and @ai-sdk/openai. Then create a Next.js API route at app/api/chat/route.ts that exports an edge-runtime POST handler. It should read { messages } from the request body, call streamText from the 'ai' package with the openai('gpt-4o') model from '@ai-sdk/openai' and a short system prompt, and return result.toDataStreamResponse().


app/api/chat/route.ts
import { openai } from '@ai-sdk/openai';
import { streamText } from 'ai';

// Run on Vercel's Edge Network for faster cold starts.
export const runtime = 'edge';

export async function POST(req: Request) {
  // The useChat hook sends the conversation history as { messages }.
  const { messages } = await req.json();

  const result = await streamText({
    model: openai('gpt-4o'), // authenticates via OPENAI_API_KEY in the environment
    system: 'You are a helpful assistant. Be concise and friendly.',
    messages,
  });

  // Stream tokens back to the browser as they arrive.
  return result.toDataStreamResponse();
}

Pro tip: Setting export const runtime = 'edge' makes this route run on Vercel's Edge Network — faster cold starts and global distribution.

Expected result: The file app/api/chat/route.ts exists with a POST handler that imports from @ai-sdk/openai and ai. The package.json includes ai and @ai-sdk/openai as dependencies.

3

Add Your OpenAI API Key to Vercel

Your API key must never appear in client-side JavaScript — it would be visible to anyone who opens the browser's network tab. The correct place to store it is Vercel's environment variables, which are made available to your serverless functions at runtime but never sent to the browser. To set this up, push your V0 project to GitHub (use V0's Git panel to connect and push), then open your project in the Vercel Dashboard at vercel.com/dashboard. Navigate to your project, then go to Settings → Environment Variables. Click Add New, set the key name to OPENAI_API_KEY, paste your key from platform.openai.com/api-keys as the value, and make sure all three environments are checked (Production, Preview, Development). Click Save. Note that if you add or change environment variables after the last deploy, you need to redeploy for the changes to take effect — Vercel will prompt you to do this. On your local machine you can also create a .env.local file (never commit this to git) with OPENAI_API_KEY=sk-... for running the dev server locally with next dev.

.env.local
# .env.local (never commit this file)
OPENAI_API_KEY=sk-your-key-here
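If you want a misconfigured deploy to fail loudly instead of surfacing as a generic 500, you can add a small guard at the top of the Step 2 route. A minimal sketch; the error wording is just a suggestion:

app/api/chat/route.ts (optional guard)
// Fail fast with a readable message if the key is missing from the environment.
if (!process.env.OPENAI_API_KEY) {
  throw new Error(
    'OPENAI_API_KEY is not set. Add it in Vercel under Settings → Environment Variables.'
  );
}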

Pro tip: Add .env.local to your .gitignore immediately. If you accidentally push a key to GitHub, revoke it on platform.openai.com/api-keys right away.

Expected result: OPENAI_API_KEY is set in Vercel Dashboard under Settings → Environment Variables. The .env.local file exists locally for development but is listed in .gitignore.

4

Wire the Frontend to the API Route

Now connect the chat UI component you generated in Step 1 to the API route you created in Step 2. The cleanest approach is the useChat hook from the Vercel AI SDK's ai/react package. This hook manages the messages array, the current input value, and the loading state automatically. It sends a POST request to /api/chat (or any path you specify) whenever the user submits the form. The messages array from useChat is already formatted correctly for OpenAI — each item has a role and content field. The hook also handles streaming: as tokens arrive from the API, it updates the last assistant message character by character, triggering React re-renders that feel like live typing. In your chat component, replace any manual fetch calls or local state with the useChat hook. Pass messages to your message list renderer and bind input and handleSubmit to the form. Add a conditional rendering block that shows a subtle spinner or 'Assistant is typing...' indicator when isLoading from useChat is true. Test the flow in V0's preview — you should see the input, the submit, and (once deployed with the real key) streaming responses. For the preview to work end-to-end you'll need to deploy first since the preview environment doesn't have access to Vercel environment variables.

V0 Prompt

Refactor the chat component to import { useChat } from 'ai/react'. Use the messages, input, handleInputChange, handleSubmit, and isLoading values from useChat({ api: '/api/chat' }). Show a pulsing dot animation when isLoading is true. Remove all manual fetch logic and useState for messages.


app/components/Chat.tsx
'use client';

import { useChat } from 'ai/react';

export default function Chat() {
  const { messages, input, handleInputChange, handleSubmit, isLoading } =
    useChat({ api: '/api/chat' });

  return (
    <div className="flex flex-col h-screen max-w-2xl mx-auto p-4">
      <div className="flex-1 overflow-y-auto space-y-4 mb-4">
        {messages.map((m) => (
          <div
            key={m.id}
            className={`flex ${
              m.role === 'user' ? 'justify-end' : 'justify-start'
            }`}
          >
            <div
              className={`rounded-lg px-4 py-2 max-w-[80%] ${
                m.role === 'user'
                  ? 'bg-blue-500 text-white'
                  : 'bg-gray-100 text-gray-900'
              }`}
            >
              {m.content}
            </div>
          </div>
        ))}
        {isLoading && (
          <div className="flex justify-start">
            <div className="bg-gray-100 rounded-lg px-4 py-2 text-gray-500">
              Thinking...
            </div>
          </div>
        )}
      </div>

      <form onSubmit={handleSubmit} className="flex gap-2">
        <input
          value={input}
          onChange={handleInputChange}
          placeholder="Type a message..."
          className="flex-1 border rounded-lg px-4 py-2 focus:outline-none focus:ring-2 focus:ring-blue-500"
        />
        <button
          type="submit"
          disabled={isLoading}
          className="bg-blue-500 text-white px-4 py-2 rounded-lg disabled:opacity-50"
        >
          Send
        </button>
      </form>
    </div>
  );
}

Pro tip: The useChat hook from ai/react automatically batches UI updates during streaming, keeping the component performant even as tokens arrive rapidly.

Expected result: The chat component uses useChat, displays a message list, and shows a loading state. When the app is deployed, sending a message calls the API route and streams back GPT's response in real time.

5

Deploy to Vercel and Test

With the API route and environment variable in place, it's time to deploy. If you connected V0 to GitHub in the previous step, Vercel will automatically trigger a new deployment when you push code. You can also trigger a manual deployment from the Vercel Dashboard by clicking the Redeploy button. Wait for the deployment to finish (usually 30–60 seconds for a Next.js app). Once it shows a green checkmark, open the production URL and test the chat interface end-to-end. Type a message and watch the streaming response appear token-by-token. Check the Vercel Dashboard → Functions tab to see your API route listed as a serverless function — you can view invocation logs here if something goes wrong. Common first-deploy issues include a missing environment variable (the API returns a 500 with 'API key not found'), a wrong model name, or a package that wasn't added to package.json. The Vercel deployment logs (Dashboard → Deployments → click your deploy → View Logs) will show exactly which error occurred. If the chat works but feels slow, consider switching to gpt-4o-mini for lower latency, or verify that export const runtime = 'edge' is set in your API route for faster cold starts. For production apps with heavy usage, RapidDev can help you add rate limiting, conversation persistence with a database, and usage-based billing.
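Once the deployment is live you can also hit the route directly and confirm that tokens arrive incrementally. A minimal sketch of a local test script, assuming the tsx runner (npx tsx smoke-test.ts); the URL is a placeholder for your own deployment:

smoke-test.ts
// Hypothetical smoke test: POST a message and print the stream as it arrives.
const res = await fetch('https://your-app.vercel.app/api/chat', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({ messages: [{ role: 'user', content: 'Say hello' }] }),
});

// Chunks should print incrementally (in the AI SDK's data-stream wire format),
// not all at once after a long pause.
const reader = res.body!.getReader();
const decoder = new TextDecoder();
while (true) {
  const { done, value } = await reader.read();
  if (done) break;
  process.stdout.write(decoder.decode(value));
}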

Pro tip: Use the Vercel Functions tab in your project dashboard to monitor real-time invocations and debug errors after deployment.

Expected result: The deployed app streams GPT responses in the browser. The Vercel Functions tab shows the /api/chat route being invoked. The OPENAI_API_KEY is used server-side and never exposed in the browser.

Common use cases

Customer Support Chatbot

Build a chat widget that answers product questions using a system prompt containing your documentation. Users get instant AI-powered answers without leaving your site, reducing support tickets significantly.

V0 Prompt

Create a full-screen chat UI with a message list on the left, a text input at the bottom, and a send button. Messages should show a user avatar and an AI avatar. The component should call POST /api/chat with { messages } and stream the response.

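Server-side, the only change from the Step 2 route is the system prompt. A minimal sketch, where PRODUCT_DOCS is a hypothetical placeholder for your own documentation text:

app/api/chat/route.ts (support-bot variant)
import { openai } from '@ai-sdk/openai';
import { streamText } from 'ai';

// In a real app you might load this from a file, CMS, or database.
const PRODUCT_DOCS = '...your documentation text here...';

export async function POST(req: Request) {
  const { messages } = await req.json();

  const result = await streamText({
    model: openai('gpt-4o'),
    // Ground every answer in the docs so the bot stays on-topic.
    system: `You are a support agent for our product. Answer only from these docs:\n\n${PRODUCT_DOCS}`,
    messages,
  });

  return result.toDataStreamResponse();
}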

AI Content Generator

Let users fill in a form — blog topic, tone, target audience — and generate a full blog post draft. GPT-4o produces high-quality long-form content in seconds, which the user can copy or edit inline.

V0 Prompt

Build a blog post generator form with fields for topic, tone (dropdown: professional, casual, witty), and target audience. Add a Generate button that sends the form data to POST /api/generate and displays the streaming result in a styled output box.

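The matching route differs from Step 2 in that it builds a one-shot prompt from the form fields instead of forwarding a messages array. A sketch, assuming the route path and field names implied by the prompt above:

app/api/generate/route.ts (sketch)
import { openai } from '@ai-sdk/openai';
import { streamText } from 'ai';

export async function POST(req: Request) {
  const { topic, tone, audience } = await req.json();

  const result = await streamText({
    model: openai('gpt-4o'),
    // streamText accepts a one-shot prompt as an alternative to a messages array.
    prompt: `Write a blog post about "${topic}" in a ${tone} tone for a ${audience} audience.`,
  });

  return result.toDataStreamResponse();
}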

Document Q&A Assistant

Allow users to paste in a block of text (a contract, article, or report) and then ask questions about it. The API route includes the document in the system context so GPT answers only from the provided content.

V0 Prompt

Create a two-panel layout: left panel has a textarea for pasting a document, right panel has a chat interface. Submitting a question sends both the document text and conversation history to POST /api/qa and streams the answer.

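On the server, the pasted document travels in the request body and lands in the system context. A minimal sketch, assuming the frontend posts { document, messages }:

app/api/qa/route.ts (sketch)
import { openai } from '@ai-sdk/openai';
import { streamText } from 'ai';

export async function POST(req: Request) {
  const { document, messages } = await req.json();

  const result = await streamText({
    model: openai('gpt-4o'),
    // Constrain answers to the provided text so GPT does not fall back on general knowledge.
    system: `Answer only using the document below. If the answer is not in it, say so.\n\n${document}`,
    messages,
  });

  return result.toDataStreamResponse();
}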

Troubleshooting

API route returns 500: 'OpenAI API key not found' or 'Invalid API key'

Cause: The OPENAI_API_KEY environment variable is not set in Vercel, or the app was deployed before the variable was added.

Solution: Go to Vercel Dashboard → your project → Settings → Environment Variables. Confirm OPENAI_API_KEY is present. If you added it after the last deploy, go to Deployments and click Redeploy on the latest deployment.

Responses appear all at once instead of streaming word-by-word

Cause: The frontend is using a regular fetch/await instead of the useChat hook, or the API route is not returning a streaming response.

Solution: Ensure the API route uses streamText from the ai package and returns result.toDataStreamResponse(). On the frontend, use the useChat hook from ai/react rather than a manual fetch call.

typescript
// API route — must return a streaming response
const result = await streamText({ model: openai('gpt-4o'), messages });
return result.toDataStreamResponse();

Module not found: Can't resolve '@ai-sdk/openai' or 'ai'

Cause: The ai and/or @ai-sdk/openai packages are missing from package.json.

Solution: In V0's code editor, open package.json and add both packages to the dependencies object. Vercel installs them automatically on next deploy.

json
// package.json — add both packages to dependencies
"dependencies": {
  "ai": "^3.0.0",
  "@ai-sdk/openai": "^0.0.40"
}

Chat works in V0 preview but fails after deploying to Vercel

Cause: The V0 preview sandbox uses mock data or the environment variable isn't set for the deployment environment (Production vs Preview).

Solution: In Vercel → Settings → Environment Variables, make sure OPENAI_API_KEY is checked for all three environments: Production, Preview, and Development. Then redeploy.

Best practices

  • Always call the OpenAI API from a server-side API route — never from client-side React code where the key would be exposed
  • Use the Vercel AI SDK's streamText and useChat instead of manual fetch calls for a better developer and user experience
  • Set export const runtime = 'edge' in your API route for faster cold starts on Vercel's global edge network
  • Include a well-crafted system prompt that defines the AI's persona, tone, and constraints for your specific use case
  • Add rate limiting to your API route (e.g., using Vercel KV or Upstash Redis) before going to production to prevent abuse; a sketch follows this list
  • Monitor your OpenAI usage on platform.openai.com and set a monthly spending limit to avoid surprise bills
  • Store conversation history in a database (like Supabase or Vercel Postgres) if users need persistent chat sessions across page refreshes
  • Use gpt-4o-mini for high-volume features and gpt-4o for quality-sensitive tasks to balance cost and performance
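A minimal sketch of the rate-limiting bullet above, assuming the @upstash/ratelimit and @upstash/redis packages and their standard Upstash environment variables:

app/api/chat/route.ts (rate-limited variant)
import { Ratelimit } from '@upstash/ratelimit';
import { Redis } from '@upstash/redis';
import { openai } from '@ai-sdk/openai';
import { streamText } from 'ai';

// Allow 10 requests per minute per IP. Redis.fromEnv() reads
// UPSTASH_REDIS_REST_URL and UPSTASH_REDIS_REST_TOKEN.
const ratelimit = new Ratelimit({
  redis: Redis.fromEnv(),
  limiter: Ratelimit.slidingWindow(10, '1 m'),
});

export async function POST(req: Request) {
  const ip = req.headers.get('x-forwarded-for') ?? 'anonymous';
  const { success } = await ratelimit.limit(ip);
  if (!success) {
    return new Response('Too many requests', { status: 429 });
  }

  const { messages } = await req.json();
  const result = await streamText({ model: openai('gpt-4o'), messages });
  return result.toDataStreamResponse();
}

Because the limiter runs before streamText, over-limit requests are rejected with a 429 before they ever reach OpenAI, so they cost nothing.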


Frequently asked questions

Does V0 have built-in OpenAI support?

V0 can generate UI components that look like AI chat interfaces, but it doesn't include an OpenAI integration out of the box. You need to add a Next.js API route and the Vercel AI SDK packages (ai and @ai-sdk/openai) yourself, which this tutorial covers. V0's own AI generation uses its own models internally.

Which OpenAI model should I use with V0?

For most chat applications, gpt-4o is the best balance of capability and cost. If you need faster responses or have high volume, gpt-4o-mini is significantly cheaper and still very capable. For older integrations you might see gpt-3.5-turbo, but gpt-4o-mini now outperforms it at a similar price.

Is it safe to use my OpenAI API key in a V0 project?

Yes, as long as you follow the pattern in this tutorial. Store the key in Vercel's environment variables (Settings → Environment Variables), not in your source code, and only read it inside server-side API routes — never in React components or any other code that ships to the browser.

Can I use the OpenAI Assistants API instead of chat completions?

Yes. The Assistants API supports file uploads, code execution, and persistent threads. You would call it from the same Next.js API route, but the code would use openai.beta.assistants and openai.beta.threads instead of chat.completions. The Vercel AI SDK does not yet have native streaming helpers for Assistants, so you'd manage the polling loop manually.
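As a rough illustration of that flow with the openai package, here is a sketch; the assistant ID is a hypothetical placeholder:

lib/assistant.ts (sketch)
import OpenAI from 'openai';

const client = new OpenAI(); // reads OPENAI_API_KEY from the environment

export async function askAssistant(question: string) {
  // Each thread holds a persistent conversation on OpenAI's side.
  const thread = await client.beta.threads.create();
  await client.beta.threads.messages.create(thread.id, {
    role: 'user',
    content: question,
  });

  // createAndPoll blocks until the run reaches a terminal state.
  const run = await client.beta.threads.runs.createAndPoll(thread.id, {
    assistant_id: 'asst_your_assistant_id', // hypothetical placeholder
  });

  if (run.status !== 'completed') {
    throw new Error(`Run ended with status: ${run.status}`);
  }

  // Messages are returned newest-first; the first item is the assistant's reply.
  const messages = await client.beta.threads.messages.list(thread.id);
  return messages.data[0];
}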

How do I add a system prompt to customize the AI's behavior?

In your API route, add a system property to the streamText call: system: 'You are a customer support agent for Acme Corp. Only answer questions about our products.' The system message is never visible to the user and defines the AI's persona and constraints for every conversation.

Why are my API calls slow on the first request?

Next.js serverless functions have a cold start delay of 1–3 seconds on the first invocation after a period of inactivity. Adding export const runtime = 'edge' to your API route switches to Vercel's Edge Runtime, which has near-zero cold start times and runs your function closer to the user geographically.

How much does it cost to run a GPT chat app on Vercel?

Vercel's free Hobby plan includes enough serverless function invocations for development and low-traffic apps. OpenAI charges per token: gpt-4o is roughly $5 per million input tokens and $15 per million output tokens as of early 2026. For example, 50 chats a day averaging 500 input and 300 output tokens works out to (0.025M × $5) + (0.015M × $15) ≈ $0.35 per day, or about $10 per month. For a typical chat app with moderate usage, expect $5–20/month in OpenAI costs plus Vercel hosting, which starts free.
