RapidDev - Software Development Agency

How to Integrate Bolt.new with Google Cloud AI Platform

Integrate Google Cloud AI Platform (Vertex AI) with Bolt.new by calling prediction endpoints and pre-built AI APIs through a Next.js API route. Google Cloud APIs are all HTTP-based and work in Bolt's WebContainer when proxied server-side. Your service account credentials stay in .env — never in client code. Outbound calls to Vertex AI endpoints work in development; no incoming webhooks are needed for prediction workflows.

What you'll learn

  • How to create a Google Cloud service account and get credentials for Vertex AI API calls
  • How to implement service account authentication in a Next.js API route without the Google Cloud SDK
  • How to call Vertex AI prediction endpoints for custom model inference from a Bolt app
  • How to use Google Cloud pre-built AI APIs (Natural Language, Vision) via HTTP from server-side routes
  • How to build a prediction UI in React that sends data to your API route and displays model results
Intermediate · 17 min read · 45 minutes · AI/ML · April 2026 · RapidDev Engineering Team

Add AI Predictions and Google Cloud ML to Bolt.new

Google Cloud AI Platform — rebranded as Vertex AI in 2021 — is Google's unified ML platform covering model training, deployment, and serving alongside pre-built AI APIs for vision, language, and structured data. All of Vertex AI's capabilities are accessible over HTTPS, which makes them fully compatible with Bolt's WebContainer architecture. Your server-side Next.js API routes call Google Cloud endpoints, and React components display predictions, classification results, or natural language analysis in the browser.

There are two distinct use cases for Vertex AI in Bolt.new projects. The first is calling Google's pre-built AI APIs — Natural Language Analysis, Vision AI (image labeling, OCR, object detection), Translation, Video AI, and Document AI. These require no model training, have generous free tiers, and respond quickly. They're the right choice for adding AI capabilities to an app without the infrastructure work of training your own model. The second use case is calling endpoints for models you've already deployed to Vertex AI — custom-trained models where you provide feature inputs and receive predictions back. Bolt generates the UI and API route; your model was built and deployed externally.

Authentication is the primary complexity of Google Cloud integrations. Unlike services with simple API keys, Google Cloud uses short-lived OAuth 2.0 access tokens issued by exchanging a signed JWT from your service account credentials. This happens entirely server-side in your API route — the google-auth-library npm package handles this automatically, or you can implement it manually using the jsonwebtoken package. The service account's JSON key file contains everything needed and must be stored as environment variable strings (not an actual file) in a Bolt.new project.
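As the paragraph notes, google-auth-library handles this for you. For reference, the manual flow signs a JWT with the service account's RS256 private key and exchanges it at Google's token endpoint. A minimal sketch of the claim set, with buildJwtClaims as our own illustrative helper:

```typescript
// Sketch of the manual service-account token flow. buildJwtClaims is a
// hypothetical helper; the claim fields follow Google's OAuth 2.0
// service-account flow.
function buildJwtClaims(clientEmail: string, nowSeconds: number) {
  return {
    iss: clientEmail,                                  // the service account's identity
    scope: 'https://www.googleapis.com/auth/cloud-platform',
    aud: 'https://oauth2.googleapis.com/token',        // token exchange endpoint
    iat: nowSeconds,
    exp: nowSeconds + 3600,                            // maximum lifetime: 1 hour
  };
}

// Sign these claims with RS256 using the private key (e.g. via the jsonwebtoken
// package), then POST the signed JWT to https://oauth2.googleapis.com/token with
// grant_type=urn:ietf:params:oauth:grant-type:jwt-bearer to receive an access token.
```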

Integration method

Bolt Chat + API Route

Vertex AI and Google Cloud AI APIs are all HTTP-based REST and gRPC-transcoded endpoints. In Bolt.new, you authenticate with a Google Cloud service account and call prediction endpoints through a Next.js API route that obtains an access token using Google's OAuth 2.0 service account flow. Your service account JSON key stays server-side in environment variables — client code calls your own /api/predict route, which proxies to Google Cloud and returns the prediction result.

Prerequisites

  • A Google Cloud project with billing enabled and the Vertex AI API enabled (console.cloud.google.com → APIs & Services → Enable APIs)
  • A service account created with the 'Vertex AI User' role (IAM → Service Accounts → Create Service Account)
  • A JSON key downloaded for the service account (IAM → Service Accounts → Keys → Add Key → JSON)
  • For pre-built APIs (Vision, Natural Language): the respective API enabled and an API key or service account with the appropriate role
  • A Bolt.new project using Next.js (request Next.js explicitly for API route support)

Step-by-step guide

1

Create a Service Account and Configure Credentials

Google Cloud authentication for server-to-server API calls uses service accounts — non-human identities with their own credentials and IAM permissions. In the Google Cloud Console, navigate to IAM & Admin → Service Accounts → Create Service Account. Name it something descriptive like 'bolt-vertex-ai'. Grant it the 'Vertex AI User' role (roles/aiplatform.user) for Vertex AI access, and add the 'Cloud Vision API User' or 'Cloud Natural Language API User' role if you need those pre-built APIs. Click Done.

Now create a JSON key: click on your new service account → Keys tab → Add Key → Create new key → JSON → Create. The browser downloads a JSON file containing your project ID, private key ID, private key (RSA), and client email. This JSON file is the most sensitive credential in your project — treat it like a password.

In Bolt.new, you cannot store a file in .env, but you can store the JSON's individual fields. Open the downloaded JSON and copy these values into your .env file: GOOGLE_PROJECT_ID (from the project_id field), GOOGLE_SERVICE_ACCOUNT_EMAIL (from the client_email field), and GOOGLE_SERVICE_ACCOUNT_PRIVATE_KEY (from the private_key field — a multi-line PEM string). The newlines in the PEM string are represented as \n in the JSON — keep them as \n in your .env file or replace them with actual newlines; your API route will parse them back when constructing the JWT. Never add these values with a NEXT_PUBLIC_ or VITE_ prefix — service account credentials must only be accessed server-side.
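Assuming the variable names used throughout this guide, a .env sketch with placeholder values (the project ID, email, and truncated key shown are illustrative, not real):

```
GOOGLE_PROJECT_ID=my-project-123
GOOGLE_SERVICE_ACCOUNT_EMAIL=bolt-vertex-ai@my-project-123.iam.gserviceaccount.com
GOOGLE_SERVICE_ACCOUNT_PRIVATE_KEY="-----BEGIN PRIVATE KEY-----\nMIIEvQIBADANBg...\n-----END PRIVATE KEY-----\n"
VERTEX_AI_REGION=us-central1
VERTEX_AI_ENDPOINT_ID=1234567890123456789
```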

Bolt.new Prompt

Set up Google Cloud service account authentication in my Bolt project. Create a .env file with GOOGLE_PROJECT_ID, GOOGLE_SERVICE_ACCOUNT_EMAIL, and GOOGLE_SERVICE_ACCOUNT_PRIVATE_KEY as placeholder variables. Install the google-auth-library npm package. Create a lib/google-auth.ts utility that exports a getGoogleAccessToken() async function. This function should use the GoogleAuth class from google-auth-library with credentials built from the three environment variables to get an access token scoped to https://www.googleapis.com/auth/cloud-platform. The function should return the access token string.

Paste this in Bolt.new chat

lib/google-auth.ts
// lib/google-auth.ts
import { GoogleAuth } from 'google-auth-library';

let cachedToken: { token: string; expiry: number } | null = null;

export async function getGoogleAccessToken(): Promise<string> {
  // Return cached token if still valid (with 5-minute buffer)
  if (cachedToken && Date.now() < cachedToken.expiry - 5 * 60 * 1000) {
    return cachedToken.token;
  }

  const auth = new GoogleAuth({
    credentials: {
      type: 'service_account',
      project_id: process.env.GOOGLE_PROJECT_ID,
      client_email: process.env.GOOGLE_SERVICE_ACCOUNT_EMAIL,
      // Replace escaped \n with actual newlines for PEM format
      private_key: process.env.GOOGLE_SERVICE_ACCOUNT_PRIVATE_KEY?.replace(/\\n/g, '\n'),
    },
    scopes: ['https://www.googleapis.com/auth/cloud-platform'],
  });

  const client = await auth.getClient();
  const tokenResponse = await client.getAccessToken();

  if (!tokenResponse.token) {
    throw new Error('Failed to get Google access token');
  }

  // Cache token (Google access tokens expire in 1 hour)
  cachedToken = {
    token: tokenResponse.token,
    expiry: Date.now() + 60 * 60 * 1000,
  };

  return tokenResponse.token;
}

Pro tip: Google Cloud access tokens expire after 1 hour. The caching logic in the helper above prevents redundant token refresh calls on each API request. For production, consider using a more robust caching layer if your app handles high request volume.

Expected result: The getGoogleAccessToken() function obtains a valid Google Cloud OAuth 2.0 access token using your service account credentials. It can be called from any server-side API route.

2

Call Vertex AI Online Prediction Endpoints

Vertex AI online prediction endpoints accept JSON input matching your model's feature schema and return predictions synchronously — typically within 100-500ms for most model types. The endpoint URL pattern is https://{region}-aiplatform.googleapis.com/v1/projects/{projectId}/locations/{region}/endpoints/{endpointId}:predict. The region is where you deployed the model (us-central1, europe-west4, etc.), and the endpointId is the numeric ID visible in the Vertex AI Console under Endpoints.

The request body follows the format { instances: [{ feature1: value1, feature2: value2, ... }] }, where each object in instances is one prediction request; you can batch multiple predictions in a single request by including multiple objects in the instances array. The response is { predictions: [...], deployedModelId: '...' }, where predictions contains one result per input instance. The structure of each prediction depends on your model type: classification models return class probabilities, regression models return numeric values, and custom-trained models return whatever your model's serving function returns. For models trained with AutoML Tabular, feature attribution scores can additionally be requested through the endpoint's explanation support.

Build your React form to match the exact feature names and types your model expects — these are documented in your model's feature store or training dataset schema.
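The request and response shapes above can be sketched with placeholder feature names (feature1 and feature2 are illustrative, not a real schema):

```typescript
// Placeholder types mirroring the :predict request and response bodies.
interface PredictRequest {
  instances: Record<string, unknown>[];
}
interface PredictResponse {
  predictions: unknown[];
  deployedModelId?: string;
}

// Batch several inputs into one request; one prediction comes back per
// instance, in the same order.
function buildPredictBody(instances: Record<string, unknown>[]): PredictRequest {
  return { instances };
}

const body = buildPredictBody([
  { feature1: 42, feature2: 'A' },
  { feature1: 7, feature2: 'B' },
]);
```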

Bolt.new Prompt

Create a Vertex AI prediction API route at app/api/predict/route.ts. The route should accept POST requests with a features object in the body. Use getGoogleAccessToken() from lib/google-auth.ts to get a token. Call POST https://{VERTEX_AI_REGION}-aiplatform.googleapis.com/v1/projects/{GOOGLE_PROJECT_ID}/locations/{VERTEX_AI_REGION}/endpoints/{VERTEX_AI_ENDPOINT_ID}:predict with Authorization: Bearer token and instances: [features] as the body. Return the first prediction from the response. Use VERTEX_AI_REGION and VERTEX_AI_ENDPOINT_ID from .env. Build a PredictionForm React component with input fields, a Submit button, and a results section showing the prediction value and confidence.

Paste this in Bolt.new chat

app/api/predict/route.ts
// app/api/predict/route.ts
import { NextRequest, NextResponse } from 'next/server';
import { getGoogleAccessToken } from '@/lib/google-auth';

export async function POST(request: NextRequest) {
  const { features } = await request.json();

  const projectId = process.env.GOOGLE_PROJECT_ID;
  const region = process.env.VERTEX_AI_REGION ?? 'us-central1';
  const endpointId = process.env.VERTEX_AI_ENDPOINT_ID;

  if (!endpointId) {
    return NextResponse.json({ error: 'VERTEX_AI_ENDPOINT_ID not configured' }, { status: 500 });
  }

  try {
    const accessToken = await getGoogleAccessToken();

    const endpoint = `https://${region}-aiplatform.googleapis.com/v1/projects/${projectId}/locations/${region}/endpoints/${endpointId}:predict`;

    const response = await fetch(endpoint, {
      method: 'POST',
      headers: {
        Authorization: `Bearer ${accessToken}`,
        'Content-Type': 'application/json',
      },
      body: JSON.stringify({
        instances: [features],
      }),
    });

    if (!response.ok) {
      const error = await response.text();
      return NextResponse.json({ error: `Vertex AI error: ${error}` }, { status: response.status });
    }

    const result = await response.json();
    return NextResponse.json({
      prediction: result.predictions?.[0],
      deployedModelId: result.deployedModelId,
    });
  } catch (error) {
    return NextResponse.json({ error: 'Prediction failed' }, { status: 500 });
  }
}

Pro tip: If your Vertex AI model is in a different region than us-central1, update VERTEX_AI_REGION to match — using the wrong region returns a 404 error even with correct credentials. The region must match where the endpoint was deployed in the Vertex AI Console.

Expected result: Submitting features through the React form sends a POST to /api/predict, which calls your Vertex AI endpoint and returns the model's prediction. The UI displays the prediction value and confidence scores.

3

Use Google Cloud Pre-built AI APIs

Google Cloud's pre-built AI APIs — Vision AI, Natural Language API, Translation AI, Video Intelligence, and Document AI — provide powerful ML capabilities without training any models. They are particularly well-suited to Bolt.new projects: they have free tiers (1,000 Vision API calls/month, 5,000 Natural Language API calls/month), respond quickly, and can be authenticated with either a simple API key (easier) or a service account (more secure for production) — no JWT flow is required for basic usage.

For an API key, go to Google Cloud Console → APIs & Services → Credentials → Create Credentials → API Key. Under 'Restrict key → API restrictions', restrict the key to the specific APIs you'll use (Vision API, Natural Language API, etc.). Store the key as GOOGLE_API_KEY in .env (server-side only). API requests pass the key as a query parameter: ?key=YOUR_API_KEY.

For the Vision API, you POST base64-encoded image data to the annotate endpoint with a list of feature types (LABEL_DETECTION, TEXT_DETECTION, OBJECT_LOCALIZATION, FACE_DETECTION, SAFE_SEARCH_DETECTION). For the Natural Language API, you POST a document object with the text and language, specifying an analysis type (analyzeSentiment, analyzeEntities, analyzeSyntax). Both return rich JSON responses that your React component can display as structured results, charts, or highlighted text.

Bolt.new Prompt

Create a Google Cloud Vision API route at app/api/vision/route.ts that accepts a base64-encoded image string in the request body and calls https://vision.googleapis.com/v1/images:annotate?key=GOOGLE_API_KEY with LABEL_DETECTION and TEXT_DETECTION features. Return the top 5 labels with confidence scores and any detected text. Also create app/api/sentiment/route.ts that accepts a text string and calls https://language.googleapis.com/v1/documents:analyzeSentiment?key=GOOGLE_API_KEY. Return the document sentiment score (-1 to 1), magnitude, and per-sentence sentiments. Build a demo page with two sections: one for image analysis (file upload) and one for text sentiment (textarea input).

Paste this in Bolt.new chat

app/api/vision/route.ts
// app/api/vision/route.ts
import { NextRequest, NextResponse } from 'next/server';

export async function POST(request: NextRequest) {
  const { imageBase64 } = await request.json();
  const apiKey = process.env.GOOGLE_API_KEY;

  try {
    const response = await fetch(
      `https://vision.googleapis.com/v1/images:annotate?key=${apiKey}`,
      {
        method: 'POST',
        headers: { 'Content-Type': 'application/json' },
        body: JSON.stringify({
          requests: [{
            image: { content: imageBase64 },
            features: [
              { type: 'LABEL_DETECTION', maxResults: 5 },
              { type: 'TEXT_DETECTION' },
              { type: 'SAFE_SEARCH_DETECTION' },
            ],
          }],
        }),
      }
    );

    if (!response.ok) {
      const error = await response.json();
      return NextResponse.json({ error: error.error?.message }, { status: response.status });
    }

    const data = await response.json();
    const annotations = data.responses?.[0];

    return NextResponse.json({
      labels: annotations?.labelAnnotations ?? [],
      text: annotations?.fullTextAnnotation?.text ?? '',
      safeSearch: annotations?.safeSearchAnnotation ?? {},
    });
  } catch (error) {
    return NextResponse.json({ error: 'Vision API call failed' }, { status: 500 });
  }
}

Pro tip: The Google Cloud Vision API accepts images as base64-encoded strings (for images up to 10MB) or as Google Cloud Storage URIs (gs://bucket/image.jpg). For files larger than 10MB, use GCS URIs. Base64 encoding a 2MB JPEG produces roughly a 2.7MB request payload (about 33% overhead) — consider resizing images client-side before sending.
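The base64 overhead mentioned above is easy to estimate: every 3 input bytes become 4 output characters. A quick sketch (the helper name is ours):

```typescript
// Estimate the size of a base64-encoded payload: every 3 input bytes
// become 4 output characters (padded to a multiple of 4).
function base64Size(bytes: number): number {
  return Math.ceil(bytes / 3) * 4;
}

// A 2 MB JPEG encodes to roughly 2.67 MB of base64 text.
const encodedMb = base64Size(2 * 1024 * 1024) / (1024 * 1024);
```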

Expected result: Uploading an image through the React component sends it to /api/vision, which returns labels, detected text, and safe search classifications from Google's Vision AI.
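The sentiment route requested in the prompt above follows the same pattern as the Vision route. A framework-free sketch of the core call — the function name and the injectable fetchImpl parameter are our additions, made so the logic can be exercised without network access:

```typescript
interface SentimentResult {
  score: number;      // -1 (negative) to 1 (positive)
  magnitude: number;  // overall emotional intensity
}

// Core Natural Language API call; wire this into app/api/sentiment/route.ts
// and pass process.env.GOOGLE_API_KEY as the key.
async function analyzeSentiment(
  text: string,
  apiKey: string,
  fetchImpl: typeof fetch = fetch
): Promise<SentimentResult> {
  const response = await fetchImpl(
    `https://language.googleapis.com/v1/documents:analyzeSentiment?key=${apiKey}`,
    {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({
        document: { type: 'PLAIN_TEXT', content: text },
        encodingType: 'UTF8',
      }),
    }
  );

  if (!response.ok) {
    throw new Error(`Natural Language API error ${response.status}`);
  }

  const data = await response.json();
  return {
    score: data.documentSentiment?.score ?? 0,
    magnitude: data.documentSentiment?.magnitude ?? 0,
  };
}
```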

4

Build the Prediction UI and Handle Errors Gracefully

With the API routes in place, the final step is building the React UI that connects user inputs to your prediction and AI analysis endpoints. A good prediction UI provides clear input forms that match your model's feature expectations, a loading state while waiting for Google Cloud responses (Vertex AI predictions typically take 200ms-2s; the Vision API takes 500ms-3s), results displayed in a way that's meaningful to non-technical users, and error messages that help users understand what went wrong.

For Vertex AI predictions, transform the raw prediction output into a human-readable format — a binary classification model returning [0.23, 0.77] should display 'High Risk (77% confidence)', not the raw array. For Vision API labels, display a confidence bar chart rather than raw decimal scores. React's useState and useEffect are sufficient for this interaction pattern — no state management library is needed.

Error handling matters: Google Cloud returns 429 (quota exceeded), 401 (authentication failed), 404 (endpoint not found), and 500 (model error) status codes, and each needs a different user-facing message. The CORS situation for these API calls is straightforward: since your React component calls your own /api/ routes (not Google Cloud directly), there are no CORS issues even in Bolt's WebContainer preview. All Google Cloud calls are proxied server-side.
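One way to implement the per-status messages is a small mapping helper (a hypothetical sketch; the exact wording is up to you):

```typescript
// Map Google Cloud error status codes to user-facing messages, following
// the categories discussed above. 401s should additionally be logged and
// alerted on server-side: they indicate a credential problem, not user error.
function userMessageFor(status: number): string {
  switch (status) {
    case 429:
      return 'Service temporarily unavailable: quota exceeded, please retry shortly.';
    case 401:
      return 'Authentication failed. Please contact support.';
    case 404:
      return 'Prediction endpoint not found. Please check your configuration.';
    default:
      return 'Prediction failed, please try again.';
  }
}
```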

Bolt.new Prompt

Build a complete PredictionDashboard React component in components/PredictionDashboard.tsx. It should have a form with inputs for the features my Vertex AI model expects (customize with placeholder feature names: feature1 as a number slider 0-100, feature2 as a select with options A/B/C, feature3 as a number). On submit, POST to /api/predict and display results in a results card showing: prediction value prominently, a confidence bar (0-100%), and a human-readable interpretation ('Low Risk / Medium Risk / High Risk' based on the score). Show a loading skeleton during prediction. Show user-friendly error messages for API errors. Also add an ImageAnalyzer section that lets users upload an image and displays Vision API results as label tags with confidence percentages.

Paste this in Bolt.new chat

components/PredictionDashboard.tsx
// components/PredictionDashboard.tsx
'use client';
import { useState } from 'react';

interface PredictionResult {
  prediction: number[] | number;
  deployedModelId?: string;
}

export function PredictionDashboard() {
  const [features, setFeatures] = useState({ feature1: 50, feature2: 'A', feature3: 25 });
  const [result, setResult] = useState<PredictionResult | null>(null);
  const [loading, setLoading] = useState(false);
  const [error, setError] = useState<string | null>(null);

  const handlePredict = async () => {
    setLoading(true);
    setError(null);
    try {
      const response = await fetch('/api/predict', {
        method: 'POST',
        headers: { 'Content-Type': 'application/json' },
        body: JSON.stringify({ features }),
      });

      if (!response.ok) {
        const err = await response.json();
        throw new Error(err.error ?? `API error ${response.status}`);
      }

      const data = await response.json();
      setResult(data);
    } catch (err) {
      setError(err instanceof Error ? err.message : 'Prediction failed');
    } finally {
      setLoading(false);
    }
  };

  const getConfidence = () => {
    if (!result?.prediction) return 0;
    // For a binary classifier, index 1 holds the positive-class probability
    const pred = Array.isArray(result.prediction) ? result.prediction[1] : result.prediction;
    return Math.round((pred as number) * 100);
  };

  const getRiskLabel = (confidence: number) =>
    confidence >= 70 ? 'High Risk' : confidence >= 40 ? 'Medium Risk' : 'Low Risk';

  return (
    <div className="max-w-xl mx-auto p-6 space-y-6">
      <h2 className="text-2xl font-bold">Model Prediction</h2>

      <div className="space-y-4">
        <div>
          <label className="block text-sm font-medium mb-1">Feature 1: {features.feature1}</label>
          <input type="range" min="0" max="100" value={features.feature1}
            onChange={(e) => setFeatures((f) => ({ ...f, feature1: Number(e.target.value) }))}
            className="w-full" />
        </div>
        <div>
          <label className="block text-sm font-medium mb-1">Feature 2</label>
          <select value={features.feature2}
            onChange={(e) => setFeatures((f) => ({ ...f, feature2: e.target.value }))}
            className="w-full border rounded px-3 py-2">
            {['A', 'B', 'C'].map((opt) => <option key={opt}>{opt}</option>)}
          </select>
        </div>
        <div>
          <label className="block text-sm font-medium mb-1">Feature 3</label>
          <input type="number" value={features.feature3}
            onChange={(e) => setFeatures((f) => ({ ...f, feature3: Number(e.target.value) }))}
            className="w-full border rounded px-3 py-2" />
        </div>
        <button onClick={handlePredict} disabled={loading}
          className="w-full bg-blue-600 text-white py-2 rounded hover:bg-blue-700 disabled:opacity-50">
          {loading ? 'Predicting...' : 'Run Prediction'}
        </button>
      </div>

      {error && <p className="text-red-600 text-sm">{error}</p>}

      {result && (
        <div className="border rounded-lg p-4 bg-gray-50 space-y-2">
          <p className="text-lg font-semibold">{getRiskLabel(getConfidence())}</p>
          <div className="w-full bg-gray-200 rounded-full h-4">
            <div className="bg-blue-600 h-4 rounded-full transition-all"
              style={{ width: `${getConfidence()}%` }} />
          </div>
          <p className="text-sm text-gray-600">{getConfidence()}% confidence</p>
        </div>
      )}
    </div>
  );
}

Pro tip: Vertex AI prediction calls from the WebContainer preview work fine — your Next.js API route makes the outbound call to Google Cloud, and the result returns to the browser. No deployment is needed to test predictions during development.

Expected result: The PredictionDashboard renders an interactive form. Adjusting sliders and dropdowns and clicking 'Run Prediction' calls /api/predict, displays the prediction result, confidence bar, and risk classification without any page reload.

Common use cases

Image Analysis and Object Detection Dashboard

Build an image analysis tool that uses Google Cloud Vision AI to label objects, detect text (OCR), identify faces, and check for safe search violations. Users upload images through a React drag-and-drop interface, and the app returns structured analysis results within seconds without any custom model training.

Bolt.new Prompt

Build an image analysis tool using the Google Cloud Vision API. Create a Next.js API route at app/api/analyze-image/route.ts that accepts a base64-encoded image in the request body and calls the Google Cloud Vision API at https://vision.googleapis.com/v1/images:annotate with the LABEL_DETECTION and TEXT_DETECTION features. Authenticate using GOOGLE_API_KEY from .env as the key query parameter. Return the labels with confidence scores and any detected text. Build a React page with a file upload area that base64-encodes the image, sends it to /api/analyze-image, and displays results in a structured card layout.

Copy this prompt to try it in Bolt.new

Custom ML Model Prediction Interface

Build a prediction UI for a Vertex AI custom model endpoint. Users input feature values through a form (numeric sliders, dropdowns, or text fields based on the model's expected inputs), submit the form, and see the model's prediction with confidence scores displayed in a results panel. Useful for fraud detection models, churn prediction, price forecasting, or any tabular ML model deployed on Vertex AI.

Bolt.new Prompt

Build a prediction interface for my Vertex AI model endpoint. Create a Next.js API route at app/api/predict/route.ts that accepts feature inputs as JSON and calls my Vertex AI online prediction endpoint at https://{region}-aiplatform.googleapis.com/v1/projects/{project}/locations/{region}/endpoints/{endpointId}:predict. Use GOOGLE_SERVICE_ACCOUNT_EMAIL and GOOGLE_SERVICE_ACCOUNT_PRIVATE_KEY from .env to get an access token. Build a React PredictionForm with input fields for my model's features (customer_age, account_balance, transaction_count, days_since_last_login) and a submit button that shows the prediction result and confidence score.

Copy this prompt to try it in Bolt.new

Sentiment Analysis for Customer Feedback

Build a customer feedback analyzer using Google Cloud Natural Language API. Paste or type customer reviews, support tickets, or social media mentions to get sentiment scores (positive/negative/neutral), entity extraction, and syntax analysis. Display results with visual sentiment indicators and highlighted key entities.

Bolt.new Prompt

Build a text sentiment analysis tool using the Google Cloud Natural Language API. Create an API route at app/api/analyze-text/route.ts that accepts a text string and calls https://language.googleapis.com/v1/documents:analyzeSentiment and https://language.googleapis.com/v1/documents:analyzeEntities using GOOGLE_NL_API_KEY. Return the document sentiment score and magnitude, and the top 5 entities with their salience scores. Build a React TextAnalyzer component with a textarea for input, an Analyze button, and results showing a sentiment meter (positive/neutral/negative), sentiment magnitude, and a list of extracted entities with their importance.

Copy this prompt to try it in Bolt.new

Troubleshooting

Authentication fails with 'Request had invalid authentication credentials' or 401 Unauthorized

Cause: The service account private key is malformed in the environment variable. When JSON keys are stored in .env, the PEM private key's literal newlines are often escaped as \n strings — if these aren't unescaped before use, the JWT signing fails.

Solution: In your google-auth.ts, ensure the private key is processed with .replace(/\\n/g, '\n') to convert escaped newline strings back to actual newlines before passing to GoogleAuth. Also verify the GOOGLE_SERVICE_ACCOUNT_EMAIL matches the client_email in your service account JSON.

typescript
// Unescape newlines when reading the private key from .env
private_key: process.env.GOOGLE_SERVICE_ACCOUNT_PRIVATE_KEY?.replace(/\\n/g, '\n'),

Vertex AI returns 404 Not Found for the prediction endpoint

Cause: The endpoint URL is incorrect — either the region, project ID, or endpoint ID is wrong. Vertex AI endpoint IDs are long numeric strings (e.g., 1234567890123456789), not names. Using the wrong region is the most common cause.

Solution: Verify your endpoint ID by going to Vertex AI Console → Online Predictions → Endpoints. Click on your endpoint to see its full ID. Confirm the region matches your endpoint's location (shown in the Endpoints table). The endpoint URL must use the region where the endpoint was deployed.

typescript
// Endpoint URL format — all three parts must match exactly
// https://{region}-aiplatform.googleapis.com/v1/projects/{projectId}/locations/{region}/endpoints/{endpointId}:predict
// Example:
// https://us-central1-aiplatform.googleapis.com/v1/projects/my-project-123/locations/us-central1/endpoints/1234567890123456789:predict

Google Cloud Vision or Natural Language API returns 403 with 'API not enabled' or 'Billing not enabled'

Cause: The specific Google Cloud API (Cloud Vision API, Natural Language API, etc.) has not been enabled in your Google Cloud project, or your project does not have billing enabled. Google Cloud requires billing to be active even to use free-tier API calls.

Solution: Go to Google Cloud Console → APIs & Services → Library and search for the API you need. Click Enable. Also verify billing is enabled for your project in Billing → Overview. API key restrictions may also be blocking the call — check that your API key's API restrictions include the specific API.

Prediction returns empty or unexpected output format that doesn't match the expected structure

Cause: The input features object structure doesn't match what the model was trained with. Vertex AI custom models are strict about input schema — wrong field names, wrong data types (string vs number), or missing required features cause the model to return errors or empty predictions.

Solution: Check your model's input schema in Vertex AI Console → Models → select your model → Model details. The 'Input schema' tab shows the exact field names and types expected. Ensure all required fields are present in your instances array and that numeric fields are sent as numbers (not strings).

typescript
// Ensure feature types match the model schema — numbers as numbers, not strings
const features = {
  customer_age: parseInt(formData.age, 10),          // number, not string
  account_balance: parseFloat(formData.balance),     // float, not string
  transaction_count: parseInt(formData.txCount, 10), // integer
  segment: formData.segment,                         // string if categorical
};

Best practices

  • Store Google service account credentials as individual environment variable strings (GOOGLE_SERVICE_ACCOUNT_EMAIL, GOOGLE_SERVICE_ACCOUNT_PRIVATE_KEY) — never as a parsed JSON object or file path, and never with NEXT_PUBLIC_ or VITE_ prefix
  • Cache Google access tokens for their full 1-hour validity period — refreshing a token on every API request wastes 200-300ms per call and consumes unnecessary quota
  • Use the minimum required IAM roles for your service account — 'Vertex AI User' for predictions, 'Cloud Vision API User' for Vision — avoid the overly broad 'Editor' or 'Owner' roles
  • For pre-built APIs (Vision, Natural Language), restrict your API key to only the specific APIs it needs and to your server-side origins — this limits damage if the key is ever accidentally exposed
  • Build error handling that distinguishes between quota errors (429 — show 'Service temporarily unavailable'), authentication errors (401 — log and alert), and model errors (500 — show 'Prediction failed, please try again')
  • Test Vertex AI predictions during development in Bolt's WebContainer — outbound API calls work fine, and you'll see real prediction results without deploying
  • Format raw model outputs into human-readable interpretations before displaying — translate probability arrays into risk levels, sentiment scores into positive/neutral/negative labels, and confidence decimals into percentage bars


Frequently asked questions

How do I connect Bolt.new to Google Cloud AI Platform (Vertex AI)?

Create a Google Cloud service account with the Vertex AI User role, download a JSON key, and store the client_email and private_key fields as environment variables in your Bolt .env file. Install google-auth-library and create a helper that exchanges these credentials for a short-lived access token. Use the token in a Next.js API route's Authorization: Bearer header when calling Vertex AI prediction endpoints.

Do Vertex AI API calls work in Bolt's WebContainer during development?

Yes — your Next.js API routes make outbound HTTPS calls to Google Cloud, which works fine in Bolt's WebContainer. The browser sends a request to your local API route, which calls Vertex AI server-side and returns the prediction. You'll see real model predictions during development without needing to deploy first. No incoming webhook setup is needed for synchronous prediction workflows.

Should I use a service account or an API key for Google Cloud AI?

Use an API key for pre-built APIs (Vision, Natural Language, Translation) — it's simpler to set up and Google Cloud's API key restrictions let you limit exposure. Use a service account for Vertex AI custom model endpoints — service accounts support fine-grained IAM roles and are required for Vertex AI's authentication model. Both approaches work identically in Bolt's Next.js API routes.

What is the difference between Google Cloud AI Platform and Vertex AI?

They're the same platform — Google rebranded 'AI Platform' as 'Vertex AI' in May 2021. The old AI Platform Prediction, Training, and Notebooks services were consolidated into Vertex AI's unified interface. If you see documentation referencing 'AI Platform', it applies to Vertex AI in most cases. Use the Vertex AI Console (console.cloud.google.com/vertex-ai) for all model management.

How do I deploy a Bolt.new Vertex AI app to Netlify?

In Bolt, connect Netlify via Settings → Applications and click Publish. After deploying, add your environment variables (GOOGLE_PROJECT_ID, GOOGLE_SERVICE_ACCOUNT_EMAIL, GOOGLE_SERVICE_ACCOUNT_PRIVATE_KEY, VERTEX_AI_REGION, VERTEX_AI_ENDPOINT_ID) in Netlify's Site Configuration → Environment Variables. Trigger a redeploy to apply them. When setting GOOGLE_SERVICE_ACCOUNT_PRIVATE_KEY in Netlify, paste the PEM key with actual newlines — Netlify preserves multi-line values.
