
How to Integrate Lovable with Google Cloud AI Platform


What you'll learn

  • How to create a Google service account and store its JSON key in Cloud → Secrets
  • How to authenticate with Google Cloud APIs using OAuth2 service account flow in a Deno Edge Function
  • How to call a Vertex AI online prediction endpoint from a Supabase Edge Function
  • How to build a React frontend component that sends user data to the prediction endpoint
  • How Vertex AI managed hosting differs from running TensorFlow.js models yourself
Intermediate · 17 min read · 45 minutes · AI/ML · March 2026 · RapidDev Engineering Team
TL;DR

Connect your Lovable app to Google Cloud AI Platform (Vertex AI) by creating a Supabase Edge Function that authenticates with a Google service account, calls Vertex AI prediction endpoints, and returns ML model results to your React frontend. Store your service account JSON credentials in Cloud → Secrets. Use Vertex AI when you need managed ML model hosting and the full Google Cloud ecosystem, rather than running TensorFlow models yourself.

Call Google Vertex AI prediction endpoints from your Lovable app

Google Cloud AI Platform — now consolidated under the Vertex AI brand — provides a fully managed environment for deploying and serving machine learning models. Unlike TensorFlow.js where you run inference code yourself, Vertex AI handles the model serving infrastructure: you upload a trained model, deploy it to an endpoint, and call that endpoint with prediction requests. Google manages scaling, versioning, hardware selection (including GPU accelerators), and uptime. This is the right choice when you have a trained model that needs production-grade serving without managing your own inference server.

Vertex AI prediction endpoints use Google Cloud's standard OAuth2 authentication. Every API request must include a Bearer token obtained by signing a JWT with your service account private key. This authentication flow is more complex than a simple API key — it requires generating a time-limited access token before each request (or caching a recently obtained token). An Edge Function is the right place to implement this flow because the service account JSON key must stay server-side.

Beyond custom model serving, Vertex AI also provides AutoML (automated model training from your data), Model Garden (access to large foundation models including Gemini), and AI APIs for structured data, images, text, and video. The integration pattern covered here — service account auth, REST endpoint call, response proxying — works for all of these Vertex AI products. Use this integration when your ML model is already deployed on Vertex AI, when you need Google Cloud's enterprise SLA and compliance certifications, or when your team is already invested in the Google Cloud ecosystem.

Integration method

Edge Function Integration

Google Vertex AI integrates with Lovable through a Supabase Edge Function that authenticates using a Google service account via OAuth2, calls the Vertex AI prediction REST endpoint, and returns prediction results to your React frontend. The service account JSON key is stored in Cloud → Secrets as GOOGLE_SERVICE_ACCOUNT_JSON and never exposed to the browser. The Edge Function exchanges the service account credentials for a short-lived access token on each request and proxies the prediction call.

Prerequisites

  • A Google Cloud account with a project that has Vertex AI API enabled
  • A trained model deployed to a Vertex AI online prediction endpoint, OR access to Vertex AI's Gemini or AutoML APIs
  • A Google Cloud service account with the 'Vertex AI User' role (roles/aiplatform.user) and a downloaded JSON key file
  • A Lovable account with an active Lovable Cloud project
  • Your Vertex AI endpoint URL (format: https://{region}-aiplatform.googleapis.com/v1/projects/{project}/locations/{region}/endpoints/{endpoint-id}:predict)

Step-by-step guide

Step 1: Create a Google service account and store credentials in Cloud → Secrets

Google Cloud APIs use OAuth2 service accounts for server-to-server authentication. A service account is a special Google account that represents your application rather than a human user. You create it in the Google Cloud Console, grant it the minimum necessary permissions, and download a JSON key file that your Edge Function uses to obtain access tokens.

To create the service account, open the Google Cloud Console (console.cloud.google.com) and navigate to IAM & Admin → Service Accounts. Click 'Create Service Account'. Give it a descriptive name like 'lovable-vertex-predictor'. Click 'Create and Continue'. Under 'Grant this service account access to project', add the role 'Vertex AI User' (roles/aiplatform.user). Click 'Continue' then 'Done'.

Now create and download the JSON key. Click on the service account you just created, go to the 'Keys' tab, then click 'Add Key' → 'Create new key' → select 'JSON' → click 'Create'. A JSON file downloads to your computer. This file contains the private key used to sign OAuth2 tokens.

Store this JSON key in Lovable. Click the '+' icon next to Preview to open the Cloud panel, then click 'Secrets'. Click 'Add new secret'. In the Name field enter GOOGLE_SERVICE_ACCOUNT_JSON and paste the entire contents of the downloaded JSON file as the value. Also add:

  • VERTEX_PROJECT_ID — your Google Cloud project ID
  • VERTEX_LOCATION — the region of your endpoint (e.g., us-central1)
  • VERTEX_ENDPOINT_ID — your endpoint ID from the Vertex AI console

Never share the service account JSON key. It grants the full permissions of the 'Vertex AI User' role to anyone who possesses it. Lovable's security system blocks approximately 1,200 hardcoded credentials daily, but the Secrets panel is the only safe storage location.
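Before building the full auth flow, it can help to confirm the secret survived the paste intact. A minimal Deno sketch (an illustration, not part of the final integration) that parses the stored key and checks the PEM delimiters:

// Sanity check: does GOOGLE_SERVICE_ACCOUNT_JSON parse, and is the key intact?
const raw = Deno.env.get('GOOGLE_SERVICE_ACCOUNT_JSON');
if (!raw) throw new Error('GOOGLE_SERVICE_ACCOUNT_JSON is not set');

const sa = JSON.parse(raw);
// The private_key must contain real PEM delimiters; a truncated paste or a
// key with mangled newlines will fail the PKCS#8 import in the next step.
if (!sa.private_key?.includes('-----BEGIN PRIVATE KEY-----')) {
  throw new Error('private_key field is missing or malformed');
}
console.log(`Service account loaded: ${sa.client_email}`);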

Pro tip: Use the principle of least privilege — only grant the service account the specific role it needs (Vertex AI User for predictions). Avoid creating service accounts with Owner or Editor project-level roles.

Expected result: GOOGLE_SERVICE_ACCOUNT_JSON, VERTEX_PROJECT_ID, VERTEX_LOCATION, and VERTEX_ENDPOINT_ID are stored in Cloud → Secrets with masked values. The service account JSON key file is deleted from your computer after uploading.

Step 2: Create the Google OAuth2 token helper

Google Cloud APIs require a Bearer access token with every request. Service account authentication works by signing a JWT (JSON Web Token) with the private key from your service account JSON, sending it to Google's token endpoint, and receiving a short-lived access token (valid for 1 hour). This token generation logic must run server-side because it uses the private key.

In Deno, the Web Crypto API provides the RSA signing capability needed to sign the JWT. The service account JSON contains the private key in PKCS#8 PEM format. The JWT payload specifies the service account email (iss), the target API scope (scope), the audience (aud: https://oauth2.googleapis.com/token), and expiry times.

The code below implements this token exchange. It is designed as a helper module imported by the prediction Edge Function. Caching the token (and only refreshing it when it has less than 5 minutes remaining) is important for performance — a token exchange adds ~200ms of latency and should not happen on every prediction request. Ask Lovable to scaffold the full Edge Function using the prompt in the next step — it will incorporate this auth pattern automatically.

supabase/functions/_shared/google-auth.ts
// supabase/functions/_shared/google-auth.ts
// Helper to obtain a Google OAuth2 access token from a service account JSON key

interface ServiceAccountKey {
  client_email: string;
  private_key: string;
  token_uri: string;
}

interface TokenCache {
  token: string;
  expiresAt: number;
}

let tokenCache: TokenCache | null = null;

async function pemToPrivateKey(pem: string): Promise<CryptoKey> {
  const pemBody = pem
    .replace('-----BEGIN PRIVATE KEY-----', '')
    .replace('-----END PRIVATE KEY-----', '')
    .replace(/\s/g, '');
  const keyData = Uint8Array.from(atob(pemBody), (c) => c.charCodeAt(0));
  return crypto.subtle.importKey(
    'pkcs8',
    keyData.buffer,
    { name: 'RSASSA-PKCS1-v1_5', hash: 'SHA-256' },
    false,
    ['sign']
  );
}

function base64urlEncode(data: string | ArrayBuffer): string {
  const bytes = typeof data === 'string'
    ? new TextEncoder().encode(data)
    : new Uint8Array(data);
  return btoa(String.fromCharCode(...bytes))
    .replace(/\+/g, '-').replace(/\//g, '_').replace(/=/g, '');
}

export async function getGoogleAccessToken(scope: string): Promise<string> {
  const now = Math.floor(Date.now() / 1000);

  // Return cached token if still valid for > 5 minutes
  if (tokenCache && tokenCache.expiresAt > now + 300) {
    return tokenCache.token;
  }

  const saJson = Deno.env.get('GOOGLE_SERVICE_ACCOUNT_JSON')!;
  const sa: ServiceAccountKey = JSON.parse(saJson);

  const header = base64urlEncode(JSON.stringify({ alg: 'RS256', typ: 'JWT' }));
  const payload = base64urlEncode(JSON.stringify({
    iss: sa.client_email,
    scope,
    aud: sa.token_uri || 'https://oauth2.googleapis.com/token',
    iat: now,
    exp: now + 3600,
  }));

  const signingInput = `${header}.${payload}`;
  const privateKey = await pemToPrivateKey(sa.private_key);
  const signature = await crypto.subtle.sign(
    'RSASSA-PKCS1-v1_5',
    privateKey,
    new TextEncoder().encode(signingInput)
  );

  const jwt = `${signingInput}.${base64urlEncode(signature)}`;

  const tokenResponse = await fetch('https://oauth2.googleapis.com/token', {
    method: 'POST',
    headers: { 'Content-Type': 'application/x-www-form-urlencoded' },
    body: new URLSearchParams({
      grant_type: 'urn:ietf:params:oauth:grant-type:jwt-bearer',
      assertion: jwt,
    }),
  });

  const tokenData = await tokenResponse.json();
  if (!tokenData.access_token) {
    throw new Error(`Token exchange failed: ${JSON.stringify(tokenData)}`);
  }

  tokenCache = { token: tokenData.access_token, expiresAt: now + 3600 };
  return tokenData.access_token;
}

Pro tip: The _shared directory is a Supabase convention for code shared between multiple Edge Functions. Files in _shared are not deployed as standalone functions but can be imported by any function using relative imports.

Expected result: The google-auth.ts helper module is created and imports correctly. No errors appear in the Lovable editor's code view.

Step 3: Create the Vertex AI prediction Edge Function

With the auth helper in place, create the main prediction Edge Function. This function accepts prediction instances from the frontend, obtains a Google access token using the service account credentials, calls the Vertex AI REST prediction endpoint, and returns the predictions.

The Vertex AI REST API for online predictions follows a consistent pattern across model types: POST to https://{LOCATION}-aiplatform.googleapis.com/v1/projects/{PROJECT_ID}/locations/{LOCATION}/endpoints/{ENDPOINT_ID}:predict with a body containing { instances: [...] }. The shape of each instance object depends entirely on the model's input schema — for AutoML tabular models it might be an object with named feature fields; for custom TF models it might be an array of numbers (both shapes are illustrated below).

The response contains a predictions array with one entry per input instance. For classification models, each prediction typically includes class labels and confidence scores. For regression models, it is a numerical value. The Edge Function returns this predictions array as JSON to the frontend.

Use the Lovable prompt below to scaffold the function. Customize the instance preprocessing to match your specific model's input format by reviewing the 'Instance and batch predictions' documentation for your deployed model in the Vertex AI console.
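For concreteness, here are two hypothetical request bodies showing how instance shape varies by model type. The field names are invented for illustration; take the real ones from your model's input schema:

// AutoML tabular model: each instance is an object with named feature fields
// (AutoML tabular endpoints commonly expect feature values as strings)
const tabularBody = {
  instances: [
    { customer_age: '42', days_since_last_purchase: '17', total_spend: '1250.50' },
  ],
};

// Custom TensorFlow model: each instance is a raw numeric tensor
const tensorBody = {
  instances: [[0.12, 0.57, 0.31, 0.98]],
};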

Lovable Prompt

Create a Supabase Edge Function at supabase/functions/vertex-predict/index.ts that calls a Google Vertex AI online prediction endpoint. Import the getGoogleAccessToken helper from '../_shared/google-auth.ts'. Read VERTEX_PROJECT_ID, VERTEX_LOCATION, and VERTEX_ENDPOINT_ID from Deno.env.get(). Accept a POST request with an 'instances' array, call the Vertex AI predict REST endpoint with Bearer token authentication, and return the predictions array as JSON. Include CORS headers and error handling.

Paste this in Lovable chat

supabase/functions/vertex-predict/index.ts
// supabase/functions/vertex-predict/index.ts
import { getGoogleAccessToken } from '../_shared/google-auth.ts';

const corsHeaders = {
  'Access-Control-Allow-Origin': '*',
  'Access-Control-Allow-Headers': 'authorization, x-client-info, apikey, content-type',
};

Deno.serve(async (req) => {
  if (req.method === 'OPTIONS') return new Response('ok', { headers: corsHeaders });

  try {
    const { instances } = await req.json() as { instances: unknown[] };

    if (!Array.isArray(instances) || instances.length === 0) {
      return new Response(JSON.stringify({ error: 'instances array is required and must not be empty' }), {
        status: 400,
        headers: { ...corsHeaders, 'Content-Type': 'application/json' },
      });
    }

    const projectId = Deno.env.get('VERTEX_PROJECT_ID')!;
    const location = Deno.env.get('VERTEX_LOCATION')!;
    const endpointId = Deno.env.get('VERTEX_ENDPOINT_ID')!;

    const scope = 'https://www.googleapis.com/auth/cloud-platform';
    const accessToken = await getGoogleAccessToken(scope);

    const endpoint = `https://${location}-aiplatform.googleapis.com/v1/projects/${projectId}/locations/${location}/endpoints/${endpointId}:predict`;

    const vertexResponse = await fetch(endpoint, {
      method: 'POST',
      headers: {
        'Authorization': `Bearer ${accessToken}`,
        'Content-Type': 'application/json',
      },
      body: JSON.stringify({ instances }),
    });

    if (!vertexResponse.ok) {
      const errBody = await vertexResponse.text();
      console.error(`Vertex AI error ${vertexResponse.status}:`, errBody);
      return new Response(JSON.stringify({ error: `Vertex AI returned ${vertexResponse.status}` }), {
        status: vertexResponse.status,
        headers: { ...corsHeaders, 'Content-Type': 'application/json' },
      });
    }

    const result = await vertexResponse.json();
    return new Response(JSON.stringify({ predictions: result.predictions }), {
      headers: { ...corsHeaders, 'Content-Type': 'application/json' },
    });
  } catch (error) {
    console.error('vertex-predict error:', error);
    return new Response(JSON.stringify({ error: String(error) }), {
      status: 500,
      headers: { ...corsHeaders, 'Content-Type': 'application/json' },
    });
  }
});

Pro tip: Test the Edge Function with a minimal instance payload first before building the full UI. Use Lovable's Cloud → Logs to see the raw Vertex AI response and confirm the predictions array shape matches your expectations.
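A minimal smoke test (a sketch; assumes the Lovable-generated supabase client is in scope) you can run from any component or the browser console:

// Send one minimal instance and log the raw response shape
const { data, error } = await supabase.functions.invoke('vertex-predict', {
  body: { instances: [{ /* one instance matching your model's input schema */ }] },
});
console.log('predictions:', data?.predictions, 'error:', error);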

Expected result: The vertex-predict Edge Function deploys and a test request with a sample instances array returns a predictions array from Vertex AI. Cloud → Logs shows the Bearer token being obtained and the Vertex AI API returning 200 OK.

Step 4: Build the React frontend for prediction input and results

With the Edge Function deployed, build the frontend component in Lovable that collects user input, calls the prediction endpoint, and displays results. The specific UI depends on your model — a tabular classifier needs form fields for each feature; an image model needs a file upload; a text model needs a textarea.

The key frontend code is the call to supabase.functions.invoke('vertex-predict', { body: { instances: [instanceObject] } }). The instance object structure must exactly match what your Vertex AI model expects. Review your model's input schema in the Vertex AI console under Endpoints → your endpoint → Model details → Input schema to see the field names and types.

For displaying predictions, AutoML classification models return a list of displayNames (class labels) and confidences (probability scores between 0 and 1). Sort by confidence descending and display the top prediction prominently with a bar chart or percentage for the top-3 results. Regression models return a single numerical value — display it with the appropriate unit and format.

Describe your model's input and output to Lovable in the chat and it will generate the appropriate form fields, API call, and results display. For instance: 'My Vertex AI model takes customer_age (number), days_since_last_purchase (number), and total_spend (number) as inputs and returns a churn probability between 0 and 1. Build a form with those three fields and display the churn probability as a large percentage with a green/red color based on whether it is above or below 50%.'
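As a concrete sketch of that pattern, here is a minimal form component for the churn example above. The field names and the '@/integrations/supabase/client' import path are assumptions based on Lovable's default scaffold; adjust both to your project and model schema:

import { useState, type FormEvent } from 'react';
import { supabase } from '@/integrations/supabase/client'; // path assumed from Lovable's scaffold

export function ChurnPredictionForm() {
  const [form, setForm] = useState({ customer_age: '', days_since_last_purchase: '', total_spend: '' });
  const [probability, setProbability] = useState<number | null>(null);
  const [loading, setLoading] = useState(false);
  const [error, setError] = useState<string | null>(null);

  const handleSubmit = async (e: FormEvent) => {
    e.preventDefault();
    setLoading(true);
    setError(null);
    // One instance whose field names must match the model's input schema exactly
    const { data, error: fnError } = await supabase.functions.invoke('vertex-predict', {
      body: {
        instances: [{
          customer_age: Number(form.customer_age),
          days_since_last_purchase: Number(form.days_since_last_purchase),
          total_spend: Number(form.total_spend),
        }],
      },
    });
    setLoading(false);
    if (fnError) { setError(fnError.message); return; }
    setProbability(data.predictions?.[0]); // output shape depends on your model
  };

  return (
    <form onSubmit={handleSubmit}>
      {(['customer_age', 'days_since_last_purchase', 'total_spend'] as const).map((field) => (
        <input
          key={field}
          type="number"
          placeholder={field}
          value={form[field]}
          onChange={(e) => setForm({ ...form, [field]: e.target.value })}
          required
        />
      ))}
      <button type="submit" disabled={loading}>{loading ? 'Predicting…' : 'Predict churn'}</button>
      {error && <p>{error}</p>}
      {probability !== null && (
        <p style={{ color: probability > 0.5 ? 'red' : 'green' }}>
          Churn probability: {(probability * 100).toFixed(1)}%
        </p>
      )}
    </form>
  );
}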

Lovable Prompt

Build a prediction form component that calls the vertex-predict Supabase Edge Function. The form has fields for [describe your model inputs here]. On submit, call supabase.functions.invoke('vertex-predict') with the form values as an instances array. Display a loading state while waiting. Show the top prediction label and confidence score in a result card below the form. Handle and display any errors returned by the function.

Paste this in Lovable chat

Pro tip: For regulated industries where prediction requests contain PII or sensitive data, add Supabase Auth to the Edge Function and verify the JWT before making the Vertex AI call — this ensures only authenticated users can trigger predictions.
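A hedged sketch of that gate, placed at the top of the vertex-predict handler (SUPABASE_URL and SUPABASE_ANON_KEY are injected automatically into Supabase Edge Functions; req and corsHeaders come from the surrounding function):

import { createClient } from 'https://esm.sh/@supabase/supabase-js@2';

// Inside Deno.serve, before calling Vertex AI: verify the caller's JWT
const supabaseClient = createClient(
  Deno.env.get('SUPABASE_URL')!,
  Deno.env.get('SUPABASE_ANON_KEY')!,
  { global: { headers: { Authorization: req.headers.get('Authorization') ?? '' } } },
);
const { data: { user }, error: authError } = await supabaseClient.auth.getUser();
if (authError || !user) {
  return new Response(JSON.stringify({ error: 'Unauthorized' }), {
    status: 401,
    headers: { ...corsHeaders, 'Content-Type': 'application/json' },
  });
}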

Expected result: The prediction form collects user input, calls Vertex AI through the Edge Function, and displays the prediction result. The full round-trip completes within 2-3 seconds for an online prediction endpoint that is already deployed and warm.

Common use cases

Serve a custom AutoML model trained on your business data

You trained a Google AutoML model on your product catalog, customer data, or support tickets using Vertex AI's AutoML feature. The model is deployed to a Vertex AI online prediction endpoint. Your Lovable app calls the Edge Function with new data instances, the function authenticates with Google Cloud, calls the endpoint, and returns predictions — category labels, scores, or regression values — to display in the UI.

Lovable Prompt

Create a Supabase Edge Function called 'vertex-predict' that authenticates with Google Cloud using a service account stored in GOOGLE_SERVICE_ACCOUNT_JSON, calls a Vertex AI online prediction endpoint, and returns predictions. The endpoint URL should come from a VERTEX_ENDPOINT_URL secret. Accept a POST request with an 'instances' array matching the AutoML model's input schema. Build a form where users enter product details and see the predicted category returned below the form.

Copy this prompt to try it in Lovable

Generate content with Gemini models via Vertex AI

Access Google's Gemini large language models through Vertex AI's generative AI endpoints. Unlike the Gemini API (which uses a simple API key), Vertex AI's Gemini endpoint uses service account authentication and gives access to Gemini Pro and other Gemini model versions with higher rate limits and data residency guarantees. Use this for regulated industries where data must stay in a specific Google Cloud region.
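A hedged sketch of such a call, reusing the auth helper from step 2 (the model ID 'gemini-pro' and the generateContent request shape are typical for Vertex AI's Gemini REST API; confirm the model version enabled for your project):

import { getGoogleAccessToken } from '../_shared/google-auth.ts';

async function generateWithGemini(prompt: string): Promise<string | undefined> {
  const projectId = Deno.env.get('VERTEX_PROJECT_ID')!;
  const location = Deno.env.get('VERTEX_LOCATION')!; // e.g., us-central1
  const token = await getGoogleAccessToken('https://www.googleapis.com/auth/cloud-platform');

  const url = `https://${location}-aiplatform.googleapis.com/v1/projects/${projectId}` +
    `/locations/${location}/publishers/google/models/gemini-pro:generateContent`;

  const res = await fetch(url, {
    method: 'POST',
    headers: { Authorization: `Bearer ${token}`, 'Content-Type': 'application/json' },
    body: JSON.stringify({ contents: [{ role: 'user', parts: [{ text: prompt }] }] }),
  });
  const data = await res.json();
  return data.candidates?.[0]?.content?.parts?.[0]?.text;
}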

Lovable Prompt

Create a Supabase Edge Function that calls the Vertex AI Gemini Pro endpoint using service account authentication. The function should accept a 'prompt' string, call the Vertex AI generateContent endpoint for Gemini Pro in the us-central1 region, and return the generated text. Build a text generation UI where users enter a business prompt and see the Gemini-generated response.

Copy this prompt to try it in Lovable

Run batch predictions on uploaded datasets

Users upload a CSV file of data instances through your Lovable app. The Edge Function stores it in Supabase Storage, triggers a Vertex AI batch prediction job against a deployed model, polls for completion, and stores results back in the database for the user to view. Batch predictions are ideal for large datasets where real-time latency is not required and per-row prediction would be too expensive.

Lovable Prompt

Build a batch prediction feature: users upload a CSV file, the app stores it in Supabase Storage, then calls a Supabase Edge Function that creates a Vertex AI batch prediction job pointing at the uploaded file and a deployed model endpoint. Store the job ID in the database. Build a Jobs page showing prediction job status with a refresh button, and display a download link when the batch prediction CSV result is ready.

Copy this prompt to try it in Lovable

Troubleshooting

Edge Function returns 'Token exchange failed' with 'invalid_grant' or 'Invalid JWT Signature'

Cause: The service account JSON key pasted into Cloud → Secrets is malformed, truncated, or contains extra characters. JSON keys pasted into secret fields sometimes lose newlines in the private_key field, which breaks the PEM format.

Solution: Re-download the service account JSON key from Google Cloud Console → IAM & Admin → Service Accounts → Keys. When pasting into the Secrets panel, make sure the entire JSON object is pasted including opening and closing braces. If the private_key field contains literal \n escape sequences instead of actual newlines, the PEM parser will fail. Verify the stored JSON is valid by checking its format in a JSON validator before saving.
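If re-pasting is not possible, a defensive normalization in the auth helper can recover a key whose newlines were flattened (this assumes the only damage is literal \n sequences):

// Convert literal "\n" two-character sequences back into real newlines
// before the PEM body is parsed.
const sa = JSON.parse(Deno.env.get('GOOGLE_SERVICE_ACCOUNT_JSON')!);
if (sa.private_key?.includes('\\n')) {
  sa.private_key = sa.private_key.replace(/\\n/g, '\n');
}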

Vertex AI returns 403 Permission Denied or 'PERMISSION_DENIED: Permission denied on resource'

Cause: The service account does not have the required IAM role on the Google Cloud project, or the Vertex AI API is not enabled for the project.

Solution: In Google Cloud Console, go to IAM & Admin → IAM. Find your service account and verify it has the 'Vertex AI User' role (roles/aiplatform.user). If the role is missing, click the pencil edit icon and add it. Also verify that the Vertex AI API is enabled at APIs & Services → Enabled APIs. Search for 'Vertex AI API' and enable it if not already active.

Edge Function returns 'Vertex AI returned 404' and logs show 'Endpoint not found'

Cause: The VERTEX_ENDPOINT_ID, VERTEX_PROJECT_ID, or VERTEX_LOCATION secrets contain the wrong values, or the endpoint has been deleted or is not deployed in the active state.

Solution: Open Google Cloud Console → Vertex AI → Online predictions → Endpoints. Find your endpoint and verify the numeric endpoint ID in the URL matches VERTEX_ENDPOINT_ID. Also verify the region matches VERTEX_LOCATION (format: us-central1, europe-west1, etc.). Check that the endpoint status is 'Deployed' — if it shows 'Undeployed' or 'Creating', wait for deployment to complete before testing.
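To verify the stored values programmatically, a one-off diagnostic sketch that lists the project's endpoints through the same auth helper and prints their full resource names (the numeric endpoint ID is the last path segment of each name):

import { getGoogleAccessToken } from '../_shared/google-auth.ts';

const projectId = Deno.env.get('VERTEX_PROJECT_ID')!;
const location = Deno.env.get('VERTEX_LOCATION')!;
const token = await getGoogleAccessToken('https://www.googleapis.com/auth/cloud-platform');

const res = await fetch(
  `https://${location}-aiplatform.googleapis.com/v1/projects/${projectId}/locations/${location}/endpoints`,
  { headers: { Authorization: `Bearer ${token}` } },
);
const { endpoints } = await res.json();
console.log(endpoints?.map((e: { name: string; displayName: string }) => `${e.displayName}: ${e.name}`));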

Predictions return unexpected results or very low confidence scores for all classes

Cause: The instance data sent to the prediction endpoint is not in the format the model expects — wrong field names, missing features, or unscaled numerical values that differ from the training data distribution.

Solution: Open the Vertex AI console, navigate to your endpoint, and use the 'Test your model' panel to send a manually crafted test instance and confirm the expected response shape. Compare the instance format that returns correct predictions with what your Edge Function is sending. Common issues: tabular models expect feature names matching training column names exactly (case-sensitive), and numerical features must be scaled the same way as in training if preprocessing was not included in the model pipeline.

Best practices

  • Cache the Google OAuth2 access token in a module-level variable and only refresh it when it has less than 5 minutes remaining — token exchange adds latency and hitting the token endpoint on every request is unnecessary since tokens are valid for 1 hour.
  • Store the service account JSON key in Cloud → Secrets as a single secret containing the full JSON string — avoid splitting it into individual fields for client_email and private_key, as reassembling them adds complexity and error surface area.
  • Use the principle of least privilege for service accounts — grant only the 'Vertex AI User' role (roles/aiplatform.user) rather than project Editor or Owner roles, limiting the blast radius if the key is ever compromised.
  • Add request validation in the Edge Function to verify the instances array structure before calling Vertex AI — invalid instance shapes return cryptic error messages from the prediction API that are hard to debug without server-side logging.
  • Monitor prediction latency and error rates in Cloud → Logs — Vertex AI online prediction endpoints have per-endpoint quota limits (queries per second) and will return 429 errors if exceeded, which should be handled with exponential backoff retry logic (see the sketch after this list).
  • Use separate Vertex AI endpoints for development and production — link different VERTEX_ENDPOINT_ID secrets in your development and production Lovable projects to prevent test traffic from affecting production model metrics.
  • Review Vertex AI prediction costs before deploying user-facing features — online prediction endpoints charge per node-hour for the deployed endpoint plus per prediction request. Set billing alerts in Google Cloud Console to avoid unexpected charges.
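A minimal sketch of that backoff wrapper around the Vertex AI fetch call (retry count and delays are illustrative, not prescribed values):

// Retry the request on 429 with exponentially increasing delays
async function fetchWithBackoff(url: string, init: RequestInit, maxRetries = 3): Promise<Response> {
  for (let attempt = 0; ; attempt++) {
    const res = await fetch(url, init);
    if (res.status !== 429 || attempt >= maxRetries) return res;
    const delayMs = 500 * 2 ** attempt; // 500ms, 1s, 2s
    await new Promise((resolve) => setTimeout(resolve, delayMs));
  }
}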


Frequently asked questions

What is the difference between Google Vertex AI and calling the Gemini API directly?

The Gemini API (ai.google.dev) uses a simple API key and is the quickest way to access Gemini models. Vertex AI uses service account OAuth2 authentication and provides access to the same Gemini models plus enterprise features: data residency guarantees (your data stays in a specific Google Cloud region), higher rate limits, enterprise SLAs, and integration with the full Vertex AI platform for model training and MLOps. Use the Gemini API for rapid development and the Vertex AI endpoint for production deployments in regulated industries.

How much does a Vertex AI online prediction endpoint cost?

Vertex AI online prediction costs have two components: the endpoint node-hour cost (you pay per hour the endpoint is deployed, even with zero traffic) and the prediction request cost. For standard CPU nodes, node-hours cost approximately $0.05-0.18/hour depending on machine type. A minimum n1-standard-2 node running 24/7 costs roughly $75-130/month before any prediction traffic. Dedicated endpoints suitable for production typically run $150-400/month for always-on serving. Check the Vertex AI pricing page for current rates as they change frequently.

Can I use Vertex AI's AutoML without training my own model?

Vertex AI AutoML trains models automatically from labeled data you provide — you supply a dataset of examples with labels, AutoML selects the algorithm and hyperparameters, and produces a trained model you can deploy to an endpoint. You do not write any training code, but you do need labeled training data. For tabular data, 1,000+ labeled rows are recommended. For image classification, 100+ labeled images per class is the minimum. AutoML is the Google Cloud equivalent of H2O.ai's automated ML feature.

Why does the integration use service account JSON instead of an API key like other services?

Google Cloud uses OAuth2 service accounts for API authentication rather than simple API keys for security and auditability reasons. The service account JSON contains a cryptographic private key used to sign JWT tokens. This approach means credentials cannot be used by brute force (unlike API keys) and every API call is auditable to a specific service account identity in Google Cloud's audit logs. The tradeoff is more complex authentication code — the Edge Function must implement the JWT signing and token exchange flow rather than just adding a header with an API key value.
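To make the contrast concrete, a sketch of the two header styles (GEMINI_API_KEY is a hypothetical secret name; the Bearer path is what this integration implements):

import { getGoogleAccessToken } from '../_shared/google-auth.ts';

// Simple API key (Gemini API): one static header, no token exchange
const keyHeaders = { 'x-goog-api-key': Deno.env.get('GEMINI_API_KEY')! };

// OAuth2 service account (Vertex AI): a freshly signed, short-lived token
const bearerHeaders = {
  Authorization: `Bearer ${await getGoogleAccessToken('https://www.googleapis.com/auth/cloud-platform')}`,
};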

Can I use this integration to access other Google Cloud APIs, not just Vertex AI?

Yes. The Google OAuth2 service account authentication helper in the _shared/google-auth.ts module works for any Google Cloud API. Change the scope parameter to the appropriate API scope — for example, 'https://www.googleapis.com/auth/bigquery.readonly' for BigQuery read access, or 'https://www.googleapis.com/auth/cloud-vision' for the Vision API. Grant the corresponding IAM role to the service account and change the API endpoint URL. The token exchange pattern is identical across all Google Cloud REST APIs.
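For example, a hedged sketch of reusing the helper against BigQuery's REST API (the dataset-list endpoint shown is BigQuery's standard v2 path; the corresponding IAM role still needs to be granted separately):

import { getGoogleAccessToken } from '../_shared/google-auth.ts';

const projectId = Deno.env.get('VERTEX_PROJECT_ID')!;
// Note the BigQuery-specific scope instead of cloud-platform
const token = await getGoogleAccessToken('https://www.googleapis.com/auth/bigquery.readonly');

const res = await fetch(
  `https://bigquery.googleapis.com/bigquery/v2/projects/${projectId}/datasets`,
  { headers: { Authorization: `Bearer ${token}` } },
);
console.log(await res.json());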
