
How to Integrate Backblaze B2 Cloud Storage with V0

To integrate Backblaze B2 with V0 by Vercel using the S3-compatible API, generate a file upload UI with V0, create a Next.js API route that handles uploads and downloads using the AWS SDK v3 configured for B2's S3-compatible endpoint, and store your B2 application key credentials in Vercel environment variables. B2 is 75-80% cheaper than AWS S3 with identical S3 API compatibility.

What you'll learn

  • How to configure the AWS SDK v3 to use Backblaze B2's S3-compatible endpoint instead of AWS
  • How to generate pre-signed upload and download URLs from a Next.js API route for secure browser-to-B2 uploads
  • How to build a file manager UI with V0 that lists, uploads, and downloads files from Backblaze B2
  • How to create and configure a Backblaze B2 bucket for public or private file access
  • How to store B2 application key credentials securely in Vercel environment variables
Intermediate · 15 min read · 30 minutes · Storage · March 2026 · RapidDev Engineering Team
S3-Compatible File Storage at 75% Lower Cost with Backblaze B2

Backblaze B2 Cloud Storage is the most widely used alternative to Amazon S3 for independent developers and startups. At $0.006/GB/month (compared to AWS S3's $0.023/GB/month), B2 offers 75-80% cost savings with S3 API compatibility — meaning you can use the AWS SDK v3 without any modifications, just pointing it to a different endpoint. For applications that store user uploads, generated files, backups, or media assets, B2 is the highest-value storage choice for developers who want reliability without AWS pricing.

B2's S3-compatible API endpoint is s3.{region}.backblazeb2.com, where region matches your bucket's region (e.g., us-west-004). This means the same Next.js API route patterns that work for AWS S3 work identically for B2: create an S3Client with the B2 endpoint, and standard S3 operations (PutObject, GetObject, DeleteObject, ListObjectsV2) and pre-signed URLs (via getSignedUrl or createPresignedPost) work without changes.

The pre-signed URL pattern is particularly important for file uploads from a browser: instead of routing the file through your Next.js server (which adds latency and counts against Vercel's function limits), your API route generates a time-limited pre-signed URL, and the browser uploads directly to B2's servers. This pattern scales to arbitrarily large files without hitting Vercel's 4.5MB request body limit.

Integration method

Next.js API Route

V0 generates the file management UI. A Next.js API route uses the AWS SDK v3 (@aws-sdk/client-s3) configured with Backblaze B2's S3-compatible endpoint — no new SDK required. Files are uploaded and downloaded via pre-signed URLs generated server-side, keeping your B2 credentials out of the browser.

Prerequisites

  • A Backblaze account — sign up free at backblaze.com; B2 has a free tier of 10GB storage and 1GB/day download
  • A Backblaze B2 bucket created in your desired region — note the bucket name and region (e.g., us-west-004)
  • A B2 application key — created from the Backblaze console under App Keys; note the key ID and application key value (shown only once on creation)
  • Node.js package manager access to install @aws-sdk/client-s3 and @aws-sdk/s3-request-presigner
  • A V0 account and Next.js project deployed to Vercel

Step-by-step guide

1

Create a Backblaze B2 Bucket and Application Key

Before writing code, set up your Backblaze B2 storage infrastructure. Log into your Backblaze account at backblaze.com and navigate to 'B2 Cloud Storage' in the left sidebar. Click 'Create a Bucket' to create your first bucket. Choose a unique bucket name (it must be globally unique across all Backblaze accounts), select the region closest to your users for best performance, and decide on public vs. private access. For most applications, select 'Private': this requires application key authentication for all access and is the secure default. For public media libraries (images and files you want publicly accessible via URL), select 'Public'.

After creating the bucket, create an Application Key with appropriate permissions. Go to 'App Keys' in the Backblaze sidebar and click 'Add a New Application Key'. Give the key a descriptive name, select your bucket (or 'All Buckets'), and set the permissions: 'Read and Write' is typical for an upload-plus-download flow; for download/list only, choose 'Read Only'. After creating the key, Backblaze shows you the Key ID and the Application Key (secret). Copy both values immediately; the Application Key is shown only once.

Also note your bucket's endpoint URL format: https://s3.{region}.backblazeb2.com (for example, https://s3.us-west-004.backblazeb2.com if your bucket is in the us-west-004 region). This S3-compatible endpoint is what you'll use in the AWS SDK configuration.
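The endpoint format described here is mechanical enough to express as a tiny helper. This is a sketch for illustration (the helper name is ours, not part of any SDK; the region codes are examples):

```typescript
// Build the S3-compatible endpoint URL for a given B2 region code.
// The region code appears in your bucket's details in the Backblaze console.
function b2Endpoint(region: string): string {
  return `https://s3.${region}.backblazeb2.com`;
}

console.log(b2Endpoint('us-west-004')); // https://s3.us-west-004.backblazeb2.com
```

The same string appears later in the S3Client configuration; computing it from a single B2_REGION variable keeps the two in sync.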

V0 Prompt

Create a file upload dashboard page. At the top, show a 'Storage Overview' card with total files and storage used. Below, add a drag-and-drop upload zone with a dashed border, cloud upload icon, and text 'Drop files here or click to browse'. Below the upload zone, show a files table with columns: Filename, Size, Date Uploaded, and Actions (Download, Delete). Show a progress indicator and file name while uploading. Use a clean, minimal design with a blue color scheme.

Paste this in V0 chat

Pro tip: When creating your B2 application key, create a dedicated key per application rather than using a master key — this allows you to revoke access for a specific app without affecting other services, and you can set per-bucket permissions to limit the key's blast radius if compromised.

Expected result: A Backblaze B2 bucket is created with appropriate access settings, and an Application Key with read/write permissions is generated and saved.

2

Install the AWS SDK and Configure for Backblaze B2

Install the AWS SDK v3 packages needed for S3-compatible operations. You need two packages: @aws-sdk/client-s3 for the main S3 client and commands, and @aws-sdk/s3-request-presigner for generating the pre-signed URLs that allow direct browser-to-B2 uploads. Run npm install @aws-sdk/client-s3 @aws-sdk/s3-request-presigner in your project. These packages total about 2-3MB and support tree-shaking, so only the commands you use are bundled in production.

Next, create a shared B2 client utility that points the AWS SDK at Backblaze's S3-compatible endpoint instead of AWS. The key differences from a standard AWS S3 configuration are: the endpoint URL points to B2's region-specific endpoint, the region is set to the B2 region code (e.g., 'us-west-004'), and the credentials use your B2 Key ID as the Access Key and your B2 Application Key as the Secret Key.

Path-style forcing (forcePathStyle: true) is important for B2 compatibility. B2's S3 API uses path-style URLs (e.g., https://s3.us-west-004.backblazeb2.com/bucket-name/file-key) rather than virtual-hosted-style URLs (https://bucket-name.s3.amazonaws.com/file-key), and setting forcePathStyle: true ensures the SDK builds requests in the format B2 expects.

lib/b2.ts
// lib/b2.ts — Shared Backblaze B2 client
import { S3Client } from '@aws-sdk/client-s3';

const B2_REGION = process.env.B2_REGION || 'us-west-004';

export const b2Client = new S3Client({
  endpoint: `https://s3.${B2_REGION}.backblazeb2.com`,
  region: B2_REGION,
  credentials: {
    accessKeyId: process.env.B2_KEY_ID!,
    secretAccessKey: process.env.B2_APPLICATION_KEY!,
  },
  forcePathStyle: true, // Required for B2 compatibility
});

export const B2_BUCKET = process.env.B2_BUCKET_NAME!;
export const B2_PUBLIC_URL = `https://${B2_BUCKET}.s3.${B2_REGION}.backblazeb2.com`;
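To make the path-style vs. virtual-hosted distinction concrete, here is a small sketch contrasting the two URL shapes (function names and the bucket/key values are illustrative; the path-style form is what forcePathStyle: true makes the SDK produce):

```typescript
// Path-style: the bucket name is part of the URL path — the format B2 expects.
function pathStyleUrl(endpoint: string, bucket: string, key: string): string {
  return `${endpoint}/${bucket}/${key}`;
}

// Virtual-hosted-style: the bucket name is a subdomain — the AWS default.
function virtualHostedUrl(region: string, bucket: string, key: string): string {
  return `https://${bucket}.s3.${region}.backblazeb2.com/${key}`;
}

console.log(pathStyleUrl('https://s3.us-west-004.backblazeb2.com', 'my-bucket', 'uploads/a.png'));
// https://s3.us-west-004.backblazeb2.com/my-bucket/uploads/a.png
```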

Pro tip: Create the b2 client module once and import it into your API routes — the S3Client initializes with credentials from environment variables, which are loaded when the serverless function starts up.

Expected result: The AWS SDK v3 is installed and a shared B2 client module is created, configured to use Backblaze B2's S3-compatible endpoint.

3

Create the Upload and Download API Routes

Build the Next.js API routes for file operations. The recommended pattern for browser uploads is pre-signed URLs: your API route generates a time-limited URL that the browser can use to send the file directly to B2, avoiding routing large files through Vercel's serverless functions (which have a 4.5MB request body limit).

The upload flow: (1) the browser sends the filename, content type, and file size to your /api/files/upload-url route; (2) the route generates a pre-signed PUT URL with a 15-minute expiry; (3) the browser uploads the file directly to B2 with a PUT request to that URL; (4) after upload, the browser calls /api/files/save to record the file metadata in your database. This pattern supports files of any size and doesn't consume Vercel serverless execution time for the actual upload.

For the download flow, generate pre-signed GET URLs for private buckets; for public buckets, construct the URL directly without signing. For listing files, use ListObjectsV2Command to return the objects in the bucket with their keys, sizes, and last-modified dates. Deletion uses DeleteObjectCommand with the file key; always verify that the requesting user owns a file before deleting it from B2.

app/api/files/upload-url/route.ts and app/api/files/route.ts
// app/api/files/upload-url/route.ts
import { NextResponse } from 'next/server';
import { PutObjectCommand } from '@aws-sdk/client-s3';
import { getSignedUrl } from '@aws-sdk/s3-request-presigner';
import { b2Client, B2_BUCKET } from '@/lib/b2';
import { randomUUID } from 'crypto';

export async function POST(request: Request) {
  const { filename, contentType, fileSize } = await request.json();

  if (!filename || !contentType) {
    return NextResponse.json({ error: 'filename and contentType required' }, { status: 400 });
  }

  // Limit file size to 100MB
  if (fileSize > 100 * 1024 * 1024) {
    return NextResponse.json({ error: 'File size exceeds 100MB limit' }, { status: 400 });
  }

  // UUID prefix prevents collisions; the regex strips unsafe characters
  const key = `uploads/${randomUUID()}-${filename.replace(/[^a-zA-Z0-9._-]/g, '_')}`;

  const command = new PutObjectCommand({
    Bucket: B2_BUCKET,
    Key: key,
    ContentType: contentType,
    ContentLength: fileSize,
  });

  const uploadUrl = await getSignedUrl(b2Client, command, { expiresIn: 900 }); // 15 minutes

  return NextResponse.json({ uploadUrl, key });
}

// app/api/files/route.ts — List and delete files
import { NextResponse } from 'next/server';
import { ListObjectsV2Command, DeleteObjectCommand } from '@aws-sdk/client-s3';
import { b2Client, B2_BUCKET } from '@/lib/b2';

export async function GET() {
  const command = new ListObjectsV2Command({
    Bucket: B2_BUCKET,
    Prefix: 'uploads/',
    MaxKeys: 100,
  });

  const data = await b2Client.send(command);

  const files = (data.Contents || []).map((obj) => ({
    key: obj.Key!,
    filename: obj.Key!.split('/').pop() || obj.Key!,
    size: obj.Size || 0,
    lastModified: obj.LastModified?.toISOString(),
  }));

  return NextResponse.json({ files });
}

export async function DELETE(request: Request) {
  const { key } = await request.json();
  // Verify the requesting user owns this file before deleting (authorization check omitted)
  await b2Client.send(new DeleteObjectCommand({ Bucket: B2_BUCKET, Key: key }));
  return NextResponse.json({ success: true });
}
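The key-generation expression in the upload route can be exercised in isolation. This sketch mirrors it using Node's crypto module (the sample filename is illustrative):

```typescript
import { randomUUID } from 'node:crypto';

// Strip anything outside [a-zA-Z0-9._-] so uploaded filenames cannot
// inject path separators or odd characters into the object key, and
// prefix a UUID so two uploads of the same name never collide.
function makeKey(filename: string): string {
  return `uploads/${randomUUID()}-${filename.replace(/[^a-zA-Z0-9._-]/g, '_')}`;
}

console.log(makeKey('report (final).pdf'));
// e.g. uploads/3f2c...-report__final_.pdf
```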

Pro tip: For public buckets, you don't need to generate pre-signed download URLs — just construct the URL directly as https://{bucket}.s3.{region}.backblazeb2.com/{key}. Pre-signed URLs are only needed for private buckets or for upload operations.

Expected result: The upload URL endpoint returns a pre-signed PUT URL for B2, the list endpoint returns files in the bucket, and the delete endpoint removes files from B2.

4

Add B2 Credentials to Vercel Environment Variables

Store your Backblaze B2 credentials in Vercel's environment variable system. You need four values: the B2 Key ID, the B2 Application Key, the bucket name, and the region. The Key ID and Application Key are server-only secrets; they must never be accessible to the browser.

In Vercel Dashboard → Settings → Environment Variables, create four variables: B2_KEY_ID (the Key ID from Backblaze), B2_APPLICATION_KEY (the application key secret shown when you created the key), B2_BUCKET_NAME (your bucket name), and B2_REGION (your bucket's region code, e.g., 'us-west-004'). All four should be server-only, with no NEXT_PUBLIC_ prefix, because they're only used in API routes. Select all three scopes (Production, Preview, Development) for each variable. For local development, add all four to .env.local.

If you lost your Application Key (Backblaze shows it only once), go to the Backblaze console → App Keys, delete the old key, and create a new one with the same permissions; then update B2_APPLICATION_KEY in every environment that uses it. After saving all variables in Vercel, trigger a redeployment for the changes to take effect.

.env.local
# .env.local
B2_KEY_ID=your_b2_key_id_here
B2_APPLICATION_KEY=your_b2_application_key_here
B2_BUCKET_NAME=your-bucket-name
B2_REGION=us-west-004
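Because all four values are required at runtime, it can help to fail fast with a clear message when one is missing, instead of getting an opaque SDK credential error later. A minimal sketch (the requireEnv helper is ours, not part of any SDK):

```typescript
// Return the value of a required environment variable,
// throwing a descriptive error at startup if it is unset.
function requireEnv(name: string): string {
  const value = process.env[name];
  if (!value) throw new Error(`Missing required environment variable: ${name}`);
  return value;
}

// Usage in lib/b2.ts:
// const B2_KEY_ID = requireEnv('B2_KEY_ID');
// const B2_APPLICATION_KEY = requireEnv('B2_APPLICATION_KEY');
```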

Pro tip: To serve B2 files through a free CDN, put Cloudflare in front of your bucket: configure a custom domain for your B2 bucket that is proxied through Cloudflare. Egress from B2 to Cloudflare is free under their Bandwidth Alliance partnership, so cached downloads cost nothing in bandwidth.

Expected result: All four B2 environment variables are in Vercel, and the file list API returns files from your B2 bucket after redeployment.

5

Connect the File Upload UI to the API Routes

Update the V0-generated file manager component to perform actual uploads and file listing using the two-step pre-signed URL flow. This requires client-side JavaScript to handle file selection, call your upload-url API, then PUT the file directly to B2.

In the React component, add a file input with an onChange handler. When files are selected, iterate through them and, for each one: (1) call POST /api/files/upload-url with the filename, content type, and size, which returns a pre-signed URL and file key; (2) PUT the file directly to the returned URL with the Content-Type header set; (3) track upload progress using XMLHttpRequest (which supports progress events) rather than fetch. After all uploads complete, refresh the file list by calling GET /api/files and update the table state with the new list.

For downloads, call GET /api/files/download-url?key={fileKey} to get a pre-signed download URL, then open it in a new tab or trigger a browser download. For a polished user experience, show individual per-file progress bars, upload multiple files in parallel (Promise.all), and display file type icons based on MIME type or file extension.

V0 Prompt

Update the file manager to perform real uploads. When files are dropped or selected, show each file in a 'Pending Uploads' section with a progress bar. For each file, call /api/files/upload-url to get a signed URL, then upload directly to that URL tracking progress. After upload, refresh the file list from /api/files. For the download button, call /api/files/download-url?key={key} and open the returned URL. For the delete button, call DELETE /api/files with the file key.

Paste this in V0 chat

components/FileUploader.tsx
'use client';
import { useState, useCallback } from 'react';

export function FileUploader({ onUploadComplete }: { onUploadComplete: () => void }) {
  const [uploading, setUploading] = useState<Record<string, number>>({});

  const uploadFile = useCallback(async (file: File) => {
    // Step 1: Get pre-signed upload URL
    const urlRes = await fetch('/api/files/upload-url', {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({
        filename: file.name,
        contentType: file.type || 'application/octet-stream',
        fileSize: file.size,
      }),
    });
    const { uploadUrl, key } = await urlRes.json();

    // Step 2: Upload directly to B2 using XHR for progress tracking
    await new Promise<void>((resolve, reject) => {
      const xhr = new XMLHttpRequest();
      xhr.upload.onprogress = (e) => {
        if (e.lengthComputable) {
          setUploading((prev) => ({
            ...prev,
            [file.name]: Math.round((e.loaded / e.total) * 100),
          }));
        }
      };
      xhr.onload = () => (xhr.status === 200 ? resolve() : reject(new Error('Upload failed')));
      xhr.onerror = () => reject(new Error('Network error'));
      xhr.open('PUT', uploadUrl);
      xhr.setRequestHeader('Content-Type', file.type || 'application/octet-stream');
      xhr.send(file);
    });

    setUploading((prev) => { const next = { ...prev }; delete next[file.name]; return next; });
    onUploadComplete();
  }, [onUploadComplete]);

  const handleFileChange = (e: React.ChangeEvent<HTMLInputElement>) => {
    const files = Array.from(e.target.files || []);
    files.forEach(uploadFile);
  };

  return (
    <div className="border-2 border-dashed border-gray-300 rounded-lg p-8 text-center">
      <input type="file" multiple onChange={handleFileChange} className="hidden" id="file-input" />
      <label htmlFor="file-input" className="cursor-pointer">
        <div className="text-gray-500">Drop files here or click to browse</div>
      </label>
      {Object.entries(uploading).map(([name, progress]) => (
        <div key={name} className="mt-3">
          <div className="text-sm text-left mb-1">{name}</div>
          <div className="w-full bg-gray-200 rounded-full h-2">
            <div className="bg-blue-500 h-2 rounded-full transition-all" style={{ width: `${progress}%` }} />
          </div>
        </div>
      ))}
    </div>
  );
}

Pro tip: Use XMLHttpRequest instead of fetch for the actual file upload to B2 — XHR supports upload progress events while fetch does not (unless using the newer Streams API, which has limited browser support). This gives users real-time upload progress feedback.

Expected result: Files can be uploaded via the drag-and-drop zone with a live progress bar, the file list updates after upload completes, and download and delete buttons work correctly.
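For the file type icons suggested in this step, a small extension-to-icon map is usually enough. This is a sketch; the returned icon names are placeholders for whatever icon set your V0 component uses:

```typescript
// Map a filename to a generic icon category by extension.
function fileIcon(filename: string): string {
  const ext = filename.split('.').pop()?.toLowerCase() ?? '';
  if (['png', 'jpg', 'jpeg', 'gif', 'webp', 'svg'].includes(ext)) return 'image';
  if (['mp4', 'mov', 'webm'].includes(ext)) return 'video';
  if (['pdf', 'doc', 'docx', 'txt'].includes(ext)) return 'document';
  if (['zip', 'tar', 'gz'].includes(ext)) return 'archive';
  return 'file';
}

console.log(fileIcon('photo.JPG')); // image
```

Matching on the extension rather than the MIME type works even for files listed from B2, where only the object key is available.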

Common use cases

User File Upload and Management

Allow users to upload files (documents, images, videos) to your application's private B2 bucket. The API route generates a pre-signed upload URL, the browser uploads directly to B2, and after upload a metadata record is saved to your database. Users can view, download, and delete their files from a V0-generated file manager.

V0 Prompt

Create a file manager page with a drag-and-drop upload zone at the top and a file list below. The upload zone shows a cloud icon and 'Drop files here or click to browse'. The file list shows file name, size, upload date, and a row of action buttons (Download, Copy Link, Delete). Show a progress bar while files are uploading. Fetch the file list from /api/files.

Copy this prompt to try it in V0

Image Asset Library for CMS

Build a media library for a content management workflow where editors upload images to B2 and get public CDN URLs to use in blog posts and pages. The V0-generated interface shows a filterable grid of uploaded images with thumbnails, and generates a CDN URL for each image.

V0 Prompt

Create a media library page with a grid of image thumbnails. Each card shows the image preview, filename, file size, and a 'Copy URL' button. Add an upload button in the top right that opens a file picker for images only. Add a search input to filter images by filename. Fetch from /api/media/list.

Copy this prompt to try it in V0

Backup and File Archive Service

Build an automated backup interface where files are uploaded to B2 with lifecycle policies, and the V0 dashboard shows storage usage, recent uploads, and download/restore capabilities.

V0 Prompt

Create a backup dashboard with three summary cards showing total files, total storage used, and last backup time. Below, show a table of backup files with columns for filename, size, backup date, and a download button. Add an 'Upload Backup' button that accepts .zip and .tar.gz files. Fetch storage data from /api/backups.

Copy this prompt to try it in V0

Troubleshooting

Upload to pre-signed URL fails with CORS error in the browser

Cause: Backblaze B2 buckets require CORS configuration to allow browser-side direct uploads. By default, B2 does not allow cross-origin requests from browsers.

Solution: Configure CORS on your B2 bucket: in the Backblaze console, go to Buckets → Your Bucket → CORS Rules → Add CORS Rule, and add your app's Vercel domain (e.g., https://your-app.vercel.app) as an allowed origin with PUT and GET as allowed operations. The rule JSON below can be pasted into the bucket's CORS settings.

json
[
  {
    "corsRuleName": "allowUploads",
    "allowedOrigins": ["https://your-app.vercel.app"],
    "allowedHeaders": ["*"],
    "allowedOperations": ["b2_upload_file", "s3_put", "s3_get"],
    "maxAgeSeconds": 3600
  }
]

AWS SDK throws 'UnknownEndpoint' or connection refused when configured for B2

Cause: The S3Client endpoint URL format is incorrect for B2, or forcePathStyle is not set to true.

Solution: Verify the endpoint format: it must be https://s3.{region}.backblazeb2.com (not the bucket name URL). Confirm forcePathStyle: true is set in the S3Client config. Check that B2_REGION matches exactly the region shown in your Backblaze bucket settings.

typescript
// Correct B2 endpoint format
new S3Client({
  endpoint: `https://s3.${process.env.B2_REGION}.backblazeb2.com`,
  region: process.env.B2_REGION,
  forcePathStyle: true, // Critical for B2
  credentials: {
    accessKeyId: process.env.B2_KEY_ID!,
    secretAccessKey: process.env.B2_APPLICATION_KEY!,
  },
});

ListObjectsV2 returns empty results despite files existing in the bucket

Cause: The application key may lack the listFiles capability, the Prefix parameter may not match your file structure, or there may be a typo in the bucket name.

Solution: Check that your B2 application key has 'listFiles' permission in the Backblaze console. Remove the Prefix parameter temporarily to list all objects and confirm the bucket and key work. Verify the bucket name in B2_BUCKET_NAME matches exactly (B2 bucket names are case-sensitive).

Best practices

  • Use pre-signed URLs for all browser uploads — never route files through your Next.js API route, which has a 4.5MB body limit and adds unnecessary latency
  • Set CORS rules on your B2 bucket to only allow your specific Vercel domain as an origin — avoid wildcard (*) origins in production
  • Generate unique file keys using UUID + sanitized filename to prevent collisions and directory traversal attacks
  • Implement server-side file size and MIME type validation in your upload-url API route before generating the pre-signed URL
  • Use a folder prefix structure in your B2 bucket (e.g., uploads/userId/) to organize files by user or feature, making management and lifecycle policies easier to apply
  • Set appropriate Cache-Control headers when generating pre-signed PutObject commands to control CDN caching behavior for public files
  • Combine B2 with Cloudflare as a CDN layer — Cloudflare's free plan proxies B2 downloads with no egress fees, dramatically improving performance for globally distributed users
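The server-side size and MIME validation recommended above can be a small pure function that the upload-url route calls before signing. A sketch under illustrative assumptions (the limit and allowed-type list are examples, not prescriptions):

```typescript
// Validate an upload request before generating a pre-signed URL.
// Returns an error message, or null if the request is acceptable.
function validateUpload(
  contentType: string,
  fileSize: number,
  maxBytes = 100 * 1024 * 1024, // 100MB, matching the route above
  allowed = ['image/', 'video/', 'application/pdf', 'application/zip'],
): string | null {
  if (fileSize <= 0 || fileSize > maxBytes) return 'File size out of range';
  if (!allowed.some((prefix) => contentType.startsWith(prefix))) return 'File type not allowed';
  return null;
}

console.log(validateUpload('image/png', 1024)); // null
console.log(validateUpload('application/x-msdownload', 1024)); // File type not allowed
```

Client-side checks improve the user experience, but only this server-side check actually prevents a malicious client from requesting a signed URL for a disallowed file.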

Frequently asked questions

Is the Backblaze B2 S3-compatible API actually compatible with the AWS SDK?

Yes — Backblaze B2's S3-compatible API is fully compatible with AWS SDK v3 when you configure the endpoint and forcePathStyle settings correctly. Standard operations like PutObject, GetObject, DeleteObject, ListObjectsV2, and pre-signed URLs all work identically. B2 deliberately maintains this compatibility so existing S3 codebases can migrate with minimal changes.

How much cheaper is Backblaze B2 compared to AWS S3?

B2 costs $0.006/GB/month for storage versus S3's $0.023/GB/month, roughly 75% cheaper. Download bandwidth is $0.01/GB from B2 versus $0.09/GB from AWS. For an application with 1TB of storage and comparable egress, B2 cuts the monthly bill by roughly $100; at tens of terabytes the savings run into the thousands. Egress from B2 to Cloudflare's CDN is free under the Bandwidth Alliance, while AWS charges its standard egress rates regardless of destination.
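Using the per-GB rates quoted in this article, the comparison is simple arithmetic. A sketch for a hypothetical 1TB-storage, 1TB-egress monthly workload:

```typescript
// Monthly cost = storage GB * storage rate + egress GB * egress rate.
function monthlyCost(
  storageGB: number,
  egressGB: number,
  storageRate: number,
  egressRate: number,
): number {
  return storageGB * storageRate + egressGB * egressRate;
}

const b2 = monthlyCost(1000, 1000, 0.006, 0.01); // about $16/month
const s3 = monthlyCost(1000, 1000, 0.023, 0.09); // about $113/month
console.log(b2, s3);
```

These figures exclude request fees and free-tier allowances, so treat them as order-of-magnitude estimates rather than a bill prediction.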

How do I make Backblaze B2 files publicly accessible via URL?

Set your B2 bucket to 'Public' access during creation, or change it in the bucket settings. For public buckets, the direct URL format is https://f004.backblazeb2.com/file/{bucket-name}/{file-key} or the S3-style URL https://{bucket-name}.s3.{region}.backblazeb2.com/{file-key}. For private buckets, generate pre-signed download URLs from your API route.

What is the maximum file size I can upload to Backblaze B2?

B2's S3-compatible API supports individual file uploads up to 5GB using standard PutObject. For files larger than 5GB, use multipart upload (CreateMultipartUpload, UploadPart, CompleteMultipartUpload). B2's maximum object size is 10TB. The pre-signed URL pattern works for files up to 5GB — for larger files, implement the multipart upload flow.
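A route can guard the 5GB cutoff before signing and fall back to the multipart flow for larger files. A minimal sketch (the helper is ours; the constant mirrors the PutObject limit stated above):

```typescript
const MAX_SINGLE_PUT = 5 * 1024 * 1024 * 1024; // 5GB: B2's single PutObject limit

// Decide whether an upload must use the multipart flow
// (CreateMultipartUpload / UploadPart / CompleteMultipartUpload).
function needsMultipart(fileSize: number): boolean {
  return fileSize > MAX_SINGLE_PUT;
}

console.log(needsMultipart(100 * 1024 * 1024)); // false
```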

Does Backblaze B2 work with Cloudflare for free CDN?

Yes — Backblaze and Cloudflare have a formal partnership where data transfer between B2 and Cloudflare is free (no egress fees). Configure your B2 bucket's custom domain through Cloudflare's DNS, and Cloudflare caches your B2 content globally. This combination gives you cheap storage (B2) plus free global CDN delivery (Cloudflare) — a very popular architecture for media-heavy applications.

Talk to an Expert

Our team has built 600+ apps. Get personalized help with your project.

Book a free consultation

We put the rapid in RapidDev

Need a dedicated strategic tech and growth partner? Discover what RapidDev can do for your business! Book a call with our team to schedule a free, no-obligation consultation. We'll discuss your project and provide a custom quote at no cost.