To integrate Replit with Backblaze B2, install the AWS SDK v3 (B2 is S3-compatible), store your B2 application key ID and secret in Replit Secrets (lock icon), and point the S3Client at the B2 endpoint. Backblaze B2 costs roughly a quarter of AWS S3 ($0.006/GB vs $0.023/GB), and egress is free when paired with the Cloudflare CDN. Use an Autoscale deployment for stateless file operations.
Budget Cloud Storage for Replit: Backblaze B2
Backblaze B2 offers the same core capabilities as AWS S3 (object storage, presigned URLs, an S3-compatible API, global availability) at a fraction of the cost. At $0.006/GB per month for storage and $0.01/GB for egress (or $0 with Cloudflare), it is the go-to choice for Replit developers who need reliable file storage without the AWS bill. B2 is particularly attractive for media-heavy applications: image hosting, video storage, backup archives, and user-generated content platforms where storage volume drives most of the cost.
The S3-compatible API means you do not need to learn a new SDK. The AWS SDK v3 for JavaScript and boto3 for Python both work with B2 by simply overriding the endpoint URL. Your existing S3 code migrates to B2 by changing three configuration values: endpoint, key ID, and application key. This makes B2 an easy cost-optimization step once you have already built S3 integration.
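As an illustrative sketch of that migration (using the B2_* secret names set up later in this guide), here is the same client options object targeting AWS S3 and then B2. Only the endpoint, region string, and credential pair change, plus B2's required path-style flag:

```javascript
// Sketch: client options for AWS S3 vs. Backblaze B2.
// Env var names (B2_*) match the Replit Secrets configured in this guide.
const awsConfig = {
  region: 'us-east-1',
  credentials: {
    accessKeyId: process.env.AWS_ACCESS_KEY_ID,
    secretAccessKey: process.env.AWS_SECRET_ACCESS_KEY
  }
};

const b2Config = {
  endpoint: process.env.B2_ENDPOINT,  // added: B2 regional endpoint
  region: process.env.B2_REGION,      // changed: B2 region string (e.g., us-west-001)
  credentials: {
    accessKeyId: process.env.B2_KEY_ID,              // changed: B2 keyID
    secretAccessKey: process.env.B2_APPLICATION_KEY  // changed: B2 applicationKey
  },
  forcePathStyle: true                // added: required by B2's S3-compatible API
};
```

Either object can be passed straight to `new S3Client(...)`; the rest of your upload and presign code stays the same.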
For zero-egress delivery, Backblaze has a bandwidth alliance with Cloudflare. If your B2 bucket is in the us-west-001 or eu-central-003 region and you proxy requests through Cloudflare, egress from B2 to Cloudflare is completely free. This makes B2+Cloudflare one of the cheapest ways to serve files globally from a Replit-powered backend.
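As a rough sketch of what the URL mapping can look like with a Cloudflare-proxied custom domain in front of the bucket (cdn.example.com, the bucket name, and the /file/{bucket}/{key} download path here are illustrative assumptions, not values from this guide):

```javascript
// Hypothetical: map an object key to a Cloudflare-proxied download URL.
// CDN_HOST and BUCKET_NAME are placeholder assumptions for this sketch.
const CDN_HOST = 'cdn.example.com';
const BUCKET_NAME = 'myapp-uploads-2026';

function cdnUrl(key) {
  // Encode each path segment, but keep the '/' separators intact
  const path = key.split('/').map(encodeURIComponent).join('/');
  return `https://${CDN_HOST}/file/${BUCKET_NAME}/${path}`;
}

console.log(cdnUrl('uploads/team photo.png'));
// https://cdn.example.com/file/myapp-uploads-2026/uploads/team%20photo.png
```

Serving this URL instead of the raw B2 endpoint URL is what makes egress free under the bandwidth alliance.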
Integration method
Backblaze B2 implements the S3-compatible API, so you can use the AWS SDK v3 for Node.js or boto3 for Python to interact with it; just override the endpoint URL to point at Backblaze's regional endpoint. Your Replit server stores B2 application credentials in Replit Secrets, creates an S3Client configured for B2, and performs standard operations like PutObject, GetObject, and presigned URL generation. Because the API surface is the same as S3, migration between AWS S3 and B2 is a matter of changing endpoint and credentials.
Prerequisites
- A Replit account with a Node.js or Python Repl ready
- A Backblaze account (free tier includes 10GB storage)
- A Backblaze B2 bucket created in your chosen region
- A B2 Application Key with read/write access to your bucket (created in B2 console)
Step-by-step guide
Create a B2 Bucket and Application Key
Log into your Backblaze account at backblaze.com and navigate to the B2 Cloud Storage section. Click 'Create a Bucket'. Choose a globally unique bucket name (e.g., myapp-uploads-2026), select 'Private' for the file access setting unless you specifically want public files, and choose your preferred region: us-west-001 for North America, eu-central-003 for Europe. Leave the other settings at defaults and click 'Create a Bucket'.

Next, create an Application Key specifically for your Replit app. Click 'App Keys' in the left sidebar, then 'Add a New Application Key'. Give it a descriptive name like replit-myapp. Under 'Allow access to Bucket(s)', select only your specific bucket; this limits the key to that one bucket. Under 'Type of Access', select 'Read and Write'. Set an optional prefix if you want to restrict the key to a specific folder within the bucket. Click 'Create New Key'. Backblaze shows you the keyID and applicationKey once. The applicationKey is the equivalent of an AWS secret access key and CANNOT be retrieved again after you close the dialog. Copy both values immediately and store them somewhere secure before proceeding.

Also note your bucket's endpoint hostname. It follows the format s3.{region}.backblazeb2.com: for example, s3.us-west-001.backblazeb2.com or s3.eu-central-003.backblazeb2.com. You will need this endpoint URL when configuring the S3 client in your Replit app.
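Since the endpoint hostname is derived mechanically from the region string, a small helper (assuming the s3.{region}.backblazeb2.com format described above) avoids typos:

```javascript
// Derive the S3-compatible endpoint URL from a B2 region identifier,
// assuming the s3.{region}.backblazeb2.com format described above.
function b2Endpoint(region) {
  return `https://s3.${region}.backblazeb2.com`;
}

console.log(b2Endpoint('us-west-001'));     // https://s3.us-west-001.backblazeb2.com
console.log(b2Endpoint('eu-central-003'));  // https://s3.eu-central-003.backblazeb2.com
```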
Pro tip: Create a separate Application Key for each environment (development, staging, production). If a key is compromised, you can delete it without affecting other environments. Restrict each key to read-only or write-only access if your app only performs one type of operation.
Expected result: A B2 bucket exists in your chosen region. An Application Key with read/write access to that bucket has been created. You have copied the keyID, applicationKey, bucket name, and regional endpoint.
Store B2 Credentials in Replit Secrets
Backblaze B2 credentials must never appear in your source code. Click the lock icon in the left Replit sidebar to open the Secrets pane, then add the following secrets:
- B2_KEY_ID: your Application Key's keyID (not the master account key; use the app key created in Step 1)
- B2_APPLICATION_KEY: your Application Key's applicationKey string
- B2_BUCKET_NAME: your bucket name (e.g., myapp-uploads-2026)
- B2_ENDPOINT: your bucket's S3-compatible endpoint (e.g., https://s3.us-west-001.backblazeb2.com)
- B2_REGION: the region identifier matching your endpoint (e.g., us-west-001)
The B2 S3-compatible API uses keyID as the AWS Access Key ID equivalent and applicationKey as the AWS Secret Access Key equivalent. You will pass these explicitly to the S3Client constructor (unlike AWS, where the SDK has special environment variable handling for AWS_ACCESS_KEY_ID). Replit's Secret Scanner monitors your code for accidental credential exposure: if you type a credential into a code file, Replit will warn you. The Secrets pane is the only appropriate place for these values.
```javascript
// Verify all B2 secrets are present at startup
const required = ['B2_KEY_ID', 'B2_APPLICATION_KEY', 'B2_BUCKET_NAME', 'B2_ENDPOINT', 'B2_REGION'];
for (const key of required) {
  if (!process.env[key]) {
    throw new Error(`Missing required secret: ${key}. Add it in Replit Secrets (lock icon).`);
  }
}
console.log('B2 secrets verified.');
console.log('Endpoint:', process.env.B2_ENDPOINT);
console.log('Bucket:', process.env.B2_BUCKET_NAME);
```

Pro tip: Unlike the AWS SDK, which auto-reads AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY, the B2 S3-compatible client requires you to pass credentials explicitly as accessKeyId and secretAccessKey in the credentials object. The naming is intentional: B2 maps its keyID to AWS's accessKeyId slot.
Expected result: All five B2 secrets appear in the Replit Secrets pane. The startup check script prints the endpoint and bucket name without throwing any errors.
Upload and Download Files Using the AWS SDK v3 (Node.js)
Install the required packages in the Shell tab: npm install @aws-sdk/client-s3 @aws-sdk/s3-request-presigner. The AWS SDK v3 works with B2 by overriding the endpoint URL in the S3Client configuration. You pass your B2 keyID and applicationKey as the credentials, set the region to your B2 region string, and set the endpoint to your B2 S3-compatible URL.

One important difference from standard AWS S3: B2 requires forcePathStyle to be set to true in the client configuration. Without this, the SDK will attempt to use virtual-hosted-style URLs (bucket.s3.amazonaws.com), which do not work with B2. Always include forcePathStyle: true when targeting B2.

All standard S3 operations work: PutObjectCommand for uploads, GetObjectCommand for downloads, DeleteObjectCommand for deletes, ListObjectsV2Command for listing bucket contents. Presigned URLs work identically to AWS S3: use getSignedUrl from @aws-sdk/s3-request-presigner to generate temporary upload or download URLs.

For production applications, generate presigned PUT URLs server-side and return them to the browser so large file uploads go directly to B2 without passing through your Replit server. This is the recommended pattern for user file uploads regardless of whether you are using AWS S3 or B2.
```javascript
// b2.js - Backblaze B2 via S3-compatible API, AWS SDK v3, Node.js on Replit
const { S3Client, PutObjectCommand, GetObjectCommand, DeleteObjectCommand, ListObjectsV2Command } = require('@aws-sdk/client-s3');
const { getSignedUrl } = require('@aws-sdk/s3-request-presigner');
const express = require('express');

// B2 S3-compatible client configuration
const b2 = new S3Client({
  endpoint: process.env.B2_ENDPOINT,  // e.g., https://s3.us-west-001.backblazeb2.com
  region: process.env.B2_REGION,      // e.g., us-west-001
  credentials: {
    accessKeyId: process.env.B2_KEY_ID,              // B2 Application Key ID
    secretAccessKey: process.env.B2_APPLICATION_KEY  // B2 Application Key
  },
  forcePathStyle: true // REQUIRED for B2 S3-compatible API
});

const BUCKET = process.env.B2_BUCKET_NAME;
const app = express();
app.use(express.json());

// Upload file buffer to B2
async function uploadToB2(key, buffer, contentType) {
  await b2.send(new PutObjectCommand({
    Bucket: BUCKET,
    Key: key,
    Body: buffer,
    ContentType: contentType
  }));
  // Return the direct B2 file URL (use CDN URL in production)
  return `${process.env.B2_ENDPOINT}/${BUCKET}/${key}`;
}

// Generate a presigned PUT URL for direct browser-to-B2 uploads
async function getPresignedUploadUrl(key, contentType, expiresIn = 300) {
  const command = new PutObjectCommand({
    Bucket: BUCKET,
    Key: key,
    ContentType: contentType
  });
  return getSignedUrl(b2, command, { expiresIn });
}

// Generate a presigned GET URL for private file access
async function getPresignedDownloadUrl(key, expiresIn = 3600) {
  const command = new GetObjectCommand({ Bucket: BUCKET, Key: key });
  return getSignedUrl(b2, command, { expiresIn });
}

// API: request a presigned upload URL
app.post('/api/upload-url', async (req, res) => {
  const { filename, contentType } = req.body;
  const allowed = ['image/jpeg', 'image/png', 'image/gif', 'application/pdf', 'video/mp4'];
  if (!allowed.includes(contentType)) {
    return res.status(400).json({ error: 'Content type not allowed' });
  }
  const safe = filename.replace(/[^a-zA-Z0-9._-]/g, '_');
  const key = `uploads/${Date.now()}-${safe}`;
  try {
    const uploadUrl = await getPresignedUploadUrl(key, contentType);
    res.json({ uploadUrl, key });
  } catch (err) {
    res.status(500).json({ error: err.message });
  }
});

// API: list files in the bucket
app.get('/api/files', async (req, res) => {
  try {
    const result = await b2.send(new ListObjectsV2Command({ Bucket: BUCKET, MaxKeys: 100 }));
    const files = (result.Contents || []).map(obj => ({ key: obj.Key, size: obj.Size, lastModified: obj.LastModified }));
    res.json({ files });
  } catch (err) {
    res.status(500).json({ error: err.message });
  }
});

app.listen(3000, '0.0.0.0', () => console.log('B2 server running on port 3000'));
```

Pro tip: Set forcePathStyle: true in the S3Client config. Without it, the SDK generates bucket.endpoint URLs instead of endpoint/bucket paths, which causes B2 to return a 'NoSuchBucket' error even when the bucket exists.
Expected result: POST /api/upload-url returns a presigned B2 upload URL. Files uploaded to that URL appear in your B2 bucket in the Backblaze console. GET /api/files returns the list of objects in the bucket.
Python Integration Using boto3
For Python Replit projects, install boto3 in the Shell tab: pip install boto3. The forcePathStyle concept carries over to boto3: its default addressing generally works against B2's custom endpoint, and if you ever see bucket-not-found errors you can force path style explicitly via Config(s3={'addressing_style': 'path'}). Pass your B2 credentials explicitly, since boto3 will not find them in the standard AWS_* environment variables (unless you name them identically, which is not recommended as it creates confusion).

Create a boto3 client with the endpoint_url set to your B2 S3-compatible endpoint, pass credentials from environment variables, and set the region to your B2 region. From there, all standard S3 operations work identically: put_object, get_object, delete_object, generate_presigned_url. For Flask apps, create the boto3 client once at module level; boto3 clients are thread-safe, and reusing a single instance is more efficient than creating one per request.

The generate_presigned_url method works the same way as with AWS S3: specify the operation ('put_object' or 'get_object'), pass the Bucket and Key params, and set ExpiresIn in seconds. For Replit deployments serving media content, use Reserved VM if you have a background process that continuously processes or transforms files. For typical upload/download API endpoints, Autoscale deployment is more cost-efficient.
```python
# b2_utils.py - Backblaze B2 via S3-compatible API, boto3, Python on Replit
import os
import time

import boto3
from botocore.config import Config
from flask import Flask, request, jsonify

# Configure boto3 for B2's S3-compatible endpoint
b2_client = boto3.client(
    's3',
    endpoint_url=os.environ['B2_ENDPOINT'],
    aws_access_key_id=os.environ['B2_KEY_ID'],
    aws_secret_access_key=os.environ['B2_APPLICATION_KEY'],
    region_name=os.environ['B2_REGION'],
    config=Config(signature_version='s3v4')
)

BUCKET = os.environ['B2_BUCKET_NAME']
app = Flask(__name__)

def upload_file(key: str, data: bytes, content_type: str) -> str:
    """Upload bytes to B2 and return the file URL."""
    b2_client.put_object(
        Bucket=BUCKET,
        Key=key,
        Body=data,
        ContentType=content_type
    )
    return f"{os.environ['B2_ENDPOINT']}/{BUCKET}/{key}"

def get_upload_url(key: str, content_type: str, expires: int = 300) -> str:
    """Generate a presigned PUT URL for direct browser uploads."""
    return b2_client.generate_presigned_url(
        'put_object',
        Params={'Bucket': BUCKET, 'Key': key, 'ContentType': content_type},
        ExpiresIn=expires
    )

def get_download_url(key: str, expires: int = 3600) -> str:
    """Generate a presigned GET URL for private file access."""
    return b2_client.generate_presigned_url(
        'get_object',
        Params={'Bucket': BUCKET, 'Key': key},
        ExpiresIn=expires
    )

@app.route('/api/upload-url', methods=['POST'])
def request_upload_url():
    data = request.get_json()
    content_type = data.get('contentType', '')
    filename = data.get('filename', 'file')
    allowed = {'image/jpeg', 'image/png', 'image/gif', 'application/pdf', 'video/mp4'}
    if content_type not in allowed:
        return jsonify({'error': 'Content type not allowed'}), 400
    safe = ''.join(c if c.isalnum() or c in '._-' else '_' for c in filename)
    key = f'uploads/{int(time.time())}-{safe}'
    url = get_upload_url(key, content_type)
    return jsonify({'uploadUrl': url, 'key': key})

if __name__ == '__main__':
    app.run(host='0.0.0.0', port=3000)
```

Pro tip: Always use Config(signature_version='s3v4') when creating the boto3 client for B2. B2 requires Signature Version 4 signing; without this, boto3 may default to an older signing method that B2 rejects.
Expected result: The Flask app starts and responds to POST /api/upload-url with a presigned B2 upload URL. Uploads using that URL appear in your B2 bucket within seconds.
Common use cases
User Media Upload and Hosting
Accept photo, video, or document uploads from users via your Replit server and store them in B2 at a fraction of S3 pricing. Generate presigned upload URLs server-side so files upload directly from the browser to B2, then serve them via Cloudflare CDN for fast, free delivery.
Build a file upload API on Express that generates B2 presigned PUT URLs, returns the CDN-proxied file URL, and stores file metadata in a database.
Application Backup Storage
Automatically back up database exports, generated reports, or application state to Backblaze B2 on a schedule. B2's low storage costs and durable storage make it ideal for keeping 30-day rolling backups without worrying about storage bills.
Create a Node.js script that dumps a PostgreSQL database to a compressed file and uploads it to B2 with a timestamped filename, retaining only the last 30 backups.
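The retention half of that script boils down to a small piece of pure logic. A sketch, assuming timestamped keys like backups/2026-01-31T02-00-00.sql.gz so that lexicographic order matches chronological order (the dump and upload steps would use pg_dump plus the PutObjectCommand pattern shown earlier):

```javascript
// Given all backup keys in the bucket, return those beyond the newest `keep`.
// Assumes ISO-style timestamps in the key, so lexicographic sort == chronological.
function backupsToDelete(keys, keep = 30) {
  const sorted = [...keys].sort(); // oldest first
  return sorted.slice(0, Math.max(0, sorted.length - keep));
}

const keys = [
  'backups/2026-01-03T02-00-00.sql.gz',
  'backups/2026-01-01T02-00-00.sql.gz',
  'backups/2026-01-02T02-00-00.sql.gz'
];
console.log(backupsToDelete(keys, 2));
// [ 'backups/2026-01-01T02-00-00.sql.gz' ]
```

In the full script you would list keys with ListObjectsV2Command, feed them through this helper, and send a DeleteObjectCommand for each returned key.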
Static Asset Storage with CDN
Store your app's static assets (product images, downloadable templates, audio files) in Backblaze B2 and deliver them via Cloudflare CDN for global performance. With Cloudflare's bandwidth alliance, you pay nothing for egress.
Build an asset management endpoint that uploads files to B2, returns both the raw B2 URL and the Cloudflare-proxied CDN URL, and supports listing and deleting assets by prefix.
Troubleshooting
InvalidAccessKeyId or AuthorizationHeaderMalformed error when connecting to B2
Cause: The B2 keyID or applicationKey in Replit Secrets contains a typo or extra whitespace. Also occurs if you accidentally used the master account credentials instead of an Application Key, or if the key was created under a different B2 account.
Solution: Open Replit Secrets (lock icon), delete and re-enter B2_KEY_ID and B2_APPLICATION_KEY. Make sure you are using the Application Key ID (not the master account key ID). Application Key IDs are longer strings that start with '0' followed by many digits. Verify by logging into the B2 console and checking App Keys.
```javascript
// Test B2 credentials by listing buckets (wrapped in an async IIFE so it
// runs in a plain CommonJS script, where top-level await is unavailable)
const { ListBucketsCommand } = require('@aws-sdk/client-s3');
(async () => {
  try {
    const result = await b2.send(new ListBucketsCommand({}));
    console.log('B2 auth OK. Buckets:', result.Buckets.map(b => b.Name));
  } catch (err) {
    console.error('B2 auth failed:', err.name, err.message);
  }
})();
```

NoSuchBucket error even though the bucket exists in the B2 console
Cause: forcePathStyle is not set to true in the S3Client configuration. Without it, the AWS SDK generates virtual-hosted-style URLs like bucket-name.s3.us-west-001.backblazeb2.com instead of path-style URLs, which B2 does not support.
Solution: Add forcePathStyle: true to your S3Client constructor. This forces the SDK to use path-style URLs (endpoint/bucket/key) instead of subdomain-style URLs.
```javascript
const b2 = new S3Client({
  endpoint: process.env.B2_ENDPOINT,
  region: process.env.B2_REGION,
  credentials: {
    accessKeyId: process.env.B2_KEY_ID,
    secretAccessKey: process.env.B2_APPLICATION_KEY
  },
  forcePathStyle: true // <-- ADD THIS
});
```

Presigned URL returns 403 when browser tries to upload
Cause: The Content-Type header in the browser's PUT request does not match the ContentType that was specified when generating the presigned URL. B2 validates this as part of the request signature.
Solution: Ensure your client-side upload code sends the exact same Content-Type value that was passed to the presigned URL generation. If your frontend determines the content type from the file object, pass that value to the server's upload-url endpoint and use it when generating the presigned URL.
```javascript
// Client-side: match content-type exactly
async function uploadToB2(presignedUrl, file) {
  const response = await fetch(presignedUrl, {
    method: 'PUT',
    body: file,
    headers: { 'Content-Type': file.type } // Must match what server used
  });
  if (!response.ok) throw new Error(`Upload failed: ${response.status}`);
}
```

SignatureDoesNotMatch error on every request
Cause: The B2 region string in B2_REGION does not match the endpoint. For example, using 'us-west-1' (AWS region) instead of 'us-west-001' (B2 region). Or the applicationKey contains extra whitespace copied from the B2 console.
Solution: Check your B2 region identifier in the Backblaze console: B2 regions use formats like us-west-001 and eu-central-003, not standard AWS region names. Update B2_REGION in Replit Secrets to match exactly. Also re-enter B2_APPLICATION_KEY to eliminate whitespace issues.
```javascript
// Log region and endpoint for debugging
console.log('B2 Region:', JSON.stringify(process.env.B2_REGION)); // Quotes expose whitespace
console.log('B2 Endpoint:', JSON.stringify(process.env.B2_ENDPOINT));
```

Best practices
- Store B2_KEY_ID, B2_APPLICATION_KEY, B2_BUCKET_NAME, B2_ENDPOINT, and B2_REGION in Replit Secrets (lock icon); never hardcode credentials
- Always set forcePathStyle: true in the S3Client configuration; B2 requires path-style URLs and will return NoSuchBucket without this flag
- Create dedicated Application Keys per environment with access restricted to a single bucket, so a compromised key can be deleted without affecting other deployments
- Use presigned PUT URLs for user file uploads so large files go directly from browser to B2 without passing through your Replit server
- Pair your B2 bucket with Cloudflare CDN to eliminate egress costs; Backblaze's bandwidth alliance with Cloudflare makes outbound transfers free
- Use signature_version='s3v4' when configuring boto3 for B2; B2 requires V4 signing and may reject requests signed with older methods
- Validate content type and sanitize filenames before generating presigned URLs to prevent arbitrary file types and path traversal attempts
- Deploy as Autoscale for typical upload/download APIs since file operations are stateless; choose Reserved VM only for continuous background file-processing jobs
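The validation point above can be sketched as a single helper mirroring the checks in the earlier examples (the allowed-type list is this guide's example list, not a B2 requirement):

```javascript
// Validate content type and sanitize a user-supplied filename before
// generating a presigned URL. Mirrors the checks in the earlier examples.
const ALLOWED_TYPES = new Set(['image/jpeg', 'image/png', 'image/gif', 'application/pdf', 'video/mp4']);

function safeKey(filename, contentType) {
  if (!ALLOWED_TYPES.has(contentType)) {
    throw new Error('Content type not allowed');
  }
  // Replace anything outside a safe character set; '/' becomes '_',
  // which defuses path traversal attempts like '../../etc/passwd'
  const safe = filename.replace(/[^a-zA-Z0-9._-]/g, '_');
  return `uploads/${Date.now()}-${safe}`;
}

console.log(safeKey('../../etc/passwd', 'image/png'));
// e.g., uploads/1767225600000-.._.._etc_passwd
```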
Alternatives
AWS S3 has a vastly larger ecosystem of tools, services, and integrations, but costs roughly 4x more per GB than Backblaze B2; choose S3 if you need tight AWS service integration.
Wasabi is also S3-compatible with no egress fees at a flat $7/TB/month, making it competitive with B2 for high-egress workloads where you are not using Cloudflare.
Dropbox is better for user-facing file sync and sharing scenarios where your users need to access files from their Dropbox account, rather than application-level object storage.
Frequently asked questions
How do I connect Replit to Backblaze B2?
Install the AWS SDK v3 (B2 is S3-compatible), store your B2 Application Key ID and applicationKey in Replit Secrets (lock icon), then create an S3Client with the B2 endpoint URL, credentials, and forcePathStyle: true. From there, all standard S3 operations work against your B2 bucket.
Does Replit work with Backblaze B2?
Yes. Backblaze B2 exposes an S3-compatible API, which means the AWS SDK v3 and boto3 both work with B2 out of the box. You just configure them to point at the B2 endpoint instead of AWS. Because the B2 credentials are stored under custom names rather than the standard AWS_* variables, the SDK will not discover them from the environment, so you pass them explicitly in the client constructor.
How do I store my Backblaze B2 API key in Replit?
Click the lock icon in the Replit sidebar to open Secrets. Add B2_KEY_ID (the Application Key ID), B2_APPLICATION_KEY (the applicationKey string), B2_BUCKET_NAME, B2_ENDPOINT, and B2_REGION as separate secrets. Never paste these values into your code files; Replit's Secret Scanner will flag them if you do.
Can I use Backblaze B2 with Replit for free?
Backblaze B2 offers 10GB of free storage and 1GB/day of free download bandwidth. This is sufficient for development and small applications. Storage beyond 10GB costs $0.006/GB/month. Downloads are free when routed through Cloudflare CDN via the bandwidth alliance.
What is the difference between Backblaze B2 and AWS S3?
Both offer S3-compatible object storage, but Backblaze B2 costs roughly 75% less per GB ($0.006 vs $0.023/GB/month). B2 has a smaller ecosystem and fewer advanced features (no Lambda triggers, no multi-region replication), but for most Replit web apps that just need reliable file storage and delivery, B2 is a more cost-effective choice.