
How to Integrate Bolt.new with AWS S3


What you'll learn

  • Why AWS S3 is the right solution for Bolt's ephemeral file system problem
  • How to configure S3 bucket CORS to allow uploads from Bolt's WebContainer origins
  • How to generate pre-signed URLs server-side for secure client uploads
  • How to build a file upload component in React that uses pre-signed URLs
  • How to set environment variables for both development and Netlify/Vercel deployment
Intermediate · 14 min read · 20 minutes · Storage · April 2026 · RapidDev Engineering Team
TL;DR

To integrate AWS S3 with Bolt.new, install @aws-sdk/client-s3 (pure JavaScript, works in WebContainers) and generate pre-signed URLs through a Next.js API route or Supabase Edge Function. Store your AWS credentials in the .env file. The S3 client communicates over HTTPS, bypassing WebContainer's TCP limitation. Configure bucket CORS to allow StackBlitz origins for development.

Solving Bolt's Ephemeral Storage Problem with AWS S3

Bolt.new runs entirely inside a browser tab using StackBlitz's WebContainer technology. The in-memory file system means any file uploaded to your app — profile photos, documents, generated assets — disappears the moment the user refreshes the page. AWS S3 solves this by providing persistent, durable object storage accessible over HTTPS, which is the only external protocol WebContainers support. Unlike raw TCP-based storage systems, S3's HTTP API works seamlessly from both the WebContainer during development and your deployed server.

The @aws-sdk/client-s3 package is written in pure JavaScript without any native C++ bindings, so it runs directly inside Bolt's runtime. The recommended integration pattern uses pre-signed URLs: your API route generates a time-limited signed URL that authorizes the browser to upload directly to S3, without routing the file data through your server. This reduces latency, saves bandwidth costs, and eliminates server-side file buffering, an especially important consideration when your API routes are serverless functions with limited memory.

For developers who also need a NoSQL database alongside file storage, the @aws-sdk/client-dynamodb package uses the same HTTP-based architecture and works in WebContainers as well. You can use DynamoDB to store metadata about S3 objects (file names, user associations, upload timestamps) while S3 handles the binary data itself. Both services share the same AWS credentials, simplifying your environment variable configuration.
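If you pair S3 with DynamoDB for metadata, the record you write is just an attribute-value map. The sketch below is a hedged illustration of that shape: the field names (fileKey, userId, sizeBytes, uploadedAt) and the implied table are assumptions for this example, not part of the tutorial's required setup. The returned object is what you would pass as `Item` to a `PutItemCommand` from @aws-sdk/client-dynamodb.

```typescript
// Illustrative DynamoDB attribute-value map for S3 object metadata.
// Field names and the table design are assumptions for this sketch.
interface FileMetadataItem {
  fileKey: { S: string };
  userId: { S: string };
  fileName: { S: string };
  sizeBytes: { N: string }; // DynamoDB numbers travel as strings
  uploadedAt: { S: string };
}

function buildFileMetadataItem(
  fileKey: string,
  userId: string,
  fileName: string,
  sizeBytes: number,
  uploadedAt: Date = new Date()
): FileMetadataItem {
  return {
    fileKey: { S: fileKey },
    userId: { S: userId },
    fileName: { S: fileName },
    sizeBytes: { N: String(sizeBytes) },
    uploadedAt: { S: uploadedAt.toISOString() },
  };
}
```

In a real route you would wrap this in `new PutItemCommand({ TableName: 'file-metadata', Item: item })` and send it with the same credentials your S3 client uses.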

Integration method

Bolt Chat + API Route

The @aws-sdk/client-s3 package is pure JavaScript and communicates exclusively over HTTPS, making it fully compatible with Bolt's WebContainer runtime. You create a server-side API route that generates pre-signed S3 URLs, then the client uses those URLs to upload files directly to S3 without exposing your AWS credentials. This pattern keeps secret keys server-side while enabling direct client-to-S3 uploads at scale.

Prerequisites

  • An AWS account with an IAM user that has S3 permissions (AmazonS3FullAccess for development, scoped policies for production)
  • An S3 bucket created in your desired AWS region (bucket names are globally unique, so pick one specific to your app)
  • Your AWS Access Key ID and Secret Access Key from the IAM console
  • A Bolt.new project using Next.js (for API routes) or Vite with Supabase (for Edge Functions)
  • Basic familiarity with environment variables and API routes

Step-by-step guide

1

Create an S3 Bucket and Configure CORS

Before writing any code, you need an S3 bucket configured to accept uploads from Bolt's WebContainer URLs. Log into the AWS Console, navigate to S3, and create a new bucket. Choose a region close to your users and uncheck 'Block all public access' if you need publicly readable files (for images), or leave it checked for private documents. After creating the bucket, click on it, go to the Permissions tab, and scroll to Cross-origin resource sharing (CORS). You must add a CORS configuration that allows Bolt's WebContainer origins. StackBlitz uses URLs like `https://[hash].local.credentialless.webcontainer-api.io` during development and your deployed domain in production. The CORS rule should allow PUT and GET methods from all origins during development (`*`), then be tightened to your specific domain after deployment. Without this CORS configuration, browser-based uploads to S3 will be blocked by the browser's same-origin policy, even though the AWS credentials are valid.

s3-cors-config.json
[
  {
    "AllowedHeaders": ["*"],
    "AllowedMethods": ["GET", "PUT", "POST", "DELETE", "HEAD"],
    "AllowedOrigins": ["*"],
    "ExposeHeaders": ["ETag"],
    "MaxAgeSeconds": 3000
  }
]

Pro tip: For production, replace the '*' in AllowedOrigins with your specific deployed domain (e.g., 'https://your-app.netlify.app') to prevent unauthorized uploads from other sites.

Expected result: Your S3 bucket shows 'CORS: Configured' in the Permissions tab, and you've saved your bucket name, region, Access Key ID, and Secret Access Key for the next step.
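Since the only difference between the development and production CORS rules is the origin list, it can help to generate the configuration from one place. This is a minimal sketch; the helper name and the example domain are assumptions, and the rule shape mirrors the s3-cors-config.json above.

```typescript
// Build the S3 CORS configuration for a given stage.
// Pass ["*"] during development and your real domain(s) in production.
interface CorsRule {
  AllowedHeaders: string[];
  AllowedMethods: string[];
  AllowedOrigins: string[];
  ExposeHeaders: string[];
  MaxAgeSeconds: number;
}

function buildCorsConfig(allowedOrigins: string[]): CorsRule[] {
  return [
    {
      AllowedHeaders: ["*"],
      // PUT is required for pre-signed uploads; GET for reads.
      AllowedMethods: ["GET", "PUT", "POST", "DELETE", "HEAD"],
      AllowedOrigins: allowedOrigins,
      ExposeHeaders: ["ETag"],
      MaxAgeSeconds: 3000,
    },
  ];
}

// Development: buildCorsConfig(["*"])
// Production:  buildCorsConfig(["https://your-app.netlify.app"])
```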

2

Install the AWS SDK and Add Environment Variables

The @aws-sdk/client-s3 and @aws-sdk/s3-request-presigner packages are pure JavaScript with no native dependencies, making them fully compatible with Bolt's WebContainer. Prompt Bolt to install these packages and set up the environment variable structure. You'll store your AWS credentials in the .env file at the project root. In a Next.js project, server-side environment variables have no prefix (they're never sent to the browser), while client-safe variables use NEXT_PUBLIC_. Your AWS Secret Access Key must never have a NEXT_PUBLIC_ prefix — it must only be read by your API routes. The Access Key ID and bucket region can be exposed if needed, but it's cleaner to keep all AWS config server-side. After prompting Bolt, manually edit the .env file to replace the placeholder values with your real credentials.

Bolt.new Prompt

Install @aws-sdk/client-s3 and @aws-sdk/s3-request-presigner packages. Create a .env file with these variables: AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, AWS_REGION, and AWS_S3_BUCKET_NAME. Add a .env.example file with the same keys but empty values for documentation.

Paste this in Bolt.new chat

.env
# .env file: never commit this to git
AWS_ACCESS_KEY_ID=your_access_key_here
AWS_SECRET_ACCESS_KEY=your_secret_key_here
AWS_REGION=us-east-1
AWS_S3_BUCKET_NAME=your-bucket-name

Pro tip: Add .env to your .gitignore immediately to prevent accidentally committing AWS credentials. Bolt usually handles this automatically.

Expected result: The .env file exists in your project root with your real AWS credentials. The terminal shows no errors when running npm run dev.
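A missing variable typically surfaces later as a confusing S3 credential error, so it can be worth failing fast. This is a small hedged sketch (the helper name is an assumption); the variable names match the .env file above, and in an API route you would call it against process.env.

```typescript
// Check that all four AWS variables from the .env file are present.
const REQUIRED_AWS_VARS = [
  "AWS_ACCESS_KEY_ID",
  "AWS_SECRET_ACCESS_KEY",
  "AWS_REGION",
  "AWS_S3_BUCKET_NAME",
] as const;

function missingEnvVars(env: Record<string, string | undefined>): string[] {
  // Returns the names that are unset or empty, in declaration order.
  return REQUIRED_AWS_VARS.filter((name) => !env[name]);
}

// In a route handler:
//   const missing = missingEnvVars(process.env);
//   if (missing.length) throw new Error(`Missing env vars: ${missing.join(", ")}`);
```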

3

Create the Pre-Signed URL API Route

The core of the S3 integration is a server-side API route that generates pre-signed URLs. A pre-signed URL is a time-limited URL that grants permission to perform a specific S3 operation (like uploading a file) without exposing your AWS credentials. The client sends the file's content type and desired filename to your API route, the route generates a signed URL using your secret credentials, and returns that URL to the client. The client then uploads directly to S3 using a simple PUT request. This approach has two major advantages: your AWS Secret Access Key never reaches the browser, and file data doesn't pass through your server (reducing costs and latency). The pre-signed URL expires after a configurable duration (typically 15 minutes), preventing abuse even if a URL is intercepted.

Bolt.new Prompt

Create a Next.js API route at app/api/upload-url/route.ts that generates S3 pre-signed upload URLs. The route accepts POST requests with a JSON body containing fileName and fileType. It should: 1) create an S3 client using AWS credentials from environment variables, 2) generate a pre-signed URL for a PutObjectCommand with 15-minute expiry, 3) return the signed URL and the final S3 object URL. Include proper error handling and return 500 if credentials are missing.

Paste this in Bolt.new chat

app/api/upload-url/route.ts
import { S3Client, PutObjectCommand } from '@aws-sdk/client-s3';
import { getSignedUrl } from '@aws-sdk/s3-request-presigner';
import { NextResponse } from 'next/server';

const s3Client = new S3Client({
  region: process.env.AWS_REGION!,
  credentials: {
    accessKeyId: process.env.AWS_ACCESS_KEY_ID!,
    secretAccessKey: process.env.AWS_SECRET_ACCESS_KEY!,
  },
});

export async function POST(request: Request) {
  try {
    const { fileName, fileType } = await request.json();

    if (!fileName || !fileType) {
      return NextResponse.json(
        { error: 'fileName and fileType are required' },
        { status: 400 }
      );
    }

    // Create a unique key to avoid filename collisions
    const key = `uploads/${Date.now()}-${fileName.replace(/[^a-zA-Z0-9.-]/g, '_')}`;

    const command = new PutObjectCommand({
      Bucket: process.env.AWS_S3_BUCKET_NAME!,
      Key: key,
      ContentType: fileType,
    });

    const signedUrl = await getSignedUrl(s3Client, command, {
      expiresIn: 900, // 15 minutes
    });

    const objectUrl = `https://${process.env.AWS_S3_BUCKET_NAME}.s3.${process.env.AWS_REGION}.amazonaws.com/${key}`;

    return NextResponse.json({ signedUrl, objectUrl, key });
  } catch (error) {
    console.error('Error generating signed URL:', error);
    return NextResponse.json(
      { error: 'Failed to generate upload URL' },
      { status: 500 }
    );
  }
}

Pro tip: The key sanitization (replacing special characters) prevents S3 key validation errors and URL encoding issues when files have spaces or special characters in their names.

Expected result: A POST request to /api/upload-url with {fileName: 'test.jpg', fileType: 'image/jpeg'} returns a JSON object containing signedUrl and objectUrl.

4

Build the File Upload React Component

Now prompt Bolt to create the upload UI component that uses the pre-signed URL. The upload flow has two HTTP requests: first to your API route to get the signed URL, then directly to S3 to upload the file. The second request goes browser-to-S3 using the signed URL — your server is not involved in the file transfer itself. The component should show upload progress using the XMLHttpRequest API (which supports progress events, unlike fetch), handle file size limits, display success and error states, and return the permanent S3 URL for saving to your database. This component works during Bolt development and in production without any changes.

Bolt.new Prompt

Create a React component called FileUpload that handles S3 file uploads. It should: show a styled dropzone area with drag-and-drop support, accept an 'accept' prop for file type filtering and a 'maxSizeMB' prop for size limits, display an upload progress bar, call /api/upload-url to get a pre-signed URL then upload directly to S3 with a PUT request, show success with the file URL and error messages if something fails, and accept an 'onUploadComplete' callback prop that receives the S3 object URL.

Paste this in Bolt.new chat

components/FileUpload.tsx
import { useState, useRef } from 'react';

interface FileUploadProps {
  accept?: string;
  maxSizeMB?: number;
  onUploadComplete?: (url: string) => void;
}

export function FileUpload({
  accept = '*/*',
  maxSizeMB = 10,
  onUploadComplete,
}: FileUploadProps) {
  const [uploading, setUploading] = useState(false);
  const [progress, setProgress] = useState(0);
  const [uploadedUrl, setUploadedUrl] = useState<string | null>(null);
  const [error, setError] = useState<string | null>(null);
  const fileInputRef = useRef<HTMLInputElement>(null);

  const uploadFile = async (file: File) => {
    if (file.size > maxSizeMB * 1024 * 1024) {
      setError(`File must be smaller than ${maxSizeMB}MB`);
      return;
    }

    setUploading(true);
    setProgress(0);
    setError(null);

    try {
      // Step 1: Get pre-signed URL from our API
      const urlResponse = await fetch('/api/upload-url', {
        method: 'POST',
        headers: { 'Content-Type': 'application/json' },
        body: JSON.stringify({ fileName: file.name, fileType: file.type }),
      });

      if (!urlResponse.ok) throw new Error('Failed to get upload URL');
      const { signedUrl, objectUrl } = await urlResponse.json();

      // Step 2: Upload directly to S3 with progress tracking
      await new Promise<void>((resolve, reject) => {
        const xhr = new XMLHttpRequest();
        xhr.upload.onprogress = (e) => {
          if (e.lengthComputable) {
            setProgress(Math.round((e.loaded / e.total) * 100));
          }
        };
        xhr.onload = () => {
          if (xhr.status === 200) resolve();
          else reject(new Error(`Upload failed: ${xhr.status}`));
        };
        xhr.onerror = () => reject(new Error('Upload failed'));
        xhr.open('PUT', signedUrl);
        xhr.setRequestHeader('Content-Type', file.type);
        xhr.send(file);
      });

      setUploadedUrl(objectUrl);
      onUploadComplete?.(objectUrl);
    } catch (err) {
      setError(err instanceof Error ? err.message : 'Upload failed');
    } finally {
      setUploading(false);
    }
  };

  return (
    <div className="border-2 border-dashed border-gray-300 rounded-lg p-6 text-center">
      <input
        ref={fileInputRef}
        type="file"
        accept={accept}
        className="hidden"
        onChange={(e) => e.target.files?.[0] && uploadFile(e.target.files[0])}
      />
      {!uploading && !uploadedUrl && (
        <button
          onClick={() => fileInputRef.current?.click()}
          className="px-4 py-2 bg-blue-600 text-white rounded hover:bg-blue-700"
        >
          Choose File (max {maxSizeMB}MB)
        </button>
      )}
      {uploading && (
        <div>
          <div className="bg-gray-200 rounded-full h-2 mt-2">
            <div
              className="bg-blue-600 h-2 rounded-full transition-all"
              style={{ width: `${progress}%` }}
            />
          </div>
          <p className="mt-1 text-sm text-gray-500">{progress}% uploaded</p>
        </div>
      )}
      {uploadedUrl && (
        <p className="text-green-600 text-sm">Upload complete!</p>
      )}
      {error && <p className="text-red-600 text-sm mt-2">{error}</p>}
    </div>
  );
}

Expected result: A styled upload dropzone renders on the page. Selecting a file triggers the two-step upload process and shows a progress bar. After completion, the component displays a success message and fires the onUploadComplete callback with the S3 URL.

5

Deploy and Update Environment Variables

During development in Bolt's WebContainer, the API route at /api/upload-url runs server-side within the WebContainer runtime and reads credentials from your .env file, so you can test the upload flow end to end. Incoming events, however, cannot reach the WebContainer: Bolt's browser-based runtime has no public URL for S3 to call. If you need S3 Event Notifications (e.g., to trigger processing when a file is uploaded), you must deploy first. Note that S3 publishes events to SNS, SQS, Lambda, or EventBridge rather than calling HTTP URLs directly, so an HTTP endpoint is typically reached via an SNS topic with an HTTPS subscription. For Netlify deployment, push your code through Bolt's GitHub integration, then add your four AWS environment variables in Netlify's dashboard under Site Settings → Environment Variables. For Vercel, add them in Project Settings → Environment Variables. The API routes become serverless functions with full access to the environment variables you configure. Finally, tighten the CORS configuration on your S3 bucket to allow only your deployed domain rather than the wildcard (*) you used during development.
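If you prefer the command line to the dashboard, the Netlify and Vercel CLIs can set the same variables. This is a hedged sketch: it assumes you have the respective CLI installed and linked to your site/project, and exact flags can vary by CLI version.

```shell
# Netlify CLI: set the four AWS variables on the linked site
netlify env:set AWS_ACCESS_KEY_ID "your_access_key_here"
netlify env:set AWS_SECRET_ACCESS_KEY "your_secret_key_here"
netlify env:set AWS_REGION "us-east-1"
netlify env:set AWS_S3_BUCKET_NAME "your-bucket-name"

# Vercel CLI equivalent: `vercel env add` prompts for the value per environment
# vercel env add AWS_ACCESS_KEY_ID production
```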

Pro tip: Test the full upload flow (component → API route → S3) in the Bolt preview first. If uploads work in the preview, they should also work in production, since the same HTTPS-based SDK runs in both environments.

Expected result: Your deployed app on Netlify or Vercel successfully uploads files to S3. The environment variables panel in your hosting dashboard shows all four AWS variables. S3 Event Notifications (if needed) reach your deployed endpoint, typically via an SNS topic subscribed to your deployed URL.

Common use cases

User Profile Photo Uploads

Allow users to upload profile photos that persist across sessions and devices. The photo uploads directly from the browser to S3 using a pre-signed URL, then the public S3 URL is saved to your database alongside the user record. Profile photos appear immediately and remain available indefinitely.

Bolt.new Prompt

Add a profile photo upload feature. When a user clicks 'Change Photo', show a file picker that accepts JPG and PNG under 5MB. Generate a pre-signed S3 upload URL from an API route at /api/upload-url, upload the file directly to S3 from the browser, then save the resulting S3 URL to the user's profile in the database. Show a loading spinner during upload and display the new photo immediately after success.

Copy this prompt to try it in Bolt.new

Document Management System

Build a document storage feature where users can upload PDFs, Word docs, and spreadsheets. S3 stores the actual files while your database tracks metadata like file name, size, upload date, and which user owns it. Generate signed download URLs on demand so only authorized users can access files.

Bolt.new Prompt

Create a document upload and management page. Users can upload files up to 50MB. Store files in S3 using pre-signed upload URLs generated by a /api/s3-upload-url API route. Save file metadata (name, size, S3 key, uploadedAt, userId) to the database. Show a file list with download buttons that generate fresh pre-signed download URLs from /api/s3-download-url. Include a delete button that removes the file from both S3 and the database.

Copy this prompt to try it in Bolt.new

Image Gallery with CDN Delivery

Create a public image gallery where uploaded images are served directly from S3 or through CloudFront CDN. This is ideal for portfolio sites, product catalogs, or any app needing fast global image delivery. Images upload via pre-signed URLs and display using their permanent S3 public URL.

Bolt.new Prompt

Build an image gallery where users can upload images that appear in a responsive grid. Use S3 for storage with pre-signed URLs for uploads. Make the S3 bucket serve images publicly so they load directly from S3 URLs without authentication. The gallery should show upload progress, support drag-and-drop, and let users delete images (which removes them from both S3 and the database).

Copy this prompt to try it in Bolt.new

Troubleshooting

CORS error when uploading: 'Access to XMLHttpRequest at S3 URL from origin has been blocked'

Cause: The S3 bucket's CORS configuration doesn't include the origin where your app is running. During development, Bolt uses WebContainer URLs like *.webcontainer-api.io that need to be allowed.

Solution: Go to your S3 bucket → Permissions → CORS configuration and add a rule with AllowedOrigins: ['*'] for development. After deployment, update it to your specific domain. Make sure AllowedMethods includes 'PUT' since pre-signed uploads use the PUT method.

json
[
  {
    "AllowedHeaders": ["*"],
    "AllowedMethods": ["GET", "PUT", "POST", "DELETE", "HEAD"],
    "AllowedOrigins": ["*"],
    "ExposeHeaders": ["ETag"],
    "MaxAgeSeconds": 3000
  }
]

API route returns 500 with 'InvalidClientTokenId' or 'The security token included in the request is invalid'

Cause: The AWS Access Key ID or Secret Access Key in your .env file is incorrect, has been deactivated, or the environment variables aren't being read properly.

Solution: Verify your credentials in the AWS IAM console under Security credentials. Make sure your .env variable names exactly match what the code reads (AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY). Restart the dev server after editing .env since Next.js caches environment variables at startup.

Pre-signed URL works but upload fails with '403 Forbidden' from S3

Cause: The pre-signed URL was generated for a specific ContentType, but the PUT request is sending a different or missing Content-Type header. S3 validates that the actual upload matches what was signed.

Solution: Ensure the xhr.setRequestHeader('Content-Type', file.type) line in your upload code sends exactly the same MIME type that was passed to the API route. If file.type is empty (some browsers don't detect file types), pass a fallback: file.type || 'application/octet-stream'.

typescript
xhr.setRequestHeader('Content-Type', file.type || 'application/octet-stream');

S3 Event Notifications or webhooks never arrive during Bolt development

Cause: Bolt's WebContainer runtime runs inside a browser tab and has no public IP address. S3 cannot make outbound HTTP calls to your WebContainer — it has no externally reachable URL to send events to.

Solution: This is a fundamental WebContainer limitation. Deploy your app to Netlify or Bolt Cloud first, then wire S3 Event Notifications to your deployed URL (e.g., https://your-app.netlify.app/api/s3-webhook). Because S3 publishes events only to SNS, SQS, Lambda, or EventBridge, an HTTP endpoint like this is typically reached by subscribing it to an SNS topic as an HTTPS endpoint. Use the deployed environment for all webhook testing.

Best practices

  • Always generate pre-signed URLs server-side in an API route — never expose your AWS Secret Access Key to the browser or include it in client-side code
  • Set short pre-signed URL expiry times (5-15 minutes) to limit the window during which an intercepted URL could be misused
  • Sanitize file names before using them as S3 keys — replace spaces and special characters to avoid URL encoding issues and S3 key validation errors
  • Use unique key prefixes (e.g., include a timestamp or UUID) to prevent filename collisions when multiple users upload files with the same name
  • Restrict IAM permissions to the minimum necessary: the IAM user for your app only needs s3:PutObject, s3:GetObject, and s3:DeleteObject on your specific bucket
  • Configure S3 lifecycle rules to automatically delete temporary or unfinished upload files after a set period, reducing storage costs
  • After deployment, tighten your S3 bucket CORS configuration to only allow your specific production domain instead of the wildcard (*) used during development
  • Store only the S3 object key (not the full URL) in your database — construct URLs programmatically so you can change bucket regions without a database migration
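The key-handling practices above can be sketched as two small helpers. The first mirrors the sanitization and timestamp prefix used in the API route; the second rebuilds the public URL from a stored key, so your database rows never need to change if the bucket or region does. The function names are assumptions for this example.

```typescript
// Sanitize the file name and prefix it with a timestamp for uniqueness,
// matching the key format used by the upload-url route.
function makeObjectKey(fileName: string, now: number = Date.now()): string {
  const safeName = fileName.replace(/[^a-zA-Z0-9.-]/g, "_");
  return `uploads/${now}-${safeName}`;
}

// Reconstruct the public S3 URL from the stored key on demand.
function objectUrlFromKey(key: string, bucket: string, region: string): string {
  return `https://${bucket}.s3.${region}.amazonaws.com/${key}`;
}
```

Storing only the key means a region or bucket move changes the arguments to objectUrlFromKey, not a million database rows.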


Frequently asked questions

Does @aws-sdk/client-s3 work inside Bolt's WebContainer?

Yes. The @aws-sdk/client-s3 package is written in pure JavaScript and communicates exclusively over HTTPS, making it fully compatible with Bolt's WebContainer runtime. It does not use any native Node.js modules that require TCP sockets or C++ compilation. Both the S3 client and the pre-signer package install and run without issues.

Can I use the full AWS SDK v3 or just the S3 client?

Most AWS SDK v3 modular packages work in WebContainers because they're written in pure JavaScript and use HTTPS. The @aws-sdk/client-dynamodb package for DynamoDB also works, making it a viable NoSQL option alongside S3 storage. However, services requiring TCP connections or native binaries (like certain data streaming services) won't function in the WebContainer during development.

Why use pre-signed URLs instead of uploading through the API route?

Pre-signed URLs allow the browser to upload directly to S3, bypassing your server entirely. This means large files don't consume your serverless function's memory or execution time, uploads can be faster (direct connection to S3), and your server costs are lower. The signed URL proves the upload is authorized without exposing your AWS credentials to the client.

How do I receive S3 event notifications in my Bolt app?

S3 event notifications (triggered when files are uploaded, deleted, etc.) require a publicly reachable endpoint, and Bolt's WebContainer has no public URL during development, so you must deploy your app first. After deploying to Netlify or Bolt Cloud, configure Event Notifications in the S3 bucket's Properties tab to publish to an SNS topic, then subscribe your /api/s3-webhook URL to that topic as an HTTPS endpoint (S3 does not call HTTP URLs directly; its notification targets are SNS, SQS, Lambda, and EventBridge).

Should I make my S3 bucket public or private?

It depends on your use case. For user-generated content like profile photos that display in your app, a public bucket with proper key naming is the simplest approach. For private documents, keep the bucket private and generate pre-signed download URLs (using GetObjectCommand) whenever a user needs to access a file. Never make a bucket containing sensitive documents publicly accessible.
