Secure Supabase Storage files by using private buckets, writing RLS policies on the storage.objects table, and generating signed URLs for temporary access. Private buckets deny all unauthenticated access by default. Add row-level security policies that scope file access to the owning user by matching the file path to auth.uid(), and use createSignedUrl() to share time-limited download links.
Securing Files in Supabase Storage with Private Buckets, RLS, and Signed URLs
Supabase Storage uses the same Row Level Security system as your database tables, applied to the storage.objects table. This tutorial shows you how to create private buckets, write RLS policies that restrict file access to authenticated users or specific file owners, and generate signed URLs for time-limited sharing. You will build a secure file upload system where each user can only access their own files.
Prerequisites
- A Supabase project with authentication configured
- Access to the SQL Editor in the Supabase Dashboard
- @supabase/supabase-js v2 installed in your project
- Basic understanding of Row Level Security concepts
Step-by-step guide
Create a private storage bucket
In the Supabase Dashboard, go to Storage and click New Bucket. Name it documents and leave the Public bucket toggle OFF. A private bucket denies all unauthenticated access by default. Unlike public buckets where anyone with the URL can download files, private buckets require both authentication and an RLS policy to allow any operation. You can also create the bucket via SQL or the JS client.
```sql
-- Create a private bucket via SQL
insert into storage.buckets (id, name, public)
values ('documents', 'documents', false);
```

```javascript
// Or via the JS client (server-side only, requires service role key)
const { data, error } = await supabase.storage.createBucket('documents', {
  public: false,
  fileSizeLimit: 10485760 // 10MB
});
```

Expected result: A private bucket named 'documents' appears in the Storage section of the Dashboard with the public access indicator set to off.
Write RLS policies for user-scoped file access
Storage files are stored in the storage.objects table, and you write RLS policies on this table just like any other. The key pattern is to use a user-scoped folder structure where each user's files are stored under a path prefixed with their user ID. The storage.foldername() function extracts folder segments from the file path, and you compare the first segment to auth.uid() to ensure users can only access their own files.
```sql
-- Users can upload files to their own folder
create policy "Users upload own files"
on storage.objects for insert
to authenticated
with check (
  bucket_id = 'documents'
  and (select auth.uid())::text = (storage.foldername(name))[1]
);

-- Users can read their own files
create policy "Users read own files"
on storage.objects for select
to authenticated
using (
  bucket_id = 'documents'
  and (select auth.uid())::text = (storage.foldername(name))[1]
);

-- Users can delete their own files
create policy "Users delete own files"
on storage.objects for delete
to authenticated
using (
  bucket_id = 'documents'
  and (select auth.uid())::text = (storage.foldername(name))[1]
);

-- Users can update (overwrite) their own files
create policy "Users update own files"
on storage.objects for update
to authenticated
using (
  bucket_id = 'documents'
  and (select auth.uid())::text = (storage.foldername(name))[1]
);
```

Expected result: RLS policies are active on storage.objects. Authenticated users can only upload, read, update, and delete files under their own user ID folder.
Upload files to the user-scoped folder from the frontend
When uploading from the client, construct the file path using the authenticated user's ID as the first folder segment. The Supabase JS client automatically includes the user's JWT in the request, which the RLS policy checks against the folder path. Use the upsert option if you want to allow overwriting existing files.
```typescript
import { createClient } from '@supabase/supabase-js'

const supabase = createClient(
  process.env.NEXT_PUBLIC_SUPABASE_URL!,
  process.env.NEXT_PUBLIC_SUPABASE_ANON_KEY!
)

async function uploadFile(file: File) {
  const { data: { user } } = await supabase.auth.getUser()
  if (!user) throw new Error('Not authenticated')

  const filePath = `${user.id}/${file.name}`

  const { data, error } = await supabase.storage
    .from('documents')
    .upload(filePath, file, {
      cacheControl: '3600',
      upsert: false
    })

  if (error) throw error
  return data
}
```

Expected result: The file is uploaded to documents/{user_id}/filename.ext. Any attempt to upload to another user's folder is blocked by the RLS policy.
Generate signed URLs for temporary file access
For private buckets, you cannot use getPublicUrl() because the bucket is not publicly accessible. Instead, use createSignedUrl() to generate a time-limited URL that grants temporary access to a specific file. Signed URLs are ideal for sharing files with external users, rendering private images in the browser, or creating download links that expire. The expiry time is in seconds.
```typescript
// Generate a signed URL valid for 1 hour (3600 seconds)
const { data, error } = await supabase.storage
  .from('documents')
  .createSignedUrl(`${user.id}/report.pdf`, 3600)

if (data) {
  console.log('Download link:', data.signedUrl)
}

// Generate signed URLs for multiple files at once
const { data: urls, error: urlError } = await supabase.storage
  .from('documents')
  .createSignedUrls([
    `${user.id}/report.pdf`,
    `${user.id}/invoice.pdf`
  ], 3600)
```

Expected result: A signed URL is returned that grants temporary access to the file. After the expiry time, the URL stops working and returns a 400 error.
List files in a user's folder with proper RLS
Use the list method to show users their uploaded files. Because RLS is active, the list operation only returns files the user's policy allows them to see. Pass the user's ID as the path prefix to scope the listing to their folder. You can add pagination with limit and offset options for large file collections.
```typescript
async function listUserFiles() {
  const { data: { user } } = await supabase.auth.getUser()
  if (!user) throw new Error('Not authenticated')

  const { data: files, error } = await supabase.storage
    .from('documents')
    .list(user.id, {
      limit: 100,
      offset: 0,
      sortBy: { column: 'created_at', order: 'desc' }
    })

  if (error) throw error
  return files
}
```

Expected result: An array of file objects is returned, showing only the files in the authenticated user's folder. Other users' files are invisible.
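For larger collections, the limit and offset options above can be derived from a page number. The helper below is a hypothetical convenience (`pageParams` is not part of the Supabase SDK), sketched to show the arithmetic:

```typescript
// Hypothetical helper: translate a 1-based page number into the
// { limit, offset } options accepted by supabase.storage.list().
function pageParams(page: number, pageSize = 100): { limit: number; offset: number } {
  const p = Math.max(1, Math.floor(page))
  return { limit: pageSize, offset: (p - 1) * pageSize }
}

// Usage (assuming an authenticated `supabase` client and `user` as above):
// const { data } = await supabase.storage
//   .from('documents')
//   .list(user.id, { ...pageParams(2), sortBy: { column: 'created_at', order: 'desc' } })
```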
Complete working example
```typescript
import { createClient } from '@supabase/supabase-js'

const supabase = createClient(
  process.env.NEXT_PUBLIC_SUPABASE_URL!,
  process.env.NEXT_PUBLIC_SUPABASE_ANON_KEY!
)

// Upload a file to the authenticated user's private folder
export async function uploadFile(file: File) {
  const { data: { user } } = await supabase.auth.getUser()
  if (!user) throw new Error('Not authenticated')

  const filePath = `${user.id}/${file.name}`
  const { data, error } = await supabase.storage
    .from('documents')
    .upload(filePath, file, { cacheControl: '3600', upsert: false })

  if (error) throw error
  return data
}

// List all files in the authenticated user's folder
export async function listUserFiles() {
  const { data: { user } } = await supabase.auth.getUser()
  if (!user) throw new Error('Not authenticated')

  const { data, error } = await supabase.storage
    .from('documents')
    .list(user.id, { limit: 100, sortBy: { column: 'created_at', order: 'desc' } })

  if (error) throw error
  return data
}

// Generate a signed URL for temporary file access
export async function getSignedUrl(fileName: string, expiresIn = 3600) {
  const { data: { user } } = await supabase.auth.getUser()
  if (!user) throw new Error('Not authenticated')

  const { data, error } = await supabase.storage
    .from('documents')
    .createSignedUrl(`${user.id}/${fileName}`, expiresIn)

  if (error) throw error
  return data.signedUrl
}

// Delete a file from the authenticated user's folder
export async function deleteFile(fileName: string) {
  const { data: { user } } = await supabase.auth.getUser()
  if (!user) throw new Error('Not authenticated')

  const { data, error } = await supabase.storage
    .from('documents')
    .remove([`${user.id}/${fileName}`])

  if (error) throw error
  return data
}
```

Common mistakes when securing Supabase Storage files
Mistake: Using a public bucket when files should be restricted to authenticated users.
How to avoid: Create the bucket with public: false. Public buckets allow anyone with the URL to download files, bypassing all access controls.
Mistake: Uploading files without the user ID as the first folder segment, causing the RLS policy to block the operation.
How to avoid: Always construct the file path as {user.id}/{filename}. The RLS policy checks (storage.foldername(name))[1] against auth.uid().
Mistake: Using getPublicUrl() on a private bucket and getting 400 errors.
How to avoid: Private buckets do not support public URLs. Use createSignedUrl() with an expiry time instead.
Mistake: Forgetting to add a SELECT RLS policy on storage.objects, causing list and download operations to return empty results.
How to avoid: Add a SELECT policy alongside your INSERT policy. Without it, users can upload but cannot see or download their own files.
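Several of these mistakes come down to building the file path incorrectly on the client. One way to centralize the user-scoped path convention is a small helper; this is a hypothetical sketch (`userScopedPath` is not part of the Supabase SDK) that also rejects names that could escape the user's folder:

```typescript
// Hypothetical helper: build the `${userId}/${fileName}` path the RLS
// policies expect, rejecting names that could break out of the folder.
function userScopedPath(userId: string, fileName: string): string {
  if (!userId) throw new Error('Missing user id')
  // Disallow separators and parent-directory segments in the file name.
  if (fileName.includes('/') || fileName.includes('\\') || fileName.includes('..')) {
    throw new Error(`Unsafe file name: ${fileName}`)
  }
  return `${userId}/${fileName}`
}
```

Calling this before every upload, signed-URL, and delete operation keeps the path convention in one place, so a typo cannot silently produce a path the RLS policy rejects.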
Best practices
- Always use private buckets for user-uploaded content and sensitive documents
- Scope file paths with the user's ID as the first folder segment for easy RLS policy enforcement
- Add RLS policies for all four operations: SELECT, INSERT, UPDATE, and DELETE on storage.objects
- Use createSignedUrl() with the shortest practical expiry time for sharing private files
- Set fileSizeLimit on the bucket to prevent excessively large uploads at the storage level
- Verify the user with getUser() before any storage operation instead of relying on getSession()
- Add cacheControl headers when uploading to improve CDN performance for frequently accessed files
- Never expose the SUPABASE_SERVICE_ROLE_KEY to the client — it bypasses all storage RLS policies
Still stuck?
Copy one of these prompts to get a personalized, step-by-step explanation.
I need to secure file uploads in Supabase Storage so each user can only access their own files. Show me how to create a private bucket, write RLS policies on storage.objects using the user ID folder pattern, and generate signed URLs for temporary file sharing.
Set up a private Supabase Storage bucket called 'documents' with RLS policies on storage.objects that scope all operations to the authenticated user's folder path. Include TypeScript functions for upload, list, signed URL generation, and delete.
Frequently asked questions
What is the difference between a public and private bucket in Supabase?
A public bucket allows anyone with the file URL to download it without authentication. A private bucket requires both authentication and a passing RLS policy on storage.objects before any operation is allowed.
Can I make some files in a private bucket publicly accessible?
Not directly. A bucket is either public or private. To share specific files from a private bucket, generate signed URLs with createSignedUrl(). These URLs work for anyone but expire after the specified time.
How long can a signed URL last?
Signed URLs can last up to 7 days (604800 seconds). Set the shortest expiry that meets your needs for security. Common values are 300 seconds for image display and 86400 seconds for email download links.
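To keep requested expiries inside the supported range, a small clamp can guard calls to createSignedUrl(). This is a hypothetical helper (the 604800-second cap is the 7-day maximum noted above):

```typescript
const MAX_SIGNED_URL_SECONDS = 604800 // 7 days, the documented maximum

// Hypothetical helper: clamp a requested expiry to [1, 7 days] whole seconds.
function clampExpiry(seconds: number): number {
  return Math.min(Math.max(Math.floor(seconds), 1), MAX_SIGNED_URL_SECONDS)
}

// Usage: supabase.storage.from('documents').createSignedUrl(path, clampExpiry(requested))
```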
Why do my storage uploads return a 403 error?
A 403 error means the RLS policy on storage.objects is blocking the operation. Check that you have an INSERT policy, the bucket_id matches, and the file path starts with the user's ID. Also verify the user is authenticated.
Does the service role key bypass storage RLS policies?
Yes. The SUPABASE_SERVICE_ROLE_KEY bypasses all RLS policies including those on storage.objects. Never use it in client-side code. It is intended for server-side admin operations only.
Can I restrict upload file types in Supabase Storage?
Yes. You can set allowedMimeTypes when creating the bucket (stored as allowed_mime_types on storage.buckets) to reject unwanted content types at the storage level. It is still good practice to also validate file types in your client code before uploading for faster feedback, or in an Edge Function for stricter server-side checks.
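A minimal client-side allowlist check might look like the following sketch (`isAllowedMimeType` is a hypothetical helper, supporting image/* style wildcards):

```typescript
// Hypothetical helper: check a MIME type against an allowlist that may
// contain wildcard entries such as 'image/*'.
function isAllowedMimeType(mime: string, allowed: string[]): boolean {
  return allowed.some((pattern) => {
    if (pattern === mime) return true
    if (pattern.endsWith('/*')) {
      // Keep the trailing '/' so 'image/*' matches 'image/png' but not 'imagex'.
      return mime.startsWith(pattern.slice(0, -1))
    }
    return false
  })
}

// Usage before upload:
// if (!isAllowedMimeType(file.type, ['image/*', 'application/pdf'])) {
//   throw new Error(`File type ${file.type} is not allowed`)
// }
```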
Can RapidDev help configure secure file storage for my Supabase project?
Yes. RapidDev can design your storage architecture, write RLS policies for complex access patterns, and implement secure upload and download flows tailored to your application.
Talk to an Expert
Our team has built 600+ apps. Get personalized help with your project.
Book a free consultation