To export data from Supabase, use pg_dump for a full database export via the command line, or run a SELECT query in the SQL Editor and download results as CSV. For programmatic exports, use the Supabase JS client to fetch data and convert it to your desired format. All three methods let you create portable backups or migrate data to another system.
Exporting Data from Your Supabase Database
Whether you need a backup, want to migrate to another platform, or need to share data with a team member, Supabase provides multiple ways to export your data. This tutorial covers three approaches: pg_dump for full database exports, the SQL Editor for quick CSV downloads, and the JavaScript client for programmatic exports. You will learn when to use each method and how to verify the exported data.
Prerequisites
- A Supabase project with data in at least one table
- Supabase CLI installed (for pg_dump method)
- Your project's database connection string from Dashboard > Settings > Database
- Node.js installed (for programmatic export)
Step-by-step guide
Find your database connection string
Go to the Supabase Dashboard and navigate to Settings > Database. Scroll to the Connection string section and copy the URI. This connection string includes your host, port, database name, and password. You will use this string with pg_dump and psql. Make sure to use the direct or session-mode connection string (port 5432), not the transaction-mode pooler (port 6543), because pg_dump needs session-level features that the transaction pooler does not support.
```
# Your connection string looks like this:
postgresql://postgres.[project-ref]:[password]@aws-0-[region].pooler.supabase.com:5432/postgres
```

Expected result: You have copied the connection string from the Supabase Dashboard.
Export the full database with pg_dump
pg_dump is the standard PostgreSQL tool for creating a complete database export. It outputs SQL statements that can recreate your tables, data, functions, triggers, and RLS policies. Run the command in your terminal, replacing the connection string with your own. The --clean flag adds DROP statements so the dump can be restored cleanly. The -F c flag creates a custom-format archive that supports selective restore with pg_restore.
```
# Full database export as SQL file
pg_dump "postgresql://postgres.[ref]:[password]@aws-0-[region].pooler.supabase.com:5432/postgres" \
  --clean \
  --if-exists \
  --schema=public \
  -f export.sql

# Custom-format archive (supports selective restore)
pg_dump "postgresql://postgres.[ref]:[password]@aws-0-[region].pooler.supabase.com:5432/postgres" \
  --clean \
  --if-exists \
  -F c \
  -f export.dump
```

Expected result: An export.sql or export.dump file is created in your current directory containing the full database schema and data.
Export query results as CSV from the SQL Editor
For quick one-off exports, use the SQL Editor in the Supabase Dashboard. Navigate to SQL Editor in the left sidebar, write a SELECT query for the data you want to export, and click Run. After the results appear, click the Download button (arrow icon) above the results table to save the output as a CSV file. This method is ideal for exporting a single table or a filtered subset of data without needing any CLI tools.
```sql
-- Export all rows from a specific table
SELECT * FROM products;

-- Export a filtered subset
SELECT id, name, email, created_at
FROM users
WHERE created_at >= '2025-01-01'
ORDER BY created_at DESC;

-- Export with a join
SELECT o.id, o.total, u.email
FROM orders o
JOIN users u ON o.user_id = u.id;
```

Expected result: A CSV file downloads to your computer containing the query results.
Export data programmatically with the JavaScript client
For automated or recurring exports, use the Supabase JavaScript client to fetch data and write it to a file. This approach respects RLS policies, so the authenticated user will only export data they have access to. If you need to export all data regardless of RLS, use the service role key on the server side only. Install the dependencies, create a script, and run it with Node.js.
```javascript
import { createClient } from '@supabase/supabase-js'
import { writeFileSync } from 'fs'

const supabase = createClient(
  process.env.SUPABASE_URL,
  process.env.SUPABASE_SERVICE_ROLE_KEY
)

async function exportTable(tableName) {
  const allRows = []
  let offset = 0
  const pageSize = 1000

  while (true) {
    const { data, error } = await supabase
      .from(tableName)
      .select('*')
      .range(offset, offset + pageSize - 1)

    if (error) throw error
    if (!data || data.length === 0) break

    allRows.push(...data)
    offset += pageSize
  }

  writeFileSync(
    `${tableName}_export.json`,
    JSON.stringify(allRows, null, 2)
  )
  console.log(`Exported ${allRows.length} rows from ${tableName}`)
}

await exportTable('products')
```

Expected result: A JSON file is created containing all rows from the specified table.
Verify the exported data
After exporting, always verify the data is complete. Compare the row count in your export file against the count in the database. For SQL exports, you can also try restoring to a local Supabase instance using supabase db reset or pg_restore to confirm the dump is valid. For CSV and JSON exports, open the file and spot-check a few records against the Dashboard table view.
```sql
-- Check row count in Supabase SQL Editor
SELECT count(*) FROM products;
```

```
# For pg_dump verification, restore to a local instance
supabase start
psql "postgresql://postgres:postgres@localhost:54322/postgres" < export.sql

# Or with custom format
pg_restore -d "postgresql://postgres:postgres@localhost:54322/postgres" export.dump
```

Expected result: The row count in your export file matches the count in the database, confirming a complete export.
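If you exported to JSON with the script from the previous step, you can also check the row count on the file side before comparing it against the database. A minimal sketch, assuming a JSON-array export file; the `countExportedRows` helper and the sample data are illustrative, not part of the tutorial's scripts:

```javascript
import { writeFileSync, readFileSync } from 'fs'

// Count rows in a JSON export file produced by the export script
function countExportedRows(path) {
  const rows = JSON.parse(readFileSync(path, 'utf8'))
  if (!Array.isArray(rows)) throw new Error('Expected a JSON array of rows')
  return rows.length
}

// Demo: write a tiny sample export, then count it
writeFileSync('products_export.json', JSON.stringify([
  { id: 1, name: 'Widget' },
  { id: 2, name: 'Gadget' },
  { id: 3, name: 'Gizmo' }
]))

console.log(countExportedRows('products_export.json')) // prints 3
```

Compare the printed number against the `SELECT count(*)` result from the SQL Editor; they should match exactly.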
Complete working example
```typescript
import { createClient } from '@supabase/supabase-js'
import { writeFileSync } from 'fs'

// Use service role key for full access (server-side only!)
const supabase = createClient(
  process.env.SUPABASE_URL!,
  process.env.SUPABASE_SERVICE_ROLE_KEY!
)

interface ExportOptions {
  tableName: string
  format: 'json' | 'csv'
  filters?: Record<string, string>
}

async function exportTable({ tableName, format, filters }: ExportOptions) {
  const allRows: Record<string, unknown>[] = []
  let offset = 0
  const pageSize = 1000

  while (true) {
    let query = supabase.from(tableName).select('*')

    if (filters) {
      for (const [key, value] of Object.entries(filters)) {
        query = query.eq(key, value)
      }
    }

    const { data, error } = await query.range(offset, offset + pageSize - 1)
    if (error) throw new Error(`Export failed: ${error.message}`)
    if (!data || data.length === 0) break

    allRows.push(...data)
    offset += pageSize
  }

  const filename = `${tableName}_export.${format}`

  if (format === 'csv') {
    const headers = Object.keys(allRows[0] || {}).join(',')
    const rows = allRows.map(row =>
      Object.values(row)
        // Double embedded quotes so values stay valid CSV
        .map(v => `"${String(v ?? '').replace(/"/g, '""')}"`)
        .join(',')
    )
    writeFileSync(filename, [headers, ...rows].join('\n'))
  } else {
    writeFileSync(filename, JSON.stringify(allRows, null, 2))
  }

  console.log(`Exported ${allRows.length} rows to ${filename}`)
  return allRows.length
}

// Export all products as JSON
await exportTable({ tableName: 'products', format: 'json' })

// Export active users as CSV
await exportTable({
  tableName: 'users',
  format: 'csv',
  filters: { is_active: 'true' }
})
```

Common mistakes when exporting data from Supabase
Mistake: Using the transaction pooler connection string (port 6543) with pg_dump instead of the direct connection (port 5432)
How to avoid: Always use the direct connection string from Dashboard > Settings > Database. pg_dump requires a session-level PostgreSQL connection and will fail or produce incomplete results through the transaction pooler.
Mistake: Exporting with the anon key and getting empty results because RLS blocks access
How to avoid: For full data exports, use the service role key in a server-side script. The service role key bypasses RLS. Never use it in client-side code.
Mistake: Not paginating large table exports, causing the request to time out
How to avoid: Use .range() to paginate in batches of 1,000 rows. Loop until no more rows are returned.
Mistake: Forgetting to export RLS policies and triggers alongside table data
How to avoid: Use pg_dump without the --data-only flag so it includes schema definitions, RLS policies, functions, and triggers.
Best practices
- Use pg_dump with --schema=public to avoid exporting internal Supabase schemas
- Always verify exports by comparing row counts against the source database
- Store export scripts in version control so team members can reproduce the same export
- Use the service role key only in server-side scripts and never expose it in client code
- Schedule automated exports alongside Supabase's built-in daily backups for redundancy
- For large datasets, export in batches with pagination to avoid timeouts and memory issues
- Include a timestamp in export filenames to track when each export was created
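The timestamp practice above takes only a few lines. A sketch, assuming the naming scheme used in the complete example; the `timestampedName` helper is hypothetical, not part of the tutorial's scripts:

```javascript
// Build an export filename like products_export_2025-06-01T12-30-00.json
function timestampedName(tableName, format) {
  // ISO timestamp, with colons replaced so the name is safe on all filesystems
  const stamp = new Date().toISOString().replace(/:/g, '-').slice(0, 19)
  return `${tableName}_export_${stamp}.${format}`
}

console.log(timestampedName('products', 'json'))
```

Using the ISO-8601 order (year first) means a plain alphabetical sort of the export directory is also a chronological sort.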
Still stuck?
Copy one of these prompts to get a personalized, step-by-step explanation.
I need to export all data from my Supabase project. Walk me through using pg_dump to create a full database backup, exporting specific tables as CSV from the SQL Editor, and writing a Node.js script that programmatically exports data using the Supabase JS client with pagination.
Write a Supabase Edge Function that exports a specified table as JSON, paginating through all rows using .range(), and returns the complete dataset as a downloadable file. Include proper error handling and CORS headers.
Frequently asked questions
Can I export data from the Supabase free plan?
Yes. All export methods work on the free plan. You can use pg_dump with your database connection string, download CSV from the SQL Editor, or use the JavaScript client. Automatic daily backups are only available on Pro plans and above, but manual exports work on all plans.
How do I export only specific tables from Supabase?
With pg_dump, use the -t flag followed by the table name: pg_dump -t products "your-connection-string" -f products.sql. In the SQL Editor, just write SELECT * FROM your_table and download the CSV. With the JS client, call supabase.from('your_table').select('*') in your export script.
Does exporting data affect my Supabase project's performance?
Large pg_dump exports can temporarily increase database load. For production databases, schedule exports during low-traffic periods. The JS client approach with pagination is gentler on the database because it fetches data in small batches.
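One way to make a paginated export even gentler is to pause briefly between batches. A sketch under stated assumptions: `fetchPage(offset, limit)` is a stand-in for a supabase `.range()` call, and the helper, its options, and the in-memory demo table are all illustrative, not part of the JS client:

```javascript
// Fetch all pages from an async page function, pausing between requests
// so the export never saturates the database.
async function paginateGently(fetchPage, { pageSize = 1000, pauseMs = 200 } = {}) {
  const all = []
  let offset = 0
  while (true) {
    const page = await fetchPage(offset, pageSize)
    if (!page || page.length === 0) break
    all.push(...page)
    offset += pageSize
    // Small pause between batches to spread load over time
    await new Promise(resolve => setTimeout(resolve, pauseMs))
  }
  return all
}

// Demo with an in-memory "table" of 2,500 rows
const table = Array.from({ length: 2500 }, (_, i) => ({ id: i }))
const rows = await paginateGently(
  (offset, limit) => Promise.resolve(table.slice(offset, offset + limit)),
  { pageSize: 1000, pauseMs: 1 }
)
console.log(rows.length) // prints 2500
```

The trade-off is a longer wall-clock export time, which is usually acceptable for a scheduled backup job.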
Can I export Supabase data including RLS policies and functions?
Yes. pg_dump exports the full schema by default, including RLS policies, functions, triggers, and indexes. Add --schema=public to limit to your application schema. The SQL Editor CSV and JS client methods export data only, not schema objects.
How do I export data from Supabase to another PostgreSQL database?
Use pg_dump to create a dump file, then restore it with psql or pg_restore on the target database. For custom-format dumps: pg_restore -d target_connection_string export.dump. This preserves all schema objects and data.
What is the maximum amount of data I can export from Supabase?
There is no hard limit on pg_dump exports. For the JS client, paginate in batches of 1,000 rows to avoid timeouts. The SQL Editor CSV download is limited by browser memory, so use pg_dump or the JS client for tables with more than 100,000 rows.
Can RapidDev help with automating Supabase data exports?
Yes. RapidDev can build automated export pipelines that schedule regular database backups, export to cloud storage like S3, and send notifications when exports complete. This is especially useful for compliance and disaster recovery requirements.
Talk to an Expert
Our team has built 600+ apps. Get personalized help with your project.
Book a free consultation