You can import data into Supabase in three ways: upload a CSV file directly through the Dashboard Table Editor, use the JavaScript client to bulk-insert JSON rows, or run psql COPY for large datasets. The Dashboard CSV import is the fastest way to get started. For production-scale imports, psql COPY handles millions of rows efficiently by streaming data directly into PostgreSQL.
Three Ways to Import Data into Your Supabase Database
Whether you are migrating from another database, loading spreadsheet data, or seeding a new project, Supabase offers multiple import methods. This tutorial covers the Dashboard CSV importer for quick uploads, the JS client insert method for programmatic imports, and the psql COPY command for high-volume data loading. Each method has different trade-offs for speed, convenience, and data size.
Prerequisites
- A Supabase project with a target table created
- Data in CSV or JSON format ready to import
- For psql: the Supabase connection string from Dashboard → Settings → Database
- For JS client: @supabase/supabase-js installed in your project
Step-by-step guide
Import a CSV file through the Dashboard
The simplest way to import data is through the Supabase Dashboard. Go to the Table Editor and click the Import button (or create a new table from a CSV). Select your CSV file and Supabase will auto-detect column types. You can map the CSV headers to existing table columns, or let Supabase create a new table with columns matching your CSV. This method works well for files up to a few thousand rows.
Expected result: Your CSV data appears in the Table Editor. Each row in the CSV becomes a row in the table.
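As a reference point, a small CSV like the sketch below (the columns are illustrative) works well here: the header row maps to column names, and Supabase infers a type for each column from the values.

```csv
name,email,role
Alice,alice@example.com,admin
Bob,bob@example.com,user
```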
Bulk-insert data with the JavaScript client
For programmatic imports, use the Supabase JS client to insert an array of objects. The insert method accepts an array of rows and sends it in a single request. For datasets larger than 1,000 rows, split the data into batches to avoid request timeouts and payload size limits. RLS policies apply to these inserts, so the authenticated user must have permission to insert into the table.
```typescript
import { createClient } from '@supabase/supabase-js'

const supabase = createClient(
  process.env.SUPABASE_URL!,
  process.env.SUPABASE_SERVICE_ROLE_KEY! // Server-side only
)

// Single batch insert
const rows = [
  { name: 'Alice', email: 'alice@example.com', role: 'admin' },
  { name: 'Bob', email: 'bob@example.com', role: 'user' },
  { name: 'Carol', email: 'carol@example.com', role: 'user' },
]

const { data, error } = await supabase
  .from('users')
  .insert(rows)
  .select()

if (error) console.error('Insert failed:', error.message)

// Batched insert for large datasets
// (largeDataset is your full array of row objects)
const BATCH_SIZE = 500
for (let i = 0; i < largeDataset.length; i += BATCH_SIZE) {
  const batch = largeDataset.slice(i, i + BATCH_SIZE)
  const { error } = await supabase.from('users').insert(batch)
  if (error) {
    console.error(`Batch ${i / BATCH_SIZE} failed:`, error.message)
    break
  }
}
```

Expected result: All rows are inserted into the table. The batched approach logs progress and stops on the first error.
Use psql COPY for large-scale imports
For importing thousands to millions of rows, psql COPY is the fastest method. It streams data directly into PostgreSQL without going through the REST API. Get your connection string from Dashboard → Settings → Database → Connection string (URI). Use the COPY command with your CSV file. This bypasses RLS entirely since you connect as the postgres role.
```shell
# Get your connection string from Dashboard → Settings → Database
# Format: postgresql://postgres.[ref]:[password]@[host]:5432/postgres

# Import CSV with headers
psql "postgresql://postgres.[ref]:[password]@db.[ref].supabase.co:5432/postgres" \
  -c "\COPY public.users (name, email, role) FROM '/path/to/data.csv' WITH (FORMAT csv, HEADER true)"

# For tab-delimited files
psql "postgresql://postgres.[ref]:[password]@db.[ref].supabase.co:5432/postgres" \
  -c "\COPY public.users FROM '/path/to/data.tsv' WITH (FORMAT csv, DELIMITER E'\t', HEADER true)"
```

Expected result: The CSV data is loaded directly into PostgreSQL. psql reports the number of rows copied.
Handle duplicate data with upsert
If your import might contain rows that already exist in the table, use upsert instead of insert. Upsert inserts new rows and updates existing ones based on a unique constraint. Specify the onConflict column to tell Supabase which column determines uniqueness. This is essential for re-running imports without creating duplicate entries.
```typescript
// Upsert: insert or update on conflict
const { data, error } = await supabase
  .from('users')
  .upsert(
    [
      { email: 'alice@example.com', name: 'Alice Updated', role: 'admin' },
      { email: 'dave@example.com', name: 'Dave', role: 'user' },
    ],
    { onConflict: 'email' }
  )
  .select()

// The email column must have a UNIQUE constraint:
// ALTER TABLE users ADD CONSTRAINT users_email_unique UNIQUE (email);
```

Expected result: Existing rows are updated with new values. New rows are inserted. No duplicate key errors occur.
Verify imported data and check row counts
After importing, verify the data landed correctly. Use the SQL Editor in the Dashboard to run a count query and spot-check a few rows. Check for NULL values in required columns and verify that foreign key relationships are intact. For large imports, compare the row count in the database against the source file line count.
```sql
-- Check total rows imported
SELECT count(*) FROM public.users;

-- Check for NULLs in required columns
SELECT count(*) FROM public.users WHERE name IS NULL OR email IS NULL;

-- Spot-check first 10 rows
SELECT * FROM public.users ORDER BY created_at DESC LIMIT 10;
```

Expected result: The row count matches your source data. No unexpected NULL values or missing rows.
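To get the file-side number for that comparison, a small helper along these lines (a sketch; it assumes the file has a single header row) counts the CSV's data rows:

```typescript
// Count non-empty data rows in CSV text, excluding the header row.
function countCsvDataRows(csvText: string): number {
  const lines = csvText.split('\n').filter((line) => line.trim() !== '')
  return Math.max(lines.length - 1, 0) // subtract the header row
}

// Example: read the source file and compare with SELECT count(*):
// import { readFileSync } from 'fs'
// console.log(countCsvDataRows(readFileSync('./data/users.csv', 'utf-8')))
```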
Complete working example
```typescript
import { createClient } from '@supabase/supabase-js'
import { readFileSync } from 'fs'
import { parse } from 'csv-parse/sync'

// Use service role key for server-side imports (bypasses RLS)
const supabase = createClient(
  process.env.SUPABASE_URL!,
  process.env.SUPABASE_SERVICE_ROLE_KEY!
)

interface UserRow {
  name: string
  email: string
  role: string
}

async function importCSV(filePath: string, tableName: string) {
  // 1. Read and parse CSV
  const fileContent = readFileSync(filePath, 'utf-8')
  const records: UserRow[] = parse(fileContent, {
    columns: true,
    skip_empty_lines: true,
    trim: true,
  })

  console.log(`Parsed ${records.length} rows from ${filePath}`)

  // 2. Insert in batches
  const BATCH_SIZE = 500
  let inserted = 0

  for (let i = 0; i < records.length; i += BATCH_SIZE) {
    const batch = records.slice(i, i + BATCH_SIZE)

    const { error } = await supabase
      .from(tableName)
      .upsert(batch, { onConflict: 'email' })

    if (error) {
      console.error(`Batch ${Math.floor(i / BATCH_SIZE) + 1} failed:`, error.message)
      break
    }

    inserted += batch.length
    console.log(`Imported ${inserted} / ${records.length} rows`)
  }

  // 3. Verify
  const { count } = await supabase
    .from(tableName)
    .select('*', { count: 'exact', head: true })

  console.log(`Total rows in ${tableName}: ${count}`)
}

importCSV('./data/users.csv', 'users')
```

Common mistakes when importing data into Supabase
Mistake: Using the anon key for server-side import scripts, causing RLS to block inserts.
How to avoid: Use SUPABASE_SERVICE_ROLE_KEY for server-side scripts that need to bypass RLS. The anon key respects RLS and requires matching policies for every insert.
Mistake: Sending all rows in a single insert request, causing timeouts on large datasets.
How to avoid: Split imports into batches of 500-1,000 rows. This stays under request payload limits and keeps each request within timeout bounds.
Mistake: Using the pooler connection string (port 6543) for psql COPY operations.
How to avoid: Use the direct connection string on port 5432. Connection pooling via Supavisor does not support the COPY protocol.
Best practices
- Use the Dashboard CSV import for quick one-time uploads of small datasets (under 5,000 rows)
- Use the service role key for server-side import scripts and never expose it in client code
- Split large imports into batches of 500-1,000 rows to avoid timeouts and payload limits
- Use upsert with onConflict to make imports idempotent and safe to re-run
- Always verify row counts and check for NULL values after completing an import
- Use psql COPY for imports exceeding 10,000 rows — it is orders of magnitude faster than the REST API
- Disable triggers and indexes before massive imports, then re-enable them afterward to speed up the process
Still stuck?
Copy one of these prompts to get a personalized, step-by-step explanation.
I have a CSV file with 50,000 rows of user data (name, email, role columns). Walk me through the fastest way to import this into a Supabase table, handling duplicates by email and verifying the import was successful.
Write a Node.js script that reads a CSV file, parses it, and imports the data into a Supabase table in batches of 500 rows using upsert. Use the service role key and log progress after each batch.
Frequently asked questions
What is the maximum CSV file size I can import through the Dashboard?
The Dashboard CSV importer handles files up to approximately 100MB. For larger files, use psql COPY which has no practical file size limit since it streams data directly to PostgreSQL.
Does the CSV import respect RLS policies?
The Dashboard CSV import runs as the postgres role, which bypasses RLS. The JS client insert respects RLS unless you use the service role key. psql COPY also bypasses RLS since it connects as the postgres user.
How do I import data with foreign key relationships?
Import the parent table first, then import the child table. Make sure the foreign key values in the child data match existing primary keys in the parent table. If they do not match, PostgreSQL will reject the rows with a foreign key violation error.
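As a pre-flight check before inserting child rows, a helper like this (a sketch; the `author_id`/`books`/`authors` names are hypothetical) can flag child rows whose foreign key has no matching parent:

```typescript
// Return child rows whose foreign key value is missing from the parent ID set.
// fkColumn names the child column that references the parent primary key.
function findOrphanedRows<T extends Record<string, unknown>>(
  children: T[],
  fkColumn: string,
  parentIds: Set<unknown>
): T[] {
  return children.filter((row) => !parentIds.has(row[fkColumn]))
}

// Example: flag books whose author_id is not among the imported authors.
// const orphans = findOrphanedRows(books, 'author_id', new Set(authors.map((a) => a.id)))
```

Running this before the child-table insert surfaces foreign key problems up front instead of mid-import.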
Can I import JSON data instead of CSV?
The Dashboard only supports CSV. For JSON data, use the JS client to parse the JSON array and pass it directly to the insert or upsert method. Each object in the array becomes a row.
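A minimal sketch of that flow, assuming a `users` table and the same env var names used earlier in this guide:

```typescript
// Parse a JSON string into an array of row objects, rejecting non-array input.
function parseJsonRows(jsonText: string): Record<string, unknown>[] {
  const parsed = JSON.parse(jsonText)
  if (!Array.isArray(parsed)) {
    throw new Error('Expected a top-level JSON array of row objects')
  }
  return parsed
}

// Usage (requires valid Supabase credentials):
// import { readFileSync } from 'fs'
// import { createClient } from '@supabase/supabase-js'
// const supabase = createClient(process.env.SUPABASE_URL!, process.env.SUPABASE_SERVICE_ROLE_KEY!)
// const rows = parseJsonRows(readFileSync('./data/users.json', 'utf-8'))
// const { error } = await supabase.from('users').insert(rows)
```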
How do I handle date formats in CSV imports?
PostgreSQL accepts ISO 8601 format (2024-01-15T10:30:00Z) natively. If your CSV uses a different format like MM/DD/YYYY, convert it to ISO 8601 before importing or use a staging table with text columns and transform with SQL.
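For the MM/DD/YYYY case, a small converter along these lines (a sketch, not a full date-parsing solution) can be run over the affected column of each row before insert:

```typescript
// Convert an MM/DD/YYYY string to ISO 8601 date format (YYYY-MM-DD).
function toIso8601(mmddyyyy: string): string {
  const match = mmddyyyy.match(/^(\d{1,2})\/(\d{1,2})\/(\d{4})$/)
  if (!match) {
    throw new Error(`Unrecognized date format: ${mmddyyyy}`)
  }
  const [, month, day, year] = match
  return `${year}-${month.padStart(2, '0')}-${day.padStart(2, '0')}`
}
```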
Can RapidDev help with complex data migrations to Supabase?
Yes. RapidDev can plan and execute data migrations from any source into Supabase, including schema mapping, data transformation, handling foreign key relationships, and verifying data integrity.
Talk to an Expert
Our team has built 600+ apps. Get personalized help with your project.
Book a free consultation