RapidDev - Software Development Agency

How to Increase Query Speed in Supabase


What you'll learn

  • How to use EXPLAIN ANALYZE to identify slow queries and missing indexes
  • How to create the right type of index (btree, GIN, GiST) for your query patterns
  • How to optimize RLS policies to avoid per-row function evaluation
  • How to configure connection pooling for high-concurrency workloads
Level: Beginner | Reading time: 9 min | Hands-on time: 15-20 min | Applies to: Supabase (all plans), PostgreSQL 15+, @supabase/supabase-js v2+ | Last updated: March 2026 | Author: RapidDev Engineering Team
TL;DR

To speed up queries in Supabase, start by adding indexes on columns used in WHERE clauses, JOIN conditions, and RLS policies. Use EXPLAIN ANALYZE in the SQL Editor to identify slow queries and missing indexes. Enable connection pooling via Supavisor for serverless workloads, select only the columns you need instead of using select('*'), and wrap auth.uid() calls in RLS policies with a SELECT subquery to enable per-statement caching.

Optimizing Query Performance in Supabase

Supabase is powered by PostgreSQL, which means you have access to the full range of PostgreSQL performance optimization tools. Slow queries in Supabase usually come from missing indexes, inefficient RLS policies that evaluate per-row instead of per-statement, selecting too many columns, or connection exhaustion in serverless environments. This tutorial covers the practical techniques that make the biggest impact on query speed.

Prerequisites

  • A Supabase project with tables containing data
  • Access to the Supabase Dashboard SQL Editor
  • Basic understanding of SQL queries and indexes
  • The Supabase JS client for testing query performance

Step-by-step guide

1

Identify slow queries with EXPLAIN ANALYZE

The first step in optimization is measurement. Use EXPLAIN ANALYZE in the SQL Editor to see how PostgreSQL executes a query, including the execution plan, the time spent in each step, and whether indexes are being used. Look for a Sequential Scan on a large table: it means PostgreSQL is reading every row instead of using an index. The actual time= values in the output show milliseconds spent in each plan node.

sql
-- Run EXPLAIN ANALYZE on your slow query
EXPLAIN ANALYZE
SELECT * FROM orders
WHERE user_id = 'some-uuid'
  AND status = 'pending'
ORDER BY created_at DESC
LIMIT 20;

-- Look for:
-- Seq Scan (bad on large tables; the column needs an index)
-- Index Scan (good; an index is being used)
-- actual time= (milliseconds per operation)
-- rows= (how many rows were processed)

Expected result: EXPLAIN ANALYZE shows the execution plan. Sequential Scans on large tables indicate where indexes are needed.
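EXPLAIN ANALYZE measures server-side execution time only. To compare end-to-end latency from the client before and after an optimization, a small timing wrapper is handy. This is a generic sketch; the timed helper and its labels are our own illustration, not part of the supabase-js API:

```typescript
// A small, generic helper for timing any promise-returning query from the
// client, useful for before/after comparisons when adding indexes.
// Note: `timed` is an illustrative helper, not a supabase-js function.

async function timed<T>(label: string, fn: () => Promise<T>): Promise<T> {
  const start = performance.now(); // global in Node 16+ and in browsers
  const result = await fn();
  const ms = performance.now() - start;
  console.log(`${label}: ${ms.toFixed(1)} ms`);
  return result;
}

// Usage with any Supabase query, for example:
// const { data } = await timed('orders lookup', () =>
//   supabase.from('orders').select('id, status').eq('user_id', userId).limit(20)
// );
```

Remember that client-side timings include network latency, so run each comparison several times and look at the trend rather than a single number.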

2

Add indexes on frequently filtered columns

Create B-tree indexes on columns that appear in WHERE clauses, ORDER BY, and JOIN conditions. For composite queries that filter on multiple columns, create a multi-column index. The order of columns in a composite index matters — put the most selective column first. For columns used in RLS policies (like user_id), an index is especially critical because the policy is evaluated for every query.

sql
-- Single-column index for user_id lookups
CREATE INDEX idx_orders_user_id
ON orders USING btree (user_id);

-- Composite index for multi-column filters
CREATE INDEX idx_orders_user_status
ON orders USING btree (user_id, status);

-- Index for sorting (ORDER BY)
CREATE INDEX idx_orders_created_at
ON orders USING btree (created_at DESC);

-- Covering index (includes all selected columns to avoid a table lookup)
CREATE INDEX idx_orders_user_covering
ON orders USING btree (user_id)
INCLUDE (status, total, created_at);

-- Partial index (only indexes rows matching a condition)
CREATE INDEX idx_orders_pending
ON orders USING btree (user_id)
WHERE status = 'pending';

Expected result: After adding indexes, re-run EXPLAIN ANALYZE. Sequential Scans should change to Index Scans, and execution time should decrease.

3

Optimize RLS policies for performance

RLS policies are evaluated for every query, so poorly written policies can severely impact performance. The biggest optimization is wrapping function calls like auth.uid() in a SELECT subquery. This tells PostgreSQL to evaluate the function once per statement and cache the result, instead of evaluating it for every row. Also avoid correlated lookups into other tables inside policies; an IN subquery that can be computed once per statement is usually faster than a per-row EXISTS.

sql
-- SLOW: auth.uid() evaluated per row
CREATE POLICY "slow_policy" ON orders FOR SELECT
TO authenticated
USING (auth.uid() = user_id);

-- FAST: (SELECT auth.uid()) cached per statement
CREATE POLICY "fast_policy" ON orders FOR SELECT
TO authenticated
USING ((SELECT auth.uid()) = user_id);

-- SLOW: correlated EXISTS against another table, evaluated per row
CREATE POLICY "slow_team_policy" ON documents FOR SELECT
TO authenticated
USING (
  EXISTS (
    SELECT 1 FROM team_members
    WHERE team_members.user_id = auth.uid()
      AND team_members.team_id = documents.team_id
  )
);

-- FASTER: IN subquery with cached auth.uid()
CREATE POLICY "fast_team_policy" ON documents FOR SELECT
TO authenticated
USING (
  team_id IN (
    SELECT team_id FROM team_members
    WHERE user_id = (SELECT auth.uid())
  )
);

Expected result: Queries with RLS are noticeably faster. The policy evaluation overhead drops from O(n) to O(1) for the auth function call.
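The per-row vs per-statement difference can be sketched outside the database. Here is a plain TypeScript simulation (our own illustration, not Supabase code) that counts how often an auth-like function runs under each strategy:

```typescript
// Hypothetical simulation of per-row vs per-statement function evaluation.
// Counts how many times an "auth.uid()"-like function runs while filtering rows.

type Row = { user_id: string };

// Wrap the "auth function" so invocations can be counted.
function makeCountingUid() {
  let n = 0;
  return {
    uid: () => { n++; return "me"; },
    calls: () => n,
  };
}

const rows: Row[] = Array.from({ length: 1000 }, (_, i) => ({
  user_id: i % 2 === 0 ? "me" : "other",
}));

// Per-row: re-evaluated for every row, like bare auth.uid() in a policy.
const perRow = makeCountingUid();
const slowMatched = rows.filter((r) => r.user_id === perRow.uid());

// Per-statement: evaluated once and cached, like (SELECT auth.uid()).
const perStatement = makeCountingUid();
const cached = perStatement.uid();
const fastMatched = rows.filter((r) => r.user_id === cached);

console.log(perRow.calls(), perStatement.calls()); // 1000 vs 1 evaluations
console.log(slowMatched.length === fastMatched.length); // same result either way
```

Both strategies return the same rows; only the number of function evaluations changes, which is exactly why the SELECT wrapper is safe to apply everywhere.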

4

Select only the columns you need

Using select('*') in the JS client fetches every column from the table, which increases data transfer time, memory usage, and can prevent PostgreSQL from using covering indexes. Specify only the columns you need. For related data, use the nested select syntax to fetch joined data in a single query instead of making multiple round trips.

typescript
import { createClient } from '@supabase/supabase-js';

const supabase = createClient(
  process.env.NEXT_PUBLIC_SUPABASE_URL!,
  process.env.NEXT_PUBLIC_SUPABASE_ANON_KEY!
);

// SLOW: fetches all columns
const { data } = await supabase.from('orders').select('*');

// FAST: fetches only needed columns
const { data: optimized } = await supabase
  .from('orders')
  .select('id, status, total, created_at');

// FAST: fetch related data in one query (no extra round trip)
const { data: withItems } = await supabase
  .from('orders')
  .select('id, total, order_items(product_name, quantity)')
  .eq('user_id', userId)
  .order('created_at', { ascending: false })
  .limit(20);

Expected result: Queries return faster with smaller payloads. Network transfer time decreases proportionally to the data reduction.
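As a rough illustration of why projections shrink payloads, here is a standalone TypeScript sketch (the row shape and field names are made up for the example) comparing serialized sizes:

```typescript
// Illustrative only: compares the serialized size of a full row vs a projection,
// mimicking the payload difference between select('*') and a column list.
// The row shape below is hypothetical, not a real Supabase table.

const fullRow = {
  id: "a1",
  status: "pending",
  total: 99.5,
  created_at: "2026-03-01T00:00:00Z",
  shipping_address: "221B Baker Street, London",
  billing_address: "221B Baker Street, London",
  internal_notes: "repeat customer, priority handling",
  raw_payment_payload: { provider: "stripe", token: "tok_xxx", attempts: 3 },
};

// Projection: keep only the fields the UI actually renders.
const { id, status, total, created_at } = fullRow;
const projected = { id, status, total, created_at };

const fullBytes = JSON.stringify(fullRow).length;
const projectedBytes = JSON.stringify(projected).length;
console.log(projectedBytes < fullBytes); // the projection is strictly smaller
```

The savings compound: the database reads less, the API serializes less, and the client parses less, all per row returned.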

5

Enable connection pooling for serverless workloads

Serverless functions create new database connections on every invocation, which can quickly exhaust the connection limit. Supabase provides Supavisor, a connection pooler, accessible on port 6543 in transaction mode. Use the pooled connection string for serverless environments (Vercel, Netlify, Edge Functions) and the direct connection string for long-running processes.

shell
# For serverless environments (Vercel, Netlify, etc.), use the pooled
# connection string (port 6543, transaction mode). Find it in
# Dashboard → Settings → Database → Connection string → Transaction mode.

# .env for serverless (Supavisor transaction mode)
DATABASE_URL="postgresql://postgres.xxxx:password@aws-0-us-east-1.pooler.supabase.com:6543/postgres"

# .env for long-running processes (Supavisor session mode on port 5432;
# a direct connection to the database host also works here)
DATABASE_URL="postgresql://postgres.xxxx:password@aws-0-us-east-1.pooler.supabase.com:5432/postgres"

# For Prisma behind the transaction pooler, add pgbouncer=true:
# DATABASE_URL="postgresql://...@...pooler.supabase.com:6543/postgres?pgbouncer=true"

Expected result: Serverless functions share database connections through the pooler instead of each creating a new connection.
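The pooling idea itself can be sketched in a few lines. This is a deliberately tiny, generic pool (not Supavisor or PgBouncer; the TinyPool class and demo are our own illustration) showing how many concurrent callers share a fixed set of connections:

```typescript
// A minimal, generic pool sketch: callers borrow a connection, use it,
// and return it; excess callers wait instead of opening new connections.
// TinyPool is an illustration of the concept, not a production pooler.

class TinyPool<T> {
  private idle: T[];
  private waiters: ((conn: T) => void)[] = [];

  constructor(connections: T[]) {
    this.idle = [...connections];
  }

  async acquire(): Promise<T> {
    const conn = this.idle.pop();
    if (conn !== undefined) return conn;
    // No idle connection: wait until one is released.
    return new Promise<T>((resolve) => this.waiters.push(resolve));
  }

  release(conn: T): void {
    const waiter = this.waiters.shift();
    if (waiter) waiter(conn);   // hand off directly to a waiting caller
    else this.idle.push(conn);  // otherwise return it to the idle set
  }
}

// 10 concurrent "requests" share only 2 connections.
async function demo(): Promise<number> {
  const pool = new TinyPool(["conn-1", "conn-2"]);
  let completed = 0;
  await Promise.all(
    Array.from({ length: 10 }, async () => {
      const conn = await pool.acquire();
      await new Promise((r) => setTimeout(r, 1)); // pretend to run a query
      completed++;
      pool.release(conn);
    })
  );
  return completed;
}

demo().then((n) => console.log(n)); // all 10 requests complete on 2 connections
```

Supavisor applies the same principle at the database level: thousands of short-lived serverless clients map onto a small, stable set of PostgreSQL connections.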

6

Use specialized indexes for full-text search and JSON queries

B-tree indexes work for equality and range comparisons, but other query patterns need specialized index types. Use GIN indexes for full-text search (tsvector columns), JSONB containment queries, and array operations. Use GiST indexes for geometric and range data types. These indexes can dramatically speed up queries that B-tree cannot optimize.

sql
-- GIN index for full-text search
CREATE INDEX idx_articles_fts
ON articles USING gin (to_tsvector('english', title || ' ' || body));

-- GIN index for JSONB containment queries
CREATE INDEX idx_products_metadata
ON products USING gin (metadata);

-- Usage: find products with specific metadata
SELECT * FROM products
WHERE metadata @> '{"category": "electronics"}';

-- GIN index for array columns
CREATE INDEX idx_posts_tags
ON posts USING gin (tags);

-- Usage: find posts with a specific tag
SELECT * FROM posts
WHERE tags @> ARRAY['supabase'];

Expected result: Full-text search and JSONB queries use the GIN index instead of sequential scans, reducing query time from seconds to milliseconds on large tables.

Complete working example

performance-optimization.sql
-- Supabase Query Performance Optimization Script
-- Run these in the SQL Editor to optimize your database

-- 1. Find tables with heavy sequential scan activity (likely missing indexes)
SELECT
  schemaname,
  relname AS table_name,
  seq_scan,
  seq_tup_read,
  idx_scan,
  n_live_tup AS row_count
FROM pg_stat_user_tables
WHERE seq_scan > 100
  AND n_live_tup > 1000
  AND idx_scan < seq_scan
ORDER BY seq_tup_read DESC;

-- 2. Create indexes for common query patterns
CREATE INDEX IF NOT EXISTS idx_orders_user_id
ON orders USING btree (user_id);

CREATE INDEX IF NOT EXISTS idx_orders_status_created
ON orders USING btree (status, created_at DESC);

-- Note: if profiles.id is the primary key, it already has an index and this
-- statement is unnecessary.
CREATE INDEX IF NOT EXISTS idx_profiles_user_id
ON profiles USING btree (id);

-- 3. Optimize RLS policies (drop slow, create fast)
-- DROP POLICY IF EXISTS "slow_policy" ON orders;
CREATE POLICY "optimized_select" ON orders FOR SELECT
TO authenticated
USING ((SELECT auth.uid()) = user_id);

-- 4. Check index usage statistics
SELECT
  indexrelname AS index_name,
  relname AS table_name,
  idx_scan AS times_used,
  pg_size_pretty(pg_relation_size(indexrelid)) AS index_size
FROM pg_stat_user_indexes
ORDER BY idx_scan DESC;

-- 5. Find unused indexes (candidates for removal)
SELECT
  indexrelname AS index_name,
  relname AS table_name,
  idx_scan AS times_used,
  pg_size_pretty(pg_relation_size(indexrelid)) AS index_size
FROM pg_stat_user_indexes
WHERE idx_scan = 0
  AND indexrelname NOT LIKE '%pkey%'
ORDER BY pg_relation_size(indexrelid) DESC;

-- 6. Analyze tables to update statistics for the query planner
ANALYZE orders;
ANALYZE profiles;

Common mistakes when increasing query speed in Supabase

Mistake: Using auth.uid() directly in RLS policies instead of wrapping it in (SELECT auth.uid()).

How to avoid: Always use (SELECT auth.uid()) in RLS policies. The SELECT wrapper enables per-statement caching, avoiding re-evaluation for every row.

Mistake: Creating too many indexes, which slows down INSERT, UPDATE, and DELETE operations.

How to avoid: Only index columns that appear in frequent WHERE, ORDER BY, and JOIN clauses. Check pg_stat_user_indexes to find and remove unused indexes.

Mistake: Using select('*') in the JS client when only a few columns are needed.

How to avoid: Specify only the columns you need: select('id, name, status'). This reduces data transfer and can enable covering index usage.

Mistake: Using direct database connections in serverless functions instead of the connection pooler.

How to avoid: Use the pooled connection string (port 6543) for serverless workloads to share connections and avoid hitting the max connections limit.

Best practices

  • Run EXPLAIN ANALYZE on slow queries before making optimization changes to establish a baseline
  • Add btree indexes on columns used in WHERE clauses, especially user_id columns referenced in RLS policies
  • Always use (SELECT auth.uid()) instead of auth.uid() in RLS policies for per-statement caching
  • Select only the columns you need instead of using select('*')
  • Use the pooled connection string for serverless environments and direct connections for long-running processes
  • Run ANALYZE on tables after bulk data changes to update the query planner's statistics
  • Monitor the Dashboard Query Performance page regularly to catch new slow queries
  • Use partial indexes for queries that frequently filter on a specific value (WHERE status = 'active')

Still stuck?

Copy one of these prompts to get a personalized, step-by-step explanation.

ChatGPT Prompt

My Supabase queries are slow on a table with 100,000+ rows. The table has RLS enabled with policies using auth.uid(). Walk me through using EXPLAIN ANALYZE, creating the right indexes, optimizing my RLS policies, and checking connection pooling configuration.

Supabase Prompt

Analyze my Supabase database for performance issues. Find tables with high sequential scan counts, suggest indexes based on common query patterns, and generate optimized RLS policies that use SELECT-wrapped function calls for per-statement caching.

Frequently asked questions

How do I know which columns to index?

Index columns that appear in WHERE clauses, ORDER BY, JOIN conditions, and RLS policies. Use EXPLAIN ANALYZE to confirm a query uses Sequential Scan on a large table — that column needs an index.

Do indexes slow down writes?

Yes, every index adds a small overhead to INSERT, UPDATE, and DELETE operations because the index must be updated too. The tradeoff is almost always worth it for read-heavy workloads, but avoid creating unnecessary indexes.

What is the difference between btree and GIN indexes?

B-tree indexes are best for equality (=), range (<, >), and sorting queries. GIN indexes are best for full-text search, JSONB containment (@>), and array operations. Use the index type that matches your query pattern.

Why are my queries fast in the SQL Editor but slow from the API?

The SQL Editor runs as the postgres superuser, which bypasses RLS. API queries run as the anon or authenticated role and must evaluate RLS policies. Optimize your RLS policies by wrapping function calls in SELECT subqueries and adding indexes on policy columns.

How many connections does my Supabase plan allow?

The free plan allows 60 direct connections, Pro allows 200, and higher plans scale further. Using the connection pooler (port 6543) lets you handle many more concurrent clients by sharing a smaller pool of actual database connections.

Can RapidDev help optimize my Supabase database performance?

Yes. RapidDev can audit your database schema, query patterns, and RLS policies to identify performance bottlenecks, create optimized indexes, and configure connection pooling for your specific workload.
