RapidDev - Software Development Agency

How to Store JSON Data in Supabase

What you'll learn

  • How to create jsonb columns in Supabase tables
  • How to insert and update JSON data using the Supabase JS client
  • How to query nested JSON fields with arrow operators and filters
  • How to index JSON paths for faster queries
Difficulty: Beginner · Read time: 8 min · Time to complete: 10-15 min · Requirements: Supabase (all plans), @supabase/supabase-js v2+, PostgreSQL 14+ · Updated: March 2026 · By the RapidDev Engineering Team
TL;DR

Supabase uses PostgreSQL's native jsonb column type to store JSON data. Create a column with the jsonb type, insert JSON objects directly using the Supabase JS client, and query nested fields with the -> and ->> operators in SQL or the arrow syntax in the JS client. Index frequently queried JSON paths with GIN indexes for fast lookups.

Storing and Querying JSON Data in Supabase

PostgreSQL's jsonb type lets you store structured JSON data alongside relational columns. This is useful for flexible schemas like user preferences, metadata, form responses, or any data that varies in structure between rows. This tutorial covers creating jsonb columns, inserting JSON data, querying nested fields, and adding indexes for performance.

Prerequisites

  • A Supabase project with access to the SQL Editor
  • Basic understanding of JSON structure
  • @supabase/supabase-js installed in your project

Step-by-step guide

Step 1: Create a table with a jsonb column

Use the SQL Editor in the Supabase Dashboard to create a table with a jsonb column. The jsonb type stores JSON in a binary format that supports indexing and efficient querying. Always prefer jsonb over json — it is faster for reads, supports indexing, and removes duplicate keys. You can also add a jsonb column to an existing table with ALTER TABLE.

sql
-- Create a new table with a jsonb column
create table public.products (
  id bigint generated always as identity primary key,
  name text not null,
  metadata jsonb default '{}',
  created_at timestamptz default now()
);

-- Enable RLS (required for any table accessed via the API)
alter table public.products enable row level security;

-- Allow authenticated users to read products
create policy "Authenticated users can read products"
  on public.products for select
  to authenticated
  using (true);

Expected result: A products table with a jsonb metadata column is created with RLS enabled.

Step 2: Insert JSON data using the Supabase JS client

Pass a JavaScript object to the jsonb column and Supabase automatically serializes it to JSON. You can store nested objects, arrays, numbers, strings, and booleans. There is no need to call JSON.stringify — the client handles serialization. You can also insert multiple rows with different JSON structures in the same column since jsonb is schemaless.

typescript
import { createClient } from '@supabase/supabase-js'

const supabase = createClient(
  process.env.NEXT_PUBLIC_SUPABASE_URL!,
  process.env.NEXT_PUBLIC_SUPABASE_ANON_KEY!
)

// Insert a product with JSON metadata
const { data, error } = await supabase
  .from('products')
  .insert({
    name: 'Wireless Keyboard',
    metadata: {
      brand: 'Logitech',
      color: 'black',
      weight_grams: 450,
      features: ['bluetooth', 'rechargeable', 'backlit'],
      dimensions: { width: 43, height: 2, depth: 13 }
    }
  })
  .select()

Expected result: A new product row is inserted with the full JSON metadata object stored in the metadata column.

Step 3: Update specific JSON fields without overwriting the entire object

To update individual fields inside a jsonb column without replacing the whole object, use the PostgreSQL jsonb concatenation operator (||) via an RPC function, or use the Supabase client's update method with a merged object. The simplest approach for the JS client is to read the current value, merge locally, and write back. For atomic updates, create a database function that uses the || operator.

sql
-- SQL function for atomic JSON field update
create or replace function update_product_metadata(
  product_id bigint,
  new_fields jsonb
)
returns void
language plpgsql
security invoker
as $$
begin
  update public.products
  set metadata = metadata || new_fields
  where id = product_id;
end;
$$;

typescript
// Call the function from JavaScript
const { data, error } = await supabase.rpc('update_product_metadata', {
  product_id: 1,
  new_fields: { color: 'white', on_sale: true }
})

Expected result: The metadata column is updated with the new fields merged in, preserving all existing fields that were not specified.

Step 4: Query JSON fields using arrow operators

PostgreSQL provides two operators for accessing JSON fields: -> returns a JSON value and ->> returns a text value. Use -> for traversing nested objects and ->> for the final value you want to compare. In the Supabase JS client, you can filter on JSON fields using the arrow syntax in column names.

sql
-- Query products by a JSON field
select name, metadata->>'brand' as brand
from public.products
where metadata->>'brand' = 'Logitech';

-- Query nested JSON fields
select name, metadata->'dimensions'->>'width' as width
from public.products
where (metadata->'dimensions'->>'width')::int > 40;

typescript
// Filter by JSON field
const { data, error } = await supabase
  .from('products')
  .select('name, metadata->brand')
  .eq('metadata->>brand', 'Logitech')

// Check if a JSON array contains a value
const { data: matches, error: containsError } = await supabase
  .from('products')
  .select('*')
  .contains('metadata->features', '["bluetooth"]')

Expected result: Queries return only products matching the specified JSON field values, with nested fields accessible through chained arrow operators.

Step 5: Add GIN indexes on frequently queried JSON paths

Without indexes, querying JSON fields requires a full table scan. PostgreSQL's GIN (Generalized Inverted Index) is optimized for jsonb and supports containment operators (@>, ?, ?&, ?|). Create a GIN index on the entire jsonb column for general queries, or create a btree index on a specific extracted path for exact-match queries on a single field.

sql
-- GIN index on the entire jsonb column (supports @> containment queries)
create index idx_products_metadata on public.products using gin (metadata);

-- Btree index on a specific JSON path (faster for exact-match queries)
create index idx_products_brand on public.products
  using btree ((metadata->>'brand'));

-- Now this query uses the GIN index:
select * from public.products
where metadata @> '{"brand": "Logitech"}';

-- And this query uses the btree index:
select * from public.products
where metadata->>'brand' = 'Logitech';

Expected result: Queries on indexed JSON paths use index scans instead of sequential scans, significantly improving performance on large tables.

Complete working example

json-data-example.ts
// Complete example: Storing and querying JSON data in Supabase
import { createClient } from '@supabase/supabase-js'

const supabase = createClient(
  process.env.NEXT_PUBLIC_SUPABASE_URL!,
  process.env.NEXT_PUBLIC_SUPABASE_ANON_KEY!
)

// Insert a product with JSON metadata
async function createProduct() {
  const { data, error } = await supabase
    .from('products')
    .insert({
      name: 'Wireless Keyboard',
      metadata: {
        brand: 'Logitech',
        color: 'black',
        weight_grams: 450,
        features: ['bluetooth', 'rechargeable', 'backlit'],
        dimensions: { width: 43, height: 2, depth: 13 }
      }
    })
    .select()
  return { data, error }
}

// Query products by JSON field
async function getProductsByBrand(brand: string) {
  const { data, error } = await supabase
    .from('products')
    .select('id, name, metadata->brand')
    .eq('metadata->>brand', brand)
  return { data, error }
}

// Update a JSON field atomically via RPC
async function updateProductMetadata(
  productId: number,
  fields: Record<string, unknown>
) {
  const { data, error } = await supabase.rpc('update_product_metadata', {
    product_id: productId,
    new_fields: fields
  })
  return { data, error }
}

// Query products containing a specific feature
async function getProductsWithFeature(feature: string) {
  const { data, error } = await supabase
    .from('products')
    .select('*')
    .contains('metadata->features', JSON.stringify([feature]))
  return { data, error }
}

Common mistakes when storing JSON Data in Supabase

Mistake: Using the json type instead of jsonb.
Why it's a problem: The json type stores plain text, so it does not support indexing or efficient queries.
How to avoid: Always use jsonb for new columns. If you have existing json columns, migrate them: ALTER TABLE products ALTER COLUMN metadata TYPE jsonb USING metadata::jsonb;

Mistake: Calling JSON.stringify() on objects before inserting them.
Why it's a problem: The value is double-encoded and stored as a JSON string literal instead of a JSON object, so field-level queries no longer work.
How to avoid: Pass JavaScript objects directly to the Supabase client. It handles serialization automatically.

Mistake: Not adding indexes on JSON paths used in WHERE clauses.
Why it's a problem: Every query requires a full table scan, which is slow on large tables.
How to avoid: Add a GIN index for general containment queries, or a btree index on the specific extracted path: CREATE INDEX idx_name ON table USING btree ((column->>'field'));
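The double-encoding mistake is easy to demonstrate locally, with no Supabase call needed. This sketch compares the request body the client would send when you pass an object directly versus when you stringify it yourself (the payload shapes are illustrative):

```typescript
// What the client sends when you pass the object directly (correct):
const metadata = { brand: 'Logitech', features: ['bluetooth'] }
const correctBody = JSON.stringify({ name: 'Keyboard', metadata })

// What gets sent if you stringify the object yourself first (mistake):
const doubleEncoded = JSON.stringify({
  name: 'Keyboard',
  metadata: JSON.stringify(metadata)
})

// Parsed back, the correct payload holds a JSON object...
const good = JSON.parse(correctBody)
console.log(typeof good.metadata) // 'object' -> stored as a queryable jsonb object

// ...while the double-encoded payload holds a string literal,
// so metadata->>'brand' would return null in SQL.
const bad = JSON.parse(doubleEncoded)
console.log(typeof bad.metadata) // 'string' -> stored as one opaque JSON string
```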

Best practices

  • Use jsonb instead of json for better performance, indexing support, and duplicate key handling
  • Set a default value of '{}' on jsonb columns to avoid null handling complexity
  • Use -> for traversing JSON objects and ->> for extracting the final text value for comparisons
  • Add GIN indexes on jsonb columns that are frequently queried with containment operators
  • Create database functions with the || operator for atomic JSON field updates instead of read-modify-write patterns
  • Keep JSON structures relatively flat — deeply nested data is harder to query, index, and validate
  • Consider using separate relational columns for fields that are always present and frequently filtered
  • Enable RLS on tables with jsonb columns and write policies that consider the JSON content if needed
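One subtlety behind the || recommendation above: jsonb concatenation merges at the top level only, so a nested object in new_fields replaces the whole nested object in metadata rather than deep-merging into it. A local TypeScript sketch of the same semantics (jsonbConcat is a hypothetical helper, not a Supabase API):

```typescript
// Mimics `metadata || new_fields`: a shallow, top-level merge
// where keys from the right-hand side win.
function jsonbConcat(
  metadata: Record<string, unknown>,
  newFields: Record<string, unknown>
): Record<string, unknown> {
  return { ...metadata, ...newFields }
}

const metadata = {
  color: 'black',
  dimensions: { width: 43, height: 2, depth: 13 }
}

// The nested object is replaced wholesale, not deep-merged:
const merged = jsonbConcat(metadata, { dimensions: { width: 45 } })
console.log(merged.dimensions) // { width: 45 } -- height and depth are gone
```

To change a single nested key while keeping its siblings, use jsonb_set with a path in SQL, e.g. jsonb_set(metadata, '{dimensions,width}', '45'), instead of ||.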

Still stuck?

Copy one of these prompts to get a personalized, step-by-step explanation.

ChatGPT Prompt

I have a Supabase table called products with a jsonb column called metadata. Show me how to insert JSON data, query by a nested field like metadata.brand, update a single field without overwriting the entire object, and add a GIN index for performance.

Supabase Prompt

Create a products table with a jsonb metadata column, enable RLS, and write a database function that uses the || operator to atomically merge new fields into the metadata column. Include a GIN index on the metadata column.

Frequently asked questions

What is the difference between json and jsonb in PostgreSQL?

The json type stores data as a text string and parses it on every access. The jsonb type stores data in a binary format that supports indexing, is faster for reads, and automatically removes duplicate keys. Always use jsonb in Supabase.

Is there a size limit for jsonb columns in Supabase?

PostgreSQL supports jsonb values up to 1 GB per field. However, very large JSON objects degrade query performance. For large data, consider using Supabase Storage for files and storing only references in the database.

Can I validate the structure of JSON data before inserting?

PostgreSQL does not enforce JSON schemas natively. You can use a check constraint with a database function to validate structure, or validate on the client side before inserting. Supabase Edge Functions are a good place for server-side validation.
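As a sketch of the client-side option, a hand-rolled type guard can reject malformed metadata before it ever reaches insert. ProductMetadata and isProductMetadata are illustrative names, not part of supabase-js; in practice a schema library would also work:

```typescript
type ProductMetadata = {
  brand: string
  weight_grams: number
  features: string[]
}

// Runtime check mirroring the ProductMetadata shape
function isProductMetadata(value: unknown): value is ProductMetadata {
  if (typeof value !== 'object' || value === null) return false
  const v = value as Record<string, unknown>
  return (
    typeof v.brand === 'string' &&
    typeof v.weight_grams === 'number' &&
    Array.isArray(v.features) &&
    v.features.every((f) => typeof f === 'string')
  )
}

console.log(isProductMetadata({ brand: 'Logitech', weight_grams: 450, features: ['bluetooth'] })) // true
console.log(isProductMetadata({ brand: 'Logitech' })) // false -- missing fields
```

Call the guard before supabase.from('products').insert(...) and skip the request when it returns false.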

How do I remove a key from a jsonb object?

Use the - operator in SQL: UPDATE products SET metadata = metadata - 'key_to_remove' WHERE id = 1; For nested keys, use the #- operator with a path array: UPDATE products SET metadata = metadata #- '{dimensions,depth}' WHERE id = 1;
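The path-based removal that #- performs can be mirrored locally. This jsonbDeletePath helper is a hypothetical sketch covering object keys only (the real operator also accepts array indices in the path):

```typescript
// Mimics `metadata #- '{dimensions,depth}'`: remove the key at the
// given path, leaving siblings untouched. Returns a new object.
function jsonbDeletePath(
  obj: Record<string, unknown>,
  path: string[]
): Record<string, unknown> {
  if (path.length === 0) return obj
  const [head, ...rest] = path
  if (!(head in obj)) return obj
  if (rest.length === 0) {
    const { [head]: _removed, ...kept } = obj
    return kept
  }
  const child = obj[head]
  if (typeof child !== 'object' || child === null || Array.isArray(child)) {
    return obj
  }
  return {
    ...obj,
    [head]: jsonbDeletePath(child as Record<string, unknown>, rest)
  }
}

const metadata = { color: 'black', dimensions: { width: 43, depth: 13 } }
console.log(jsonbDeletePath(metadata, ['dimensions', 'depth']))
// { color: 'black', dimensions: { width: 43 } }
```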

Can I use jsonb with RLS policies?

Yes, you can reference jsonb fields in RLS policies. For example: USING (metadata->>'owner_id' = (select auth.uid())::text). However, this is slower than using a dedicated column for access control.

Should I use jsonb or separate columns for structured data?

Use separate columns for data that is always present, frequently filtered, and has a fixed schema. Use jsonb for optional metadata, user preferences, or data that varies between rows. Combining both approaches in the same table is common and recommended.

Can RapidDev help design my Supabase database schema with JSON columns?

Yes, RapidDev can help you design an optimal schema that balances relational columns for performance-critical data with jsonb columns for flexible metadata, including proper indexing and RLS policies.


Need help with your project?

Our experts have built 600+ apps and can accelerate your development. Book a free consultation — no strings attached.

Book a free consultation
