Use Upstash Redis in Bolt.new — not ioredis or the native redis package, which require TCP sockets that fail in WebContainers. Upstash provides an HTTP-based Redis client (@upstash/redis) that works identically in development and production. Create a free Upstash database, add your REST URL and token to .env, and implement caching, session storage, or rate limiting with a familiar Redis API.
Add Redis Caching and Sessions to Bolt.new with Upstash
Redis is the industry standard for caching expensive database queries, storing user sessions, implementing rate limiting, and maintaining real-time leaderboards or counters. It is an in-memory data store with microsecond response times, and it is the first tool engineers reach for when an application's database queries become a performance bottleneck.
However, there is a critical constraint when using Redis in Bolt.new: the standard redis and ioredis npm packages communicate via TCP sockets on port 6379. Bolt.new's WebContainer is a browser-based Node.js runtime that cannot open raw TCP connections to external servers. Installing these packages works fine, but any attempt to connect to Redis at runtime throws a connection error. This catches many developers by surprise — the error appears at runtime, not at install time.
Upstash Redis is the purpose-built solution. Upstash wraps Redis in a serverless REST API and provides the @upstash/redis client, which communicates over HTTPS using standard fetch calls. This works in any environment that supports fetch: Bolt.new's WebContainer, Vercel Edge Functions, Cloudflare Workers, and browser environments. The API is identical to the standard Redis API — SET, GET, HSET, ZADD, EXPIRE, all the commands you know — just with HTTP under the hood instead of TCP. You get a generous free tier (10,000 commands/day), zero infrastructure to manage, and global replication for low-latency access from any deployment region.
Integration method
The native redis and ioredis npm packages use TCP sockets which are not available in Bolt.new's WebContainer runtime. Upstash Redis solves this by providing @upstash/redis, an HTTP/REST-based client that uses standard fetch calls instead of TCP connections — making it the only Redis option that works during Bolt development without any workarounds. After deploying to Netlify or Vercel, you can continue using Upstash or switch to ioredis for a traditional Redis connection if you have a self-hosted Redis instance.
Prerequisites
- An Upstash account at upstash.com — the free tier includes 10,000 commands/day with no credit card required
- An Upstash Redis database created in the Upstash Console with your REST URL and REST Token copied
- A Bolt.new account with a Next.js project open
- Basic familiarity with Redis data types: strings, hashes, sorted sets, and TTL expiry
- A Netlify account for deployment if you need persistent Redis connections for long-running processes
Step-by-step guide
Create an Upstash Redis database and configure the client
Upstash is a serverless Redis provider that offers an HTTP-based client specifically designed for environments where TCP connections are unavailable — including Bolt.new's WebContainer, Vercel Edge Functions, and Cloudflare Workers. Setup is quick, and the free tier is generous enough for development and small production workloads.

Create your database: go to console.upstash.com and sign in with GitHub or Google. Click Create Database. Choose a name (e.g., 'my-bolt-app-cache'), select the region closest to your deployment target (if using Netlify with a US region, pick us-east-1 or us-west-1), and choose the free tier. Click Create. The database provisions in seconds.

After creation, you land on the database detail page. You need two values from the REST API section: the UPSTASH_REDIS_REST_URL (looks like https://us1-example-12345.upstash.io) and the UPSTASH_REDIS_REST_TOKEN (a long JWT-like string). Copy both. These are the only credentials you need — Upstash does not use a separate username/password for the HTTP client.

Add both to your .env file in Bolt. Do not use the NEXT_PUBLIC_ prefix — these are server-side-only credentials. The REST URL is not a sensitive value (it is just a URL), but the token is effectively a password and must never be committed to Git or exposed in client-side code.

The @upstash/redis package itself installs without issues: it is pure JavaScript with no native modules, so it works in Bolt's WebContainer and is ready to use immediately after install.
Set up Upstash Redis in my Next.js project. Install @upstash/redis. Create a .env file with UPSTASH_REDIS_REST_URL=https://your-db.upstash.io and UPSTASH_REDIS_REST_TOKEN=your-token-here. Create a lib/redis.ts file that imports Redis from @upstash/redis and exports a singleton redis client initialized with the URL and token from process.env. Add a check that throws a descriptive error if either env var is missing. Export the client as the default export and also export it as a named export 'redis'.
Paste this in Bolt.new chat
```typescript
// lib/redis.ts
import { Redis } from '@upstash/redis';

const url = process.env.UPSTASH_REDIS_REST_URL;
const token = process.env.UPSTASH_REDIS_REST_TOKEN;

if (!url) {
  throw new Error(
    'UPSTASH_REDIS_REST_URL is not set. ' +
      'Get it from console.upstash.com → your database → REST API section.'
  );
}
if (!token) {
  throw new Error(
    'UPSTASH_REDIS_REST_TOKEN is not set. ' +
      'Get it from console.upstash.com → your database → REST API section.'
  );
}

export const redis = new Redis({ url, token });
export default redis;

// Type-safe cache helpers
export async function getCached<T>(key: string): Promise<T | null> {
  return redis.get<T>(key);
}

export async function setCached<T>(
  key: string,
  value: T,
  ttlSeconds?: number
): Promise<void> {
  if (ttlSeconds) {
    await redis.set(key, value, { ex: ttlSeconds });
  } else {
    await redis.set(key, value);
  }
}

export async function deleteCached(key: string): Promise<void> {
  await redis.del(key);
}
```

Pro tip: Upstash offers a Data Browser in their console where you can inspect, add, and delete keys in real time. Use it during development to verify your caching logic is working correctly — you can see exactly what keys are stored, their values, and their remaining TTL.
Expected result: A configured Upstash Redis client in lib/redis.ts with environment variables in .env, ready to use in API routes for caching and data storage.
Implement API response caching to reduce database load
Caching is the most impactful use of Redis in most web applications. The pattern is simple: before executing an expensive operation (database query, external API call, complex computation), check if a cached result exists in Redis. If it does, return it immediately. If not, execute the operation, store the result in Redis with a TTL, and return it.

The TTL (Time to Live) is a key design decision: it represents how stale your data can be. For product listings on an e-commerce site, 60 seconds might be acceptable — a new product appearing up to a minute late is fine. For real-time stock prices, you might want 5 seconds or no caching at all. For rarely-changing data like navigation menus or configuration, 3600 seconds (1 hour) is appropriate.

Cache keys should be predictable, unique, and human-readable. A good naming convention is `resource:identifier:variant` — for example, `products:all:v1`, `user:123:profile`, `search:query:typescript:page:2`. Including a version suffix (`:v1`) is useful when you change the data structure — you can bust the entire cache by incrementing the version without deleting keys manually.

For Next.js API routes, add the cache check near the top of the handler, before any database calls. The pattern is so common that extracting it into a `withCache` higher-order function pays off quickly — you can wrap any async function to get automatic caching with a single line of code change.

Cache invalidation (deleting or updating cached values when the underlying data changes) is the harder problem. For write operations (creating or updating a product), delete the corresponding cache keys as part of the write transaction so the next read fetches fresh data.
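The `withCache` helper described above can be sketched as a small higher-order function. This is an illustrative sketch, not part of @upstash/redis: the `CacheClient` interface and the `withCache` name are assumptions, chosen so that any client exposing `get` and `set` (including the Upstash singleton from lib/redis.ts) satisfies it.

```typescript
// Minimal shape of the client the wrapper needs (an assumption for this sketch).
// The @upstash/redis client exposes compatible get/set methods, and an
// in-memory stand-in works for testing the caching logic without a network.
interface CacheClient {
  get<T>(key: string): Promise<T | null>;
  set(key: string, value: unknown, opts?: { ex: number }): Promise<unknown>;
}

// Read-through caching: return the cached value when present, otherwise
// run the expensive function, store its result with a TTL, and return it.
export function withCache<T>(
  client: CacheClient,
  key: string,
  ttlSeconds: number,
  fn: () => Promise<T>
): () => Promise<T> {
  return async () => {
    const cached = await client.get<T>(key);
    if (cached !== null) return cached; // cache hit: skip the expensive call
    const fresh = await fn(); // cache miss: compute
    await client.set(key, fresh, { ex: ttlSeconds }); // store with expiry
    return fresh;
  };
}
```

Wrapping a query then becomes a one-liner, e.g. `const getProducts = withCache(redis, 'products:list:v1', 60, fetchProductsFromDatabase)`.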
Add Redis caching to my products API route. In app/api/products/route.ts, import the redis client from lib/redis.ts. At the start of the GET handler, check for a cached response with key 'products:list:v1' using redis.get(). If found, return it with a Cache-Status: HIT header. If not, fetch products from Supabase (or your existing data source), serialize to JSON, store in Redis with redis.set('products:list:v1', data, { ex: 60 }), and return with Cache-Status: MISS header. Also add a POST handler that invalidates the cache when a new product is created, by calling redis.del('products:list:v1').
Paste this in Bolt.new chat
```typescript
// app/api/products/route.ts
import { NextRequest, NextResponse } from 'next/server';
import { redis } from '@/lib/redis';

const CACHE_KEY = 'products:list:v1';
const CACHE_TTL = 60; // seconds

export async function GET() {
  // Check cache first
  const cached = await redis.get<unknown[]>(CACHE_KEY);
  if (cached) {
    return NextResponse.json(cached, {
      headers: { 'Cache-Status': 'HIT', 'X-Cache-TTL': CACHE_TTL.toString() },
    });
  }

  // Cache miss — fetch from database
  // Replace this with your actual data source (Supabase, etc.)
  const products = await fetchProductsFromDatabase();

  // Store in Redis with TTL
  await redis.set(CACHE_KEY, products, { ex: CACHE_TTL });

  return NextResponse.json(products, {
    headers: { 'Cache-Status': 'MISS' },
  });
}

export async function POST(request: NextRequest) {
  const body = await request.json();

  // Create the product in your database
  const newProduct = await createProductInDatabase(body);

  // Invalidate the cached product list
  await redis.del(CACHE_KEY);

  return NextResponse.json(newProduct, { status: 201 });
}

// Placeholder — replace with your actual database query
async function fetchProductsFromDatabase() {
  return [{ id: 1, name: 'Example Product', price: 29.99 }];
}

async function createProductInDatabase(data: unknown) {
  return { id: Date.now(), ...((data as object) || {}) };
}
```

Pro tip: Use the Cache-Status response header to verify your caching layer is working correctly during development. Open the browser Network tab, refresh the page twice, and confirm the second request returns Cache-Status: HIT significantly faster than the first. Response time dropping from 200ms to 5ms confirms Redis is serving the cached response.
Expected result: API routes that serve cached responses from Redis on repeated requests, with cache invalidation triggered by write operations.
Build a sliding window rate limiter with Redis sorted sets
Rate limiting protects your API routes from abuse — brute-force login attempts, contact form spam, and excessive AI API usage. A sliding window rate limiter is more accurate than a fixed window approach because it tracks requests in a rolling time window rather than resetting at fixed intervals (which can allow 2x the rate at window boundaries).

The sliding window algorithm uses Redis sorted sets. Each element in the set is a unique request ID (UUID or timestamp + random), and the score is the Unix timestamp of the request. To count requests in the last N seconds: remove all elements with scores older than `now - windowSeconds`, then count the remaining elements. If the count is below the limit, add the new request to the set (with an expiry on the key to avoid storing stale sets forever) and allow it. If not, reject it with a 429 response.

For identifying clients, use the request's IP address. In Next.js on Netlify, the real client IP is in the `x-forwarded-for` or `x-nf-client-connection-ip` header — read it from the request headers. For authenticated routes, use the user ID instead of the IP for more accurate per-user limiting.

Rate limit parameters should be configurable per route — different routes need different limits. A login endpoint might allow 5 attempts per minute, a contact form 3 per minute, and an AI generation endpoint 10 per hour. Build the rateLimit function to accept these parameters so you can configure each route appropriately.

Always include `Retry-After`, `X-RateLimit-Limit`, `X-RateLimit-Remaining`, and `X-RateLimit-Reset` headers in rate-limited responses. These headers let client-side code display helpful messages like 'Please wait 45 seconds before trying again' rather than a generic error.
Create a reusable rate limiter at lib/rate-limit.ts using Upstash Redis sorted sets and the sliding window algorithm. The rateLimit function should accept: identifier (string — IP or userId), limit (number), windowSeconds (number). Use MULTI/EXEC or pipeline with: ZREMRANGEBYSCORE (remove old entries), ZCARD (count remaining), ZADD (add current request with timestamp score), EXPIRE (auto-cleanup). Return { allowed: boolean, remaining: number, resetAt: number }. Then apply it to app/api/contact/route.ts: limit 5 requests per 60 seconds per IP. Return 429 with Retry-After header when exceeded.
Paste this in Bolt.new chat
```typescript
// lib/rate-limit.ts
import { redis } from './redis';

interface RateLimitResult {
  allowed: boolean;
  remaining: number;
  limit: number;
  resetAt: number;
}

export async function rateLimit(
  identifier: string,
  limit: number,
  windowSeconds: number
): Promise<RateLimitResult> {
  const now = Date.now();
  const windowStart = now - windowSeconds * 1000;
  const key = `rate_limit:${identifier}`;

  // Use a pipeline for atomic operations
  const pipeline = redis.pipeline();
  pipeline.zremrangebyscore(key, 0, windowStart); // Remove expired entries
  pipeline.zcard(key); // Count current entries
  // Add this request (note: rejected requests also land in the set,
  // which makes the limiter strict — a common, intentional variant)
  pipeline.zadd(key, { score: now, member: `${now}-${Math.random()}` });
  pipeline.expire(key, windowSeconds * 2); // Auto-cleanup key

  const results = await pipeline.exec();
  const currentCount = (results[1] as number) ?? 0;

  const allowed = currentCount < limit;
  const remaining = Math.max(0, limit - currentCount - 1);
  const resetAt = Math.ceil((now + windowSeconds * 1000) / 1000);

  return { allowed, remaining, limit, resetAt };
}

// Helper to get the client IP from a Next.js request
export function getClientIp(request: Request): string {
  const forwarded = request.headers.get('x-forwarded-for');
  const nfIp = request.headers.get('x-nf-client-connection-ip');
  return nfIp ?? (forwarded ? forwarded.split(',')[0].trim() : 'unknown');
}
```

Pro tip: For development testing, temporarily lower the rate limit to 2 requests per minute so you can easily trigger and verify the 429 response without waiting. Restore production limits before deploying.
Expected result: A reusable rate limiting utility that uses Redis sorted sets, protecting API routes from abuse with proper Retry-After headers on rate-limited responses.
Deploy to Netlify with Upstash Redis in production
Upstash Redis works identically in development (Bolt.new WebContainer) and production (Netlify, Vercel) because both environments use the same HTTP-based @upstash/redis client. The underlying transport is always HTTPS fetch calls — there is no TCP socket to configure differently between environments. This is the key advantage over traditional Redis setups: with ioredis, you would need to configure different connection strings for local, staging, and production, and handle connection pooling. With Upstash, the same two environment variables work everywhere.

To deploy: click Deploy in Bolt.new and connect to Netlify. After deployment, go to Netlify Dashboard → Site Configuration → Environment Variables and add UPSTASH_REDIS_REST_URL and UPSTASH_REDIS_REST_TOKEN with the same values from your Upstash console. Trigger a redeploy.

For production, consider creating a separate Upstash database for each environment (development, staging, production) — this prevents development testing from polluting your production cache or consuming your production command quota. Upstash's free tier allows multiple databases.

Monitor your Redis usage in the Upstash console: it shows command counts, data transfer, and latency graphs. If you approach the free tier limit of 10,000 commands/day, review your caching strategy — you may be calling Redis more frequently than necessary, or your TTLs may be too short, causing excessive cache misses.

Note: Upstash Redis remains the right choice for serverless deployments (Netlify Functions, Vercel Functions) even after deploying. If you migrate to a long-running server (Railway, Render, Fly.io), you can switch to ioredis with a standard Redis URL — ioredis works in any environment with TCP access.
Prepare my Bolt.new app for Netlify deployment with Upstash Redis. Create a netlify.toml file with the Next.js runtime configuration. Add a README-style comment in lib/redis.ts explaining that UPSTASH_REDIS_REST_URL and UPSTASH_REDIS_REST_TOKEN must be added as environment variables in the Netlify dashboard (Site Configuration → Environment Variables) after deployment. Create a health check API route at app/api/health/route.ts that calls redis.ping() and returns { status: 'ok', redis: 'connected' } or { status: 'error', redis: error.message } to verify the Redis connection is working after deployment.
Paste this in Bolt.new chat
```typescript
// app/api/health/route.ts
import { NextResponse } from 'next/server';
import { redis } from '@/lib/redis';

export async function GET() {
  try {
    const result = await redis.ping();
    return NextResponse.json({
      status: 'ok',
      redis: result === 'PONG' ? 'connected' : 'unexpected response',
      timestamp: new Date().toISOString(),
    });
  } catch (error) {
    return NextResponse.json(
      {
        status: 'error',
        redis: error instanceof Error ? error.message : 'unknown error',
        timestamp: new Date().toISOString(),
      },
      { status: 503 }
    );
  }
}
```

Pro tip: After deploying, visit /api/health in your browser to confirm Redis is connected in production. If it returns an error, check that both UPSTASH_REDIS_REST_URL and UPSTASH_REDIS_REST_TOKEN are set correctly in Netlify's environment variables. A missing or incorrect token is the most common cause of connection failures after deployment.
Expected result: A deployed Netlify app with Upstash Redis working in production, verified by a /api/health endpoint that confirms the Redis connection is active.
Common use cases
API Response Cache to Reduce Database Costs
Cache expensive Supabase or external API responses in Redis with a TTL. Instead of querying the database on every page load, serve the cached result for 60 seconds and only requery when the cache expires. This dramatically reduces database usage and improves response times for read-heavy pages.
Add Redis caching to my product listing API route. Create a lib/redis.ts that initializes @upstash/redis with UPSTASH_REDIS_REST_URL and UPSTASH_REDIS_REST_TOKEN from process.env. In my existing app/api/products/route.ts, check Redis for a cached response first (key: 'products:list'). If found, return it immediately. If not, fetch from Supabase, store the result in Redis with a 60-second TTL using SET with EX option, and return it. Add a Cache-Status header to the response ('HIT' or 'MISS') so I can verify caching in the browser dev tools.
Copy this prompt to try it in Bolt.new
Rate Limiter for API Routes
Protect sensitive API routes (login, contact form, AI calls) from abuse by implementing a sliding window rate limiter. Track request counts per IP address in Redis with a rolling time window — if a client sends too many requests, return 429 Too Many Requests until the window resets.
Build a reusable rate limiter using Upstash Redis. Create a lib/rate-limit.ts file with a rateLimit function that accepts an identifier (IP address or user ID), a limit (max requests), and a window (seconds). Use Redis ZADD and ZCOUNT with sorted sets to implement a sliding window algorithm. Return { success: boolean, remaining: number, resetAt: number }. Apply it to my app/api/contact/route.ts — limit to 5 requests per minute per IP. Return a 429 response with Retry-After header when the limit is exceeded.
Copy this prompt to try it in Bolt.new
Real-Time Leaderboard with Sorted Sets
Build a game or competition leaderboard that updates in real time. Store player scores in a Redis sorted set, which automatically maintains ordering. Use ZADD to update scores and ZRANGE to fetch the top N players — much faster than querying and sorting a SQL table for every page load.
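The sorted-set calls involved can be sketched as a pair of helpers. This is an illustrative sketch, not the exact @upstash/redis API: the `LeaderboardClient` interface and helper names are assumptions, and the flat `[member, score, ...]` reply shape for `withScores` is modeled on the Upstash client; abstracting the client also lets the logic run against any compatible implementation.

```typescript
// Minimal slice of the sorted-set API the helpers need (an assumption for
// this sketch); an in-memory stand-in can satisfy it for testing.
interface LeaderboardClient {
  zadd(key: string, entry: { score: number; member: string }): Promise<unknown>;
  zrange(
    key: string,
    start: number,
    stop: number,
    opts?: { rev?: boolean; withScores?: boolean }
  ): Promise<(string | number)[]>;
}

const KEY = 'leaderboard:global';

// ZADD upserts: submitting a new score for an existing player overwrites it,
// and the sorted set keeps members ordered by score automatically.
export async function submitScore(
  client: LeaderboardClient,
  player: string,
  score: number
): Promise<void> {
  await client.zadd(KEY, { score, member: player });
}

// Top N by score, highest first. With withScores the reply is assumed to be
// a flat [member, score, member, score, ...] array, paired back up here.
export async function topPlayers(
  client: LeaderboardClient,
  n: number
): Promise<{ player: string; score: number }[]> {
  const flat = await client.zrange(KEY, 0, n - 1, { rev: true, withScores: true });
  const rows: { player: string; score: number }[] = [];
  for (let i = 0; i < flat.length; i += 2) {
    rows.push({ player: String(flat[i]), score: Number(flat[i + 1]) });
  }
  return rows;
}
```

Because the sorted set maintains ordering on every ZADD, fetching the top 10 is a single indexed read rather than a full sort on each request.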
Create a leaderboard system using Upstash Redis sorted sets. Create an API route at app/api/leaderboard/route.ts that handles GET (return the top 10 players using ZRANGE with the REV option) and POST (add or update a score using ZADD with the player's userId and score). Store data in Redis key 'leaderboard:global'. Include the player's rank using ZREVRANK. Also create a React Leaderboard component that fetches and displays the top 10 with rank, player name, and score, auto-refreshing every 30 seconds.
Copy this prompt to try it in Bolt.new
Troubleshooting
Error: connect ECONNREFUSED or ERR_SOCKET_CONNECTION_TIMEOUT when using redis or ioredis package in Bolt.new
Cause: The native redis and ioredis packages use TCP sockets on port 6379. Bolt.new's WebContainer cannot open raw TCP connections to external servers — it can only use HTTP/HTTPS. This is a fundamental WebContainer limitation, not a configuration problem.
Solution: Switch to @upstash/redis instead of ioredis or redis. Upstash's HTTP-based client uses fetch instead of TCP sockets and works in WebContainers. Install @upstash/redis, create a free Upstash database at console.upstash.com, and replace your ioredis imports with the Upstash client. The Redis API commands (GET, SET, HGET, etc.) are identical.
```typescript
// Before (fails in WebContainer):
import Redis from 'ioredis';
const redis = new Redis(process.env.REDIS_URL!);

// After (works in WebContainer):
import { Redis } from '@upstash/redis';
const redis = new Redis({
  url: process.env.UPSTASH_REDIS_REST_URL!,
  token: process.env.UPSTASH_REDIS_REST_TOKEN!,
});
```

Redis commands return null even though data was just set
Cause: Most often caused by key name mismatch between SET and GET calls, or by using different Redis client instances that point to different databases. Also common when TTL has expired between set and get during testing.
Solution: Use the Upstash Data Browser (console.upstash.com → your database → Data Browser) to verify keys exist and check their values and TTL. Ensure you are using the same key string in SET and GET. If using multiple Redis clients in your codebase, make sure they all import from the same lib/redis.ts singleton file rather than creating new Redis instances separately.
Free tier limit exceeded error from Upstash (10,000 commands/day)
Cause: Each Redis operation counts as one command — a pipeline with 4 operations counts as 4. If your app polls Redis frequently, uses short TTLs causing many cache misses, or has a high traffic volume, the free tier limit can be reached.
Solution: Review your caching strategy: increase TTL values to reduce cache refresh frequency, add conditional caching that skips Redis for low-cost operations, and use pipeline batching to combine multiple operations into fewer round trips. For production apps with real traffic, upgrade to Upstash Pay-As-You-Go ($0.20 per 100K commands).
UPSTASH_REDIS_REST_TOKEN is undefined after deploying to Netlify
Cause: Environment variables set in .env are never deployed to Netlify — .env files are gitignored and local-only. Netlify has its own environment variable store that must be configured separately.
Solution: Go to Netlify Dashboard → Site Configuration → Environment Variables → Add Variable. Add UPSTASH_REDIS_REST_URL and UPSTASH_REDIS_REST_TOKEN with the same values from your Upstash console. After adding both, click Trigger Deploy → Deploy site to apply them. Netlify requires a redeploy for new environment variables to take effect.
Best practices
- Always use @upstash/redis in Bolt.new projects, not ioredis or the native redis package — only the HTTP-based Upstash client works in WebContainers and serverless functions
- Create a singleton Redis client in lib/redis.ts and import it everywhere — avoid creating multiple Redis instances which each use their own HTTP connection pool
- Use descriptive, namespaced cache keys in the format resource:identifier:version (e.g., 'products:list:v1') to make keys easy to identify and invalidate
- Always set a TTL on cached data — unbounded Redis keys grow indefinitely and can fill your Upstash storage quota unexpectedly
- Increment the version suffix in cache keys (v1 → v2) when you change the shape of cached data to avoid serving stale data with an incompatible structure
- Use Redis pipelines for operations that require multiple commands (like rate limiting) to reduce latency and command count against the Upstash free tier
- Create separate Upstash databases for development and production environments to prevent dev testing from consuming production command quotas
- Monitor your daily command count in the Upstash console — approaching 10,000 commands/day on the free tier is a sign to optimize TTLs or upgrade your plan
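The key-naming and versioning practices above can be captured in a tiny helper. The `cacheKey` function and `CACHE_VERSION` constant are illustrative names for this sketch, not an existing API:

```typescript
// Single place to bump when the shape of cached data changes. Old keys are
// simply never read again and expire on their own via their TTLs.
const CACHE_VERSION = 'v1';

// Build namespaced keys in the resource:identifier:version convention,
// e.g. cacheKey('products', 'list') -> 'products:list:v1'.
export function cacheKey(
  resource: string,
  ...parts: (string | number)[]
): string {
  return [resource, ...parts, CACHE_VERSION].join(':');
}
```

Routing every Redis read and write through one helper like this means a version bump invalidates the whole cache with a one-character change.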
Alternatives
MongoDB Atlas is a document database for persistent CRUD storage — use it instead of Redis when you need to store and query structured data long-term rather than ephemeral cache entries or session data.
PostgreSQL is a relational database for complex queries and persistent data — choose it over Redis when your caching needs are simple and you want to reduce infrastructure dependencies by using your existing database.
MySQL is another relational database option — like PostgreSQL, choose it for persistent storage and complex queries rather than the high-speed in-memory caching and session management Redis excels at.
DynamoDB is a serverless NoSQL database that can handle caching use cases with TTL support — choose it if you are already in the AWS ecosystem and want a single managed service instead of a separate Redis layer.
Frequently asked questions
Why can't I use ioredis or the redis package in Bolt.new?
Both ioredis and the native redis package communicate with Redis servers over TCP sockets on port 6379. Bolt.new's WebContainer is a browser-based Node.js runtime implemented in WebAssembly, and it does not support raw TCP connections to external servers. Only HTTP/HTTPS requests via the fetch API work. Upstash Redis provides @upstash/redis which uses HTTPS internally, making it the only Redis client that works in WebContainers.
Does Upstash Redis work the same as regular Redis?
Yes, from an API perspective. @upstash/redis supports all standard Redis commands: GET, SET, HGET, HSET, ZADD, ZRANGE, EXPIRE, DEL, INCR, LPUSH, and more. The difference is purely at the transport layer — Upstash uses HTTPS REST calls instead of TCP. Response formats and command semantics are identical to standard Redis. Latency is slightly higher than with TCP Redis due to HTTP overhead, but for serverless workloads the difference is negligible.
Is Upstash Redis free?
Upstash offers a free tier with 10,000 commands per day, 256MB storage, and global replication. No credit card is required. For most development projects and small production apps, this is sufficient. Pay-As-You-Go is $0.20 per 100,000 commands beyond the free tier. You can create multiple free databases for separating development and production environments.
Can I switch from Upstash to self-hosted Redis after deploying?
Yes. After deploying to a server environment (Railway, Render, Fly.io) that supports TCP connections, you can switch to ioredis and point it at your self-hosted Redis instance. For Netlify and Vercel serverless functions, Upstash remains the better choice because serverless functions spin up fresh on each invocation and cannot maintain persistent TCP connections. Upstash's HTTP client is stateless by design.
How do I share Redis state between multiple API routes?
Import the redis singleton from lib/redis.ts in every API route that needs it. Since all routes import the same instance, they all connect to the same Upstash database and share the same keyspace. A key set in one API route is immediately readable by another. This is how session data, rate limit counters, and shared caches work across different endpoints.
What happens to my Redis data when I redeploy to Netlify?
Nothing — Upstash Redis is a separate managed service from your deployment environment. Your cached data, session keys, and rate limit counters persist independently of your Netlify deployments. Redeploying your app does not clear Redis data. If you need to flush all cache data after a deployment (for example, when deploying a breaking data structure change), either increment your cache key versions or call redis.flushdb() once after deployment.