RapidDev - Software Development Agency

How to measure app performance in Replit

Replit provides a Resources panel that shows real-time CPU, RAM, and storage usage for your app. You can combine this with browser-based Lighthouse audits in the Preview pane and server-side timing APIs to measure and report on application performance. This tutorial covers reading the Resources panel, running Lighthouse in Preview DevTools, adding response time tracking to your Express server, and building a simple performance dashboard.

What you'll learn

  • Read CPU, RAM, and storage metrics from Replit's Resources panel
  • Run Lighthouse performance audits using Preview DevTools
  • Add response time tracking middleware to an Express server
  • Build a simple endpoint that reports aggregated performance metrics
Level: Beginner · Read time: 9 min · Time to complete: 20-30 minutes · Plans: All Replit plans (Resources panel available on all plans; PostgreSQL required for storing metrics, Core or Pro plan) · Updated: March 2026 · Author: RapidDev Engineering Team

Measure and Report on App Performance in Replit

Understanding how your application performs is critical before scaling to real users. Replit gives you built-in tools for monitoring resource usage, and you can add lightweight performance tracking to your code for deeper insights. This tutorial shows you how to use the Resources panel for system metrics, run Lighthouse audits for frontend performance, measure API response times with server-side middleware, and store performance data for trend analysis.

Prerequisites

  • A Replit account with a running web application
  • An Express backend (for server-side metrics)
  • Basic familiarity with the Replit workspace (Preview, Shell, Console)
  • Optional: PostgreSQL database enabled for storing metrics over time

Step-by-step guide

Step 1: Open the Resources panel to check system usage

Click the stacked computers icon in the left sidebar to open the Resources panel. This shows three real-time metrics: CPU usage (percentage of allocated vCPU), RAM usage (current versus allocated memory), and Storage usage (disk space consumed versus quota). Check these while your app is running under normal load. If CPU or RAM consistently stays above 80 percent, your app may experience slowdowns or out-of-memory crashes. The Resources panel updates in real time and requires no code changes.

Expected result: The Resources panel displays live CPU, RAM, and storage usage for your running app.
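The panel complements, but does not replace, checks from inside your app. As a rough programmatic cross-check, Node's built-in process.memoryUsage() reports heap and resident-memory figures you can log periodically. This is a minimal sketch; the helper name and the 30-second interval are our own choices, not part of Replit's tooling:

```javascript
// Log Node's own memory figures periodically as a rough cross-check
// against the Resources panel. rss is total resident memory;
// heapUsed is what V8 objects actually occupy.
function logMemoryUsage() {
  const { rss, heapUsed, heapTotal } = process.memoryUsage();
  const mb = (bytes) => (bytes / 1024 / 1024).toFixed(1);
  console.log(`rss=${mb(rss)}MB heapUsed=${mb(heapUsed)}MB heapTotal=${mb(heapTotal)}MB`);
}

logMemoryUsage(); // log once at startup
// unref() so the timer never keeps the process alive on its own
setInterval(logMemoryUsage, 30_000).unref();
```

If heapUsed climbs steadily under constant load, that is an early sign of the memory leaks the Resources panel will eventually surface as high RAM usage.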

Step 2: Run a Lighthouse audit in the Preview pane

Open the Preview pane and load your application. Right-click inside the Preview and select 'Open DevTools' or press F12. In DevTools, navigate to the Lighthouse tab. Select the categories you want to audit: Performance, Accessibility, Best Practices, and SEO. Click 'Analyze page load.' Lighthouse runs a simulated audit and generates scores from 0 to 100 for each category. The Performance score highlights issues like large images, unused JavaScript, slow server response times, and missing caching headers. Focus on the Opportunities and Diagnostics sections for actionable fixes.

Expected result: Lighthouse generates a report with scores and specific recommendations for improving your app's performance.

Step 3: Add response time tracking middleware to Express

Add middleware to your Express server that measures how long each API request takes to process. Record the start time when a request arrives and calculate the duration when the response finishes. Log the timing data to Console and optionally store it in your database for trend analysis. This gives you visibility into which endpoints are slow and helps you identify database queries or external API calls that need optimization.

```javascript
// server/middleware/timing.js
export function timingMiddleware(req, res, next) {
  const start = process.hrtime.bigint();

  res.on('finish', () => {
    const end = process.hrtime.bigint();
    const durationMs = Number(end - start) / 1_000_000;

    console.log(
      `${req.method} ${req.path} ${res.statusCode} ${durationMs.toFixed(1)}ms`
    );

    // Flag slow requests
    if (durationMs > 1000) {
      console.warn(`SLOW REQUEST: ${req.method} ${req.path} took ${durationMs.toFixed(0)}ms`);
    }
  });

  next();
}

// In server/index.js:
import { timingMiddleware } from './middleware/timing.js';
app.use(timingMiddleware);
```

Expected result: Every API request logs its method, path, status code, and duration in milliseconds to the Console.

Step 4: Store performance metrics in PostgreSQL

To track performance trends over time, create a database table that stores metrics from your timing middleware. Insert a row for each request with the method, path, status code, duration, and timestamp. Add a database index on created_at for efficient querying. This creates a historical record that lets you see whether performance is improving or degrading as you add features. Be selective about what you store — logging every request on a high-traffic app could fill your database quickly.

```sql
-- Create metrics table
CREATE TABLE IF NOT EXISTS performance_metrics (
  id SERIAL PRIMARY KEY,
  method VARCHAR(10),
  path VARCHAR(255),
  status_code INTEGER,
  duration_ms NUMERIC(10, 2),
  created_at TIMESTAMP DEFAULT NOW()
);

CREATE INDEX IF NOT EXISTS idx_metrics_created ON performance_metrics(created_at);
CREATE INDEX IF NOT EXISTS idx_metrics_path ON performance_metrics(path);
```

Expected result: The performance_metrics table stores timing data that you can query for trend analysis.

Step 5: Build a performance report endpoint

Create a GET endpoint that aggregates performance data and returns a summary report. Calculate average, median, and p95 response times for each endpoint over a configurable time period. Protect the endpoint with an admin key stored in Replit Secrets. This report gives you a quick overview of application health and highlights endpoints that need optimization.

```javascript
// GET /api/performance?days=7
router.get('/api/performance', async (req, res) => {
  const adminKey = req.headers['x-admin-key'];
  if (adminKey !== process.env.ADMIN_KEY) {
    return res.status(403).json({ error: 'Forbidden' });
  }

  const days = parseInt(req.query.days, 10) || 7;

  try {
    const result = await pool.query(
      `SELECT
         path,
         COUNT(*) AS total_requests,
         ROUND(AVG(duration_ms), 1) AS avg_ms,
         -- PERCENTILE_CONT returns double precision; cast to numeric
         -- so the two-argument ROUND is valid in PostgreSQL
         ROUND((PERCENTILE_CONT(0.5) WITHIN GROUP (ORDER BY duration_ms))::numeric, 1) AS median_ms,
         ROUND((PERCENTILE_CONT(0.95) WITHIN GROUP (ORDER BY duration_ms))::numeric, 1) AS p95_ms,
         MAX(duration_ms) AS max_ms
       FROM performance_metrics
       WHERE created_at > NOW() - INTERVAL '1 day' * $1
       GROUP BY path
       ORDER BY avg_ms DESC`,
      [days]
    );

    res.json({
      period_days: days,
      generated_at: new Date().toISOString(),
      endpoints: result.rows
    });
  } catch (err) {
    console.error('Performance report error:', err.message);
    res.status(500).json({ error: 'Report generation failed' });
  }
});
```

Expected result: The endpoint returns average, median, p95, and max response times for each API route over the specified period.

Step 6: Add frontend performance tracking with the Performance API

Use the browser's built-in Performance API to measure page load metrics on the client side. The PerformanceNavigationTiming interface provides metrics like DOM content loaded time, largest contentful paint, and total page load time. Send these metrics to your server for centralized tracking. Add this script to your main layout component so it runs on every page load.

```typescript
// src/utils/webVitals.ts
export function reportWebVitals() {
  if (typeof window === 'undefined') return;

  window.addEventListener('load', () => {
    setTimeout(() => {
      const nav = performance.getEntriesByType('navigation')[0] as PerformanceNavigationTiming;
      if (!nav) return;

      const metrics = {
        dns: Math.round(nav.domainLookupEnd - nav.domainLookupStart),
        tcp: Math.round(nav.connectEnd - nav.connectStart),
        ttfb: Math.round(nav.responseStart - nav.requestStart),
        domLoad: Math.round(nav.domContentLoadedEventEnd - nav.fetchStart),
        fullLoad: Math.round(nav.loadEventEnd - nav.fetchStart)
      };

      console.log('Page performance:', metrics);

      // Send to backend
      fetch('/api/events', {
        method: 'POST',
        headers: { 'Content-Type': 'application/json' },
        body: JSON.stringify({
          event_type: 'performance',
          page: window.location.pathname,
          metadata: metrics
        })
      }).catch(() => {});
    }, 0);
  });
}
```

Expected result: Page load metrics are logged to the Console and sent to your backend for each page view.
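The snippet above posts to /api/events, an endpoint this tutorial does not define. A receiving route should validate the payload before storing it. Here is a hedged sketch of a plain validation helper; the field names mirror the frontend snippet, but the helper itself and its bounds are our own assumptions:

```javascript
// validatePerfEvent: sanity-check the body posted by reportWebVitals()
// before storing it. Returns null if valid, or an error string.
function validatePerfEvent(body) {
  if (!body || body.event_type !== 'performance') return 'unexpected event_type';
  if (typeof body.page !== 'string' || body.page.length > 255) return 'bad page';
  const m = body.metadata;
  if (!m || typeof m !== 'object') return 'missing metadata';
  for (const key of ['dns', 'tcp', 'ttfb', 'domLoad', 'fullLoad']) {
    const v = m[key];
    // Reject non-numeric, negative, or implausibly large (> 2 min) timings
    if (!Number.isFinite(v) || v < 0 || v > 120_000) return `bad metric: ${key}`;
  }
  return null;
}

// In an Express route (sketch):
// app.post('/api/events', express.json(), (req, res) => {
//   const err = validatePerfEvent(req.body);
//   if (err) return res.status(400).json({ error: err });
//   // ...insert into an events table, then:
//   res.status(204).end();
// });
```

Validating here matters because the endpoint is reachable by any visitor's browser, so garbage or hostile payloads will arrive eventually.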

Complete working example

server/middleware/timing.js
```javascript
// server/middleware/timing.js — Request timing and performance logging
import pg from 'pg';

const pool = new pg.Pool({
  connectionString: process.env.DATABASE_URL
});

// Only log requests slower than this threshold to the database
const DB_LOG_THRESHOLD_MS = 50;

export function timingMiddleware(req, res, next) {
  const start = process.hrtime.bigint();

  res.on('finish', async () => {
    const end = process.hrtime.bigint();
    const durationMs = Number(end - start) / 1_000_000;

    // Always log to console
    const logLine = `${req.method} ${req.path} ${res.statusCode} ${durationMs.toFixed(1)}ms`;
    if (durationMs > 1000) {
      console.warn(`SLOW: ${logLine}`);
    } else {
      console.log(logLine);
    }

    // Store in database if above threshold
    if (durationMs > DB_LOG_THRESHOLD_MS && req.path.startsWith('/api')) {
      try {
        await pool.query(
          `INSERT INTO performance_metrics (method, path, status_code, duration_ms)
           VALUES ($1, $2, $3, $4)`,
          [req.method, req.path, res.statusCode, durationMs.toFixed(2)]
        );
      } catch (err) {
        // Never let metrics logging crash the app
        console.error('Metrics insert failed:', err.message);
      }
    }
  });

  next();
}

// Cleanup old metrics — call from a scheduled task
export async function cleanupOldMetrics(retentionDays = 30) {
  try {
    const result = await pool.query(
      `DELETE FROM performance_metrics WHERE created_at < NOW() - INTERVAL '1 day' * $1`,
      [retentionDays]
    );
    console.log(`Cleaned up ${result.rowCount} old metrics`);
  } catch (err) {
    console.error('Metrics cleanup failed:', err.message);
  }
}
```

Common mistakes when measuring app performance in Replit

Mistake: Only looking at average response times, which masks occasional very slow requests.

How to avoid: Always include p95 (95th percentile) and max response times in your reports. The p95 captures roughly the experience of the slowest 1 in 20 requests, the tail that averages hide.
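To make this concrete, here is a minimal sketch of why averages mislead. The percentile helper is our own, not from the tutorial's code, but it uses the same linear interpolation as PostgreSQL's PERCENTILE_CONT:

```javascript
// percentile: linear-interpolation percentile, matching PostgreSQL's
// PERCENTILE_CONT. `values` need not be sorted; `p` is 0-100.
function percentile(values, p) {
  const sorted = [...values].sort((a, b) => a - b);
  const idx = (p / 100) * (sorted.length - 1);
  const lo = Math.floor(idx);
  const hi = Math.ceil(idx);
  const frac = idx - lo;
  return sorted[lo] + frac * (sorted[hi] - sorted[lo]);
}

// Ten response times in ms: one slow outlier inflates the average only
// modestly, but p95 surfaces the outlier clearly.
const durations = [80, 90, 100, 100, 110, 120, 120, 130, 140, 2000];
const avg = durations.reduce((s, d) => s + d, 0) / durations.length;
console.log(
  `avg=${avg.toFixed(0)}ms p50=${percentile(durations, 50).toFixed(0)}ms p95=${percentile(durations, 95).toFixed(0)}ms`
); // → avg=299ms p50=115ms p95=1163ms
```

The median says the app feels fast; the p95 says some users are waiting over a second. Report both.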

Mistake: Logging every single request to the database on a high-traffic app, quickly filling storage.

How to avoid: Set a duration threshold (for example, 50ms or 100ms) and only log requests that exceed it. Or sample a percentage of all requests.
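The threshold-plus-sampling approach can be sketched like this. The helper names and the 10 percent sample rate are our own illustration, not part of the tutorial's middleware:

```javascript
// shouldSample: returns true for roughly `rate` of calls (rate is 0..1).
// A rate of 0.1 stores about 1 in 10 requests.
function shouldSample(rate) {
  return Math.random() < rate;
}

// Inside the timing middleware, combine the two strategies:
// always store slow requests, plus a 10% sample of everything else.
function shouldStore(durationMs, thresholdMs = 50, sampleRate = 0.1) {
  return durationMs > thresholdMs || shouldSample(sampleRate);
}
```

With this in place, slow requests are never dropped, while the sampled fast requests still give you enough data to compute baseline percentiles.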

Mistake: Running Lighthouse in Replit's Preview pane and treating the scores as production-accurate.

How to avoid: The Preview pane runs in a sandboxed environment that does not match production. Deploy your app and run Lighthouse against the production URL for accurate results.

Mistake: Not setting up metric cleanup, allowing the performance_metrics table to grow indefinitely.

How to avoid: Create a Scheduled Deployment or cron job that deletes metrics older than 30 days. Replit's PostgreSQL has a 10 GB limit.

Best practices

  • Check the Resources panel regularly during development to catch memory leaks and CPU spikes early
  • Run Lighthouse audits against your deployed production URL for accurate scores, not just in the Preview pane
  • Use process.hrtime.bigint() for sub-millisecond timing precision instead of Date.now()
  • Only store metrics above a duration threshold to avoid filling your database with normal fast requests
  • Set up a scheduled cleanup task to delete performance metrics older than 30 days
  • Flag slow requests (over 1 second) with a console.warn so they stand out in logs
  • Protect performance report endpoints with an admin key stored in Replit Secrets
  • Track both server-side response times and client-side page load metrics for a complete picture

Still stuck?

Copy one of these prompts to get a personalized, step-by-step explanation.

ChatGPT Prompt

I have a React + Express app on Replit with a PostgreSQL database. Help me add performance monitoring that tracks API response times, stores metrics in the database, and provides a report endpoint showing average, median, and p95 response times per endpoint over configurable time periods.

Replit Prompt

Add performance monitoring to my Express server. Create a timing middleware that logs request duration for every API call and stores slow requests in a performance_metrics database table. Build a GET /api/performance endpoint that returns average, median, and p95 response times grouped by route. Add a cleanup function to delete metrics older than 30 days.

Frequently asked questions

How much CPU and RAM does my Replit plan include?

The Starter plan includes 1 vCPU and 2 GiB RAM. Core includes 4 vCPU and 8 GiB RAM. Pro includes 4+ vCPU and 8+ GiB RAM. Check the Resources panel to see your current allocation and usage.

What can I do if my app runs out of memory?

Reduce memory usage by processing data in smaller chunks, removing unused dependencies, and setting --max-old-space-size for Node.js (for example, node --max-old-space-size=4096 index.js). If you are on the Starter plan, upgrade to Core for 8 GiB RAM.
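"Smaller chunks" can be as simple as a generator that yields slices instead of materializing all derived data at once. A minimal sketch; the helper name and the batch size of 500 are arbitrary examples:

```javascript
// chunked: yield `size`-element slices of an array so downstream code
// only ever holds one batch of derived data at a time.
function* chunked(items, size) {
  for (let i = 0; i < items.length; i += size) {
    yield items.slice(i, i + size);
  }
}

// Process 2,000 records 500 at a time instead of all at once.
const records = Array.from({ length: 2000 }, (_, i) => i);
let processed = 0;
for (const batch of chunked(records, 500)) {
  processed += batch.length; // stand-in for real per-batch work
}
console.log(`processed ${processed} records in chunks`); // → processed 2000 records in chunks
```

Peak memory is then bounded by one batch rather than the whole dataset, which matters most on the Starter plan's 2 GiB allocation.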

Can I use third-party monitoring tools like New Relic, Datadog, or Sentry?

Yes. You can install APM agents from New Relic, Datadog, or Sentry in your Replit app. Store their API keys in Secrets and follow their standard Node.js installation guides. These provide more advanced dashboards and alerting than a custom solution.

Does the Resources panel show usage for deployed apps?

The Resources panel shows workspace resource usage, not deployment resource usage. For deployed apps, check the deployment logs and use your custom performance metrics endpoint to monitor performance.

How often should I run Lighthouse audits?

Run a Lighthouse audit after every significant frontend change and before every deployment. Set a baseline score and track improvements over time. Aim for a Performance score above 90 for the best user experience.

Can RapidDev help with performance optimization?

Yes. RapidDev's engineering team can audit your application architecture, optimize database queries, implement caching strategies, and set up production-grade monitoring for Replit-hosted applications.

What is a good p95 response time?

For most web applications, a p95 response time under 500ms is good, and under 200ms is excellent. If your p95 exceeds 1 second, investigate the slowest requests for database query optimization, missing indexes, or external API bottlenecks.

How much database storage will performance metrics consume?

Each metric row is approximately 100 to 200 bytes. At 10,000 logged requests per day, you would use roughly 60 MB per month. With a 30-day cleanup policy and a 50ms logging threshold, storage impact is minimal relative to Replit's 10 GB PostgreSQL limit.
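The 60 MB figure follows directly from the numbers above, taking the upper 200-byte estimate per row:

```javascript
// Back-of-envelope storage estimate for the performance_metrics table,
// using the FAQ's figures: ~200 bytes/row, 10,000 logged requests/day.
const bytesPerRow = 200;
const requestsPerDay = 10_000;
const days = 30;

const monthlyBytes = bytesPerRow * requestsPerDay * days;
const monthlyMB = monthlyBytes / 1_000_000;
console.log(`~${monthlyMB} MB per 30 days`); // → ~60 MB per 30 days
```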
