
How to Integrate Bolt.new with Screaming Frog

What you'll learn

  • Why Screaming Frog has no REST API and what integration alternatives exist for Bolt.new
  • How to build a CSV upload and visualization tool for Screaming Frog export data
  • How to build a lightweight web crawler in Bolt using cheerio for basic SEO analysis
  • How to use Google Search Console API as a cloud-based alternative for site health data
  • How to display crawl findings in a sortable, filterable dashboard in your Bolt app
Intermediate · 15 min read · 25 minutes · SEO · April 2026 · RapidDev Engineering Team
TL;DR

Screaming Frog is a desktop website crawler with no REST API — you cannot call it programmatically from Bolt.new. Instead, export crawl data as CSV from Screaming Frog, upload it to a Bolt app for visualization, build a lightweight web crawler using cheerio for basic page analysis, or use Google Search Console API for cloud-accessible site health data.

Using Screaming Frog Data in a Bolt.new App

Screaming Frog SEO Spider is one of the most widely used SEO tools in the world — but it runs entirely as a desktop application with no public REST API. The tool crawls websites locally on your machine, analyzing pages for broken links, missing title tags, duplicate meta descriptions, redirect chains, page speed issues, and hundreds of other SEO signals. Because it has no API, you cannot trigger a crawl from Bolt.new or receive crawl results programmatically.

There are three practical ways to bring Screaming Frog's power into a Bolt app. The first — and most common — approach is building a Screaming Frog data visualizer: export any crawl report from Screaming Frog as a CSV file, then build a Bolt app that accepts that CSV, parses it, and displays the results as an interactive dashboard with sorting, filtering, and priority highlighting. This is especially useful for SEO agencies that run Screaming Frog for clients and want to deliver polished reports rather than raw spreadsheets. The second approach is building a lightweight in-app crawler using cheerio (an HTML parsing library) and fetch, which can check title tags, meta descriptions, and broken links for small sites directly inside Bolt without requiring Screaming Frog at all. The third is using the Google Search Console API as a cloud-native alternative that provides server-side crawl health data accessible via REST.

Screaming Frog's desktop-only nature means Bolt's WebContainer architecture is not a limitation here — the constraint is that no HTTP endpoints exist to call. The patterns below work around this honestly rather than pretending a direct API connection is possible.

Integration method

Bolt Chat + API Route

Screaming Frog has no REST API, so there are three practical integration paths with Bolt.new: (1) export CSV data from Screaming Frog and upload it to a Bolt app for visualization and tracking, (2) build a lightweight JavaScript-based web crawler inside Bolt using cheerio for basic on-page SEO analysis, or (3) use the Google Search Console API as a cloud-based alternative for site health data. Each approach covers a different use case, and this guide covers all three.

Prerequisites

  • A Bolt.new project using Next.js (required for server-side cheerio crawling via API routes)
  • Screaming Frog SEO Spider installed locally if you plan to use the CSV import approach
  • A site to crawl — either your own or a client's with permission
  • Basic understanding of HTML structure (title tags, meta descriptions, H1s) for interpreting crawl data
  • A deployed Netlify or Bolt Cloud URL if you need to crawl sites that block localhost requests

Step-by-step guide

1

Understand Screaming Frog's API Situation Honestly

Before building anything, it's important to set accurate expectations. Screaming Frog SEO Spider is a desktop application that crawls websites from your local machine. It does not have a REST API, webhooks, or any cloud service component. There is no endpoint to call from Bolt.new that would trigger a crawl or return crawl data. The company confirmed in their official documentation that they do not offer a public API as of 2026.

This means any integration with Bolt.new must take one of three forms: (1) export data from Screaming Frog manually and import it into a Bolt app, (2) replicate some of Screaming Frog's functionality using JavaScript tools inside Bolt (cheerio for HTML parsing, fetch for HTTP requests), or (3) use a different tool that provides similar data via a REST API.

For most use cases, building a CSV visualizer is the most practical path because it directly uses data from actual Screaming Frog crawls without reimplementing the crawler. For teams that need automated crawl data without running Screaming Frog manually, the Google Search Console API or SEMrush API are the best alternatives — both provide crawl-health-adjacent data accessible from Bolt. This guide covers all three approaches so you can choose the one that fits your workflow.

Bolt.new Prompt

Install Papa Parse for CSV parsing and cheerio for HTML analysis. Add papaparse, @types/papaparse, cheerio, and @types/cheerio to my project dependencies.

Paste this in Bolt.new chat

lib/seo-deps-check.ts
// Install dependencies first
// Run in Bolt's terminal:
// npm install papaparse cheerio
// npm install -D @types/papaparse @types/cheerio

// Verify installation in your component:
import Papa from 'papaparse';
import * as cheerio from 'cheerio';

console.log('Papa Parse loaded:', typeof Papa.parse === 'function');
console.log('Cheerio available:', typeof cheerio.load === 'function');

Pro tip: Papa Parse is a pure JavaScript CSV parser — it works perfectly in Bolt's WebContainer. Cheerio is also pure JavaScript (jQuery for the server) and works in WebContainers. Both install in under 500ms from Bolt's CDN-backed npm cache.

Expected result: Papa Parse and cheerio are installed and importable. No native module compilation errors — both are pure JavaScript packages fully compatible with Bolt's WebContainer.

2

Build a Screaming Frog CSV Upload and Visualizer

The most practical integration with Screaming Frog in Bolt.new is building a client-side CSV visualizer. When you run a crawl in Screaming Frog, you can export any report as a CSV: use the Bulk Export menu in the top navigation, or click the Export button above any tab view. The resulting CSV contains columns like Address, Content Type, Status Code, Title 1, Title 1 Length, Meta Description 1, Meta Description 1 Length, H1-1, Word Count, Response Time (ms), and many more depending on what was crawled. Papa Parse processes the CSV entirely in the browser — no server call needed. The parsed data populates a React state array that drives a sortable, filterable table. Key filtering use cases: show only 404 errors to find broken pages, show only pages with title length over 60 characters to catch oversized titles, show only pages with no H1 tag. Color coding makes critical issues immediately visible without requiring the user to understand the raw numbers.

Bolt.new Prompt

Build a Screaming Frog CSV visualizer component at components/ScreamingFrogViewer.tsx. It should: (1) Accept CSV file upload with a drag-and-drop zone, (2) Parse the CSV client-side using Papa Parse with header:true, (3) Display results in a sortable table with columns: Address, Status Code, Title 1, Title 1 Length, Meta Description 1, H1-1, Response Time, (4) Add filter buttons to show: All, 404 errors only, Missing titles, Missing H1s, (5) Highlight rows where Status Code is 404 in red, title length > 60 in yellow, and missing H1 in orange.

Paste this in Bolt.new chat

components/ScreamingFrogViewer.tsx
// components/ScreamingFrogViewer.tsx
'use client';
import { useState, useCallback } from 'react';
import Papa from 'papaparse';

interface CrawlRow {
  Address: string;
  'Status Code': string;
  'Title 1': string;
  'Title 1 Length': string;
  'Meta Description 1': string;
  'H1-1': string;
  'Response Time': string;
  [key: string]: string;
}

type Filter = 'all' | '404' | 'missing-title' | 'missing-h1';

// Columns shown in the table, matching the prompt's column list
const COLUMNS = ['Address', 'Status Code', 'Title 1', 'Title 1 Length', 'Meta Description 1', 'H1-1', 'Response Time'];

export default function ScreamingFrogViewer() {
  const [rows, setRows] = useState<CrawlRow[]>([]);
  const [filter, setFilter] = useState<Filter>('all');

  const handleFile = useCallback((file: File) => {
    Papa.parse<CrawlRow>(file, {
      header: true,
      skipEmptyLines: true,
      complete: (results) => setRows(results.data),
    });
  }, []);

  const filtered = rows.filter((row) => {
    if (filter === '404') return row['Status Code'] === '404';
    if (filter === 'missing-title') return !row['Title 1']?.trim();
    if (filter === 'missing-h1') return !row['H1-1']?.trim();
    return true;
  });

  const rowStyle = (row: CrawlRow) => {
    if (row['Status Code'] === '404') return { background: '#fee2e2' };
    if (parseInt(row['Title 1 Length']) > 60) return { background: '#fef3c7' };
    if (!row['H1-1']?.trim()) return { background: '#fed7aa' };
    return {};
  };

  return (
    <div style={{ padding: 24 }}>
      <h2>Screaming Frog Report Viewer</h2>
      <input
        type="file"
        accept=".csv"
        onChange={(e) => e.target.files?.[0] && handleFile(e.target.files[0])}
      />
      <div style={{ margin: '12px 0', display: 'flex', gap: 8 }}>
        {(['all', '404', 'missing-title', 'missing-h1'] as Filter[]).map((f) => (
          <button
            key={f}
            onClick={() => setFilter(f)}
            style={{ fontWeight: filter === f ? 'bold' : 'normal' }}
          >
            {f === 'all' ? `All (${rows.length})` : f}
          </button>
        ))}
      </div>
      {filtered.length > 0 && (
        <table style={{ borderCollapse: 'collapse', width: '100%', fontSize: 13 }}>
          <thead>
            <tr>
              {COLUMNS.map((h) => (
                <th key={h} style={{ border: '1px solid #ccc', padding: 6, background: '#f3f4f6' }}>{h}</th>
              ))}
            </tr>
          </thead>
          <tbody>
            {filtered.map((row, i) => (
              <tr key={i} style={rowStyle(row)}>
                {COLUMNS.map((col) => (
                  <td key={col} style={{ border: '1px solid #ccc', padding: 6, maxWidth: 200, overflow: 'hidden', textOverflow: 'ellipsis', whiteSpace: 'nowrap' }}>
                    {row[col] ?? ''}
                  </td>
                ))}
              </tr>
            ))}
          </tbody>
        </table>
      )}
    </div>
  );
}

Pro tip: Screaming Frog exports can be large — 50,000+ rows for big sites. Papa Parse's streaming mode (using step callback instead of complete) prevents memory issues for very large files. For files under 10,000 rows, the complete callback works fine.
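As a rough sketch of that streaming mode (assuming the same CrawlRow type and setRows state setter from the component above), the handler can buffer rows locally and commit them to state once parsing completes:

// Hypothetical streaming variant for very large exports.
// With header: true, each step callback receives one parsed row;
// rows are buffered locally and committed to state once, so React
// re-renders a single time instead of once per row.
const handleLargeFile = useCallback((file: File) => {
  const buffer: CrawlRow[] = [];
  Papa.parse<CrawlRow>(file, {
    header: true,
    skipEmptyLines: true,
    step: (results) => {
      buffer.push(results.data); // one CrawlRow per callback
    },
    complete: () => setRows(buffer),
  });
}, []);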

Expected result: A CSV upload component renders. Dropping a Screaming Frog export CSV populates the table. Filter buttons show only 404 pages, pages with missing titles, or pages with missing H1 tags. Problem rows are highlighted in color.

3

Build a Lightweight On-Page SEO Analyzer with Cheerio

For teams that don't want to run Screaming Frog at all, you can build basic SEO analysis functionality directly in Bolt using a Next.js API route and cheerio. The route fetches a given URL server-side, parses the HTML, and extracts key SEO elements: title tag text and character count, meta description text and character count, H1 tag(s), canonical URL, robots meta tag, Open Graph title and description, image alt text completeness, and internal vs external link counts. This covers the most common issues Screaming Frog is used to detect and works for any single URL without installing desktop software.

An important note about Bolt's WebContainer: the cheerio analysis must run in a Next.js API route (server-side), not in the browser. The reason is CORS — most websites don't whitelist StackBlitz's WebContainer origins, so a direct browser fetch() to an external URL will fail. The Next.js API route runs server-side, bypasses CORS entirely, and returns the analysis result to the browser.

Bolt.new Prompt

Create a Next.js API route at app/api/seo/analyze/route.ts that accepts a URL as a query parameter. Fetch the URL's HTML server-side, parse it with cheerio, and return a JSON analysis with: title (text + length + pass/fail for 10-60 chars), metaDescription (text + length + pass/fail for 50-155 chars), h1Count (number of H1 tags), canonicalUrl (string), robotsMeta (string), imagesWithoutAlt (count). Return 400 if no URL provided, 422 if the URL is unreachable.

Paste this in Bolt.new chat

app/api/seo/analyze/route.ts
// app/api/seo/analyze/route.ts
import { NextRequest, NextResponse } from 'next/server';
import * as cheerio from 'cheerio';

export async function GET(request: NextRequest) {
  const url = request.nextUrl.searchParams.get('url');
  if (!url) {
    return NextResponse.json({ error: 'url parameter is required' }, { status: 400 });
  }

  let html: string;
  try {
    const res = await fetch(url, {
      headers: { 'User-Agent': 'SEOBot/1.0 (analysis tool)' },
      signal: AbortSignal.timeout(10000),
    });
    if (!res.ok) throw new Error(`HTTP ${res.status}`);
    html = await res.text();
  } catch (err) {
    return NextResponse.json(
      { error: `Could not fetch URL: ${err instanceof Error ? err.message : 'Unknown'}` },
      { status: 422 }
    );
  }

  const $ = cheerio.load(html);

  const title = $('title').first().text().trim();
  const metaDesc = $('meta[name="description"]').attr('content') ?? '';
  const h1s = $('h1').map((_, el) => $(el).text().trim()).get();
  const canonical = $('link[rel="canonical"]').attr('href') ?? '';
  const robots = $('meta[name="robots"]').attr('content') ?? '';
  const imagesWithoutAlt = $('img:not([alt]), img[alt=""]').length;

  return NextResponse.json({
    url,
    title: {
      text: title,
      length: title.length,
      pass: title.length >= 10 && title.length <= 60,
    },
    metaDescription: {
      text: metaDesc,
      length: metaDesc.length,
      pass: metaDesc.length >= 50 && metaDesc.length <= 155,
    },
    h1Count: h1s.length,
    h1Tags: h1s,
    canonicalUrl: canonical,
    robotsMeta: robots,
    imagesWithoutAlt,
  });
}

Pro tip: The server-side fetch in the API route bypasses CORS — it can analyze any public URL regardless of CORS settings. This is why the cheerio analysis must run in an API route, not client-side. Bolt's WebContainer development preview works fine for this outbound crawl.

Expected result: Calling /api/seo/analyze?url=https://example.com returns a JSON object with title, meta description, H1 count, canonical URL, and image alt text analysis.
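For reference, a minimal client component that consumes this route might look like the sketch below. The component name and checklist markup are illustrative, not part of the route contract:

// Hypothetical client component for the analyze route above.
'use client';
import { useState } from 'react';

export default function SeoChecker() {
  const [url, setUrl] = useState('');
  const [result, setResult] = useState<any>(null);

  const analyze = async () => {
    // Call your own API route; fetching the external URL directly from
    // the browser would be blocked by CORS (see Troubleshooting below).
    const res = await fetch(`/api/seo/analyze?url=${encodeURIComponent(url)}`);
    setResult(await res.json());
  };

  return (
    <div style={{ padding: 24 }}>
      <input
        value={url}
        onChange={(e) => setUrl(e.target.value)}
        placeholder="https://example.com"
      />
      <button onClick={analyze}>Analyze</button>
      {result && !result.error && (
        <ul>
          <li>{result.title.pass ? '✓' : '✗'} Title: {result.title.length} chars</li>
          <li>{result.metaDescription.pass ? '✓' : '✗'} Meta description: {result.metaDescription.length} chars</li>
          <li>{result.h1Count === 1 ? '✓' : '✗'} H1 count: {result.h1Count}</li>
          <li>{result.imagesWithoutAlt === 0 ? '✓' : '✗'} Images missing alt: {result.imagesWithoutAlt}</li>
        </ul>
      )}
      {result?.error && <p>Error: {result.error}</p>}
    </div>
  );
}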

4

Deploy and Test With Real Sites

The cheerio-based SEO analyzer works in Bolt's WebContainer during development — outbound HTTP requests from Next.js API routes succeed in the WebContainer runtime just as they would on a real server. Test with a few URLs in the Bolt preview to verify the analysis is returning correct data.

When you're ready to deploy, connect Netlify via Settings → Applications and click Publish. No environment variables are needed for the cheerio approach since there are no API keys. However, if you add Google Search Console API integration as a bonus feature, you will need to add GOOGLE_CLIENT_ID and GOOGLE_CLIENT_SECRET in Netlify's Environment Variables panel after deploying.

One important limitation to note: Bolt's WebContainer cannot receive incoming connections, so features like Screaming Frog's automated scheduled crawl notifications (if they ever add webhooks) would require a deployed URL. For the current CSV import and cheerio crawl approaches, this WebContainer limitation is irrelevant — all the work is outbound or client-side. The CSV visualizer works entirely in the browser and needs no server at all. The cheerio analyzer only makes outbound calls.
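If you do add that Search Console bonus feature, a minimal sketch of the REST call from a Next.js route might look like the following. The route path, the GSC_ACCESS_TOKEN variable, and the date window are assumptions; completing Google's OAuth flow to obtain the access token is not shown here:

// app/api/gsc/route.ts (hypothetical path) — Search Console query sketch.
// Assumes a valid OAuth access token in GSC_ACCESS_TOKEN; obtaining it via
// Google's OAuth flow (using GOOGLE_CLIENT_ID/GOOGLE_CLIENT_SECRET) is omitted.
import { NextRequest, NextResponse } from 'next/server';

export async function GET(request: NextRequest) {
  const siteUrl = request.nextUrl.searchParams.get('site'); // e.g. https://example.com/
  if (!siteUrl) {
    return NextResponse.json({ error: 'site parameter is required' }, { status: 400 });
  }

  const res = await fetch(
    `https://www.googleapis.com/webmasters/v3/sites/${encodeURIComponent(siteUrl)}/searchAnalytics/query`,
    {
      method: 'POST',
      headers: {
        Authorization: `Bearer ${process.env.GSC_ACCESS_TOKEN}`,
        'Content-Type': 'application/json',
      },
      body: JSON.stringify({
        startDate: '2026-03-01', // adjust the reporting window as needed
        endDate: '2026-03-31',
        dimensions: ['page'],
        rowLimit: 100,
      }),
    }
  );
  if (!res.ok) {
    return NextResponse.json({ error: `Search Console API returned ${res.status}` }, { status: 502 });
  }

  // rows: [{ keys: [pageUrl], clicks, impressions, ctr, position }, ...]
  return NextResponse.json(await res.json());
}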

Bolt.new Prompt

Add a link to my SEO analyzer app in the navigation. Also add a results export button to the Screaming Frog CSV viewer that downloads the filtered results (e.g., only the 404 errors) as a new CSV using Papa Parse's unparse function.

Paste this in Bolt.new chat

components/ScreamingFrogViewer.tsx
// Add to ScreamingFrogViewer.tsx — export filtered results
import Papa from 'papaparse';

const exportFiltered = () => {
  const csv = Papa.unparse(filtered);
  const blob = new Blob([csv], { type: 'text/csv;charset=utf-8;' });
  const url = URL.createObjectURL(blob);
  const a = document.createElement('a');
  a.href = url;
  a.download = `seo-issues-${filter}-${new Date().toISOString().split('T')[0]}.csv`;
  a.click();
  URL.revokeObjectURL(url);
};

// Add button to the filter row:
<button onClick={exportFiltered} disabled={filtered.length === 0}>
  Export {filtered.length} rows as CSV
</button>

Pro tip: After deploying, test the cheerio analyzer with a site that has known SEO issues to verify the pass/fail logic works correctly. Screaming Frog's own website (screamingfrog.co.uk) is a good test case as it's generally well-optimized.

Expected result: The deployed app allows uploading Screaming Frog CSVs for visualization and running single-URL SEO analysis. The export button downloads filtered results as a new CSV.

Common use cases

Screaming Frog Report Visualizer

Build an internal tool that accepts a Screaming Frog CSV export and transforms it into an interactive dashboard. SEO teams run Screaming Frog weekly on client sites, then use the Bolt app to filter by status code, sort by crawl depth, and highlight pages with missing metadata. This delivers much better client reports than sending a spreadsheet.

Bolt.new Prompt

Build a Screaming Frog CSV report visualizer. Create a file upload page that accepts a CSV file exported from Screaming Frog (columns: Address, Status Code, Title 1, Meta Description 1, H1-1, Word Count, Response Time). Parse the CSV client-side using Papa Parse. Display results in a filterable table where I can filter by status code (200, 301, 404) and sort by response time. Highlight 404 errors in red and missing title tags in yellow.

Copy this prompt to try it in Bolt.new

Lightweight On-Page SEO Analyzer

Build a simple SEO checker in Bolt that crawls a single page and checks for common issues: missing or too-long title tag, missing meta description, missing H1, broken image alt texts, and slow response time. Useful as a quick-check tool when Screaming Frog would be overkill for a single-page analysis.

Bolt.new Prompt

Create an SEO page analyzer using a Next.js API route. When I enter a URL, call /api/seo/analyze which fetches the page HTML server-side, uses cheerio to extract title, meta description, H1 tags, image alt attributes, and canonical URL. Return an analysis object with pass/fail status for each check. Display results as a checklist with green checkmarks for passing checks and red X marks for failures.

Copy this prompt to try it in Bolt.new

Broken Link Checker

Build a broken link checker that takes a starting URL, crawls all internal links on that page, and checks each one for a non-200 status code. Display results in a table grouped by status code. This replicates Screaming Frog's core link audit functionality for small sites directly in Bolt without any external tools.

Bolt.new Prompt

Build a broken link checker using a Next.js API route. Accept a starting URL, fetch the page HTML, extract all href links using cheerio, then check each link's HTTP status code using HEAD requests. Return all links grouped by status code (200 OK, 301 Redirect, 404 Not Found, 500 Error). Limit to 50 links to avoid overwhelming the server. Show results in a color-coded table.

Copy this prompt to try it in Bolt.new
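A rough server-side sketch of that checker, following the same cheerio-in-an-API-route pattern as Step 3, could look like the following. The route path, the 50-link cap, and the timeout values are assumptions taken from the prompt above:

// app/api/seo/links/route.ts (hypothetical path) — broken link checker sketch.
import { NextRequest, NextResponse } from 'next/server';
import * as cheerio from 'cheerio';

export async function GET(request: NextRequest) {
  const start = request.nextUrl.searchParams.get('url');
  if (!start) {
    return NextResponse.json({ error: 'url parameter is required' }, { status: 400 });
  }

  // Fetch the starting page; mirror the 422 behavior from the analyzer route.
  let html: string;
  try {
    const res = await fetch(start, { signal: AbortSignal.timeout(10000) });
    if (!res.ok) throw new Error(`HTTP ${res.status}`);
    html = await res.text();
  } catch {
    return NextResponse.json({ error: 'Could not fetch starting URL' }, { status: 422 });
  }

  // Extract href values, resolve them against the starting URL, dedupe,
  // and cap at 50 links to avoid overwhelming the target server.
  const $ = cheerio.load(html);
  const hrefs = $('a[href]').map((_, el) => $(el).attr('href') ?? '').get();
  const links = [...new Set(
    hrefs.flatMap((href) => {
      try {
        const abs = new URL(href, start).href;
        return abs.startsWith('http') ? [abs] : []; // skip mailto:, tel:, etc.
      } catch {
        return []; // skip malformed hrefs
      }
    })
  )].slice(0, 50);

  // HEAD-check each link. redirect: 'manual' makes 301s report as 301
  // instead of being followed to their destination.
  const checks = await Promise.all(
    links.map(async (href) => {
      try {
        const res = await fetch(href, {
          method: 'HEAD',
          redirect: 'manual',
          signal: AbortSignal.timeout(8000),
        });
        return { href, status: res.status };
      } catch {
        return { href, status: 0 }; // unreachable
      }
    })
  );

  // Group results by status code for the color-coded table.
  const grouped: Record<number, string[]> = {};
  for (const { href, status } of checks) {
    (grouped[status] ??= []).push(href);
  }
  return NextResponse.json({ start, checked: links.length, grouped });
}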

Troubleshooting

Cheerio fetch fails with CORS error when called from a React component

Cause: The fetch is running client-side in the browser, which enforces CORS. External sites don't whitelist Bolt's WebContainer origins.

Solution: Move the fetch call into a Next.js API route (app/api/seo/analyze/route.ts). The API route runs server-side, bypasses CORS entirely, and returns results to the browser. Client-side code should call your own /api/seo/analyze endpoint, never the external URL directly.

typescript
// WRONG — client-side fetch blocked by CORS
const html = await fetch('https://example.com').then(r => r.text());

// CORRECT — call your own API route
const result = await fetch(`/api/seo/analyze?url=${encodeURIComponent(url)}`).then(r => r.json());

Papa Parse returns empty data array after uploading a Screaming Frog CSV

Cause: Screaming Frog's CSV exports sometimes include non-standard line endings or a BOM (byte order mark) at the start of the file, which can confuse CSV parsers.

Solution: Add skipEmptyLines: true and the encoding option to Papa Parse. Also ensure you're using the header: true option so the first row is treated as column names rather than data.

typescript
Papa.parse(file, {
  header: true,
  skipEmptyLines: true,
  encoding: 'UTF-8',
  complete: (results) => {
    console.log('Parsed rows:', results.data.length);
    console.log('Errors:', results.errors);
    setRows(results.data as CrawlRow[]);
  },
});

The cheerio API route returns a 422 error for some URLs

Cause: The target site is blocking automated requests, returning a non-200 status code, or timing out. Some sites check User-Agent headers and block unknown bots.

Solution: Try adding a realistic browser User-Agent to the fetch request. Some sites also require Accept headers. If the site consistently blocks automated fetches, the analysis tool cannot crawl it — Screaming Frog faces the same challenge, and benefits from running on your local machine, whose IP looks like a normal visitor rather than a data center.

typescript
const res = await fetch(url, {
  headers: {
    'User-Agent': 'Mozilla/5.0 (compatible; SEOAnalyzer/1.0)',
    'Accept': 'text/html,application/xhtml+xml',
  },
  signal: AbortSignal.timeout(15000),
});

Best practices

  • Be honest with users that Screaming Frog has no API — the CSV import and cheerio approaches are legitimate integration patterns, not workarounds
  • Use cheerio's server-side analysis in a Next.js API route, never client-side — CORS will block direct browser fetches to external sites
  • Add a rate limiter or minimum delay between crawl requests if building a multi-URL checker to avoid overwhelming target servers
  • Respect robots.txt when building custom crawlers — check for a disallow directive before crawling a URL (see the sketch after this list)
  • Cache cheerio analysis results for at least one hour — page content rarely changes faster than that and caching prevents redundant fetches
  • For large Screaming Frog CSV files (50,000+ rows), use Papa Parse's streaming mode with the step callback to process rows incrementally without memory issues
  • Consider Google Search Console API as a complementary data source — it provides Google's own crawl data for your sites and is free with OAuth authentication
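For the robots.txt point above, a minimal helper might look like the sketch below. It is deliberately simplified: it only honors Disallow rules under "User-agent: *" and ignores wildcards, Allow precedence, and Crawl-delay:

// Hypothetical helper: minimal robots.txt check before crawling a URL.
async function isAllowedByRobots(targetUrl: string): Promise<boolean> {
  const { origin, pathname } = new URL(targetUrl);
  try {
    const res = await fetch(`${origin}/robots.txt`, { signal: AbortSignal.timeout(5000) });
    if (!res.ok) return true; // no robots.txt published: assume allowed

    let appliesToAll = false;
    for (const rawLine of (await res.text()).split('\n')) {
      const line = rawLine.split('#')[0].trim(); // strip comments
      const colon = line.indexOf(':');
      if (colon === -1) continue;
      const field = line.slice(0, colon).trim().toLowerCase();
      const value = line.slice(colon + 1).trim();
      if (field === 'user-agent') appliesToAll = value === '*';
      else if (appliesToAll && field === 'disallow' && value && pathname.startsWith(value)) {
        return false; // a wildcard-group Disallow rule matches this path
      }
    }
    return true;
  } catch {
    return true; // robots.txt unreachable: fail open
  }
}

// Usage inside a crawler loop:
// if (!(await isAllowedByRobots(url))) continue;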

Frequently asked questions

Does Screaming Frog have an API I can call from Bolt.new?

No. Screaming Frog SEO Spider is a desktop application with no public REST API. You cannot trigger a crawl or retrieve crawl data programmatically. The integration options are: export CSV data from Screaming Frog and import it into your Bolt app for visualization, build a JavaScript-based crawler using cheerio for basic analysis, or use a cloud SEO API like SEMrush, Moz, or Google Search Console.

Can I build my own web crawler in Bolt.new that does what Screaming Frog does?

For basic use cases, yes. You can build a Next.js API route that fetches a URL's HTML, parses it with cheerio, and checks for title tags, meta descriptions, H1 tags, and broken links. This handles the most common SEO audit tasks. For enterprise-scale crawling (thousands of pages, JavaScript rendering, visual screenshots), you'd need a more powerful tool like Playwright running outside of Bolt's WebContainer.

Why must the cheerio analysis run server-side in an API route?

Because of CORS. Bolt's WebContainer runs inside a browser, which enforces Cross-Origin Resource Sharing restrictions. Most websites don't whitelist Bolt's StackBlitz origins, so client-side fetch calls to external URLs will fail. A Next.js API route runs server-side, where CORS doesn't apply, so it can fetch any public URL freely.

What's the best alternative to Screaming Frog if I need a REST API for crawl data?

For your own sites, Google Search Console API is the best free option — it provides Google's authentic crawl data, index coverage status, and search performance. For competitive data or any site you don't own, SEMrush provides cloud-based site audit functionality via REST API. Both work well with Bolt.new via Next.js API routes.

Can Bolt's WebContainer limitations affect the cheerio crawler?

No for outbound requests — the API route makes outbound HTTPS calls to target sites, which works fine in Bolt's WebContainer. The only relevant WebContainer limitation is that incoming connections are blocked, meaning you cannot receive webhooks from external services during development. For a crawler that only makes outbound calls, Bolt's preview environment works perfectly.
