Screaming Frog has no public API; it is a desktop crawler. To use Screaming Frog data in a V0 by Vercel app, export your crawl results as CSV or JSON from the Screaming Frog desktop app, then build a V0 dashboard with a file upload component that parses and visualizes the exported data. This lets you analyze crawl data in a custom-branded interface without a live API connection.
Building a Custom SEO Audit Dashboard with Screaming Frog Export Data
Screaming Frog is the gold standard desktop crawler for technical SEO audits, but its built-in reports are fixed in format and difficult to customize or share with clients. Many SEO professionals want to present crawl findings in a custom-branded dashboard that highlights the issues they care about most, uses their agency's design language, and can be shared as a URL rather than a spreadsheet.
Since Screaming Frog has no public API, the integration approach is different from cloud-based tools: export your crawl data directly from the Screaming Frog desktop application as a JSON or CSV file, then upload that file to your V0 dashboard. A Next.js API route on Vercel parses the file and returns structured data. The dashboard renders interactive tables, issue summaries, and priority recommendations based on the imported crawl data.
This approach works with any Screaming Frog export type: the All Exports CSV (which includes all crawled URLs with every column), individual exports for specific issues (broken links, missing meta descriptions, etc.), or the full JSON export available in Screaming Frog's Bulk Export feature. The key is matching your parser to the export format you choose.
Integration method
Because Screaming Frog has no public API, V0 integration uses a file upload approach. Users export crawl data from the Screaming Frog desktop app as JSON or CSV, then upload the file to a V0-generated dashboard. A Next.js API route parses the uploaded file, transforms the crawl data into a structured format, and the dashboard renders issue breakdowns, page reports, and SEO opportunity tables from the parsed data.
Prerequisites
- Screaming Frog SEO Spider installed on your desktop computer (free version crawls up to 500 URLs; paid license for unlimited)
- A V0 account with a Next.js project at v0.dev
- A completed Screaming Frog crawl with exported data (CSV or JSON format)
- Basic familiarity with the Screaming Frog interface for running crawls and exporting data
- A Vercel account with your V0 project connected via GitHub
Step-by-step guide
Export Crawl Data from Screaming Frog
The first step is generating the export file from Screaming Frog. The export format you choose determines how your V0 dashboard parses and displays the data.
To export all crawl data as a single file: run your crawl by entering the URL in the 'Enter URL to spider' field and clicking 'Start', then wait for the crawl to complete. Go to File → Export in the menu bar and choose 'All Exports' to export all data as a CSV. Alternatively, use Bulk Export → All Inlinks to export link data, or open a specific internal tab (Response Codes, Page Titles, Meta Description, etc.) and click the Export button in the bottom-right corner to export just that tab's data.
For a more structured export, Screaming Frog's command-line interface can produce exports programmatically. However, for the browser-based upload workflow, CSV from the desktop app is simpler, since it requires no extra configuration.
The most useful exports for a custom dashboard are:
- 'Internal HTML' tab → Export: all HTML pages with title, meta description, H1, word count, response code, and size
- Response Codes tab (filtered to 4xx and 5xx): just the error pages
- Bulk Export → Response Codes → Client Error (4xx) Inlinks: broken internal links with source and anchor text
For a comprehensive audit dashboard, export the 'Internal HTML' tab, which includes the most metadata per page. The CSV headers match Screaming Frog's column names exactly; note them down, because your parser must reference the same column names.
Pro tip: Use Screaming Frog's 'All Exports' option to get all crawl data in one CSV file rather than exporting each tab separately. This keeps the dashboard simpler, since it only needs to parse one file format.
Expected result: You have a CSV or JSON export file from Screaming Frog containing crawl data for your target site. The file includes columns for URL, Status Code, Title, Title Length, Meta Description, Meta Description Length, and other SEO metrics.
Create a File Upload API Route
Create a Next.js API route that accepts the uploaded Screaming Frog CSV file and parses it into structured data. This route uses Next.js FormData handling to receive the file upload and a CSV parser to extract the data.
Create app/api/seo/parse-crawl/route.ts. This route handles a POST request with a multipart form containing the CSV file. Install the csv-parse package (npm install csv-parse) for reliable CSV parsing that handles quoted fields, special characters, and Screaming Frog's specific CSV dialect.
Screaming Frog's CSV exports use standard comma separation with the first row as column headers. The exact column names depend on which export tab you used. For the 'Internal HTML' export, key columns include: 'Address' (the URL), 'Status Code', 'Title 1', 'Title 1 Length', 'Meta Description 1', 'Meta Description 1 Length', 'H1-1', 'H1-1 Length', 'Word Count', 'Indexability', 'Indexability Status', 'Inlinks', 'Size' (bytes), and 'Response Time'.
The API route reads the uploaded file as text, passes it to csv-parse in synchronous mode to get an array of row objects, then transforms the rows into a clean data structure. Return the parsed data as JSON along with summary statistics.
For large crawls (10,000+ URLs), the CSV file can be several megabytes. Vercel's serverless functions support request bodies up to 4.5 MB. For larger crawls, consider parsing and storing the data in a database (Neon or Supabase) instead of keeping it in memory per request.
Create a Next.js API route at app/api/seo/parse-crawl/route.ts that accepts a multipart form POST with a 'file' field containing a Screaming Frog CSV export. Parse the CSV using the csv-parse/sync package. Map Screaming Frog column names to clean field names: Address→url, 'Status Code'→statusCode, 'Title 1'→title, 'Title 1 Length'→titleLength, 'Meta Description 1'→metaDescription, 'Meta Description 1 Length'→metaDescriptionLength, 'H1-1'→h1, 'Word Count'→wordCount, Inlinks→inLinks. Return an array of page objects plus summary stats (total, errorCount, warningCount). Handle parse errors.
Paste this in V0 chat
```typescript
import { NextRequest, NextResponse } from 'next/server';
import { parse } from 'csv-parse/sync';

interface CrawlPage {
  url: string;
  statusCode: number;
  title: string;
  titleLength: number;
  metaDescription: string;
  metaDescriptionLength: number;
  h1: string;
  wordCount: number;
  inLinks: number;
  size: number; // bytes
  responseTime: number;
  indexability: string;
}

export async function POST(request: NextRequest) {
  try {
    const formData = await request.formData();
    const file = formData.get('file') as File | null;

    if (!file) {
      return NextResponse.json({ error: 'No file uploaded' }, { status: 400 });
    }

    if (!file.name.endsWith('.csv')) {
      return NextResponse.json(
        { error: 'Please upload a CSV file from Screaming Frog' },
        { status: 400 }
      );
    }

    const csvText = await file.text();

    // Parse CSV with column headers taken from the first row
    const records = parse(csvText, {
      columns: true, // use first row as keys
      skip_empty_lines: true,
      trim: true,
      relax_column_count: true,
    }) as Record<string, string>[];

    if (records.length === 0) {
      return NextResponse.json({ error: 'CSV file is empty' }, { status: 400 });
    }

    // Detect the URL column name (Screaming Frog may vary by export type)
    const firstRow = records[0];
    const urlKey =
      'Address' in firstRow ? 'Address'
      : 'URL' in firstRow ? 'URL'
      : Object.keys(firstRow)[0];

    const pages: CrawlPage[] = records.map((row) => ({
      url: row[urlKey] || '',
      statusCode: parseInt(row['Status Code'] || '0', 10),
      title: row['Title 1'] || row['Title'] || '',
      titleLength: parseInt(row['Title 1 Length'] || row['Title Length'] || '0', 10),
      metaDescription: row['Meta Description 1'] || row['Meta Description'] || '',
      metaDescriptionLength: parseInt(
        row['Meta Description 1 Length'] || row['Meta Description Length'] || '0',
        10
      ),
      h1: row['H1-1'] || row['H1'] || '',
      wordCount: parseInt(row['Word Count'] || '0', 10),
      inLinks: parseInt(row['Inlinks'] || '0', 10),
      size: parseInt(row['Size'] || '0', 10),
      // parseFloat handles decimal response-time values
      responseTime: parseFloat(row['Response Time'] || '0'),
      indexability: row['Indexability'] || '',
    })).filter((p) => p.url.length > 0);

    const summary = {
      total: pages.length,
      errorPages: pages.filter((p) => p.statusCode >= 400).length,
      missingTitles: pages.filter((p) => p.titleLength === 0).length,
      longTitles: pages.filter((p) => p.titleLength > 60).length,
      missingDescriptions: pages.filter((p) => p.metaDescriptionLength === 0).length,
      longDescriptions: pages.filter((p) => p.metaDescriptionLength > 160).length,
      thinContent: pages.filter((p) => p.wordCount > 0 && p.wordCount < 300).length,
      // Guard against division by zero when every row was filtered out
      avgResponseTime: pages.length > 0
        ? Math.round(pages.reduce((sum, p) => sum + p.responseTime, 0) / pages.length)
        : 0,
    };

    return NextResponse.json({ pages, summary });
  } catch (error) {
    console.error('CSV parse error:', error);
    return NextResponse.json(
      { error: 'Failed to parse CSV file. Ensure it is a valid Screaming Frog export.' },
      { status: 500 }
    );
  }
}
```
Pro tip: Install csv-parse with npm install csv-parse and import from 'csv-parse/sync' for the synchronous parse function. The relax_column_count option is important because Screaming Frog CSVs sometimes have inconsistent column counts.
Expected result: Uploading a Screaming Frog CSV to /api/seo/parse-crawl returns a JSON object with a pages array (one object per URL) and a summary object with counts of errors, warnings, and missing SEO elements.
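For reference when building the dashboard components, the route's JSON response has this shape (a TypeScript mirror of the code above; CrawlPage is the interface defined in the route):

```typescript
// Shape of the JSON returned by POST /api/seo/parse-crawl
interface ParseCrawlResponse {
  pages: CrawlPage[]; // one object per crawled URL
  summary: {
    total: number;
    errorPages: number;          // statusCode >= 400
    missingTitles: number;       // titleLength === 0
    longTitles: number;          // titleLength > 60
    missingDescriptions: number; // metaDescriptionLength === 0
    longDescriptions: number;    // metaDescriptionLength > 160
    thinContent: number;         // 1-299 words
    avgResponseTime: number;
  };
}
```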
Generate the SEO Audit Dashboard with V0
With the parsing API route in place, prompt V0 to generate the dashboard interface. The dashboard needs two main areas: a file upload component for importing the Screaming Frog export, and the visualization components that render after the file is parsed.
The upload component should be prominent and clear about what file format to upload. Tell V0 to accept only .csv files and to display a clear instruction: 'Export your crawl from Screaming Frog → File → Export → All Exports, then upload the CSV here.' After a successful upload, the dashboard transitions to showing the crawl data. Ask V0 to use React state to hold the parsed data: the upload sends the file to /api/seo/parse-crawl, receives the parsed data as JSON, and stores it in useState. This avoids storing large crawl data on the server and keeps everything client-side after the initial parse.
The visualization components should emphasize actionable information: the most common error types at the top (404s, missing titles, duplicate H1s), followed by filterable tables where the user can drill into specific issues. A page-level severity score (calculated from the count of issues per URL) helps prioritize which pages to fix first. Ask V0 to add a 'Download Report' button that generates a filtered CSV from the displayed data; this is useful for sharing specific issue sets with developers or content teams.
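Under the hood, the upload flow you are asking V0 to generate amounts to a handler like this (a minimal sketch; the component name, props, and error copy are illustrative):

```tsx
'use client';
import { useState } from 'react';

// Minimal upload component: posts the CSV to the parse route and lifts the result
export function CrawlUploader({ onParsed }: { onParsed: (data: unknown) => void }) {
  const [loading, setLoading] = useState(false);
  const [error, setError] = useState<string | null>(null);

  async function handleFile(file: File) {
    setLoading(true);
    setError(null);
    const formData = new FormData();
    formData.append('file', file); // field name expected by /api/seo/parse-crawl

    const res = await fetch('/api/seo/parse-crawl', { method: 'POST', body: formData });
    const data = await res.json();
    if (res.ok) {
      onParsed(data); // { pages, summary } goes into parent useState
    } else {
      setError(data.error ?? 'Upload failed');
    }
    setLoading(false);
  }

  return (
    <div>
      <input
        type="file"
        accept=".csv"
        disabled={loading}
        onChange={(e) => e.target.files?.[0] && handleFile(e.target.files[0])}
      />
      {loading && <p>Parsing crawl data…</p>}
      {error && <p role="alert">{error}</p>}
    </div>
  );
}
```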
Create an SEO audit dashboard. State: showUpload (boolean, default true), crawlData (null or parsed data). Upload view: a large drag-and-drop area that accepts .csv files, POST to /api/seo/parse-crawl as FormData, on success set crawlData and showUpload=false. Dashboard view (when crawlData exists): show summary cards for Total Pages, HTTP Errors, Missing Titles, Missing Meta Descriptions. Below, show a table of all pages with columns: URL (truncated to 60 chars), Status Code (red if >=400), Title Length (orange if >60, red if 0), Meta Desc Length (orange if >160, red if 0), Word Count (grey if <300). Add filter tabs: All / Errors / Title Issues / Meta Issues. Add an 'Upload New File' button to reset. Show loading state during upload.
Paste this in V0 chat
Pro tip: For large crawls with thousands of URLs, ask V0 to implement virtual scrolling or pagination on the data table. Rendering 10,000 table rows at once will freeze the browser — limit visible rows to 100 with a 'Load more' button or page navigation.
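One lightweight way to implement the 'Load more' pattern is a small hook that caps the rendered rows (a sketch; the hook name is illustrative):

```tsx
import { useState } from 'react';

// Incremental pagination: render at most `pageSize` rows, grow on demand
export function usePagedRows<T>(rows: T[], pageSize = 100) {
  const [visibleCount, setVisibleCount] = useState(pageSize);
  return {
    visibleRows: rows.slice(0, visibleCount),
    hasMore: visibleCount < rows.length,
    loadMore: () => setVisibleCount((c) => c + pageSize),
  };
}
```

The table component then maps over visibleRows and renders the 'Load more' button while hasMore is true.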
Expected result: V0 generates a complete SEO audit dashboard with a drag-and-drop file upload area and, after uploading a Screaming Frog CSV, a visualization with summary cards, filter tabs, and a detailed URL table.
Add Issue-Specific Drill-Down Views
A summary table showing all pages is useful, but SEO professionals typically want to focus on one issue type at a time. Add drill-down views for the most common technical SEO issues: 4xx errors, pages with missing or problematic titles, and pages with thin content.
For each issue type, ask V0 to generate a dedicated tab or section that shows only the affected pages with the relevant columns. For example, the 'Title Issues' view shows URL, current title, title length, and a recommendation column ('Missing: add a title' for 0-length, 'Too long: shorten by X characters' for titles over 60 characters, 'OK' for the rest).
For the 4xx errors view, include a column showing the number of internal inlinks to each broken page. Pages with many inlinks pointing to them should be prioritized for fixing or redirecting, since breaking a highly linked page causes more crawl equity loss.
Add an export function to each view: a button that downloads the current filtered view as a CSV. This is the most requested feature for SEO audit tools, since it lets you hand a list of URLs to fix to a developer or content editor.
For SEO agencies that want to build a full-featured client reporting platform with automated Screaming Frog crawl scheduling, crawl comparison over time, and branded PDF export, RapidDev can help architect a more comprehensive system.
Add a 'Priority Issues' section below the summary cards. Show three collapsible panels: '4xx Errors' (list pages where statusCode >= 400, show url, status code, inLinks count — sort by inLinks descending), 'Title Problems' (pages where titleLength === 0 or titleLength > 60 — show url, title, length, and issue type label 'Missing' or 'Too Long'), 'Thin Content' (pages where wordCount < 300 and wordCount > 0 — show url, wordCount, and suggestion 'Add more content'). Each panel has a header with the issue count and a 'Download CSV' button that exports just that panel's data.
Paste this in V0 chat
Pro tip: Screaming Frog measures word count differently from SEO tools that use NLP: it counts visible text tokens, including navigation and footer text. A page showing 200 words in Screaming Frog may have substantially less unique body content than that number suggests. Factor this into your thin content threshold.
Expected result: The dashboard shows three collapsible issue panels with prioritized lists of 4xx errors (sorted by inlinks), title problems, and thin content pages. Each panel has a CSV download button for handoff to developers or content teams.
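The per-panel 'Download CSV' button can be implemented entirely client-side with a Blob URL. A minimal sketch (function name and column handling are illustrative):

```typescript
// Turn the currently filtered rows into a CSV and trigger a browser download
function downloadCsv(rows: Record<string, string | number>[], filename: string) {
  if (rows.length === 0) return;
  const escape = (v: string | number) => `"${String(v).replace(/"/g, '""')}"`;
  const headers = Object.keys(rows[0]);
  const lines = [
    headers.map(escape).join(','),
    ...rows.map((row) => headers.map((h) => escape(row[h] ?? '')).join(',')),
  ];
  const blob = new Blob([lines.join('\n')], { type: 'text/csv' });
  const link = document.createElement('a');
  link.href = URL.createObjectURL(blob);
  link.download = filename; // e.g. '4xx-errors.csv'
  link.click();
  URL.revokeObjectURL(link.href);
}
```

Each panel's button calls it with that panel's filtered rows, for example downloadCsv(errorPages, '4xx-errors.csv').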
Common use cases
Client SEO Audit Dashboard
An SEO agency runs Screaming Frog crawls for each client and wants to present findings in a custom-branded dashboard. After crawling, they export the data as CSV, upload it to the dashboard, and share the URL with the client. The dashboard shows issue counts by category, the top 20 critical pages, and an interactive table for filtering URLs by issue type.
Build an SEO audit dashboard that accepts crawl data via state (already parsed). The data has a pages array where each page has url, statusCode, title, titleLength, metaDescription, metaDescriptionLength, h1, h1Length, wordCount, indexability, and inLinks. Show four summary cards: Total Pages, HTTP Errors (statusCode >= 400), Missing Titles (titleLength === 0), and Missing Meta Descriptions (metaDescriptionLength === 0). Below, show a filterable table with columns: URL, Status Code (colored red if >=400), Title Length (red if 0 or >60), Meta Description Length (red if 0 or >160). Add filter tabs: All / Errors / Warnings / OK.
Copy this prompt to try it in V0
Broken Links Report
An in-house SEO manager runs a weekly crawl to find newly broken links. They export the Internal Links report from Screaming Frog, upload it to the V0 dashboard, and see a table of broken links grouped by source page. The dashboard shows the source URL, anchor text, and destination URL for each broken link so they can prioritize fixes.
Create a broken links report dashboard that reads from a parsed link data array. Each entry has source, destination, anchorText, statusCode, and type ('internal' or 'external'). Filter to show only entries where statusCode >= 400 or statusCode === 0 (timeout). Group by source URL with a count of broken outbound links per source. Show a summary card with the total broken links count. In the table show source URL, anchor text, broken destination URL, and a status code badge (red for 4xx, dark red for 5xx, grey for 0/timeout).
Copy this prompt to try it in V0
Page Speed and Crawl Stats Overview
A technical SEO consultant wants to visualize crawl speed metrics alongside content issues to prioritize a site-wide optimization project. The Screaming Frog export includes response times, page sizes, and word counts. V0 generates a visualization showing the distribution of response times, the largest pages by file size, and pages with thin content.
Build a crawl stats visualization page with four chart-style summary sections: Response Time Distribution (group pages into <0.5s, 0.5-1s, 1-2s, >2s buckets and show as a horizontal bar chart), Largest Pages (top 10 by size in KB as a bar chart), Thin Content Pages (wordCount < 300, show as a list), and Duplicate Titles (pages sharing the same title, grouped). Data comes from props.pages array with url, responseTime (ms), size (bytes), wordCount, title. Show counts and percentages.
Copy this prompt to try it in V0
Troubleshooting
CSV parse returns empty pages array or wrong column names
Cause: Screaming Frog uses different column names depending on which tab or export type was used. The 'Internal HTML' export uses 'Title 1' while other exports may use 'Title'. The URL column is always 'Address' for internal exports but may vary.
Solution: Add console.log(Object.keys(records[0])) in your API route to log the actual column names from the uploaded file. Update your column mapping to match the exact names in the file. The parse route includes fallback checks for common column name variations.
```typescript
// Debug: log actual column names from the CSV
console.log('CSV columns detected:', Object.keys(records[0]));
// Then update the mapping to match the logged column names
```
File upload fails with 413 Payload Too Large
Cause: Vercel's serverless functions have a 4.5 MB request body limit. Screaming Frog CSV exports for large sites (50,000+ URLs) can exceed this limit.
Solution: For large crawls, use Screaming Frog's filtered exports rather than exporting all data. Export only the specific issue tabs you need (4xx errors, missing titles, etc.) which are much smaller files. Alternatively, split the full export into multiple smaller files by URL range.
```typescript
// In Screaming Frog, export specific tabs rather than all data:
// Response Codes → Filter to 4xx → Export button
// Page Titles → Filter 'Missing' → Export button
// These filtered exports are much smaller than the full export
```
Dashboard shows all pages as having 0 word count
Cause: Word Count is only populated in the 'Internal HTML' Screaming Frog export. If you exported from the Response Codes tab or Inlinks export, word count data is not included.
Solution: Re-export from the 'Internal HTML' tab in Screaming Frog (the main tab showing all internal HTML pages). This tab includes the most complete metadata. After crawling, click the 'Internal HTML' tab in Screaming Frog, then use the Export button in the bottom-right corner.
```typescript
// Screaming Frog export path for full page data:
// Crawl your site → Internal tab → Filter: HTML → Export button
// This gives you all the SEO metadata columns
```
Best practices
- Parse Screaming Frog CSV column names defensively with fallbacks — different Screaming Frog export types and versions use slightly different column name formats.
- Limit the dashboard table to 100 visible rows at a time with pagination for large crawls — rendering thousands of table rows simultaneously will freeze the browser.
- Add a visible instruction in the upload area explaining exactly which Screaming Frog export type to use, since the app only parses certain formats correctly.
- Sort issues by severity and business impact in your dashboard — 4xx pages with many inlinks are more critical than pages with slightly long title tags.
- Include a timestamp or file name display showing which crawl data is loaded so users know how current their analysis is.
- For client-facing dashboards, filter out non-HTML URLs (images, PDFs, CSS files) from the pages array in your API route since clients typically only care about HTML page issues.
- Store the parsed crawl data in browser localStorage or sessionStorage so users can close and reopen the dashboard without re-uploading the file.
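
A minimal sketch of that last point, persisting the parsed data in sessionStorage (the key name and data shape are illustrative; note that browser storage is capped around 5 MB, so very large crawls may not fit):

```typescript
// Persist parsed crawl data across page reloads
const STORAGE_KEY = 'sf-crawl-data';

export function saveCrawl(data: { pages: unknown[]; summary: unknown }) {
  try {
    sessionStorage.setItem(STORAGE_KEY, JSON.stringify(data));
  } catch {
    // Quota exceeded: fall back to keeping the data in memory only
  }
}

export function loadCrawl(): { pages: unknown[]; summary: unknown } | null {
  try {
    const raw = sessionStorage.getItem(STORAGE_KEY);
    return raw ? JSON.parse(raw) : null;
  } catch {
    return null; // corrupt or missing entry
  }
}
```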
Alternatives
SEMrush has a cloud-based Site Audit API that provides live crawl data programmatically without file exports, better for automated reporting but requires a paid SEMrush subscription.
Ahrefs provides a Site Audit API for cloud-based crawl data access, allowing real-time dashboard integration without the export-and-upload workflow required for Screaming Frog.
Moz has a Links API and site metrics API for programmatic access to domain authority and link data, complementary to Screaming Frog's on-page crawl analysis.
Frequently asked questions
Why doesn't Screaming Frog have a public API?
Screaming Frog SEO Spider is a desktop application, not a cloud service. It runs on your local machine or server to crawl websites, so there is no cloud-hosted API endpoint you can call from a web app. The tool does have a command-line interface for automation and can save crawl databases for later analysis.
Can I automate the Screaming Frog export and upload process?
Yes. Screaming Frog has a command-line interface that can run scheduled crawls and automatically save exports. On a VPS or cloud server with Screaming Frog installed, you can schedule crawls via cron, automatically export the CSV, and upload it to your V0 dashboard programmatically using a server-side script. This creates an automated weekly SEO audit pipeline.
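As a sketch of what that server-side script could look like (ESM, Node 18+; the Screaming Frog CLI flags follow its headless-mode documentation but should be verified against your installed version, and the paths, export file name, and dashboard URL are placeholders):

```typescript
import { execFileSync } from 'node:child_process';
import { readFile } from 'node:fs/promises';

// 1. Run a headless crawl and export the Internal tab as CSV
//    (flag names per Screaming Frog's CLI docs; confirm for your version)
execFileSync('screamingfrogseospider', [
  '--crawl', 'https://example.com',
  '--headless',
  '--output-folder', '/tmp/crawls',
  '--export-tabs', 'Internal:All',
]);

// 2. Upload the export to the dashboard's parse route
const csv = await readFile('/tmp/crawls/internal_all.csv'); // export file name may differ
const form = new FormData();
form.append('file', new Blob([csv], { type: 'text/csv' }), 'internal_all.csv');

const res = await fetch('https://your-dashboard.vercel.app/api/seo/parse-crawl', {
  method: 'POST',
  body: form,
});
console.log('Upload status:', res.status);
```

Scheduling this script with cron on the crawl server completes the weekly pipeline.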
What is the maximum file size my dashboard can handle?
Vercel's serverless function request body limit is 4.5 MB. A Screaming Frog CSV for a 10,000-URL site is typically 3-8 MB depending on content. For larger crawls, export only specific tabs (not all exports) or filter the export to the specific issue types you need. The filtered exports are much smaller than the full crawl export.
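One way to fail fast on oversized exports is a client-side size check before posting the file (the 4.5 MB figure matches Vercel's documented body limit; the error copy is illustrative):

```typescript
const MAX_UPLOAD_BYTES = 4.5 * 1024 * 1024; // Vercel serverless request body limit

// Returns an error message for oversized files, or null if the file is acceptable
function validateExport(file: File): string | null {
  if (file.size > MAX_UPLOAD_BYTES) {
    const mb = (file.size / 1024 / 1024).toFixed(1);
    return `File is ${mb} MB; exports over 4.5 MB will be rejected. Export a filtered tab instead.`;
  }
  return null;
}
```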
Can I compare two Screaming Frog crawls to see what changed?
This is possible by extending the dashboard to accept two file uploads and compare the parsed arrays. For each URL, check if its status code, title length, or meta description changed between the two crawls. Highlight new errors (present in second crawl but not first) in red and fixed issues (present in first but not second) in green. This creates a useful site health monitoring workflow.
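A minimal sketch of that comparison logic, assuming both uploads have been parsed into the same page shape used earlier:

```typescript
interface PageSnapshot {
  url: string;
  statusCode: number;
}

// Compare two parsed crawls: flag new errors (red) and fixed errors (green)
function diffCrawls(before: PageSnapshot[], after: PageSnapshot[]) {
  const prev = new Map(before.map((p) => [p.url, p.statusCode]));
  const curr = new Map(after.map((p) => [p.url, p.statusCode]));

  // Errors present in the second crawl but not the first
  // (URLs new to the crawl default to 200, so they count as new errors)
  const newErrors = after.filter(
    (p) => p.statusCode >= 400 && (prev.get(p.url) ?? 200) < 400
  );

  // Errors present in the first crawl but resolved in the second
  // (URLs missing from the second crawl default to 600, so they don't count as fixed)
  const fixed = before.filter(
    (p) => p.statusCode >= 400 && (curr.get(p.url) ?? 600) < 400
  );

  return { newErrors, fixed };
}
```

The same pattern extends to title length and meta description changes by snapshotting those fields alongside statusCode.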
Does Screaming Frog export include Core Web Vitals data?
Screaming Frog can integrate with Google PageSpeed Insights to retrieve Core Web Vitals during a crawl (requires a Google API key). When enabled, the crawl export includes LCP, FID, CLS, and Speed Score columns. These columns will be parsed by your dashboard the same way as other crawl data columns — just add them to your column mapping in the API route.
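Extending the column mapping is one line per metric. Note that the exact PageSpeed column headers vary by Screaming Frog version, so the names below are illustrative; check your export's header row for the real labels:

```typescript
// Extra fields for the CrawlPage mapping when PageSpeed Insights is enabled.
// Column names are illustrative; verify against the export's header row.
function mapWebVitals(row: Record<string, string>) {
  return {
    lcp: parseFloat(row['Largest Contentful Paint Time (ms)'] || '0'),
    cls: parseFloat(row['Cumulative Layout Shift'] || '0'),
    performanceScore: parseFloat(row['Performance Score'] || '0'),
  };
}
```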
Can I use this same approach with other desktop crawlers?
Yes. Sitebulb, DeepCrawl (cloud but with CSV export), and Botify all export CSV or JSON formats. Your parsing route can be adapted for any CSV-exporting SEO tool by updating the column name mappings. The same upload-and-parse architecture works for any tool that does not have a public API.
Talk to an Expert
Our team has built 600+ apps. Get personalized help with your project.
Book a free consultation