RapidDev - Software Development Agency

How to Build a Resume Parser with V0


What you'll build

  • Drag-and-drop file upload zone for PDF resumes with progress indicator
  • AI extraction pipeline using OpenAI structured output for reliable JSON parsing
  • Parsed result viewer with editable fields for manual corrections
  • Confidence scoring with color-coded Badges (high/medium/low) per extracted field
  • Batch parsing results Table for processing multiple resumes
  • Supabase Storage private bucket for secure resume file storage
Intermediate · 10 min read · 1-2 hours · V0 Premium or higher · April 2026 · RapidDev Engineering Team
TL;DR

Build an AI-powered resume parser with V0 using Next.js, Supabase, and OpenAI structured output that extracts names, emails, experience, skills, and education from uploaded PDF resumes. It features drag-and-drop upload, confidence scoring, and editable extraction results — all in about 1-2 hours.

What you're building

Manually reading resumes and copying data into systems is tedious and error-prone. HR teams processing hundreds of applications need automated extraction — upload a PDF and get structured data (name, email, experience, skills) in seconds.

V0 generates the upload interface, extraction pipeline, and result viewer from prompts. The core extraction uses OpenAI's structured output mode, which guarantees valid JSON matching your schema, eliminating the unreliable regex parsing of the past.

The architecture uses Next.js App Router with a drag-and-drop upload component, an API route for file upload to Supabase Storage, another API route that extracts text from the PDF and sends it to OpenAI with a strict JSON schema, and Server Actions for manual corrections to extracted data.

Final result

An AI resume parser that accepts PDF uploads, extracts structured candidate data using OpenAI, displays results with confidence scoring and editable fields, and stores everything in a searchable database.

Tech stack

V0 (AI Code Generator)
Next.js (Full-Stack Framework)
Tailwind CSS (Styling)
shadcn/ui (Component Library)
Supabase (Database)
OpenAI (AI Extraction)

Prerequisites

  • A V0 account (Premium recommended for the extraction pipeline)
  • A Supabase project (free tier works — connect via V0's Connect panel)
  • An OpenAI API key (pay-as-you-go for structured output calls)
  • Sample PDF resumes for testing

Build steps

1

Set up the project and parser schema

Open V0 and create a new project. Use the Connect panel to add Supabase. Create the schema for parsed resumes, extracted candidates, experiences, and education. Set up a private Storage bucket for resume files.

prompt.txt
// Paste this prompt into V0's AI chat:
// Build a resume parser. Create a Supabase schema with:
// 1. parsed_resumes: id (uuid PK), uploader_id (uuid FK), original_file_url (text), original_filename (text), parsed_data (jsonb), confidence_score (numeric), status (text check uploading/parsing/completed/failed), error_message (text nullable), created_at (timestamptz)
// 2. extracted_candidates: id (uuid PK), parsed_resume_id (uuid FK unique), full_name (text), email (text), phone (text), location (text), summary (text), total_experience_years (numeric), skills (text[]), created_at (timestamptz)
// 3. extracted_experiences: id (uuid PK), candidate_id (uuid FK), company (text), title (text), start_date (text), end_date (text), description (text), position (integer)
// 4. extracted_education: id (uuid PK), candidate_id (uuid FK), institution (text), degree (text), field (text), year (text)
// Create a private Supabase Storage bucket 'resumes'.
// RLS: authenticated users can CRUD their own parsed_resumes.
// Generate SQL migration and TypeScript types.

Expected result: Supabase is connected with parser tables and a private resumes Storage bucket. RLS policies protect uploaded files and parsed data.
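The prompt asks V0 to generate TypeScript types alongside the migration. For reference, hand-written types for the two core tables might look like the sketch below (field names follow the prompt; the nullability choices are an assumption, not V0 output):

```typescript
// Sketch of types matching the parsed_resumes and extracted_candidates
// tables described in the prompt. Nullable fields are an assumption:
// parsed_data and confidence_score are empty until extraction finishes.

type ResumeStatus = 'uploading' | 'parsing' | 'completed' | 'failed';

interface ParsedResume {
  id: string;                                   // uuid
  uploader_id: string;                          // uuid FK to the auth user
  original_file_url: string;
  original_filename: string;
  parsed_data: Record<string, unknown> | null;  // jsonb, null until parsed
  confidence_score: number | null;
  status: ResumeStatus;
  error_message: string | null;
  created_at: string;                           // timestamptz as ISO string
}

interface ExtractedCandidate {
  id: string;
  parsed_resume_id: string;                     // unique FK: one candidate per resume
  full_name: string;
  email: string;
  phone: string;
  location: string;
  summary: string;
  total_experience_years: number;
  skills: string[];
  created_at: string;
}

// A record shaped like a fresh upload, before extraction runs:
const fresh: ParsedResume = {
  id: '00000000-0000-0000-0000-000000000000',
  uploader_id: '00000000-0000-0000-0000-000000000001',
  original_file_url: 'resumes/cv.pdf',
  original_filename: 'cv.pdf',
  parsed_data: null,
  confidence_score: null,
  status: 'uploading',
  error_message: null,
  created_at: new Date().toISOString(),
};
```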

2

Build the upload interface with drag-and-drop

Create the upload page with a drag-and-drop zone that accepts PDF files. The upload flow stores the file in Supabase Storage and creates a parsed_resumes record with status 'uploading', then triggers extraction.

prompt.txt
// Paste this prompt into V0's AI chat:
// Build a resume upload page at app/parser/page.tsx.
// Requirements:
// - Drag-and-drop upload zone that accepts PDF files only (max 5MB)
// - Visual feedback: dashed border on drag-over, file icon, "Drop your resume here" text
// - On file drop/select:
//   - Show filename and file size
//   - Show Progress bar during upload
//   - Upload to Supabase Storage 'resumes' private bucket
//   - Create parsed_resumes record with status 'uploading'
//   - Call /api/parser/extract to trigger AI extraction
//   - Redirect to /parser/[id] when extraction starts
// - Below the upload zone: Table of previously parsed resumes
//   - Columns: filename, status Badge (uploading=gray, parsing=yellow, completed=green, failed=red), confidence score, parsed date
//   - Each row links to /parser/[id]
// - Use shadcn/ui Card for upload zone, Progress, Badge, Table, Skeleton
// - 'use client' for drag-and-drop and file handling
Expected result: A drag-and-drop upload zone with progress indicator. Uploaded resumes appear in a Table below with status Badges and confidence scores.
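The validation rules the prompt specifies (PDF only, 5 MB cap) can be sketched as a small pure helper. Names here are illustrative, not what V0 will necessarily generate:

```typescript
// Client-side file validation for the upload zone: accept only PDFs
// up to 5 MB. Checks the MIME type first and falls back to the file
// extension, since some browsers report an empty type on drag-and-drop.

const MAX_BYTES = 5 * 1024 * 1024;

type ValidationResult = { ok: true } | { ok: false; reason: string };

function validateResumeFile(
  name: string,
  size: number,
  mimeType: string,
): ValidationResult {
  if (mimeType !== 'application/pdf' && !name.toLowerCase().endsWith('.pdf')) {
    return { ok: false, reason: 'Only PDF files are accepted' };
  }
  if (size > MAX_BYTES) {
    return { ok: false, reason: 'File exceeds the 5MB limit' };
  }
  return { ok: true };
}
```

In the drop handler, a failed check would surface `reason` in a Toast instead of starting the upload.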

3

Create the AI extraction API route

Build the extraction endpoint that reads the uploaded PDF, sends the text to OpenAI with a strict JSON schema, and stores the structured result. It uses OpenAI's structured output mode for guaranteed valid JSON.

app/api/parser/extract/route.ts
import { NextRequest, NextResponse } from 'next/server'
import { createClient } from '@supabase/supabase-js'

export const maxDuration = 60

// Service-role client: server-only, never exposed to the browser
const supabase = createClient(
  process.env.SUPABASE_URL!,
  process.env.SUPABASE_SERVICE_ROLE_KEY!
)

export async function POST(req: NextRequest) {
  const { resume_id, file_path } = await req.json()

  // Mark the record as parsing so the UI can show progress
  await supabase
    .from('parsed_resumes')
    .update({ status: 'parsing' })
    .eq('id', resume_id)

  const { data: fileData } = await supabase.storage
    .from('resumes')
    .download(file_path)

  if (!fileData) {
    await supabase.from('parsed_resumes').update({
      status: 'failed',
      error_message: 'File not found',
    }).eq('id', resume_id)
    return NextResponse.json({ error: 'File not found' }, { status: 404 })
  }

  // NOTE: .text() only yields usable text for text-based files. Most
  // real PDFs need a dedicated text-extraction step (e.g. a PDF parsing
  // library) here before the content is sent to OpenAI.
  const text = await fileData.text()

  const response = await fetch('https://api.openai.com/v1/chat/completions', {
    method: 'POST',
    headers: {
      Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
      'Content-Type': 'application/json',
    },
    body: JSON.stringify({
      model: 'gpt-4o',
      messages: [
        {
          role: 'system',
          content: 'Extract structured resume data from the following text. Be precise with dates and job titles.',
        },
        { role: 'user', content: text },
      ],
      response_format: {
        type: 'json_schema',
        json_schema: {
          name: 'resume_extraction',
          strict: true,
          schema: {
            type: 'object',
            properties: {
              full_name: { type: 'string' },
              email: { type: 'string' },
              phone: { type: 'string' },
              location: { type: 'string' },
              summary: { type: 'string' },
              total_experience_years: { type: 'number' },
              skills: { type: 'array', items: { type: 'string' } },
              experiences: {
                type: 'array',
                items: {
                  type: 'object',
                  properties: {
                    company: { type: 'string' },
                    title: { type: 'string' },
                    start_date: { type: 'string' },
                    end_date: { type: 'string' },
                    description: { type: 'string' },
                  },
                  required: ['company', 'title', 'start_date', 'end_date', 'description'],
                  additionalProperties: false,
                },
              },
              education: {
                type: 'array',
                items: {
                  type: 'object',
                  properties: {
                    institution: { type: 'string' },
                    degree: { type: 'string' },
                    field: { type: 'string' },
                    year: { type: 'string' },
                  },
                  required: ['institution', 'degree', 'field', 'year'],
                  additionalProperties: false,
                },
              },
            },
            required: ['full_name', 'email', 'phone', 'location', 'summary', 'total_experience_years', 'skills', 'experiences', 'education'],
            additionalProperties: false,
          },
        },
      },
    }),
  })

  // Surface API failures instead of crashing on a missing choices array
  if (!response.ok) {
    await supabase.from('parsed_resumes').update({
      status: 'failed',
      error_message: `OpenAI request failed (${response.status})`,
    }).eq('id', resume_id)
    return NextResponse.json({ error: 'Extraction failed' }, { status: 502 })
  }

  const result = await response.json()
  const parsed = JSON.parse(result.choices[0].message.content)

  const { data: candidate } = await supabase
    .from('extracted_candidates')
    .insert({
      parsed_resume_id: resume_id,
      full_name: parsed.full_name,
      email: parsed.email,
      phone: parsed.phone,
      location: parsed.location,
      summary: parsed.summary,
      total_experience_years: parsed.total_experience_years,
      skills: parsed.skills,
    })
    .select()
    .single()

  if (candidate && parsed.experiences) {
    await supabase.from('extracted_experiences').insert(
      parsed.experiences.map((exp: any, i: number) => ({
        candidate_id: candidate.id,
        ...exp,
        position: i + 1,
      }))
    )
  }

  if (candidate && parsed.education) {
    await supabase.from('extracted_education').insert(
      parsed.education.map((edu: any) => ({
        candidate_id: candidate.id,
        ...edu,
      }))
    )
  }

  await supabase.from('parsed_resumes').update({
    status: 'completed',
    parsed_data: parsed,
    confidence_score: 0.85,
  }).eq('id', resume_id)

  return NextResponse.json({ success: true, candidate_id: candidate?.id })
}

Expected result: The extraction API reads the uploaded PDF, sends text to OpenAI with structured output schema, and stores extracted data in normalized tables.
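The route stores a fixed 0.85 confidence score. One simple replacement, shown here as an assumption rather than part of the tutorial's code, is to derive the score from how many of the expected fields came back non-empty:

```typescript
// Completeness-based confidence heuristic (illustrative): score the
// extraction by the fraction of expected fields that are populated.
// The shape mirrors the JSON schema sent to OpenAI.

interface ExtractionResult {
  full_name: string;
  email: string;
  phone: string;
  location: string;
  summary: string;
  skills: string[];
  experiences: unknown[];
  education: unknown[];
}

function scoreExtraction(parsed: ExtractionResult): number {
  const checks = [
    parsed.full_name.trim().length > 0,
    /@/.test(parsed.email),          // crude email sanity check
    parsed.phone.trim().length > 0,
    parsed.location.trim().length > 0,
    parsed.summary.trim().length > 0,
    parsed.skills.length > 0,
    parsed.experiences.length > 0,
    parsed.education.length > 0,
  ];
  const filled = checks.filter(Boolean).length;
  return Math.round((filled / checks.length) * 100) / 100; // 0.00 - 1.00
}
```

A fully populated result scores 1.0; a resume missing, say, phone and education drops to 0.75.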

4

Build the parsed result viewer with editable fields

Create the result page showing extracted data with editable fields for manual corrections. Each field has a confidence indicator, and users can fix any extraction errors before finalizing.

prompt.txt
// Paste this prompt into V0's AI chat:
// Build a parsed result page at app/parser/[id]/page.tsx.
// Requirements:
// - Fetch the parsed_resume with extracted_candidates, experiences, and education
// - If status='parsing', show Skeleton loading with "AI is extracting data..." message
// - If status='completed', show editable result:
//   - Personal Info Card: Input fields for name, email, phone, location (pre-filled with extracted data)
//   - Summary Textarea (pre-filled)
//   - Skills: Badge list with X to remove, Input to add new skills
//   - Experiences: Cards for each with editable company, title, dates, description Inputs
//   - Education: Cards for each with editable institution, degree, field, year
//   - Badge next to each field showing confidence (high=green, medium=yellow, low=red)
// - "Save Corrections" Button calls Server Action updateParsedField()
// - "Export JSON" Button downloads the extracted data as a JSON file
// - If status='failed', show error message with AlertDialog to retry
// - Use shadcn/ui Card, Input, Badge, Textarea, Separator, Skeleton, Toast

Pro tip: Store OPENAI_API_KEY in V0's Vars tab without the NEXT_PUBLIC_ prefix — it is a secret key used only for server-side extraction.

Expected result: A result page showing extracted resume data with editable fields, confidence Badges, and save/export functionality. AI-extracted values are pre-filled and ready for human review.
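The confidence-to-Badge mapping the prompt describes can be a small lookup. The thresholds below are an assumption to tune against your own data, not values from the tutorial:

```typescript
// Map a numeric confidence score onto the three Badge levels the
// prompt asks for, plus illustrative Tailwind classes for each level.

type ConfidenceLevel = 'high' | 'medium' | 'low';

function confidenceLevel(score: number): ConfidenceLevel {
  if (score >= 0.8) return 'high';    // threshold is an assumption
  if (score >= 0.5) return 'medium';  // threshold is an assumption
  return 'low';
}

const badgeColor: Record<ConfidenceLevel, string> = {
  high: 'bg-green-100 text-green-800',
  medium: 'bg-yellow-100 text-yellow-800',
  low: 'bg-red-100 text-red-800',
};
```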

Complete code

app/api/parser/extract/route.ts
// Condensed variant of the step 3 route; the full field schema
// (location, summary, experiences, education) is shown in step 3.
import { NextRequest, NextResponse } from 'next/server'
import { createClient } from '@supabase/supabase-js'

export const maxDuration = 60

const supabase = createClient(
  process.env.SUPABASE_URL!,
  process.env.SUPABASE_SERVICE_ROLE_KEY!
)

export async function POST(req: NextRequest) {
  const { resume_id, file_path } = await req.json()

  await supabase
    .from('parsed_resumes')
    .update({ status: 'parsing' })
    .eq('id', resume_id)

  const { data: file } = await supabase.storage
    .from('resumes')
    .download(file_path)

  if (!file) {
    await supabase.from('parsed_resumes').update({
      status: 'failed',
      error_message: 'File download failed',
    }).eq('id', resume_id)
    return NextResponse.json({ error: 'File not found' }, { status: 404 })
  }

  const text = await file.text()

  const res = await fetch('https://api.openai.com/v1/chat/completions', {
    method: 'POST',
    headers: {
      Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
      'Content-Type': 'application/json',
    },
    body: JSON.stringify({
      model: 'gpt-4o',
      messages: [
        { role: 'system', content: 'Extract structured data from this resume text.' },
        { role: 'user', content: text },
      ],
      response_format: {
        type: 'json_schema',
        json_schema: {
          name: 'resume',
          strict: true,
          schema: {
            type: 'object',
            properties: {
              full_name: { type: 'string' },
              email: { type: 'string' },
              phone: { type: 'string' },
              skills: { type: 'array', items: { type: 'string' } },
            },
            required: ['full_name', 'email', 'phone', 'skills'],
            additionalProperties: false,
          },
        },
      },
    }),
  })

  const data = await res.json()
  const parsed = JSON.parse(data.choices[0].message.content)

  await supabase.from('parsed_resumes').update({
    status: 'completed',
    parsed_data: parsed,
  }).eq('id', resume_id)

  return NextResponse.json({ success: true })
}

Customization ideas

Add batch processing

Upload multiple resumes at once and process them in parallel, showing a progress dashboard with individual file status indicators.
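The fan-out could use Promise.allSettled so one failed resume doesn't abort the batch. In this sketch, `parseBatch` and `parseOne` are hypothetical names; `parseOne` stands in for a call to /api/parser/extract:

```typescript
// Parse several resumes in parallel and collect a per-file outcome,
// tolerating individual failures via Promise.allSettled.

type BatchOutcome = { file: string; ok: boolean; error?: string };

async function parseBatch(
  files: string[],
  parseOne: (file: string) => Promise<void>,
): Promise<BatchOutcome[]> {
  const settled = await Promise.allSettled(files.map((f) => parseOne(f)));
  return settled.map((result, i) =>
    result.status === 'fulfilled'
      ? { file: files[i], ok: true }
      : { file: files[i], ok: false, error: String(result.reason) },
  );
}
```

The returned outcomes map directly onto the per-file status indicators a progress dashboard would render.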

Build skill matching against job requirements

Compare extracted skills against a job description to generate a match percentage and highlight missing qualifications.
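A minimal version of that comparison is case-insensitive set overlap; anything smarter (synonyms, embeddings) builds on the same shape. The helper name is illustrative:

```typescript
// Compute a match percentage between a candidate's extracted skills
// and a job's required skills, and list what's missing.

function skillMatch(
  candidateSkills: string[],
  requiredSkills: string[],
): { percent: number; missing: string[] } {
  const have = new Set(candidateSkills.map((s) => s.toLowerCase().trim()));
  const missing = requiredSkills.filter((s) => !have.has(s.toLowerCase().trim()));
  const matched = requiredSkills.length - missing.length;
  const percent =
    requiredSkills.length === 0
      ? 100
      : Math.round((matched / requiredSkills.length) * 100);
  return { percent, missing };
}
```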

Add DOCX support

Extend the parser to handle DOCX files by extracting text using a server-side DOCX parser library before sending to OpenAI.

Integrate with recruitment pipeline

Connect parsed resumes to the recruitment platform so extracted data automatically populates candidate profiles when applications are submitted.

Common pitfalls

Pitfall: Using regex to parse resume text instead of AI structured output

How to avoid: Use OpenAI's structured output mode (response_format: json_schema) with a strict schema. This guarantees valid JSON matching your expected structure regardless of the resume format.

Pitfall: Not setting maxDuration for the extraction API route

How to avoid: Set export const maxDuration = 60 in the extract API route to allow sufficient time for the full extraction pipeline.

Pitfall: Exposing OPENAI_API_KEY with NEXT_PUBLIC_ prefix

How to avoid: Store OPENAI_API_KEY in the Vars tab without any prefix. Only call OpenAI from API routes (server-side), never from client components.

Best practices

  • Use OpenAI structured output (json_schema) for guaranteed valid JSON extraction results
  • Set maxDuration = 60 in the extraction API route for the full file + AI processing pipeline
  • Store OPENAI_API_KEY in Vars tab without NEXT_PUBLIC_ prefix — server-only for extraction
  • Store uploaded resumes in a Supabase Storage private bucket with signed URLs for authenticated access only
  • Add a human review step with editable fields so extracted data can be corrected before use
  • Use V0's Vars tab for all API keys alongside Supabase credentials from the Connect panel

AI prompts to try

Copy these prompts to build this project faster.

ChatGPT Prompt

I'm building a resume parser with Next.js and OpenAI. Write the extraction prompt and JSON schema for OpenAI's structured output mode that extracts: full_name, email, phone, location, summary, total_experience_years, skills array, experiences array (company, title, start_date, end_date, description), and education array (institution, degree, field, year). Include the full response_format configuration object.

Build Prompt

Create a drag-and-drop file upload component for PDF resumes. Accept only PDF files up to 5MB. Show drag-over visual state with dashed border. Display filename, file size, and upload Progress. On upload, store in Supabase Storage private bucket and return the file path. Include error handling for invalid file types and oversized files. Mark as 'use client'.

Frequently asked questions

What V0 plan do I need for a resume parser?

V0 Free works for the basic build, but Premium ($20/month) is recommended for prompt queuing to generate the upload interface, extraction pipeline, and result viewer more efficiently.

How accurate is the AI extraction?

OpenAI structured output with gpt-4o achieves high accuracy for standard resume formats — names, emails, and dates are extracted reliably. Complex layouts or unusual formatting may need the human review step for corrections.

How much does OpenAI extraction cost per resume?

A typical resume yields 500-1,000 tokens of input text, and the structured JSON response adds roughly as much output. With gpt-4o at $2.50/$10 per million input/output tokens, parsing costs roughly $0.01-0.03 per resume.
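The arithmetic behind that estimate, with the quoted gpt-4o rates as constants:

```typescript
// Per-resume cost from token counts at the gpt-4o rates cited above
// ($2.50 per million input tokens, $10 per million output tokens).

const INPUT_RATE = 2.5 / 1_000_000;  // $ per input token
const OUTPUT_RATE = 10 / 1_000_000;  // $ per output token

function estimateCostUSD(inputTokens: number, outputTokens: number): number {
  return inputTokens * INPUT_RATE + outputTokens * OUTPUT_RATE;
}

// A 1000-token resume producing ~1000 tokens of structured JSON:
// estimateCostUSD(1000, 1000) = 0.0125, i.e. about a cent.
```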

Can it parse DOCX files as well as PDFs?

The base build handles PDF text extraction. For DOCX support, add a server-side DOCX parser library (like mammoth) to extract text before sending to OpenAI.

How do I deploy the resume parser?

Click Share then Publish to Production in V0. Set OPENAI_API_KEY and SUPABASE_SERVICE_ROLE_KEY in the Vars tab without NEXT_PUBLIC_ prefix. File uploads and AI extraction run entirely server-side.

Can RapidDev help build a custom resume parsing system?

Yes. RapidDev has built 600+ apps including HR platforms with AI-powered resume parsing, skill matching, and candidate scoring. Book a free consultation to discuss your specific requirements.
