Build an AI-powered resume parser with V0 using Next.js, Supabase, and OpenAI structured output that extracts names, emails, experience, skills, and education from uploaded PDF resumes. Features drag-and-drop upload, confidence scoring, and editable extraction results — all in about 1-2 hours.
What you're building
Manually reading resumes and copying data into systems is tedious and error-prone. HR teams processing hundreds of applications need automated extraction — upload a PDF and get structured data (name, email, experience, skills) in seconds.
V0 generates the upload interface, extraction pipeline, and result viewer from prompts. The core extraction uses OpenAI's structured output mode, which guarantees valid JSON matching your schema, eliminating the unreliable regex parsing of the past.
The architecture uses Next.js App Router with:
- a drag-and-drop upload component
- an API route that uploads files to Supabase Storage
- an API route that extracts text from the PDF and sends it to OpenAI with a strict JSON schema
- Server Actions for manual corrections to extracted data
Final result
An AI resume parser that accepts PDF uploads, extracts structured candidate data using OpenAI, displays results with confidence scoring and editable fields, and stores everything in a searchable database.
Tech stack
- V0 for AI-assisted UI and code generation
- Next.js (App Router) with API routes and Server Actions
- Supabase for Postgres, Storage, and Row Level Security
- OpenAI gpt-4o with structured output (json_schema) for extraction
- shadcn/ui components for the interface
Prerequisites
- A V0 account (Premium recommended for the extraction pipeline)
- A Supabase project (free tier works — connect via V0's Connect panel)
- An OpenAI API key (pay-as-you-go for structured output calls)
- Sample PDF resumes for testing
Build steps
Set up the project and parser schema
Open V0 and create a new project. Use the Connect panel to add Supabase. Create the schema for parsed resumes, extracted candidates, experiences, and education. Set up a private Storage bucket for resume files.
```
// Paste this prompt into V0's AI chat:
// Build a resume parser. Create a Supabase schema with:
// 1. parsed_resumes: id (uuid PK), uploader_id (uuid FK), original_file_url (text), original_filename (text), parsed_data (jsonb), confidence_score (numeric), status (text check uploading/parsing/completed/failed), error_message (text nullable), created_at (timestamptz)
// 2. extracted_candidates: id (uuid PK), parsed_resume_id (uuid FK unique), full_name (text), email (text), phone (text), location (text), summary (text), total_experience_years (numeric), skills (text[]), created_at (timestamptz)
// 3. extracted_experiences: id (uuid PK), candidate_id (uuid FK), company (text), title (text), start_date (text), end_date (text), description (text), position (integer)
// 4. extracted_education: id (uuid PK), candidate_id (uuid FK), institution (text), degree (text), field (text), year (text)
// Create a private Supabase Storage bucket 'resumes'.
// RLS: authenticated users can CRUD their own parsed_resumes.
// Generate SQL migration and TypeScript types.
```
Expected result: Supabase is connected with parser tables and a private resumes Storage bucket. RLS policies protect uploaded files and parsed data.
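V0 and Supabase generate the actual TypeScript types for you; as a reference point, the two core tables map to shapes roughly like this (a sketch only — generated names and structure will differ):

```typescript
// Hypothetical shapes for the two core tables; the types Supabase
// generates will be nested differently but carry the same columns.
export type ParsedResumeStatus = 'uploading' | 'parsing' | 'completed' | 'failed'

export interface ParsedResume {
  id: string
  uploader_id: string
  original_file_url: string
  original_filename: string
  parsed_data: Record<string, unknown> | null
  confidence_score: number | null
  status: ParsedResumeStatus
  error_message: string | null
  created_at: string
}

export interface ExtractedCandidate {
  id: string
  parsed_resume_id: string
  full_name: string
  email: string
  phone: string
  location: string
  summary: string
  total_experience_years: number
  skills: string[]
  created_at: string
}
```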
Build the upload interface with drag-and-drop
Create the upload page with a drag-and-drop zone that accepts PDF files. The upload flow stores the file in Supabase Storage and creates a parsed_resumes record with status 'uploading', then triggers extraction.
```
// Paste this prompt into V0's AI chat:
// Build a resume upload page at app/parser/page.tsx.
// Requirements:
// - Drag-and-drop upload zone that accepts PDF files only (max 5MB)
// - Visual feedback: dashed border on drag-over, file icon, "Drop your resume here" text
// - On file drop/select:
//   - Show filename and file size
//   - Show Progress bar during upload
//   - Upload to Supabase Storage 'resumes' private bucket
//   - Create parsed_resumes record with status 'uploading'
//   - Call /api/parser/extract to trigger AI extraction
//   - Redirect to /parser/[id] when extraction starts
// - Below the upload zone: Table of previously parsed resumes
//   - Columns: filename, status Badge (uploading=gray, parsing=yellow, completed=green, failed=red), confidence score, parsed date
//   - Each row links to /parser/[id]
// - Use shadcn/ui Card for upload zone, Progress, Badge, Table, Skeleton
// - 'use client' for drag-and-drop and file handling
```
Expected result: A drag-and-drop upload zone with progress indicator. Uploaded resumes appear in a Table below with status Badges and confidence scores.
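The exact component V0 generates will vary; a minimal sketch of the client-side upload flow it needs to implement, assuming a browser Supabase client (the public env var names are the usual convention, not confirmed by the source) and the /api/parser/extract route built in the next step:

```typescript
import { createClient } from '@supabase/supabase-js'

// Browser client; env var names assumed — match whatever your Vars tab uses.
const supabase = createClient(
  process.env.NEXT_PUBLIC_SUPABASE_URL!,
  process.env.NEXT_PUBLIC_SUPABASE_ANON_KEY!
)

// Called from the 'use client' upload component on file drop/select.
export async function uploadResume(file: File, uploaderId: string) {
  if (file.type !== 'application/pdf') throw new Error('PDF files only')
  if (file.size > 5 * 1024 * 1024) throw new Error('Max file size is 5MB')

  // 1. Store the PDF in the private 'resumes' bucket.
  const filePath = `${uploaderId}/${Date.now()}-${file.name}`
  const { error: uploadError } = await supabase.storage
    .from('resumes')
    .upload(filePath, file)
  if (uploadError) throw uploadError

  // 2. Create the tracking record with status 'uploading'.
  const { data: record, error: insertError } = await supabase
    .from('parsed_resumes')
    .insert({
      uploader_id: uploaderId,
      original_file_url: filePath,
      original_filename: file.name,
      status: 'uploading',
    })
    .select()
    .single()
  if (insertError) throw insertError

  // 3. Kick off extraction; the result page then polls the status column.
  await fetch('/api/parser/extract', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ resume_id: record.id, file_path: filePath }),
  })

  return record.id
}
```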
Create the AI extraction API route
Build the extraction endpoint that reads the uploaded PDF, extracts its text, sends it to OpenAI with a strict JSON schema, and stores the structured result. It uses OpenAI's structured output mode for guaranteed valid JSON. Note that a PDF is binary, so the route needs a text-extraction step (pdf-parse is one option) before the OpenAI call.
```typescript
import { NextRequest, NextResponse } from 'next/server'
import { createClient } from '@supabase/supabase-js'
// pdf-parse is one option for server-side PDF text extraction;
// swap in your preferred extractor.
import pdfParse from 'pdf-parse'

export const maxDuration = 60

const supabase = createClient(
  process.env.SUPABASE_URL!,
  process.env.SUPABASE_SERVICE_ROLE_KEY!
)

export async function POST(req: NextRequest) {
  const { resume_id, file_path } = await req.json()

  await supabase
    .from('parsed_resumes')
    .update({ status: 'parsing' })
    .eq('id', resume_id)

  const { data: fileData } = await supabase.storage
    .from('resumes')
    .download(file_path)

  if (!fileData) {
    await supabase.from('parsed_resumes').update({
      status: 'failed',
      error_message: 'File not found',
    }).eq('id', resume_id)
    return NextResponse.json({ error: 'File not found' }, { status: 404 })
  }

  // PDFs are binary, so extract text rather than reading raw bytes.
  const buffer = Buffer.from(await fileData.arrayBuffer())
  const { text } = await pdfParse(buffer)

  const response = await fetch('https://api.openai.com/v1/chat/completions', {
    method: 'POST',
    headers: {
      Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
      'Content-Type': 'application/json',
    },
    body: JSON.stringify({
      model: 'gpt-4o',
      messages: [
        {
          role: 'system',
          content: 'Extract structured resume data from the following text. Be precise with dates and job titles.',
        },
        { role: 'user', content: text },
      ],
      response_format: {
        type: 'json_schema',
        json_schema: {
          name: 'resume_extraction',
          strict: true,
          schema: {
            type: 'object',
            properties: {
              full_name: { type: 'string' },
              email: { type: 'string' },
              phone: { type: 'string' },
              location: { type: 'string' },
              summary: { type: 'string' },
              total_experience_years: { type: 'number' },
              skills: { type: 'array', items: { type: 'string' } },
              experiences: {
                type: 'array',
                items: {
                  type: 'object',
                  properties: {
                    company: { type: 'string' },
                    title: { type: 'string' },
                    start_date: { type: 'string' },
                    end_date: { type: 'string' },
                    description: { type: 'string' },
                  },
                  required: ['company', 'title', 'start_date', 'end_date', 'description'],
                  additionalProperties: false,
                },
              },
              education: {
                type: 'array',
                items: {
                  type: 'object',
                  properties: {
                    institution: { type: 'string' },
                    degree: { type: 'string' },
                    field: { type: 'string' },
                    year: { type: 'string' },
                  },
                  required: ['institution', 'degree', 'field', 'year'],
                  additionalProperties: false,
                },
              },
            },
            required: ['full_name', 'email', 'phone', 'location', 'summary', 'total_experience_years', 'skills', 'experiences', 'education'],
            additionalProperties: false,
          },
        },
      },
    }),
  })

  // Guard against API errors before parsing the structured response.
  if (!response.ok) {
    await supabase.from('parsed_resumes').update({
      status: 'failed',
      error_message: `OpenAI request failed (${response.status})`,
    }).eq('id', resume_id)
    return NextResponse.json({ error: 'Extraction failed' }, { status: 502 })
  }

  const result = await response.json()
  const parsed = JSON.parse(result.choices[0].message.content)

  const { data: candidate } = await supabase
    .from('extracted_candidates')
    .insert({
      parsed_resume_id: resume_id,
      full_name: parsed.full_name,
      email: parsed.email,
      phone: parsed.phone,
      location: parsed.location,
      summary: parsed.summary,
      total_experience_years: parsed.total_experience_years,
      skills: parsed.skills,
    })
    .select()
    .single()

  if (candidate && parsed.experiences) {
    await supabase.from('extracted_experiences').insert(
      parsed.experiences.map((exp: any, i: number) => ({
        candidate_id: candidate.id,
        ...exp,
        position: i + 1,
      }))
    )
  }

  if (candidate && parsed.education) {
    await supabase.from('extracted_education').insert(
      parsed.education.map((edu: any) => ({
        candidate_id: candidate.id,
        ...edu,
      }))
    )
  }

  await supabase.from('parsed_resumes').update({
    status: 'completed',
    parsed_data: parsed,
    confidence_score: 0.85, // placeholder heuristic; replace with real scoring
  }).eq('id', resume_id)

  return NextResponse.json({ success: true, candidate_id: candidate?.id })
}
```
Expected result: The extraction API reads the uploaded PDF, sends text to OpenAI with structured output schema, and stores extracted data in normalized tables.
Build the parsed result viewer with editable fields
Create the result page showing extracted data with editable fields for manual corrections. Each field has a confidence indicator, and users can fix any extraction errors before finalizing.
```
// Paste this prompt into V0's AI chat:
// Build a parsed result page at app/parser/[id]/page.tsx.
// Requirements:
// - Fetch the parsed_resume with extracted_candidates, experiences, and education
// - If status='parsing', show Skeleton loading with "AI is extracting data..." message
// - If status='completed', show editable result:
//   - Personal Info Card: Input fields for name, email, phone, location (pre-filled with extracted data)
//   - Summary Textarea (pre-filled)
//   - Skills: Badge list with X to remove, Input to add new skills
//   - Experiences: Cards for each with editable company, title, dates, description Inputs
//   - Education: Cards for each with editable institution, degree, field, year
// - Badge next to each field showing confidence (high=green, medium=yellow, low=red)
// - "Save Corrections" Button calls Server Action updateParsedField()
// - "Export JSON" Button downloads the extracted data as a JSON file
// - If status='failed', show error message with AlertDialog to retry
// - Use shadcn/ui Card, Input, Badge, Textarea, Separator, Skeleton, Toast
```
Pro tip: Use V0's Vars tab for storing OPENAI_API_KEY without the NEXT_PUBLIC_ prefix — it is a secret key for server-side extraction only.
Expected result: A result page showing extracted resume data with editable fields, confidence Badges, and save/export functionality. AI-extracted values are pre-filled and ready for human review.
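V0 wires up the Server Action itself; a minimal sketch of what updateParsedField could look like (the action name comes from the prompt above, the editable-field whitelist is an assumption):

```typescript
'use server'

import { createClient } from '@supabase/supabase-js'

const supabase = createClient(
  process.env.SUPABASE_URL!,
  process.env.SUPABASE_SERVICE_ROLE_KEY!
)

// Candidate columns a reviewer may correct (assumed set).
const EDITABLE_FIELDS = new Set([
  'full_name', 'email', 'phone', 'location', 'summary',
])

export async function updateParsedField(
  candidateId: string,
  field: string,
  value: string
) {
  // Reject anything outside the whitelist before touching the database.
  if (!EDITABLE_FIELDS.has(field)) {
    return { error: `Field '${field}' is not editable` }
  }

  const { error } = await supabase
    .from('extracted_candidates')
    .update({ [field]: value })
    .eq('id', candidateId)

  return { error: error?.message ?? null }
}
```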
Complete code
```typescript
import { NextRequest, NextResponse } from 'next/server'
import { createClient } from '@supabase/supabase-js'
// pdf-parse is one option for PDF text extraction; swap as needed.
import pdfParse from 'pdf-parse'

export const maxDuration = 60

const supabase = createClient(
  process.env.SUPABASE_URL!,
  process.env.SUPABASE_SERVICE_ROLE_KEY!
)

export async function POST(req: NextRequest) {
  const { resume_id, file_path } = await req.json()

  await supabase
    .from('parsed_resumes')
    .update({ status: 'parsing' })
    .eq('id', resume_id)

  const { data: file } = await supabase.storage
    .from('resumes')
    .download(file_path)

  if (!file) {
    await supabase.from('parsed_resumes').update({
      status: 'failed',
      error_message: 'File download failed',
    }).eq('id', resume_id)
    return NextResponse.json({ error: 'File not found' }, { status: 404 })
  }

  // Extract text from the binary PDF before calling OpenAI.
  const buffer = Buffer.from(await file.arrayBuffer())
  const { text } = await pdfParse(buffer)

  const res = await fetch('https://api.openai.com/v1/chat/completions', {
    method: 'POST',
    headers: {
      Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
      'Content-Type': 'application/json',
    },
    body: JSON.stringify({
      model: 'gpt-4o',
      messages: [
        { role: 'system', content: 'Extract structured data from this resume text.' },
        { role: 'user', content: text },
      ],
      response_format: {
        type: 'json_schema',
        json_schema: {
          name: 'resume',
          strict: true,
          schema: {
            type: 'object',
            properties: {
              full_name: { type: 'string' },
              email: { type: 'string' },
              phone: { type: 'string' },
              skills: { type: 'array', items: { type: 'string' } },
            },
            required: ['full_name', 'email', 'phone', 'skills'],
            additionalProperties: false,
          },
        },
      },
    }),
  })

  const data = await res.json()
  const parsed = JSON.parse(data.choices[0].message.content)

  await supabase.from('parsed_resumes').update({
    status: 'completed',
    parsed_data: parsed,
  }).eq('id', resume_id)

  return NextResponse.json({ success: true })
}
```
Customization ideas
Add batch processing
Upload multiple resumes at once and process them in parallel, showing a progress dashboard with individual file status indicators.
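Batch upload can reuse the single-file flow; a sketch assuming the uploadResume helper from the upload step (the import path is hypothetical):

```typescript
// Assumes the uploadResume helper sketched in the upload step.
import { uploadResume } from './upload-resume'

// Start extraction for several files in parallel and collect
// per-file status for a progress dashboard.
export async function uploadBatch(files: File[], uploaderId: string) {
  const results = await Promise.allSettled(
    files.map((file) => uploadResume(file, uploaderId))
  )

  return results.map((result, i) => ({
    filename: files[i].name,
    resumeId: result.status === 'fulfilled' ? result.value : null,
    status: result.status === 'fulfilled' ? 'started' : 'failed',
  }))
}
```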
Build skill matching against job requirements
Compare extracted skills against a job description to generate a match percentage and highlight missing qualifications.
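One simple way to score this, treating both sides as plain skill lists (an illustration, not a full matching engine):

```typescript
// Naive skill match: case-insensitive exact overlap between the
// candidate's extracted skills and the job's required skills.
function skillMatch(candidateSkills: string[], requiredSkills: string[]) {
  const have = new Set(candidateSkills.map((s) => s.trim().toLowerCase()))
  const missing = requiredSkills.filter(
    (s) => !have.has(s.trim().toLowerCase())
  )
  const matchPercent = requiredSkills.length
    ? Math.round(
        ((requiredSkills.length - missing.length) / requiredSkills.length) * 100
      )
    : 0
  return { matchPercent, missing }
}

// Example: skillMatch(['React', 'TypeScript'], ['react', 'SQL'])
// -> { matchPercent: 50, missing: ['SQL'] }
```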
Add DOCX support
Extend the parser to handle DOCX files by extracting text using a server-side DOCX parser library before sending to OpenAI.
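mammoth is one such library; a minimal sketch of a text-extraction branch that routes by file type before the OpenAI call (assumes the pdf-parse fallback from the extraction route):

```typescript
import mammoth from 'mammoth'
import pdfParse from 'pdf-parse'

// Pick an extractor based on the filename, then return plain text.
async function extractText(file: Blob, filename: string): Promise<string> {
  const buffer = Buffer.from(await file.arrayBuffer())

  if (filename.toLowerCase().endsWith('.docx')) {
    const { value } = await mammoth.extractRawText({ buffer })
    return value
  }

  // Fall back to PDF extraction for everything else.
  const { text } = await pdfParse(buffer)
  return text
}
```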
Integrate with recruitment pipeline
Connect parsed resumes to the recruitment platform so extracted data automatically populates candidate profiles when applications are submitted.
Common pitfalls
Pitfall: Using regex to parse resume text instead of AI structured output
How to avoid: Use OpenAI's structured output mode (response_format: json_schema) with a strict schema. This guarantees valid JSON matching your expected structure regardless of the resume format.
Pitfall: Not setting maxDuration for the extraction API route
How to avoid: Set export const maxDuration = 60 in the extract API route to allow sufficient time for the full extraction pipeline.
Pitfall: Exposing OPENAI_API_KEY with NEXT_PUBLIC_ prefix
How to avoid: Store OPENAI_API_KEY in the Vars tab without any prefix. Only call OpenAI from API routes (server-side), never from client components.
Best practices
- Use OpenAI structured output (json_schema) for guaranteed valid JSON extraction results
- Set maxDuration = 60 in the extraction API route for the full file + AI processing pipeline
- Store OPENAI_API_KEY in Vars tab without NEXT_PUBLIC_ prefix — server-only for extraction
- Store uploaded resumes in a private Supabase Storage bucket and serve them with signed URLs so only authenticated users can access files (see the sketch after this list)
- Add a human review step with editable fields so extracted data can be corrected before use
- Use V0's Vars tab for all API keys alongside Supabase credentials from the Connect panel
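For the signed-URL item above, a minimal server-side sketch (the helper name and the 60-second expiry are arbitrary choices):

```typescript
import { createClient } from '@supabase/supabase-js'

const supabase = createClient(
  process.env.SUPABASE_URL!,
  process.env.SUPABASE_SERVICE_ROLE_KEY!
)

// Generate a short-lived signed URL for a file in the private bucket.
export async function getResumeUrl(filePath: string) {
  const { data, error } = await supabase.storage
    .from('resumes')
    .createSignedUrl(filePath, 60) // expires in 60 seconds

  if (error) throw error
  return data.signedUrl
}
```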
AI prompts to try
Copy these prompts to build this project faster.
I'm building a resume parser with Next.js and OpenAI. Write the extraction prompt and JSON schema for OpenAI's structured output mode that extracts: full_name, email, phone, location, summary, total_experience_years, skills array, experiences array (company, title, start_date, end_date, description), and education array (institution, degree, field, year). Include the full response_format configuration object.
Create a drag-and-drop file upload component for PDF resumes. Accept only PDF files up to 5MB. Show drag-over visual state with dashed border. Display filename, file size, and upload Progress. On upload, store in Supabase Storage private bucket and return the file path. Include error handling for invalid file types and oversized files. Mark as 'use client'.
Frequently asked questions
What V0 plan do I need for a resume parser?
V0 Free works for the basic build, but Premium ($20/month) is recommended for prompt queuing to generate the upload interface, extraction pipeline, and result viewer more efficiently.
How accurate is the AI extraction?
OpenAI structured output with gpt-4o achieves high accuracy for standard resume formats — names, emails, and dates are extracted reliably. Complex layouts or unusual formatting may need the human review step for corrections.
How much does OpenAI extraction cost per resume?
A typical resume is 500-1,000 tokens of text. With gpt-4o at $2.50 per million input tokens and $10 per million output tokens, 1,000 input tokens cost about $0.0025 and a roughly 1,000-token structured JSON response costs about $0.01, so parsing runs roughly $0.01-0.03 per resume.
Can it parse DOCX files as well as PDFs?
The base build handles PDF text extraction. For DOCX support, add a server-side DOCX parser library (like mammoth) to extract text before sending to OpenAI.
How do I deploy the resume parser?
Click Share then Publish to Production in V0. Set OPENAI_API_KEY and SUPABASE_SERVICE_ROLE_KEY in the Vars tab without NEXT_PUBLIC_ prefix. File uploads and AI extraction run entirely server-side.
Can RapidDev help build a custom resume parsing system?
Yes. RapidDev has built 600+ apps including HR platforms with AI-powered resume parsing, skill matching, and candidate scoring. Book a free consultation to discuss your specific requirements.
Talk to an Expert
Our team has built 600+ apps. Get personalized help with your project.
Book a free consultation