RapidDev - Software Development Agency

How to Generate Tests Using Cursor

Generate comprehensive unit tests with Cursor by creating test-focused .cursorrules, using the TDD Super-Prompt pattern in Composer, and referencing source files with @file context. Cursor generates tests that cover edge cases, mock dependencies, and target full coverage when given the right rules and prompts.

What you'll learn

  • How to create .cursorrules that enforce test-first code generation
  • How to use the TDD Super-Prompt pattern for automated test loops
  • How to reference source code with @file for accurate test generation
  • How to configure Cursor to generate tests targeting full coverage
Intermediate · 9 min read · 15-20 min · Cursor Pro+, Jest/Vitest, any language · March 2026 · RapidDev Engineering Team

Why Cursor Generates Better Tests with the Right Setup

Cursor can generate unit tests, but without specific guidance it produces shallow tests that only cover the happy path. This tutorial shows you how to configure Cursor to generate thorough test suites targeting complete coverage. You will learn the TDD Super-Prompt pattern, create test-specific rules, and use Composer's agent mode to run tests and iterate until they all pass. This approach works with Jest, Vitest, pytest, or any test runner.

Prerequisites

  • Cursor installed (Pro recommended for agent mode)
  • A test runner configured in your project (Jest, Vitest, or similar)
  • At least one source file with functions to test
  • Basic familiarity with unit testing concepts

Step-by-step guide

1. Add test generation rules to .cursorrules

Create rules that tell Cursor how to structure tests in your project. Specify your test runner, naming conventions, and coverage expectations. These rules ensure every test Cursor generates follows your team's patterns consistently.

.cursorrules
# .cursorrules (add to existing file)

## Testing Rules
- Test runner: Vitest (or Jest)
- Test file naming: [filename].test.ts next to the source file
- Test structure: describe block per function, it blocks per scenario
- Coverage targets: statements 100%, branches 100%, functions 100%, lines 100%
- Always test: happy path, edge cases, error cases, boundary values
- Mock external dependencies (HTTP, database, file system); never real I/O
- Use factory functions for test data, not inline object literals
- Each test must have exactly one assertion focus (one expect per it block)
- Include TypeScript types for all mock data

Pro tip: Add a line: 'When writing tests, first list all test cases as comments before implementing them.' This forces Cursor to plan coverage before writing code.

Expected result: Cursor will follow these testing conventions in all generated test files.
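Rules only guide generation; the coverage targets can also be enforced mechanically at run time. Below is a hypothetical vitest.config.ts sketch, assuming Vitest 1.x or later with the @vitest/coverage-v8 package installed (Jest users would set coverageThreshold in jest.config instead):

```typescript
// vitest.config.ts — hypothetical sketch; assumes @vitest/coverage-v8 is installed.
// `npx vitest run --coverage` will fail if coverage drops below these thresholds,
// turning the .cursorrules targets into a hard gate.
import { defineConfig } from 'vitest/config';

export default defineConfig({
  test: {
    coverage: {
      provider: 'v8',
      thresholds: {
        statements: 100,
        branches: 100,
        functions: 100,
        lines: 100,
      },
    },
  },
});
```

With this in place, the agent's test-fix loop gets a failing exit code whenever generated tests leave a branch uncovered, instead of relying on the model to self-report gaps.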

2. Create an auto-attaching test rule

Create a .cursor/rules/testing.mdc file that auto-attaches when Cursor encounters test files. This rule includes your test patterns and the assertion library API so Cursor generates valid test syntax without hallucinating methods.

.cursor/rules/testing.mdc
---
description: Unit test generation standards
globs: "**/*.test.ts, **/*.test.tsx, **/*.spec.ts, **/__tests__/**"
alwaysApply: false
---

- Import from vitest: describe, it, expect, vi, beforeEach, afterEach
- Use vi.mock() for module mocking, vi.fn() for function mocks
- Reset all mocks in beforeEach with vi.clearAllMocks()
- Test error scenarios with expect(() => fn()).toThrow()
- Test async errors with await expect(fn()).rejects.toThrow()
- Group tests: describe('functionName', () => { ... })
- Name pattern: it('should [expected behavior] when [condition]')
- Always test with undefined, null, empty string, 0, and negative numbers

Expected result: Test rules auto-attach whenever you are editing or generating test files.

3. Generate tests using the TDD Super-Prompt in Composer

Open Composer with Cmd+I and use the TDD Super-Prompt pattern. Reference the source file you want to test with @file. This prompt asks Cursor to write tests first, then optionally implement code, then run tests and iterate. The key is telling Cursor to write failing tests before any implementation.

Cursor Composer prompt
// Prompt to type in Cursor Composer (Cmd+I):
// @src/utils/orderCalculator.ts
// Write a comprehensive test suite for this file.
// 1. First, list ALL test cases as comments (happy path, edge cases, errors)
// 2. Then implement each test case
// 3. Target 100% branch coverage — test every if/else path
// 4. Mock any external imports
// 5. Include tests for: null inputs, empty arrays, negative numbers,
//    maximum values, and type edge cases
// 6. Run the tests with: npx vitest run src/utils/orderCalculator.test.ts
// 7. Fix any failures and re-run until all pass

Pro tip: Enable YOLO mode in Cursor Settings to let the agent automatically run test commands without asking permission each time. This creates a fast test-fix loop.

Expected result: Cursor generates a complete test file, runs the tests, and iterates until all tests pass.

4. Use Cmd+K for quick single-function test generation

For generating tests for a single function, select the function in your editor, press Cmd+K, and ask for tests inline. This is faster than Composer for individual functions. Cursor will generate a test file or add tests to an existing test file.

Cmd+K inline prompt
// Select a function in your editor, then press Cmd+K and type:
// Generate a test suite for this function covering:
// - Happy path with valid inputs
// - Edge case: empty input, null, undefined
// - Error case: invalid types, out-of-range values
// - Boundary: zero, negative, MAX_SAFE_INTEGER
// Use vitest, mock external deps with vi.mock()

Pro tip: After generating tests with Cmd+K, open Chat (Cmd+L) and ask: 'What edge cases am I missing in this test file?' Cursor will analyze coverage gaps.

Expected result: A test suite is generated for the selected function with comprehensive case coverage.

5. Ask Cursor to analyze coverage gaps

After generating initial tests, use Chat (Cmd+L) to ask Cursor to identify missing test cases. Reference both the source file and the test file so Cursor can compare what is tested against what exists. This step catches branches and edge cases the initial generation missed.

Cursor Chat prompt
// Prompt to type in Cursor Chat (Cmd+L):
// @src/utils/orderCalculator.ts @src/utils/orderCalculator.test.ts
// Analyze the test coverage for this source file.
// List every code branch, conditional, and error path that is NOT
// currently tested. For each gap, provide the exact test case I should
// add, including the test name and assertion.

Expected result: Cursor lists specific untested branches with ready-to-copy test cases for each gap.
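To make "every code branch" concrete, here is a small hypothetical function (shippingCost is invented for illustration, not part of the tutorial's codebase). A gap analysis should flag each of the five paths below that lacks a matching test:

```typescript
// Hypothetical example: one function, five distinct execution paths.
// A coverage-gap analysis should report any path with no matching test.
export function shippingCost(subtotalCents: number, express: boolean): number {
  if (subtotalCents < 0) {
    throw new Error('Invalid subtotal'); // path 1: error branch
  }
  if (subtotalCents >= 10000) {
    return express ? 500 : 0; // paths 2 and 3: large-order rates
  }
  return express ? 1500 : 700; // paths 4 and 5: standard rates
}
```

A suite that only checks shippingCost(5000, false) exercises one path out of five; a good gap-analysis response would list the other four as named, ready-to-copy test cases.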

6. Generate test data factories for reuse

Use Composer to generate factory functions that create test data. This eliminates duplicated object literals across test files and makes tests more maintainable. Reference your TypeScript types so the factories produce correctly typed data.

Cursor Composer prompt
// Prompt to type in Cursor Composer (Cmd+I):
// @src/types/order.ts @src/types/user.ts
// Generate a test factory file at src/__tests__/factories.ts
// Requirements:
// - Factory function for each type (createMockUser, createMockOrder)
// - Accept partial overrides: createMockUser({ name: 'Custom' })
// - Use realistic default values, not 'test' or 'foo'
// - Auto-increment IDs across calls
// - Include a reset function for beforeEach blocks

Pro tip: Commit your test factories to Git. Once they exist, Cursor will import from them instead of creating inline test data in new test files.

Expected result: A reusable factory file that all test files can import for consistent mock data generation.
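A sketch of what such a generated factory might look like. The Order and OrderItem shapes are assumed here to match the worked example in this tutorial, not read from a real src/types/order.ts:

```typescript
// Hypothetical factory sketch. The Order/OrderItem shapes are assumed
// for illustration; adapt them to your actual src/types definitions.
export interface OrderItem {
  name: string;
  priceCents: number;
  quantity: number;
}

export interface Order {
  id: number;
  items: OrderItem[];
}

let nextOrderId = 1;

// Partial overrides let each test state only the fields it cares about.
export function createMockOrder(overrides: Partial<Order> = {}): Order {
  return {
    id: nextOrderId++, // auto-incrementing ID across calls
    items: [{ name: 'Standard Widget', priceCents: 1999, quantity: 1 }],
    ...overrides,
  };
}

// Call from beforeEach so IDs are deterministic in every test run.
export function resetFactories(): void {
  nextOrderId = 1;
}
```

The spread of overrides after the defaults is what makes calls like createMockOrder({ items: [] }) work: every field has a realistic default, and the test replaces only what matters to its scenario.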

Complete working example

src/utils/orderCalculator.test.ts
import { describe, it, expect } from 'vitest';
import {
  calculateOrderTotal,
  applyDiscount,
  validateOrder,
} from './orderCalculator';
import { createMockOrder } from '../__tests__/factories';

describe('calculateOrderTotal', () => {
  it('should return sum of item prices times quantities', () => {
    const order = createMockOrder({
      items: [
        { name: 'Widget', priceCents: 1000, quantity: 2 },
        { name: 'Gadget', priceCents: 2500, quantity: 1 },
      ],
    });
    expect(calculateOrderTotal(order)).toBe(4500);
  });

  it('should return 0 for an order with no items', () => {
    const order = createMockOrder({ items: [] });
    expect(calculateOrderTotal(order)).toBe(0);
  });

  it('should handle items with quantity 0', () => {
    const order = createMockOrder({
      items: [{ name: 'Widget', priceCents: 1000, quantity: 0 }],
    });
    expect(calculateOrderTotal(order)).toBe(0);
  });

  it('should throw for negative prices', () => {
    const order = createMockOrder({
      items: [{ name: 'Widget', priceCents: -100, quantity: 1 }],
    });
    expect(() => calculateOrderTotal(order)).toThrow('Invalid price');
  });
});

describe('applyDiscount', () => {
  it('should apply percentage discount correctly', () => {
    expect(applyDiscount(10000, { type: 'percent', value: 10 })).toBe(9000);
  });

  it('should apply fixed discount correctly', () => {
    expect(applyDiscount(10000, { type: 'fixed', value: 2500 })).toBe(7500);
  });

  it('should never return negative total', () => {
    expect(applyDiscount(1000, { type: 'fixed', value: 5000 })).toBe(0);
  });

  it('should clamp percentage to 100', () => {
    expect(applyDiscount(10000, { type: 'percent', value: 150 })).toBe(0);
  });

  it('should handle 0 discount', () => {
    expect(applyDiscount(10000, { type: 'percent', value: 0 })).toBe(10000);
  });
});

describe('validateOrder', () => {
  it('should return valid for a correct order', () => {
    const order = createMockOrder();
    expect(validateOrder(order).valid).toBe(true);
  });

  it('should reject orders with no items', () => {
    const order = createMockOrder({ items: [] });
    const result = validateOrder(order);
    expect(result.valid).toBe(false);
    expect(result.errors).toContain('Order must have at least one item');
  });
});
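The tutorial never shows orderCalculator.ts itself, so the following is one hypothetical implementation that would satisfy the suite above. Your real module will differ, but notice how each test pins down a specific branch (the throw, the clamp to 100%, the floor at zero):

```typescript
// Hypothetical orderCalculator.ts, reconstructed to match the test suite.
// Every branch here corresponds to at least one test case above.
export interface OrderItem {
  name: string;
  priceCents: number;
  quantity: number;
}

export interface Order {
  items: OrderItem[];
}

export type Discount =
  | { type: 'percent'; value: number }
  | { type: 'fixed'; value: number };

export function calculateOrderTotal(order: Order): number {
  return order.items.reduce((sum, item) => {
    if (item.priceCents < 0) throw new Error('Invalid price'); // error branch
    return sum + item.priceCents * item.quantity;
  }, 0);
}

export function applyDiscount(totalCents: number, discount: Discount): number {
  if (discount.type === 'percent') {
    const pct = Math.min(discount.value, 100); // clamp percentage to 100
    return Math.round(totalCents * (1 - pct / 100));
  }
  return Math.max(totalCents - discount.value, 0); // never negative
}

export function validateOrder(order: Order): { valid: boolean; errors: string[] } {
  const errors: string[] = [];
  if (order.items.length === 0) {
    errors.push('Order must have at least one item');
  }
  return { valid: errors.length === 0, errors };
}
```

This is the payoff of the TDD ordering: once the tests exist and are reviewed, any implementation that passes them is interchangeable, and Cursor can be told "do not modify the tests" while it iterates on the module.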

Common mistakes when generating tests with Cursor

Mistake: Not referencing the source file when generating tests

How to avoid: Always use @src/path/to/file.ts in your test generation prompt so Cursor reads the actual implementation.

Mistake: Generating tests and implementation in the same prompt

How to avoid: Use the TDD pattern: generate tests first, review them, then generate or write the implementation separately.

Mistake: Not mocking external dependencies

How to avoid: Add a .cursorrules entry requiring all external I/O to be mocked. Specify your mocking library (vi.mock, jest.mock) in the rules.

Mistake: Accepting Cursor's first test output without coverage analysis

How to avoid: Follow up with a coverage gap analysis prompt referencing both source and test files to identify missing cases.

Best practices

  • Use the TDD Super-Prompt pattern: tests first, then implementation, then run and iterate
  • Reference both source and test files with @file when asking Cursor to improve existing tests
  • Create test data factories and commit them so Cursor reuses them across test files
  • Enable YOLO mode for test runners to create automated test-fix loops in Composer
  • Ask Cursor to list all test cases as comments before implementing them to ensure coverage planning
  • Start a fresh Composer session for each source file's test suite to prevent context pollution
  • Add test-specific .cursor/rules/ with auto-attaching globs for test file patterns
  • Run coverage reports after generation and feed uncovered lines back to Cursor as follow-up prompts

Still stuck?

Copy one of these prompts to get a personalized, step-by-step explanation.

ChatGPT Prompt

I have this TypeScript function: [paste function]. Generate a comprehensive Vitest test suite covering: happy path, edge cases (null, undefined, empty, zero, negative), error handling, and boundary values. Use describe/it blocks with descriptive names. Mock all external dependencies.

Cursor Prompt

@src/utils/orderCalculator.ts Write a comprehensive Vitest test suite for this file. First list ALL test cases as comments, then implement each one. Target 100% branch coverage. Test: happy path, null/undefined inputs, empty arrays, negative numbers, boundary values. Mock external imports with vi.mock(). Run tests with npx vitest run and fix failures.

Frequently asked questions

Can Cursor automatically generate tests for an entire project?

Yes, but do it file by file in separate Composer sessions for best results. Use the prompt pattern '@src/path/file.ts Write tests for this file' and let the agent run and iterate. Processing an entire project in one session leads to context overflow and lower quality.

How do I make Cursor generate tests before writing implementation code?

Use the TDD Super-Prompt: explicitly state 'Write failing tests first. Do not write implementation code yet.' After reviewing the tests, start a new prompt to implement code that passes them. Tell Cursor 'Do not modify the tests.'

Why does Cursor generate tests that import functions incorrectly?

This happens when you do not reference the source file with @file. Cursor guesses at exports and file paths. Always include @src/path/to/file.ts in your prompt so Cursor reads the actual module exports.

Can Cursor run my tests and fix failures automatically?

Yes, in Composer Agent mode (Cmd+I). Enable YOLO mode in settings to let Cursor run terminal commands automatically. Prompt it to run tests and iterate until all pass. The agent will read test output, identify failures, and fix them.

How do I get Cursor to mock specific libraries like Axios or Prisma?

Include your mocking pattern in .cursorrules with an example: 'Mock Axios with vi.mock("axios") and return vi.fn() for each method.' Reference the actual import from your source file so Cursor knows exactly which library to mock.

Does Cursor support generating tests in Python or Go?

Yes. Cursor generates tests in any language. Adjust your .cursorrules to specify the test runner (pytest, go test) and assertion patterns. The TDD Super-Prompt pattern works the same way regardless of language.
