Test Generation for Cursor
Generate tests for @{{file_to_test}}.

Test framework: {{test_framework}}
Test file location: {{test_file_path}}

Requirements:
- Look at existing tests in the project first (check {{test_directory}}) and match their patterns
- Test the public exports only
- Include these categories:
  * Happy path (normal inputs, expected outputs)
  * Edge cases (empty, null, boundary values)
  * Error cases (invalid input, network failures, timeouts)
- Mock {{dependencies_to_mock}} using the project's existing mock patterns
- Each test name should describe the scenario: "should [expected behavior] when [condition]"

Do NOT test implementation details like private functions or internal state.
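To make the three required categories and the naming convention concrete, here is a minimal sketch of the kind of output the prompt asks for, written in pytest style. The `parse_price` function and its behavior are invented for illustration; in practice the tests would target the public exports of {{file_to_test}}.

```python
def parse_price(text):
    """Hypothetical unit under test: parse a price string like '$19.99' into a float."""
    if text is None or text == "":
        raise ValueError("empty input")
    return float(text.lstrip("$"))


# Happy path: normal input, expected output
def test_should_return_float_when_given_dollar_prefixed_price():
    assert parse_price("$19.99") == 19.99


# Edge case: boundary input the function still accepts
def test_should_handle_price_without_currency_symbol():
    assert parse_price("5") == 5.0


# Error case: invalid input raises a clear error
def test_should_raise_value_error_when_input_is_empty():
    try:
        parse_price("")
        assert False, "expected ValueError"
    except ValueError:
        pass
```

Each test name follows the "should [expected behavior] when [condition]" pattern, which makes failing test output readable without opening the test file.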
Variables to customize
- {{file_to_test}}: the source file to generate tests for
- {{test_framework}}: the project's test framework (e.g. Jest, pytest, JUnit)
- {{test_file_path}}: where the generated test file should be created
- {{test_directory}}: where the project's existing tests live
- {{dependencies_to_mock}}: modules or services the tests should stub out
Why this prompt works
The @-mention gives Cursor the actual file content. Requiring it to check existing test patterns first ensures new tests match the project's style rather than generic defaults.
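The "mock the dependencies, not the unit" requirement can be sketched as follows, assuming a hypothetical `fetch_user` function that depends on an injected network client. The example uses Python's standard `unittest.mock`; a real project would use whatever mock pattern its existing tests already follow.

```python
from unittest import mock


def fetch_user(client, user_id):
    """Hypothetical unit under test: return the user's name, or None on network failure."""
    try:
        return client.get(f"/users/{user_id}")["name"]
    except ConnectionError:
        return None


def test_should_return_name_when_client_succeeds():
    fake = mock.Mock()
    fake.get.return_value = {"name": "Ada"}
    assert fetch_user(fake, 1) == "Ada"


def test_should_return_none_when_network_fails():
    fake = mock.Mock()
    fake.get.side_effect = ConnectionError
    assert fetch_user(fake, 1) is None
```

Because the client is mocked, the error-case test can simulate a network failure deterministically instead of depending on a real service.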
Related prompts
Get thorough code reviews with actionable feedback tailored to your language, framework, and standards.
Context-Aware Code Completion: Providing the surrounding code and project context lets the model match existing patterns exactly. The constraint against modifying existing code prevents unwanted side effects.
Inline Code Suggestion: Constraining suggestions to match existing style and scope produces insertions that feel native to the codebase. The "no explanation" rule mimics real inline completion behavior.
Code Explanation: The audience level parameter adjusts complexity automatically. Requiring a usage example ensures the explanation is practical, not just theoretical.