Test Generation
Generate comprehensive tests for the following {{language}} code using {{test_framework}}.

{{code_to_test}}

Test coverage requirements:
1. Happy path: Test the primary expected behavior with typical inputs
2. Edge cases: Empty inputs, boundary values, maximum/minimum values
3. Error cases: Invalid inputs, null/undefined values, type mismatches
4. Integration points: Test interactions with dependencies (mock external calls)

For each test:
- Use descriptive test names that explain the scenario: "should [expected behavior] when [condition]"
- Follow the Arrange-Act-Assert pattern
- Include at least one assertion per test
- Add a brief comment explaining why each edge case matters

Generate at least {{min_tests}} tests.
Variables to customize
- {{language}}: the programming language of the code under test
- {{test_framework}}: the test framework to target (e.g. pytest, Jest, JUnit)
- {{code_to_test}}: the code the generated tests should cover
- {{min_tests}}: the minimum number of tests to generate
Why this prompt works
Categorizing tests by type (happy path, edge, error, integration) ensures comprehensive coverage. The AAA pattern and naming convention produce tests that serve as documentation.
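As a sketch of what output meeting these requirements might look like, here is a small Python example using the standard-library unittest module. The clamp function is hypothetical, invented purely to have something to test; the point is the "should ... when ..." naming, the Arrange-Act-Assert structure, and the comments justifying each edge and error case.

```python
import unittest


def clamp(value, low, high):
    """Constrain value to the inclusive range [low, high]."""
    if low > high:
        raise ValueError("low must not exceed high")
    return max(low, min(value, high))


class TestClamp(unittest.TestCase):
    def test_should_return_value_when_within_range(self):
        # Arrange: a typical in-range input (happy path)
        value, low, high = 5, 0, 10
        # Act
        result = clamp(value, low, high)
        # Assert
        self.assertEqual(result, 5)

    def test_should_return_high_when_value_exceeds_maximum(self):
        # Edge case: clamping at the upper boundary is the
        # function's core guarantee, so it needs direct coverage.
        self.assertEqual(clamp(99, 0, 10), 10)

    def test_should_return_low_when_value_is_below_minimum(self):
        # Edge case: the lower boundary is symmetric but distinct;
        # off-by-one bugs often hide on one side only.
        self.assertEqual(clamp(-3, 0, 10), 0)

    def test_should_raise_value_error_when_range_is_inverted(self):
        # Error case: an inverted range signals a caller bug and
        # should fail loudly rather than return a misleading value.
        with self.assertRaises(ValueError):
            clamp(5, 10, 0)
```

Run with `python -m unittest` to execute the suite. Note how each test name reads as a sentence describing behavior, which is what makes the suite double as documentation.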
Related prompts
Get thorough code reviews with actionable feedback tailored to your language, framework, and standards.
Context-Aware Code Completion: Providing the surrounding code and project context lets the model match existing patterns exactly. The constraint against modifying existing code prevents unwanted side effects.
Inline Code Suggestion: Constraining suggestions to match existing style and scope produces insertions that feel native to the codebase. The "no explanation" rule mimics real inline completion behavior.
Code Explanation: The audience level parameter adjusts complexity automatically. Requiring a usage example ensures the explanation is practical, not just theoretical.