Test Generation with Edge Cases
Generate tests for this {{language}} code using {{test_framework}}:

```
{{paste your code}}
```

Requirements:
- Cover the happy path with realistic data
- Test edge cases: empty input, null/undefined, boundary values, very large input
- Test error paths: invalid input, network failures, permission errors
- Use descriptive test names that explain the expected behavior: "should return empty array when user has no orders"
- Use the Arrange-Act-Assert pattern
- No mocks unless the dependency is external (database, API, filesystem)

For each test, add a brief comment explaining WHY this case matters — what bug would it catch?
Variables to customize
- {{language}}: the programming language of the code under test
- {{test_framework}}: the test framework to target (e.g., pytest, Jest, JUnit)
- {{paste your code}}: the code you want tests generated for
Why this prompt works
The 'why this case matters' comment forces ChatGPT to justify each test, filtering out redundant cases. The 'no mocks unless external' rule produces tests that actually test behavior.
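To illustrate the style of output the prompt asks for, here is a minimal sketch in Python using framework-agnostic plain asserts (since the test framework is a template variable). The function `total_order_value` is hypothetical, invented for this example, and is not from the page:

```python
# Hypothetical function under test, invented for this illustration.
def total_order_value(orders):
    """Return the sum of each order's 'amount'; reject None input."""
    if orders is None:
        raise ValueError("orders must not be None")
    return sum(order["amount"] for order in orders)


def test_should_return_zero_when_user_has_no_orders():
    # Why this case matters: catches code that assumes at least one order exists.
    orders = []                          # Arrange
    result = total_order_value(orders)   # Act
    assert result == 0                   # Assert


def test_should_raise_value_error_when_orders_is_none():
    # Why this case matters: catches callers passing None instead of an empty list.
    try:
        total_order_value(None)          # Act
        assert False, "expected ValueError"
    except ValueError:
        pass                             # Assert: error path was taken


def test_should_sum_amounts_across_multiple_orders():
    # Why this case matters: catches off-by-one bugs that drop the first or last order.
    orders = [{"amount": 10}, {"amount": 25}]  # Arrange
    assert total_order_value(orders) == 35     # Act + Assert
```

Note that no mocks appear: `total_order_value` has no external dependencies, so under the prompt's "no mocks unless external" rule the tests exercise the real behavior directly.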
What you get when you save this prompt
Your workspace unlocks powerful tools to iterate and improve.
AI Optimization
One-click improvement with structure analysis and pattern suggestions.
Version History
Track every edit. Compare versions side-by-side with word-level diffs.
Folders & Tags
Organize your library with nested folders, tags, and drag-and-drop.
$ npm i -g @promptingbox/mcp
Use Everywhere
Access prompts from Claude, Cursor, ChatGPT & more via MCP integration.