Prompt Testing Framework
Including adversarial cases and a regression subset catches failures that normal testing misses and makes iteration sustainable.
Design a testing framework for the following prompt:\n\nPrompt under test:\n{{prompt_text}}\n\nExpected use case: {{use_case}}\nTarget model: {{model_name}}\n\nGenerate:\n1. 5 normal test cases (typical inputs with expected outputs)\n2. 3 edge cases (unusual but valid inputs)\n3. 2 adversarial cases (inputs designed to break the prompt)\n4. A scoring rubric (1-5) for evaluating each output on:\n - Accuracy\n - Format compliance\n - Completeness\n - Tone/style match\n5. Pass/fail criteria: what minimum score across all cases means the prompt is production-ready\n6. Regression test subset: the 3 most important cases to re-run after any edit
Variables to customize
Why this prompt works
Including adversarial cases and a regression subset catches failures that normal testing misses and makes iteration sustainable.
Save this prompt to your library
Organize, version, and access your best prompts across ChatGPT, Claude, and Cursor.
Related prompts
Forcing the agent to plan before acting prevents premature execution and wasted steps. Explicit dependency mapping enables parallel execution and catches logical gaps early.
Tool Selection AgentThe ReAct pattern (Reason + Act) creates an explicit reasoning trace that improves tool selection accuracy. The error-handling rule prevents infinite retry loops.
Prompt CompressorExplicitly requiring all functional requirements to be preserved prevents the model from over-compressing and losing critical instructions.
Memory Management AgentExplicit memory read/write instructions create agents that improve over time. Categorization keeps memories organized, and the deduplication rule prevents context bloat.