A/B Test Tracker

Prompt
I'm A/B testing two versions of a prompt for {{task_description}}.

Version A:
{{version_a}}

Version B:
{{version_b}}

Evaluation criteria:
{{evaluation_criteria}}

Design an A/B test plan that includes:
1. Sample size recommendation (number of test cases)
2. Test cases that cover normal, edge, and adversarial inputs
3. Scoring rubric for each evaluation criterion (1-5 scale with descriptions)
4. Statistical method for determining a winner
5. A results template I can fill in as I run tests
6. Decision framework: when to pick A, pick B, or iterate further
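For step 4, the statistical method can be as simple as a paired sign test on the rubric scores: score both versions on the same test cases, count which version wins each pair, and check whether the imbalance is likely due to chance. A minimal sketch in Python (the scores below are hypothetical examples, not results from any real test):

```python
import math

def sign_test(wins_a, wins_b):
    """Two-sided sign test on paired A/B comparisons (ties dropped).

    Returns the p-value for the null hypothesis that A and B
    are equally likely to win any given test case.
    """
    n = wins_a + wins_b          # number of non-tied pairs
    k = min(wins_a, wins_b)
    # Exact two-sided tail under Binomial(n, 0.5)
    tail = sum(math.comb(n, i) for i in range(k + 1)) / 2**n
    return min(1.0, 2 * tail)

# Hypothetical 1-5 rubric scores per test case for each version
scores_a = [4, 5, 3, 4, 2, 5, 4, 3, 4, 5]
scores_b = [3, 4, 3, 5, 2, 4, 3, 3, 3, 4]

wins_a = sum(a > b for a, b in zip(scores_a, scores_b))
wins_b = sum(b > a for a, b in zip(scores_a, scores_b))
p = sign_test(wins_a, wins_b)
print(f"A wins {wins_a}, B wins {wins_b}, p = {p:.3f}")
```

With only a handful of test cases the p-value will rarely clear a conventional threshold, which is exactly why the plan's sample-size recommendation matters: more test cases give the comparison real statistical power.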

Variables to customize

- {{task_description}}
- {{version_a}}
- {{version_b}}
- {{evaluation_criteria}}

Why this prompt works

Including adversarial test cases and a decision framework prevents the common mistake of picking a winner based only on happy-path performance.
