
Prompt Evaluation Rubric


Prompt
Create a detailed evaluation rubric for assessing the quality of AI prompts in the {{domain}} domain.

The rubric should cover these dimensions:
1. Clarity: is the instruction unambiguous?
2. Specificity: are constraints and output format defined?
3. Context: does the prompt provide enough background?
4. Efficiency: minimal tokens for maximum effect?
5. Robustness: does it handle variable-quality inputs?
6. Reusability: can it be templated with variables?

For each dimension:
- Define what a score of 1, 3, and 5 looks like (with examples)
- Provide a one-sentence test: "If you can answer yes to this question, score 4+"
- List the most common mistake that drops the score

End with an overall quality tier: Excellent (25-30), Good (18-24), Needs Work (below 18).

Variables to customize

{{domain}}

Why this prompt works

The 'one-sentence test' per dimension makes scoring fast and consistent across different evaluators.
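The tier arithmetic is simple enough to automate once each dimension has been scored. A minimal sketch (the function name and dimension keys are illustrative, not part of the prompt itself) that sums six 1-5 scores and maps the total to the rubric's tiers:

```python
DIMENSIONS = ("clarity", "specificity", "context",
              "efficiency", "robustness", "reusability")

def quality_tier(scores: dict) -> tuple:
    """Map per-dimension scores (1-5 each) to the rubric's overall tier.

    Tiers from the prompt: Excellent (25-30), Good (18-24),
    Needs Work (below 18).
    """
    missing = set(DIMENSIONS) - set(scores)
    if missing:
        raise ValueError(f"missing dimensions: {sorted(missing)}")
    if any(not 1 <= v <= 5 for v in scores.values()):
        raise ValueError("each score must be between 1 and 5")

    total = sum(scores[d] for d in DIMENSIONS)
    if total >= 25:
        return "Excellent", total
    if total >= 18:
        return "Good", total
    return "Needs Work", total

# Example: a strong prompt that loses a point on efficiency
print(quality_tier({"clarity": 5, "specificity": 5, "context": 4,
                    "efficiency": 4, "robustness": 4, "reusability": 4}))
# → ('Excellent', 26)
```

With all dimensions at 3 (the middle anchor), the total is 18, which lands exactly at the bottom of the Good tier, so the thresholds line up with "mostly average" being passable.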
