Prompt Evaluation Rubric
Evaluate the following prompt against these quality criteria. Score each 1-5 and explain your rating.
Prompt to evaluate:
"""
{{prompt_to_evaluate}}
"""
Evaluation criteria:
1. **Clarity**: Is the task unambiguous? Could it be misinterpreted?
2. **Specificity**: Are constraints, format, and scope well-defined?
3. **Context**: Does it provide enough background for accurate output?
4. **Structure**: Is it logically organized with clear sections?
5. **Guardrails**: Does it prevent common failure modes (hallucination, off-topic, wrong format)?
For each criterion, provide:
- Score (1-5)
- What works well
- What could be improved
- Suggested rewrite for that aspect
Overall score and top 3 improvements:

Variables to customize
- {{prompt_to_evaluate}} -- the prompt you want scored against the rubric.
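The template has a single variable, so rendering it is one string substitution. A minimal sketch (the template string below is abbreviated; `render_rubric` is an illustrative helper, not part of any library):

```python
# Abbreviated copy of the rubric template; the {{prompt_to_evaluate}}
# placeholder is the only substitution this template needs.
RUBRIC_TEMPLATE = '''Evaluate the following prompt against these quality criteria. Score each 1-5 and explain your rating.

Prompt to evaluate:
"""
{{prompt_to_evaluate}}
"""
'''


def render_rubric(prompt_to_evaluate: str, template: str = RUBRIC_TEMPLATE) -> str:
    """Substitute the target prompt into the rubric template."""
    return template.replace("{{prompt_to_evaluate}}", prompt_to_evaluate)


rendered = render_rubric("Summarize this article in three bullet points.")
```

Send `rendered` to whichever model you use as the evaluator; plain `str.replace` avoids surprises that `str.format` would have with the literal braces and quotes in the template.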
Why this prompt works
A structured rubric turns subjective prompt quality into measurable dimensions. This template is useful for teams building prompt libraries -- evaluate before you save.
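Because each dimension gets a numeric 1-5 score, the per-criterion results roll up naturally into the overall score the rubric asks for. A minimal sketch, assuming you have already parsed the evaluator's output into a criterion-to-score mapping (the function and key names are illustrative):

```python
# The five rubric dimensions, matching the criteria in the template.
CRITERIA = ["clarity", "specificity", "context", "structure", "guardrails"]


def overall_score(scores: dict[str, int]) -> float:
    """Average the five criterion scores, validating each is in the 1-5 range."""
    for criterion in CRITERIA:
        score = scores[criterion]  # KeyError here flags a missing criterion
        if not 1 <= score <= 5:
            raise ValueError(f"{criterion} score {score} is outside 1-5")
    return sum(scores[c] for c in CRITERIA) / len(CRITERIA)


print(overall_score({"clarity": 4, "specificity": 3, "context": 5,
                     "structure": 4, "guardrails": 2}))
# (4 + 3 + 5 + 4 + 2) / 5 = 3.6
```

A plain average treats all five dimensions equally; if guardrails matter more for your library, a weighted sum is a one-line change.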
Related prompts
- Forcing the agent to plan before acting prevents premature execution and wasted steps. Explicit dependency mapping enables parallel execution and catches logical gaps early.
- **Tool Selection Agent**: The ReAct pattern (Reason + Act) creates an explicit reasoning trace that improves tool selection accuracy. The error-handling rule prevents infinite retry loops.
- **Prompt Compressor**: Explicitly requiring all functional requirements to be preserved prevents the model from over-compressing and losing critical instructions.
- **Memory Management Agent**: Explicit memory read/write instructions create agents that improve over time. Categorization keeps memories organized, and the deduplication rule prevents context bloat.