Prompt Evaluation Rubric
A structured rubric turns subjective prompt quality into measurable dimensions. This template is useful for teams building prompt libraries: evaluate a prompt before you save it.
Evaluate the following prompt against these quality criteria. Score each 1-5 and explain your rating.
Prompt to evaluate:
"""
{{prompt_to_evaluate}}
"""
Evaluation criteria:
1. **Clarity**: Is the task unambiguous? Could it be misinterpreted?
2. **Specificity**: Are constraints, format, and scope well-defined?
3. **Context**: Does it provide enough background for accurate output?
4. **Structure**: Is it logically organized with clear sections?
5. **Guardrails**: Does it prevent common failure modes (hallucination, off-topic, wrong format)?
For each criterion, provide:
- Score (1-5)
- What works well
- What could be improved
- Suggested rewrite for that aspect
Overall score and top 3 improvements:
Variables to customize
- {{prompt_to_evaluate}} -- the prompt you want scored, pasted between the triple quotes.
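The template above can also be driven programmatically: substitute the prompt under evaluation into the rubric, then average the five criterion scores into an overall score. A minimal Python sketch (the function names and the condensed template string here are illustrative, not part of any existing API):

```python
from statistics import mean

# The five rubric dimensions from the template above.
CRITERIA = ["clarity", "specificity", "context", "structure", "guardrails"]

# Condensed copy of the rubric; the full template text would go here.
RUBRIC_TEMPLATE = """Evaluate the following prompt against these quality \
criteria. Score each 1-5 and explain your rating.

Prompt to evaluate:
\"\"\"
{{prompt_to_evaluate}}
\"\"\"
"""

def render_rubric(prompt_text: str) -> str:
    """Substitute the prompt under evaluation into the template."""
    return RUBRIC_TEMPLATE.replace("{{prompt_to_evaluate}}", prompt_text)

def overall_score(scores: dict[str, int]) -> float:
    """Average the five 1-5 criterion scores into one overall score."""
    for name in CRITERIA:
        if not 1 <= scores[name] <= 5:
            raise ValueError(f"{name} score must be 1-5, got {scores[name]}")
    return round(mean(scores[name] for name in CRITERIA), 1)
```

For example, scores of 4, 3, 5, 4, and 2 across the five criteria yield an overall score of 3.6, which you might record alongside the prompt's version history.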
Why this prompt works
Scoring each dimension separately forces a reviewer to justify ratings instead of giving a single gut-feel verdict, and the per-criterion rewrite suggestions turn the evaluation into an actionable improvement plan rather than just a grade.