Prompt Evaluation Rubric
Prompt
Create a detailed evaluation rubric for assessing the quality of AI prompts in the {{domain}} domain.

The rubric should cover these dimensions:
1. Clarity: is the instruction unambiguous?
2. Specificity: are constraints and output format defined?
3. Context: does the prompt provide enough background?
4. Efficiency: minimal tokens for maximum effect?
5. Robustness: does it handle variable-quality inputs?
6. Reusability: can it be templated with variables?

For each dimension:
- Define what a score of 1, 3, and 5 looks like (with examples)
- Provide a one-sentence test: "If you can answer yes to this question, score 4+"
- List the most common mistake that drops the score

End with an overall quality tier: Excellent (25-30), Good (18-24), Needs Work (below 18).

Variables to customize
{{domain}}
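As a quick sketch of the scoring scheme the prompt asks for (six dimensions, each scored 1-5, summed into the tiers above), the tier mapping can be expressed like this. The function name `quality_tier` is hypothetical, purely for illustration:

```python
def quality_tier(scores):
    """Map six 1-5 dimension scores to the rubric's overall tier.

    Dimensions: clarity, specificity, context, efficiency,
    robustness, reusability (maximum total = 30).
    """
    assert len(scores) == 6, "one score per dimension"
    assert all(1 <= s <= 5 for s in scores), "scores are 1-5"
    total = sum(scores)
    if total >= 25:          # 25-30
        return "Excellent"
    if total >= 18:          # 18-24
        return "Good"
    return "Needs Work"      # below 18
```

For example, all-3s totals 18 and lands at "Good", the bottom of that tier.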
Why this prompt works
The 'one-sentence test' per dimension makes scoring fast and consistent across different evaluators.