Prompt Evaluation Rubric
Create a detailed evaluation rubric for assessing the quality of AI prompts in the {{domain}} domain.

The rubric should cover these dimensions:
1. Clarity: is the instruction unambiguous?
2. Specificity: are constraints and output format defined?
3. Context: does the prompt provide enough background?
4. Efficiency: minimal tokens for maximum effect?
5. Robustness: does it handle variable-quality inputs?
6. Reusability: can it be templated with variables?

For each dimension:
- Define what a score of 1, 3, and 5 looks like (with examples)
- Provide a one-sentence test: "If you can answer yes to this question, score 4+"
- List the most common mistake that drops the score

End with an overall quality tier: Excellent (25-30), Good (18-24), Needs Work (below 18).

Variables to customize
- {{domain}}: the domain whose prompts the rubric will evaluate
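Because {{domain}} is the prompt's only placeholder, the template can be stored once and instantiated per domain. Here is a minimal Python sketch of that substitution; RUBRIC_PROMPT is abbreviated, and fill_template is an illustrative helper, not a function from any particular library:

```python
# Sketch: instantiating the rubric template for a specific domain.
# RUBRIC_PROMPT is abbreviated; fill_template is a hypothetical helper.

RUBRIC_PROMPT = (
    "Create a detailed evaluation rubric for assessing the quality of "
    "AI prompts in the {{domain}} domain.\n"
    "..."  # rest of the prompt text shown above
)

def fill_template(template: str, **variables: str) -> str:
    """Replace each {{name}} placeholder with its supplied value."""
    for name, value in variables.items():
        template = template.replace("{{" + name + "}}", value)
    return template

print(fill_template(RUBRIC_PROMPT, domain="technical documentation"))
```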
Why this prompt works
The 'one-sentence test' per dimension makes scoring fast and consistent across different evaluators.
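To make the tier math concrete, here is a minimal Python sketch assuming the six dimensions and score bands defined in the prompt; the quality_tier function and the example scores are illustrative, not part of the rubric itself:

```python
# Sketch of the scoring math the rubric describes: six dimensions scored
# 1-5, summed to a 6-30 total and mapped to the quality tiers above.
# Dimension names and tier bands come from the prompt; the rest is assumed.

DIMENSIONS = ["clarity", "specificity", "context",
              "efficiency", "robustness", "reusability"]

def quality_tier(scores: dict[str, int]) -> str:
    """Sum the six dimension scores and map the total to a tier."""
    if set(scores) != set(DIMENSIONS):
        raise ValueError(f"expected exactly these dimensions: {DIMENSIONS}")
    if any(not 1 <= s <= 5 for s in scores.values()):
        raise ValueError("each dimension is scored 1-5")
    total = sum(scores.values())
    if total >= 25:
        return f"Excellent ({total}/30)"
    if total >= 18:
        return f"Good ({total}/30)"
    return f"Needs Work ({total}/30)"

# Example: a prompt that passes the one-sentence test (score 4) on four
# dimensions but is weak on context and robustness.
print(quality_tier({
    "clarity": 4, "specificity": 4, "context": 2,
    "efficiency": 4, "robustness": 2, "reusability": 4,
}))  # -> Good (20/30)
```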
Related prompts
- Forcing the agent to plan before acting prevents premature execution and wasted steps; explicit dependency mapping enables parallel execution and catches logical gaps early.
- Tool Selection Agent: The ReAct pattern (Reason + Act) creates an explicit reasoning trace that improves tool selection accuracy. The error-handling rule prevents infinite retry loops.
- Prompt Compressor: Explicitly requiring all functional requirements to be preserved prevents the model from over-compressing and losing critical instructions.
- Memory Management Agent: Explicit memory read/write instructions create agents that improve over time. Categorization keeps memories organized, and the deduplication rule prevents context bloat.