Goal Evaluation Agent
You are a goal evaluation agent. After completing a task, you assess whether the result actually meets the original objective.

Original goal: {{original_goal}}
Result produced: {{result_summary}}

Evaluate on these dimensions:
1. COMPLETENESS: Does the result address every part of the goal? (0-100%)
2. ACCURACY: Is the information/output correct and reliable? (0-100%)
3. QUALITY: Does it meet professional standards? (0-100%)
4. USABILITY: Can the requester use this result as-is, or does it need editing? (0-100%)

For any dimension below 80%:
- Explain what is missing or wrong
- Suggest specific improvements
- Estimate the effort to fix (quick fix / moderate rework / major revision)

Final verdict: PASS (all >= 80%) | REVISE (any 50-79%) | FAIL (any < 50%)
If REVISE or FAIL, provide an action plan to reach PASS.
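The verdict rule at the end is mechanical, so it can also be enforced in code rather than trusted to the model's own arithmetic. Below is a minimal TypeScript sketch of that rule; the Scores shape and its field names are assumptions about how you parse the model's response, not part of the prompt itself.

type Scores = {
  completeness: number; // 0-100
  accuracy: number;     // 0-100
  quality: number;      // 0-100
  usability: number;    // 0-100
};

type Verdict = "PASS" | "REVISE" | "FAIL";

// Mirrors the prompt's rule: FAIL if any dimension < 50,
// REVISE if any dimension is 50-79, PASS when all >= 80.
function verdict(scores: Scores): Verdict {
  const values = Object.values(scores);
  if (values.some((s) => s < 50)) return "FAIL";
  if (values.some((s) => s < 80)) return "REVISE";
  return "PASS";
}

Recomputing the verdict outside the model catches cases where the reported scores and the stated verdict disagree.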
Variables to customize
- {{original_goal}}: the objective the completed task was meant to achieve
- {{result_summary}}: a summary of the output or deliverable being evaluated
Why this prompt works
Self-evaluation closes the feedback loop that most agent systems miss. Quantified scoring prevents vague 'looks good' assessments, and the action plan makes failures actionable.
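As a concrete illustration of that feedback loop, here is a hypothetical TypeScript sketch. runTask and evaluate stand in for your own agent calls (the evaluate step would send the prompt above), and the action plan from a REVISE or FAIL verdict is fed back into the next attempt; none of these function names come from a specific library.

type Evaluation = {
  verdict: "PASS" | "REVISE" | "FAIL";
  actionPlan?: string; // present when the verdict is REVISE or FAIL
};

// Run the task, self-evaluate, and retry with the action plan as
// feedback until the evaluation passes or the attempt budget runs out.
async function runWithEvaluation(
  goal: string,
  runTask: (goal: string, feedback?: string) => Promise<string>,
  evaluate: (goal: string, result: string) => Promise<Evaluation>,
  maxAttempts = 3,
): Promise<string> {
  let feedback: string | undefined;
  let result = "";
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    result = await runTask(goal, feedback);
    const evaluation = await evaluate(goal, result);
    if (evaluation.verdict === "PASS") return result;
    feedback = evaluation.actionPlan;
  }
  return result; // best effort after exhausting the budget
}

Capping the attempts keeps a persistently failing task from looping forever while still giving the agent a chance to act on its own critique.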