Performance Metrics Dashboard
Design a metrics tracking system for prompt version performance.

Prompt name: {{prompt_name}}
Purpose: {{prompt_purpose}}
Current version: {{current_version_number}}

Define metrics across these dimensions:
1. Quality: output accuracy, relevance, completeness (with scoring rubrics)
2. Efficiency: token usage, response time, cost per request
3. Reliability: error rate, refusal rate, format compliance rate
4. User satisfaction: feedback scores, edit rate (how often users modify the output)

For each metric, specify:
- How to measure it
- Baseline from current version
- Target for next version
- Alert threshold that triggers investigation
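The per-metric spec the prompt asks for (measurement, baseline, target, alert threshold) can be sketched as a small data structure. This is a minimal illustration, not part of the prompt itself; the class and field names are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Metric:
    """One tracked metric for a prompt version (hypothetical schema)."""
    name: str
    how_to_measure: str
    baseline: float         # measured on the current version
    target: float           # goal for the next version
    alert_threshold: float  # crossing this triggers investigation
    higher_is_better: bool = True

    def needs_investigation(self, observed: float) -> bool:
        # An alert fires when the metric crosses the threshold
        # in the unfavorable direction.
        if self.higher_is_better:
            return observed < self.alert_threshold
        return observed > self.alert_threshold

# Example: format compliance rate (reliability dimension)
compliance = Metric(
    name="format_compliance_rate",
    how_to_measure="share of responses passing a JSON schema check",
    baseline=0.92, target=0.97, alert_threshold=0.85,
)
print(compliance.needs_investigation(0.80))  # True: below threshold
```

Keeping the alert threshold separate from the target lets a version underperform its goal without paging anyone, while still catching genuine regressions.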
Variables to customize
{{prompt_name}}: the name of the prompt being tracked
{{prompt_purpose}}: what the prompt is designed to do
{{current_version_number}}: the version currently in production
Why this prompt works
Tracking edit rate as a proxy for satisfaction captures cases where output is technically correct but not useful enough to use as-is.
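One way edit rate could be computed is by comparing each model output to the text the user ultimately kept, counting an output as "edited" when similarity falls below a cutoff. A minimal sketch, assuming a 0.95 similarity cutoff (both the cutoff and the helper name are illustrative):

```python
from difflib import SequenceMatcher

def edit_rate(pairs, threshold=0.95):
    """Fraction of outputs the user materially modified.

    pairs: list of (model_output, final_user_text) tuples.
    An output counts as edited when its similarity to the
    final text drops below `threshold` (hypothetical cutoff).
    """
    if not pairs:
        return 0.0
    edited = sum(
        1 for out, final in pairs
        if SequenceMatcher(None, out, final).ratio() < threshold
    )
    return edited / len(pairs)

pairs = [
    ("Summary: Q3 revenue rose 4%.", "Summary: Q3 revenue rose 4%."),   # used as-is
    ("Summary: Q3 revenue rose 4%.", "Q3 revenue was up 4% overall."),  # rewritten
]
print(edit_rate(pairs))  # 0.5
```

A similarity cutoff catches the "technically correct but rewritten anyway" case that a simple exact-match check would miss.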
Related prompts
Forcing the agent to plan before acting prevents premature execution and wasted steps. Explicit dependency mapping enables parallel execution and catches logical gaps early.
Tool Selection Agent: The ReAct pattern (Reason + Act) creates an explicit reasoning trace that improves tool selection accuracy. The error-handling rule prevents infinite retry loops.
Prompt Compressor: Explicitly requiring all functional requirements to be preserved prevents the model from over-compressing and losing critical instructions.
Memory Management Agent: Explicit memory read/write instructions create agents that improve over time. Categorization keeps memories organized, and the deduplication rule prevents context bloat.