Optimization Workflow
I have a prompt that produces {{current_quality}} results but I want to improve it to {{target_quality}}.

Current prompt:
{{current_prompt}}

Example of current output (showing the problem):
{{current_output_example}}

What I want instead:
{{desired_output_description}}

Guide me through an optimization workflow:
1. Diagnose: what specifically is causing the quality gap?
2. Hypothesize: 3 specific changes that could close the gap, ranked by likely impact
3. Test plan: how to test each change independently
4. Implement: rewrite the prompt with the top-ranked change applied
5. Evaluate: what to look for in the new output
6. Iterate: decision framework for next steps based on results
Variables to customize
- {{current_quality}}: how you would describe the output quality today
- {{target_quality}}: the quality level you want to reach
- {{current_prompt}}: the full text of the prompt you are optimizing
- {{current_output_example}}: a sample output that demonstrates the problem
- {{desired_output_description}}: what the ideal output should look like
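If you fill this template programmatically, the substitution is a simple string replace over the {{variable}} placeholders. A minimal sketch (the example values and the `fill` helper are hypothetical; the template is truncated for brevity):

```python
# Fill the template's {{variable}} placeholders with concrete values.
# Variable names come from the prompt above; values here are illustrative.

TEMPLATE = """I have a prompt that produces {{current_quality}} results but I want to improve it to {{target_quality}}.

Current prompt:
{{current_prompt}}"""  # truncated for brevity


def fill(template: str, variables: dict[str, str]) -> str:
    """Substitute each {{name}} placeholder with its value."""
    for name, value in variables.items():
        template = template.replace("{{" + name + "}}", value)
    return template


prompt = fill(TEMPLATE, {
    "current_quality": "inconsistent",
    "target_quality": "consistently structured",
    "current_prompt": "Summarize this article.",
})
```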
Why this prompt works
Testing changes independently rather than all at once isolates which modifications actually improve output, following scientific method principles.
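One way to make "test each change independently" concrete is to score each single-change variant against the baseline before combining anything. A minimal sketch, where `run_model` is a stub standing in for a real API call and `score` is a placeholder metric you would replace with a real evaluation:

```python
# Test one prompt change at a time against a baseline, so you can
# attribute any score delta to that specific change.

BASELINE = "Summarize this article."

# Each variant applies exactly one change to the baseline.
CHANGES = {
    "add_format_spec": BASELINE + " Use three bullet points.",
    "add_role": "You are an expert editor. " + BASELINE,
    "add_audience": BASELINE + " Write for a general audience.",
}


def run_model(prompt: str) -> str:
    # Stub standing in for a real model call.
    return "output for: " + prompt


def score(output: str) -> float:
    # Placeholder metric (word count); swap in a real quality measure.
    return float(len(output.split()))


baseline_score = score(run_model(BASELINE))
results = {
    name: score(run_model(variant)) - baseline_score
    for name, variant in CHANGES.items()
}
for name, delta in results.items():
    print(f"{name}: {delta:+.1f}")
```

Only the top-scoring single change gets folded into the next baseline, which keeps the attribution clean across iterations.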
Related prompts
Forcing the agent to plan before acting prevents premature execution and wasted steps. Explicit dependency mapping enables parallel execution and catches logical gaps early.
Tool Selection Agent
The ReAct pattern (Reason + Act) creates an explicit reasoning trace that improves tool selection accuracy. The error-handling rule prevents infinite retry loops.
Prompt Compressor
Explicitly requiring all functional requirements to be preserved prevents the model from over-compressing and losing critical instructions.
Memory Management Agent
Explicit memory read/write instructions create agents that improve over time. Categorization keeps memories organized, and the deduplication rule prevents context bloat.