Comparative Analysis Framework
Compare {{optionA}} vs {{optionB}} for {{useCase}}.

**Evaluation criteria (weighted by importance):** {{evaluationCriteria}}

**My context:** {{userContext}}

For each criterion:
1. Score both options (1-10) with specific justification
2. Cite concrete evidence: benchmarks, case studies, or documented features
3. Note where the comparison is context-dependent (what changes the answer?)

Then provide:
- Weighted total scores
- A "choose A when... choose B when..." summary
- The single most important factor that should drive the decision
- What I might be overlooking in this comparison
Variables to customize
- {{optionA}} / {{optionB}}: the two options under comparison
- {{useCase}}: the scenario or problem the decision applies to
- {{evaluationCriteria}}: the criteria that matter to you, with their relative weights
- {{userContext}}: your situation and constraints (team, budget, timeline, existing stack)
Why this prompt works
Opus handles weighted multi-criteria analysis with nuance that simpler models flatten. Asking 'what changes the answer' and 'what am I overlooking' prompts Opus to surface non-obvious considerations. The context-dependent framing prevents false certainty.
Related prompts
- Forcing the agent to plan before acting prevents premature execution and wasted steps. Explicit dependency mapping enables parallel execution and catches logical gaps early.
- Tool Selection Agent: The ReAct pattern (Reason + Act) creates an explicit reasoning trace that improves tool selection accuracy. The error-handling rule prevents infinite retry loops.
- Prompt Compressor: Explicitly requiring all functional requirements to be preserved prevents the model from over-compressing and losing critical instructions.
- Memory Management Agent: Explicit memory read/write instructions create agents that improve over time. Categorization keeps memories organized, and the deduplication rule prevents context bloat.