
Comparative Analysis Framework

Opus handles weighted multi-criteria analysis with a nuance that simpler models flatten. Asking "what changes the answer?" and "what am I overlooking?" pushes Opus to surface non-obvious considerations, and the context-dependent framing prevents false certainty.

Prompt
Compare {{optionA}} vs {{optionB}} for {{useCase}}.

**Evaluation criteria (weighted by importance):**
{{evaluationCriteria}}

**My context:** {{userContext}}

For each criterion:
1. Score both options (1-10) with specific justification
2. Cite concrete evidence — benchmarks, case studies, or documented features
3. Note where the comparison is context-dependent (what changes the answer?)

Then provide:
- Weighted total scores
- A "choose A when... choose B when..." summary
- The single most important factor that should drive the decision
- What I might be overlooking in this comparison

Variables to customize

{{optionA}}, {{optionB}}, {{useCase}}, {{evaluationCriteria}}, {{userContext}}
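The "weighted total scores" step the prompt asks for is ordinary weighted-average arithmetic. A minimal sketch of that calculation, using hypothetical criteria, weights, and 1-10 scores (none of these values come from the prompt itself):

```python
# Hedged sketch of the weighted-total step: each criterion's 1-10 score
# is multiplied by its weight, then normalized by the total weight.
def weighted_total(scores: dict[str, float], weights: dict[str, float]) -> float:
    """Return the weighted average of per-criterion scores."""
    total_weight = sum(weights.values())
    return sum(scores[c] * weights[c] for c in weights) / total_weight

# Hypothetical comparison: option A vs option B on three example criteria.
weights = {"performance": 0.5, "cost": 0.3, "ecosystem": 0.2}
option_a = {"performance": 8, "cost": 5, "ecosystem": 9}
option_b = {"performance": 6, "cost": 9, "ecosystem": 7}

print(round(weighted_total(option_a, weights), 1))  # 7.3
print(round(weighted_total(option_b, weights), 1))  # 7.1
```

Here option A edges out option B overall, but a reader whose context weights cost most heavily would reverse that ranking, which is exactly why the prompt asks where the comparison is context-dependent.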

