Comparative Analysis Framework
Compare {{optionA}} vs {{optionB}} for {{useCase}}.

**Evaluation criteria (weighted by importance):** {{evaluationCriteria}}

**My context:** {{userContext}}

For each criterion:
1. Score both options (1-10) with specific justification
2. Cite concrete evidence — benchmarks, case studies, or documented features
3. Note where the comparison is context-dependent (what changes the answer?)

Then provide:
- Weighted total scores
- A "choose A when... choose B when..." summary
- The single most important factor that should drive the decision
- What I might be overlooking in this comparison
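The `{{variable}}` placeholders above are meant to be filled in before the prompt is sent. A minimal sketch of that substitution step, using a hypothetical `fill` helper and made-up example values (PostgreSQL, MongoDB, and the use case are illustrative only, not part of the template):

```python
import re

# Abbreviated template; the full prompt above works the same way.
TEMPLATE = "Compare {{optionA}} vs {{optionB}} for {{useCase}}."

def fill(template: str, values: dict) -> str:
    # Replace each {{name}} placeholder with its value.
    # A missing key raises KeyError, so unfilled slots fail loudly.
    return re.sub(r"\{\{(\w+)\}\}", lambda m: values[m.group(1)], template)

prompt = fill(TEMPLATE, {
    "optionA": "PostgreSQL",
    "optionB": "MongoDB",
    "useCase": "an event-sourcing backend",
})
print(prompt)
```

Failing fast on a missing key is deliberate: a half-filled prompt with a literal `{{userContext}}` left in it tends to produce worse answers than no prompt at all.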
Variables to customize
- `{{optionA}}`, `{{optionB}}`: the two options being compared
- `{{useCase}}`: the scenario or problem the options must serve
- `{{evaluationCriteria}}`: your criteria, each with a weight reflecting its importance
- `{{userContext}}`: constraints that shape the decision (team, budget, existing stack)
Why this prompt works
Opus handles weighted multi-criteria analysis with nuance that simpler models flatten. Asking 'what changes the answer' and 'what am I overlooking' prompts Opus to surface non-obvious considerations. The context-dependent framing prevents false certainty.
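The "weighted total scores" the prompt asks for reduce to a simple dot product of weights and per-criterion scores. A sketch with made-up weights and scores (the criteria names and numbers are illustrative assumptions, not output from the model):

```python
# Hypothetical weights (summing to 1.0) and 1-10 scores per criterion.
criteria = {"performance": 0.40, "ecosystem": 0.35, "cost": 0.25}
scores_a = {"performance": 8, "ecosystem": 6, "cost": 7}
scores_b = {"performance": 6, "ecosystem": 9, "cost": 8}

def weighted_total(weights: dict, scores: dict) -> float:
    # Sum of weight * score over every criterion.
    return sum(weights[c] * scores[c] for c in weights)

total_a = weighted_total(criteria, scores_a)  # 0.40*8 + 0.35*6 + 0.25*7 = 7.05
total_b = weighted_total(criteria, scores_b)  # 0.40*6 + 0.35*9 + 0.25*8 = 7.55
```

This is also why the prompt asks "what changes the answer": shift the weights (say, cost-dominated instead of performance-dominated) and the ranking can flip without any score changing.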
What you get when you save this prompt
Saving this prompt to your workspace unlocks tools to iterate on and improve it.
AI Optimization
One-click improvement with structure analysis and pattern suggestions.
Version History
Track every edit. Compare versions side-by-side with word-level diffs.
Folders & Tags
Organize your library with nested folders, tags, and drag-and-drop.
$ npm i -g @promptingbox/mcp
Use Everywhere
Access prompts from Claude, Cursor, ChatGPT & more via MCP integration.
Your prompts, organized
Save, version, and access your best prompts across ChatGPT, Claude, Cursor, and more.