Claude Opus Prompts — Get the Most from Anthropic's Best Model

Claude Opus is Anthropic's most capable model, and it rewards a different prompting approach than Sonnet or Haiku. Where smaller models benefit from concise, direct instructions, Opus thrives with rich context, nuanced constraints, and complex multi-step tasks. Its extended thinking capability means you can ask it to reason through problems before responding, and the quality of that reasoning is noticeably better than that of other models on tasks involving ambiguity, trade-offs, or multi-layered analysis. For best results, explicitly tell Opus to "think through this step by step" or "consider the trade-offs before recommending an approach." The extended thinking budget allows the model to explore multiple solution paths internally before committing to an answer.
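If you're calling Opus through the API rather than the chat interface, extended thinking is enabled per request. The sketch below just assembles the request parameters for the Anthropic Messages API; the model ID and token budgets are illustrative assumptions, so check the current API docs before relying on them.

```python
# Sketch: build request parameters for an Opus call with extended thinking.
# The model ID and budgets here are assumptions, not authoritative values.

def build_opus_request(prompt: str, thinking_budget: int = 8_000) -> dict:
    """Assemble kwargs suitable for client.messages.create(**params)."""
    return {
        "model": "claude-opus-4-0",   # assumed model ID; verify against current docs
        "max_tokens": 16_000,         # must exceed the thinking budget
        "thinking": {"type": "enabled", "budget_tokens": thinking_budget},
        "messages": [{"role": "user", "content": prompt}],
    }

params = build_opus_request(
    "Consider the trade-offs before recommending an approach."
)
```

The `thinking` block is what turns on extended reasoning; the budget caps how many tokens the model may spend exploring before it answers.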

Opus excels at tasks that require holding large amounts of context in mind simultaneously — code reviews across multiple files, long document analysis, and complex system design. With its large context window, you can paste entire codebases, legal documents, or research papers and ask it to synthesize insights across the full input. The key is to be specific about what you want it to focus on within that context. Instead of "review this code," try "review this codebase for security vulnerabilities in the authentication flow, paying special attention to token handling and session management." Specificity combined with large context is where Opus delivers results that smaller models simply cannot match.

For complex writing and analysis, Opus follows style and formatting instructions with exceptional precision. You can specify a detailed persona, writing style, audience level, and output structure, and Opus will maintain consistency throughout long outputs. This makes it ideal for drafting technical documentation, research papers, strategic analyses, and other professional content where quality and consistency matter more than speed. Build system prompts that capture your preferred style and constraints, save them in your library, and reuse them across projects — Opus will follow them faithfully every time.

Claude Opus Prompts You Can Use Right Now

These prompts are designed to leverage Opus's strengths: deep reasoning, long context, and precise instruction following.
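If you keep these templates in a prompt library, a small helper can fill the {{variable}} placeholders and catch any you forgot to supply. A minimal sketch (the function name and error behavior are my own choices):

```python
import re

def fill_template(template: str, variables: dict[str, str]) -> str:
    """Replace every {{name}} placeholder; raise if one is left unfilled."""
    def substitute(match: re.Match) -> str:
        name = match.group(1)
        if name not in variables:
            raise KeyError(f"missing template variable: {name}")
        return variables[name]
    return re.sub(r"\{\{(\w+)\}\}", substitute, template)

prompt = fill_template(
    "You are a senior {{domain}} advisor. Decision: {{decisionDescription}}",
    {"domain": "cloud infrastructure",
     "decisionDescription": "migrate to Kubernetes"},
)
```

Failing loudly on a missing variable beats silently sending a prompt with a literal `{{budget}}` in it.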

Complex Reasoning with Trade-offs

You are a senior {{domain}} advisor. I need you to analyze the following decision:

**Decision:** {{decisionDescription}}

**Context:** {{backgroundContext}}

**Constraints:**
- Budget: {{budget}}
- Timeline: {{timeline}}
- Team: {{teamContext}}

Think through this step by step using extended thinking. For each option:
1. List concrete advantages with evidence
2. List concrete disadvantages and risks
3. Identify hidden second-order effects
4. Estimate likelihood of success (low/medium/high) with reasoning

Then provide your recommendation with a clear rationale. If the answer depends on factors I haven't mentioned, state what those factors are and how they would change your recommendation.

Why it works: Opus's extended thinking excels when explicitly activated. Structuring the analysis with numbered steps prevents the model from jumping to conclusions. Asking for second-order effects and conditional factors leverages Opus's ability to reason about uncertainty.

Long Document Analysis

I'm providing a {{documentType}} below (approximately {{pageCount}} pages). Analyze the entire document with these specific objectives:

**Primary questions:**
{{primaryQuestions}}

**Analysis requirements:**
- Quote specific passages that support your conclusions (include page/section references)
- Flag any internal contradictions or inconsistencies in the document
- Identify claims that lack supporting evidence
- Note any significant omissions — topics you would expect to see covered but are missing

**Output format:**
1. Executive summary (3-5 sentences)
2. Detailed findings organized by my primary questions
3. Contradictions and gaps
4. Recommendations for follow-up

<document>
{{documentContent}}
</document>

Why it works: Opus can hold and reason across an entire long document simultaneously. Asking it to quote specific passages forces grounded analysis instead of vague summaries. The contradictions and omissions checks leverage Opus's ability to cross-reference information across a large context window.

Code Architecture Review

Review the following codebase architecture for a {{projectType}} application.

**Tech stack:** {{techStack}}
**Scale:** {{scaleRequirements}}
**Team size:** {{teamSize}} engineers

I'll provide the key files. For each architectural concern, reference the specific file and line where the issue exists.

Review for:
1. **Separation of concerns** — Are responsibilities cleanly divided? Any God objects or modules doing too much?
2. **Scalability bottlenecks** — What breaks first at 10x current load?
3. **Error handling gaps** — Where can failures cascade or go unhandled?
4. **Testing friction** — What's hard to test due to tight coupling or hidden dependencies?
5. **Security surface** — Auth bypasses, injection points, data exposure risks

For each issue found, provide:
- Severity: critical / high / medium / low
- The specific anti-pattern or violation
- A concrete fix with code example

<codebase>
{{codebaseFiles}}
</codebase>

Why it works: Opus can process multiple files simultaneously and cross-reference patterns across them. The severity rating forces prioritization instead of a flat list. Asking for concrete code fixes rather than just descriptions produces actionable output that developers can implement immediately.
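To feed multiple files into the <codebase> block, it helps to label each one so the model can cite file paths in its findings. One way to do that (the `<file path="...">` convention is my own, not an Anthropic requirement):

```python
def format_codebase(files: dict[str, str]) -> str:
    """Wrap each source file in a labelled block inside <codebase> tags."""
    parts = []
    for path, source in sorted(files.items()):
        parts.append(f'<file path="{path}">\n{source}\n</file>')
    return "<codebase>\n" + "\n".join(parts) + "\n</codebase>"

block = format_codebase({
    "auth/session.py": "def create_session(user): ...",
    "auth/tokens.py": "def issue_token(user): ...",
})
```

With paths attached, asking the model to "reference the specific file and line" becomes answerable rather than aspirational.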

Creative Writing with Voice Control

Write a {{contentType}} about {{topic}}.

**Voice & Style:**
- Tone: {{tone}}
- Reading level: {{audienceLevel}}
- Sentence structure: mix short punchy sentences with longer flowing ones. Vary paragraph length.
- Avoid: cliches, corporate jargon, passive voice, and the words "{{bannedWords}}"
- Emulate the style of: {{styleReference}} — but do not copy specific phrases

**Structure:**
- Length: approximately {{wordCount}} words
- Opening: start in medias res or with a surprising fact — no throat-clearing
- Include {{numberOfExamples}} concrete examples or anecdotes
- End with: {{endingStyle}}

**Content constraints:**
{{contentConstraints}}

Write the full piece. Do not include meta-commentary about your writing process.

Why it works: Opus maintains voice consistency across long outputs better than any other model. Banning specific words and cliches prevents the default 'AI voice.' Specifying the opening style avoids generic intros. The 'no meta-commentary' instruction stops Opus from narrating its own process.
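Because models occasionally slip, it is worth checking the finished draft against your banned list after generation. A rough sketch of such a check (the example words are placeholders):

```python
import re

def find_banned_words(text: str, banned: list[str]) -> list[str]:
    """Return banned words that appear in text (whole-word, case-insensitive)."""
    hits = []
    for word in banned:
        if re.search(rf"\b{re.escape(word)}\b", text, flags=re.IGNORECASE):
            hits.append(word)
    return hits

draft = "We will leverage synergy to delight our users."
hits = find_banned_words(draft, ["leverage", "synergy", "paradigm"])
```

If `hits` is non-empty, re-prompt with the offending words called out explicitly.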

Research Synthesis

I'm researching {{researchTopic}}. Below are {{sourceCount}} sources I've gathered. Synthesize them into a comprehensive analysis.

**Research question:** {{researchQuestion}}

**Sources:**
{{sources}}

**Synthesis requirements:**
1. Identify the key themes and findings that appear across multiple sources
2. Map where sources agree, where they disagree, and where they cover different aspects
3. Evaluate the strength of evidence — which claims are well-supported vs. speculative?
4. Identify gaps: what questions remain unanswered by these sources?
5. Flag any methodological concerns or potential biases in the sources

**Output:**
- Synthesis narrative (not a source-by-source summary — weave insights together thematically)
- Evidence strength table: claim | supporting sources | confidence level
- Recommended next steps for further research
- Key takeaway in one paragraph

Why it works: Opus's large context window lets it hold all sources simultaneously and find cross-cutting patterns. Explicitly asking for thematic synthesis instead of source-by-source summaries produces genuinely useful research output. The evidence strength table forces rigorous evaluation.

Multi-Step Planning with Dependencies

Create a detailed execution plan for: {{projectGoal}}

**Current state:** {{currentState}}
**Target state:** {{targetState}}
**Timeline:** {{timeline}}
**Resources available:** {{resources}}
**Known risks:** {{knownRisks}}

Build the plan with these requirements:
1. Break into phases with clear milestones and exit criteria for each phase
2. For each task, specify: owner role, estimated effort, dependencies (what must complete first), and deliverable
3. Identify the critical path — which sequence of tasks determines the minimum timeline?
4. Build in contingency: for each high-risk task, provide a fallback approach
5. Define go/no-go decision points where the plan should be re-evaluated

Format as a structured plan I can hand to a project manager. Include a dependency graph showing which tasks block which.

Why it works: Opus handles complex dependency reasoning that trips up smaller models. Asking for exit criteria and go/no-go points produces plans that work in reality, not just on paper. The critical path and contingency requirements force the model to think about sequencing and failure modes.
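The dependency reasoning this prompt asks for can also be sanity-checked mechanically. A small sketch that finds the critical path through a task graph by accumulating effort along dependencies (the task names and efforts are made up):

```python
def critical_path(tasks: dict[str, tuple[int, list[str]]]) -> tuple[int, list[str]]:
    """tasks maps name -> (effort, dependencies). Returns (total_effort, path)."""
    memo: dict[str, tuple[int, list[str]]] = {}

    def longest(name: str) -> tuple[int, list[str]]:
        if name in memo:
            return memo[name]
        effort, deps = tasks[name]
        best = (0, [])
        for dep in deps:
            candidate = longest(dep)
            if candidate[0] > best[0]:
                best = candidate
        memo[name] = (effort + best[0], best[1] + [name])
        return memo[name]

    return max((longest(t) for t in tasks), key=lambda result: result[0])

plan = {
    "design":      (3, []),
    "backend":     (5, ["design"]),
    "frontend":    (4, ["design"]),
    "integration": (2, ["backend", "frontend"]),
}
total, path = critical_path(plan)
```

Here the backend branch dominates, so the minimum timeline is 3 + 5 + 2 = 10 units of effort regardless of how fast the frontend ships.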

System Prompt for Opus Assistants

You are {{assistantName}}, a {{role}} specializing in {{specialization}}.

## Behavior
- Think deeply before responding. Use extended thinking for complex questions.
- When uncertain, say so explicitly and explain what additional information would help.
- Cite specific evidence for claims. Do not state opinions as facts.
- Push back respectfully when the user's approach has significant issues.

## Communication Style
- {{communicationStyle}}
- Use concrete examples over abstract explanations
- Structure long responses with headers, bullets, and numbered lists
- Front-load the most important information

## Domain Knowledge
{{domainKnowledge}}

## Constraints
- Never fabricate data, statistics, or citations
- If a question is outside your expertise, say so rather than guessing
- {{additionalConstraints}}

## Output Defaults
- Default response length: {{defaultLength}}
- Default format: {{defaultFormat}}
- Always ask clarifying questions when the request is ambiguous

Why it works: This system prompt structure works exceptionally well with Opus because it separates behavioral instructions from domain knowledge. Opus follows the 'push back respectfully' instruction reliably, making it a genuine thinking partner rather than a yes-machine.

Comparative Analysis Framework

Compare {{optionA}} vs {{optionB}} for {{useCase}}.

**Evaluation criteria (weighted by importance):**
{{evaluationCriteria}}

**My context:** {{userContext}}

For each criterion:
1. Score both options (1-10) with specific justification
2. Cite concrete evidence — benchmarks, case studies, or documented features
3. Note where the comparison is context-dependent (what changes the answer?)

Then provide:
- Weighted total scores
- A "choose A when... choose B when..." summary
- The single most important factor that should drive the decision
- What I might be overlooking in this comparison

Why it works: Opus handles weighted multi-criteria analysis with nuance that simpler models flatten. Asking 'what changes the answer' and 'what am I overlooking' prompts Opus to surface non-obvious considerations. The context-dependent framing prevents false certainty.
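The weighted totals the prompt asks for are easy to verify yourself once the model has produced per-criterion scores. A sketch with made-up criteria, weights, and scores:

```python
def weighted_score(scores: dict[str, int], weights: dict[str, float]) -> float:
    """Weighted average of 1-10 criterion scores; weights need not sum to 1."""
    total_weight = sum(weights.values())
    return sum(scores[c] * weights[c] for c in weights) / total_weight

weights  = {"cost": 0.5, "performance": 0.3, "ecosystem": 0.2}
option_a = {"cost": 8, "performance": 6, "ecosystem": 9}
option_b = {"cost": 5, "performance": 9, "ecosystem": 7}

a_total = weighted_score(option_a, weights)  # heavily cost-weighted
b_total = weighted_score(option_b, weights)
```

Recomputing the totals yourself catches the occasional arithmetic slip and makes it obvious how sensitive the ranking is to your chosen weights.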