
Long Context Processing

Prompt
I'm providing a large {{contentType}} below (approximately {{tokenEstimate}} tokens). Process it according to these instructions:

**Task:** {{processingTask}}

**Processing rules:**
- Read the entire content before starting your analysis
- Reference specific sections by quoting relevant passages
- If the content contains contradictions, flag them explicitly
- Maintain accuracy — do not infer information that is not present

**Structure your response as:**
1. {{outputSection1}}
2. {{outputSection2}}
3. {{outputSection3}}
4. Confidence assessment: rate your confidence in each section (high/medium/low)

<content>
{{longContent}}
</content>

Variables to customize

- {{contentType}}
- {{tokenEstimate}}
- {{processingTask}}
- {{outputSection1}}
- {{outputSection2}}
- {{outputSection3}}
- {{longContent}}
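
If you fill these placeholders programmatically, a small helper can substitute them before the prompt is sent to the model. A minimal Python sketch, assuming the `{{name}}` placeholder syntax shown above; the template is abbreviated and all example values are hypothetical:

```python
import re

# Hypothetical example values -- substitute your own content and task.
variables = {
    "contentType": "research report",
    "tokenEstimate": "50,000",
    "processingTask": "Summarize the key findings and methodology",
    "outputSection1": "Executive summary",
    "outputSection2": "Key findings with supporting quotes",
    "outputSection3": "Open questions and contradictions",
    "longContent": "...",  # the long document text goes here
}

# Abbreviated version of the prompt template above.
TEMPLATE = """I'm providing a large {{contentType}} below (approximately {{tokenEstimate}} tokens). Process it according to these instructions:

**Task:** {{processingTask}}

**Structure your response as:**
1. {{outputSection1}}
2. {{outputSection2}}
3. {{outputSection3}}

<content>
{{longContent}}
</content>"""

def fill_template(template: str, values: dict) -> str:
    """Replace each {{name}} placeholder with its value from the dict."""
    return re.sub(r"\{\{(\w+)\}\}", lambda m: values[m.group(1)], template)

prompt = fill_template(TEMPLATE, variables)
```

A `KeyError` from the lambda flags any placeholder you forgot to supply, which is safer than silently sending a prompt with unfilled `{{...}}` markers.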

Why this prompt works

DeepSeek-V3 handles long contexts effectively, but like most LLMs it can weight the beginning of the input more heavily. Instructing it to read the full content before starting its analysis counteracts this early-context bias, and requiring it to quote relevant passages keeps its claims grounded in the source. The per-section confidence assessment helps you spot where the model may be uncertain because of the context length.
