Long Context Processing
I'm providing a large {{contentType}} below (approximately {{tokenEstimate}} tokens). Process it according to these instructions:

**Task:** {{processingTask}}

**Processing rules:**
- Read the entire content before starting your analysis
- Reference specific sections by quoting relevant passages
- If the content contains contradictions, flag them explicitly
- Maintain accuracy — do not infer information that is not present

**Structure your response as:**
1. {{outputSection1}}
2. {{outputSection2}}
3. {{outputSection3}}
4. Confidence assessment: rate your confidence in each section (high/medium/low)

<content>
{{longContent}}
</content>
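A minimal sketch of filling this template programmatically. The function name and the 4-characters-per-token heuristic are assumptions for illustration; the prompt text itself mirrors the template above.

```python
def build_prompt(content_type: str, processing_task: str,
                 output_sections: list[str], long_content: str) -> str:
    """Fill the long-context template with concrete values."""
    # Rough heuristic: ~4 characters per token for English text (an assumption,
    # not an exact count; use a real tokenizer if precision matters).
    token_estimate = len(long_content) // 4

    numbered = "\n".join(f"{i}. {s}" for i, s in enumerate(output_sections, 1))
    return (
        f"I'm providing a large {content_type} below "
        f"(approximately {token_estimate} tokens). "
        "Process it according to these instructions:\n\n"
        f"**Task:** {processing_task}\n\n"
        "**Processing rules:**\n"
        "- Read the entire content before starting your analysis\n"
        "- Reference specific sections by quoting relevant passages\n"
        "- If the content contains contradictions, flag them explicitly\n"
        "- Maintain accuracy — do not infer information that is not present\n\n"
        "**Structure your response as:**\n"
        f"{numbered}\n"
        f"{len(output_sections) + 1}. Confidence assessment: "
        "rate your confidence in each section (high/medium/low)\n\n"
        f"<content>\n{long_content}\n</content>"
    )

prompt = build_prompt(
    content_type="legal contract",
    processing_task="Summarize the obligations of each party",
    output_sections=["Summary", "Key obligations", "Open questions"],
    long_content="..." * 2000,
)
```

The confidence-assessment item is appended after the custom sections so its number stays correct however many sections you pass.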
Variables to customize
- `{{contentType}}`: the kind of content supplied (report, transcript, codebase, etc.)
- `{{tokenEstimate}}`: the approximate size of the content in tokens
- `{{processingTask}}`: the analysis you want performed
- `{{outputSection1}}`, `{{outputSection2}}`, `{{outputSection3}}`: headings for the structured response
- `{{longContent}}`: the content itself, placed inside the `<content>` tags
Why this prompt works
DeepSeek-V3 handles long contexts effectively. Instructing it to read the full content before analyzing prevents early-context bias. Asking for a confidence assessment helps you identify where the model might be uncertain due to context length.
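Since the prompt asks for per-section high/medium/low ratings, you can scan the reply for them afterward. A hedged sketch, assuming the model follows the "Section: rating" convention the prompt requests (real replies may deviate, so treat misses as unknown):

```python
import re

def extract_confidence(response: str) -> dict[str, str]:
    """Pull section-name -> rating pairs from lines like '- Summary: high'."""
    ratings = {}
    for line in response.splitlines():
        # Skip any leading bullet/number markers, then capture "<name>: <rating>".
        m = re.match(r"[-*\d.\s]*([\w ]+?)\s*:\s*(high|medium|low)\b",
                     line.strip(), re.IGNORECASE)
        if m:
            ratings[m.group(1).strip()] = m.group(2).lower()
    return ratings

reply = """4. Confidence assessment:
- Summary: high
- Key obligations: medium
- Open questions: low"""
print(extract_confidence(reply))
# → {'Summary': 'high', 'Key obligations': 'medium', 'Open questions': 'low'}
```

Low-confidence sections are good candidates for a follow-up prompt that re-reads only the relevant passage.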
Related prompts
- Forcing the agent to plan before acting prevents premature execution and wasted steps. Explicit dependency mapping enables parallel execution and catches logical gaps early.
- **Tool Selection Agent**: The ReAct pattern (Reason + Act) creates an explicit reasoning trace that improves tool selection accuracy. The error-handling rule prevents infinite retry loops.
- **Prompt Compressor**: Explicitly requiring all functional requirements to be preserved prevents the model from over-compressing and losing critical instructions.
- **Memory Management Agent**: Explicit memory read/write instructions create agents that improve over time. Categorization keeps memories organized, and the deduplication rule prevents context bloat.