Document Chunking Strategy
I need to process a {{document_type}} that is approximately {{document_length}} tokens long. The model context window is {{context_window}} tokens, and I need {{reserved_tokens}} tokens reserved for the prompt and output.

Design a chunking strategy that:
1. Splits the document into processable chunks
2. Preserves semantic coherence (don't split mid-paragraph or mid-argument)
3. Includes overlap between chunks to maintain continuity
4. Specifies how to merge results from multiple chunks

Provide the chunk size, overlap size, expected number of chunks, and a merging strategy for the final output.
Variables to customize
- {{document_type}}: the kind of document being processed
- {{document_length}}: approximate length of the document in tokens
- {{context_window}}: the model's context window in tokens
- {{reserved_tokens}}: tokens set aside for the prompt and output
Why this prompt works
Specifying semantic coherence and overlap requirements produces a strategy that avoids the common pitfall of losing context at chunk boundaries.
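The arithmetic the prompt asks the model to perform can be sketched directly. This is a minimal illustration, not part of the prompt itself; the `plan_chunks` helper and the 10% default overlap ratio are assumptions chosen for the example.

```python
import math

def plan_chunks(document_length: int, context_window: int,
                reserved_tokens: int, overlap_ratio: float = 0.1):
    """Estimate chunk size, overlap, and chunk count (all in tokens)."""
    chunk_size = context_window - reserved_tokens   # tokens available per chunk
    overlap = int(chunk_size * overlap_ratio)       # tokens shared with the previous chunk
    stride = chunk_size - overlap                   # new tokens consumed per chunk
    num_chunks = max(1, math.ceil((document_length - overlap) / stride))
    return chunk_size, overlap, num_chunks

# e.g. a 50k-token document, 8192-token window, 2048 tokens reserved:
# chunk_size=6144, overlap=614, num_chunks=9
print(plan_chunks(50_000, 8_192, 2_048))
```

In practice the model's strategy should round these boundaries to the nearest paragraph or section break rather than cutting at exact token offsets, which is why the prompt also asks for semantic coherence and a merging strategy.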
Related prompts
- Forcing the agent to plan before acting prevents premature execution and wasted steps. Explicit dependency mapping enables parallel execution and catches logical gaps early.
- Tool Selection Agent: The ReAct pattern (Reason + Act) creates an explicit reasoning trace that improves tool selection accuracy. The error-handling rule prevents infinite retry loops.
- Prompt Compressor: Explicitly requiring all functional requirements to be preserved prevents the model from over-compressing and losing critical instructions.
- Memory Management Agent: Explicit memory read/write instructions create agents that improve over time. Categorization keeps memories organized, and the deduplication rule prevents context bloat.