Context Window Budget Allocator
I'm building a {{application_type}} with a {{model_name}} backend ({{context_window}} token context window).

My context needs:
- System prompt: {{system_prompt_summary}}
- User query: variable length
- Retrieved documents: {{retrieval_description}}
- Conversation history: {{history_policy}}
- Tool call results: {{tool_results_description}}

Design the token budget allocation. For each component:
1. Allocate a fixed or proportional token budget
2. Define the compression strategy when it exceeds budget
3. Set the priority for when total context exceeds the window
4. Specify the format (raw text, XML tags, JSON, markdown)

Include a fallback strategy for edge cases where a single component exceeds 50% of the window.
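The allocation scheme the prompt asks for can be sketched in code. This is a minimal illustration, not part of the prompt itself: the `Component` dataclass and the component names are assumptions, and a real system would allocate against actual tokenizer counts.

```python
from dataclasses import dataclass

@dataclass
class Component:
    name: str
    priority: int        # higher number = trimmed first when over-committed
    fixed: int = 0       # fixed token reservation
    share: float = 0.0   # proportional share of the remaining tokens

def allocate(window: int, components: list[Component]) -> dict[str, int]:
    """Split a context window into per-component token budgets.

    Fixed reservations are granted first; whatever remains is divided
    proportionally by `share`. If the fixed reservations alone exceed
    the window, tokens are reclaimed from the highest-priority-number
    (least important) components first.
    """
    fixed_total = sum(c.fixed for c in components)
    remaining = window - fixed_total
    budgets = {c.name: c.fixed for c in components}
    if remaining < 0:
        # Over-committed: reclaim the deficit, least important first.
        deficit = -remaining
        for c in sorted(components, key=lambda c: c.priority, reverse=True):
            take = min(budgets[c.name], deficit)
            budgets[c.name] -= take
            deficit -= take
            if deficit == 0:
                break
        return budgets
    share_total = sum(c.share for c in components) or 1.0
    for c in components:
        budgets[c.name] += int(remaining * c.share / share_total)
    return budgets

# Example: a 128k window with fixed system/query reservations and the
# rest split 60/40 between retrieved documents and history.
budgets = allocate(128_000, [
    Component("system", priority=1, fixed=2_000),
    Component("query", priority=2, fixed=4_000),
    Component("docs", priority=3, share=0.6),
    Component("history", priority=4, share=0.4),
])
```

Mixing fixed and proportional budgets this way mirrors step 1 of the prompt; steps 2–4 (compression, priority, format) would hang off each `Component` in a fuller implementation.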
Variables to customize
- {{application_type}}
- {{model_name}}
- {{context_window}}
- {{system_prompt_summary}}
- {{retrieval_description}}
- {{history_policy}}
- {{tool_results_description}}
Why this prompt works
Defining format alongside budget ensures each component is not just sized correctly but structured for optimal model comprehension.
Related prompts
Forcing the agent to plan before acting prevents premature execution and wasted steps. Explicit dependency mapping enables parallel execution and catches logical gaps early.
Tool Selection Agent: The ReAct pattern (Reason + Act) creates an explicit reasoning trace that improves tool-selection accuracy. The error-handling rule prevents infinite retry loops.
Prompt Compressor: Explicitly requiring that all functional requirements be preserved prevents the model from over-compressing and losing critical instructions.
Memory Management Agent: Explicit memory read/write instructions create agents that improve over time. Categorization keeps memories organized, and the deduplication rule prevents context bloat.