Token Budget Planner
Breaking the context window into explicit budget categories prevents two common failures: running out of space for the output and losing critical context.
I'm building an AI feature with a {{context_window_size}}-token context window. Help me allocate the token budget across these components:

- System prompt: {{system_prompt_description}}
- User context: {{user_context_type}}
- Retrieved documents: {{retrieval_description}}
- Conversation history: {{history_policy}}
- Output reservation: {{expected_output_length}}

For each component, recommend:
1. Token allocation (absolute and percentage)
2. Compression strategy if it exceeds budget
3. Priority ranking for when total exceeds the window
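The allocation policy the prompt asks for (per-component caps, plus priority-based trimming when the total exceeds the window) can be sketched in code. A minimal, hypothetical Python sketch; the share values, priorities, and 100k window below are illustrative assumptions, not values from the prompt. The shares deliberately over-commit the window (they sum to more than 1.0) on the assumption that components rarely all hit their caps at once, which is what makes the priority ranking necessary:

```python
# Illustrative budget: (component, share of window, priority; lower = trimmed last).
# Shares intentionally sum to 1.20 -- the priority ranking resolves contention.
BUDGET = [
    ("system_prompt",        0.05, 1),
    ("output_reservation",   0.25, 2),
    ("user_context",         0.15, 3),
    ("retrieved_documents",  0.50, 4),
    ("conversation_history", 0.25, 5),
]

def allocate(actual_sizes: dict[str, int], window: int) -> dict[str, int]:
    """Cap each component at its budgeted share of the window, then trim the
    lowest-priority components first if the capped total still overflows."""
    caps = {name: int(share * window) for name, share, _ in BUDGET}
    alloc = {name: min(actual_sizes.get(name, 0), caps[name]) for name in caps}
    overflow = sum(alloc.values()) - window
    # Trim components with the highest priority number (least critical) first.
    for name, _, _ in sorted(BUDGET, key=lambda b: -b[2]):
        if overflow <= 0:
            break
        cut = min(alloc[name], overflow)
        alloc[name] -= cut
        overflow -= cut
    return alloc
```

For example, with a 100,000-token window and a conversation history that hits its 25% cap while retrieval and user context are also full, the history is trimmed first while the output reservation stays untouched.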
Variables to customize

- {{context_window_size}}: total token window of the target model
- {{system_prompt_description}}: what your system prompt contains
- {{user_context_type}}: the kind of per-user context you inject
- {{retrieval_description}}: how documents are retrieved and how large they run
- {{history_policy}}: how conversation history is kept or summarized
- {{expected_output_length}}: how long responses need to be
Why this prompt works
Asking for absolute and percentage allocations, a compression strategy per component, and a priority ranking turns a vague "make it fit" goal into an explicit budget. That prevents the common failure modes of running out of space for the output or silently dropping critical context when the total exceeds the window.
Related prompts
Forcing the agent to plan before acting prevents premature execution and wasted steps. Explicit dependency mapping enables parallel execution and catches logical gaps early.
Tool Selection Agent: The ReAct pattern (Reason + Act) creates an explicit reasoning trace that improves tool selection accuracy. The error-handling rule prevents infinite retry loops.
Prompt Compressor: Explicitly requiring all functional requirements to be preserved prevents the model from over-compressing and losing critical instructions.
Memory Management Agent: Explicit memory read/write instructions create agents that improve over time. Categorization keeps memories organized, and the deduplication rule prevents context bloat.