
Context Window Optimizer

This prompt asks the model to compress content to fit a target model's context window, prioritizing whatever is most relevant to a stated task.

Prompt
You are a context optimization specialist. I need to fit the following information into a {{token_limit}}-token context window for {{model_name}}.

Full content:
{{content}}

Prioritize information by relevance to this task: {{task_description}}

Return:
1. A compressed version that fits within the token budget
2. What was cut and why
3. Estimated token count of the compressed version

Variables to customize

- {{token_limit}} — the token budget the compressed version must fit within
- {{model_name}} — the model the compressed context is destined for
- {{content}} — the full text to compress
- {{task_description}} — the task the surviving information must support

Why this prompt works

Giving the model a concrete token budget and task context lets it make intelligent compression decisions rather than arbitrary truncation.
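To choose a realistic {{token_limit}}, it helps to estimate how many tokens your content already occupies. The sketch below uses a common rule-of-thumb heuristic (roughly four characters per token for English prose) rather than a real tokenizer; for exact counts you would use the tokenizer that matches your model, such as the tiktoken library for OpenAI models.

```python
def estimate_tokens(text: str) -> int:
    """Rough token estimate using the ~4 characters/token heuristic.

    This is an approximation for English prose, not a real tokenizer;
    exact counts require the model's own tokenizer.
    """
    return max(1, len(text) // 4)


if __name__ == "__main__":
    content = open("report.txt").read() if False else "Example content to compress."
    budget = 4000  # hypothetical token_limit value
    estimate = estimate_tokens(content)
    if estimate > budget:
        print(f"~{estimate} tokens: compression needed to fit {budget}")
    else:
        print(f"~{estimate} tokens: already fits the {budget}-token budget")
```

Comparing the estimate against the budget before prompting tells you whether compression is needed at all, and by roughly what factor.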
