Prompt Compressor
Rewrite the following prompt to use fewer tokens while preserving all instructions, constraints, and expected output format. Do not remove any functional requirements.

Original prompt:
{{original_prompt}}

Return:
1. Compressed prompt
2. Original vs compressed token count estimate
3. Percentage reduction achieved
4. Any nuances that might be lost in compression

Variables to customize
{{original_prompt}}: the prompt you want to compress.
Why this prompt works
Explicitly requiring all functional requirements to be preserved prevents the model from over-compressing and losing critical instructions.
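The template above can be wired up programmatically. The sketch below is a minimal, hypothetical example: it substitutes the {{original_prompt}} variable and compares rough token counts using a ~4-characters-per-token heuristic (an assumption, not a real tokenizer; a library such as tiktoken would give exact counts).

```python
# Minimal sketch of using the compressor template.
# Assumptions: the ~4 chars/token heuristic and the sample prompts are illustrative only.

TEMPLATE = """Rewrite the following prompt to use fewer tokens while preserving all \
instructions, constraints, and expected output format. Do not remove any \
functional requirements.

Original prompt:
{original_prompt}

Return:
1. Compressed prompt
2. Original vs compressed token count estimate
3. Percentage reduction achieved
4. Any nuances that might be lost in compression"""


def estimate_tokens(text: str) -> int:
    """Rough estimate: roughly 4 characters per token for English text."""
    return max(1, len(text) // 4)


def build_request(original_prompt: str) -> str:
    """Substitute the {{original_prompt}} variable into the template."""
    return TEMPLATE.format(original_prompt=original_prompt)


def reduction_pct(original: str, compressed: str) -> float:
    """Percentage reduction in estimated tokens after compression."""
    before = estimate_tokens(original)
    after = estimate_tokens(compressed)
    return 100.0 * (before - after) / before


# Illustrative before/after pair (the compressed version would come from the model).
original = ("Please could you kindly summarize the following article for me "
            "in a concise way, making sure to keep all the key points intact.")
compressed = "Summarize this article concisely; keep all key points."

request = build_request(original)
print(f"Estimated reduction: {reduction_pct(original, compressed):.0f}%")
```

Sending `request` to the model then returns the compressed prompt along with the count estimates and lost-nuance notes the template asks for.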
Related prompts
Forcing the agent to plan before acting prevents premature execution and wasted steps. Explicit dependency mapping enables parallel execution and catches logical gaps early.
Tool Selection Agent
The ReAct pattern (Reason + Act) creates an explicit reasoning trace that improves tool selection accuracy. The error-handling rule prevents infinite retry loops.

Memory Management Agent
Explicit memory read/write instructions create agents that improve over time. Categorization keeps memories organized, and the deduplication rule prevents context bloat.

Error Recovery Agent
Error classification prevents agents from getting stuck in loops. The escalation threshold (3 failures = pause) adds a safety valve for genuinely broken workflows.