Retrieval Optimization Prompt
I'm using RAG (Retrieval-Augmented Generation) for {{use_case}}. My current retrieval pipeline returns {{current_results_count}} chunks but the answers are often {{quality_issue}}.

Current setup:
- Embedding model: {{embedding_model}}
- Chunk size: {{chunk_size}} tokens
- Similarity threshold: {{threshold}}
- Reranking: {{reranking_method}}

Diagnose the likely causes and recommend:
1. Chunk size adjustments
2. Query reformulation strategies
3. Reranking improvements
4. Hybrid search approaches (semantic + keyword)
5. Context assembly order (how to arrange retrieved chunks in the prompt)
6. Evaluation metrics to track improvement
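One concrete way to implement the hybrid search the prompt asks about (point 4) is reciprocal rank fusion, which merges the rankings from a semantic retriever and a keyword retriever without needing their scores to be comparable. A minimal sketch, assuming each retriever returns a ranked list of chunk IDs (the example data is illustrative):

```python
from collections import defaultdict

def reciprocal_rank_fusion(result_lists, k=60):
    """Merge several ranked result lists into one ranking.

    Each list is ordered best-first; a document's fused score is the
    sum of 1/(k + rank) over the lists it appears in. The constant k
    dampens the influence of items buried deep in any one list.
    """
    scores = defaultdict(float)
    for results in result_lists:
        for rank, doc_id in enumerate(results, start=1):
            scores[doc_id] += 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

# Hypothetical outputs from an embedding search and a BM25 search:
semantic = ["chunk_a", "chunk_b", "chunk_c"]
keyword = ["chunk_c", "chunk_a", "chunk_d"]
print(reciprocal_rank_fusion([semantic, keyword]))
# → ['chunk_a', 'chunk_c', 'chunk_b', 'chunk_d']
```

Chunks ranked highly by both retrievers float to the top, which is exactly the behavior you want when semantic search alone misses exact-keyword matches.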
Variables to customize
- {{use_case}}: what you're building (e.g. a support chatbot or internal docs search)
- {{current_results_count}}: number of chunks retrieved per query
- {{quality_issue}}: the observed failure mode (e.g. off-topic, incomplete, hallucinated)
- {{embedding_model}}: the embedding model in use
- {{chunk_size}}: chunk size in tokens
- {{threshold}}: minimum similarity score for a chunk to be retrieved
- {{reranking_method}}: the reranker in use, or "none"
Why this prompt works
Starting from the observed quality issue and working backward through the pipeline produces targeted fixes instead of generic RAG advice.
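To make "evaluation metrics to track improvement" (point 6) concrete, retrieval quality is commonly measured with recall@k and mean reciprocal rank against a small set of labeled queries. A minimal sketch; the function names and data shapes are my own, not from the prompt:

```python
def recall_at_k(retrieved, relevant, k):
    """Fraction of the relevant chunks that appear in the top-k retrieved."""
    if not relevant:
        return 0.0
    hits = sum(1 for doc in retrieved[:k] if doc in relevant)
    return hits / len(relevant)

def reciprocal_rank(retrieved, relevant):
    """1/rank of the first relevant chunk, or 0.0 if none is retrieved."""
    for rank, doc in enumerate(retrieved, start=1):
        if doc in relevant:
            return 1.0 / rank
    return 0.0

# One labeled query: ground-truth relevant chunks vs. what the pipeline returned.
retrieved = ["a", "b", "c", "d"]
relevant = {"b", "d"}
print(recall_at_k(retrieved, relevant, 3))   # → 0.5 (only "b" is in the top 3)
print(reciprocal_rank(retrieved, relevant))  # → 0.5 (first hit at rank 2)
```

Averaging these over a held-out query set before and after each pipeline change gives you a regression test for retrieval, so you can tell whether a chunk-size or reranking tweak actually helped.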
Related prompts
- Forcing the agent to plan before acting prevents premature execution and wasted steps. Explicit dependency mapping enables parallel execution and catches logical gaps early.
- Tool Selection Agent: The ReAct pattern (Reason + Act) creates an explicit reasoning trace that improves tool selection accuracy. The error-handling rule prevents infinite retry loops.
- Prompt Compressor: Explicitly requiring all functional requirements to be preserved prevents the model from over-compressing and losing critical instructions.
- Memory Management Agent: Explicit memory read/write instructions create agents that improve over time. Categorization keeps memories organized, and the deduplication rule prevents context bloat.