
Retrieval Optimization Prompt


Prompt
I'm using RAG (Retrieval-Augmented Generation) for {{use_case}}. My current retrieval pipeline returns {{current_results_count}} chunks but the answers are often {{quality_issue}}.

Current setup:
- Embedding model: {{embedding_model}}
- Chunk size: {{chunk_size}} tokens
- Similarity threshold: {{threshold}}
- Reranking: {{reranking_method}}

Diagnose the likely causes and recommend:
1. Chunk size adjustments
2. Query reformulation strategies
3. Reranking improvements
4. Hybrid search approaches (semantic + keyword)
5. Context assembly order (how to arrange retrieved chunks in the prompt)
6. Evaluation metrics to track improvement

Variables to customize

{{use_case}}, {{current_results_count}}, {{quality_issue}}, {{embedding_model}}, {{chunk_size}}, {{threshold}}, {{reranking_method}}
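If you reuse this prompt programmatically, the variables can be filled with plain string formatting. The sketch below is a minimal example; the template text mirrors the prompt above, but every filled-in value (the use case, embedding model name, chunk size, and so on) is an illustrative placeholder, not a recommendation.

```python
# Fill the prompt template's variables with string.format().
# All values passed to format() below are hypothetical examples.
template = (
    "I'm using RAG (Retrieval-Augmented Generation) for {use_case}. "
    "My current retrieval pipeline returns {current_results_count} chunks "
    "but the answers are often {quality_issue}.\n\n"
    "Current setup:\n"
    "- Embedding model: {embedding_model}\n"
    "- Chunk size: {chunk_size} tokens\n"
    "- Similarity threshold: {threshold}\n"
    "- Reranking: {reranking_method}\n"
)

prompt = template.format(
    use_case="internal documentation Q&A",
    current_results_count=5,
    quality_issue="missing key details",
    embedding_model="text-embedding-3-small",
    chunk_size=512,
    threshold=0.75,
    reranking_method="none",
)
print(prompt)
```

Keeping the template as a single constant makes it easy to version the prompt alongside the rest of your pipeline configuration.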

Why this prompt works

Starting from the observed quality issue and working backward through the pipeline produces targeted fixes instead of generic RAG advice.
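Of the fixes the prompt asks for, hybrid search is the easiest to sketch concretely. One common way to combine a semantic (vector) ranking with a keyword (e.g. BM25) ranking is reciprocal rank fusion (RRF). The snippet below is a minimal, dependency-free sketch; the document IDs and the k=60 constant are illustrative assumptions, not values from this guide.

```python
# Reciprocal rank fusion: each list contributes 1/(k + rank) per document,
# so documents ranked well by *both* retrievers float to the top.
def rrf(rankings, k=60):
    """Fuse multiple ranked lists of doc IDs into a single ranked list."""
    scores = {}
    for ranking in rankings:
        for rank, doc_id in enumerate(ranking, start=1):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

semantic = ["doc_a", "doc_b", "doc_c"]  # vector-similarity order (hypothetical)
keyword = ["doc_c", "doc_a", "doc_d"]   # keyword/BM25 order (hypothetical)

fused = rrf([semantic, keyword])
print(fused)  # doc_a appears high in both lists, so it ranks first
```

Because RRF works only on ranks, not raw scores, it avoids having to calibrate cosine similarities against BM25 scores, which is why it is a popular default for hybrid retrieval.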
