Retrieval Optimization Prompt

General Productivity · context-engineering-guide · use_case · current_results_count · quality_issue


Prompt
I'm using RAG (Retrieval-Augmented Generation) for {{use_case}}. My current retrieval pipeline returns {{current_results_count}} chunks but the answers are often {{quality_issue}}.

Current setup:
- Embedding model: {{embedding_model}}
- Chunk size: {{chunk_size}} tokens
- Similarity threshold: {{threshold}}
- Reranking: {{reranking_method}}

Diagnose the likely causes and recommend:
1. Chunk size adjustments
2. Query reformulation strategies
3. Reranking improvements
4. Hybrid search approaches (semantic + keyword)
5. Context assembly order (how to arrange retrieved chunks in the prompt)
6. Evaluation metrics to track improvement
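A minimal sketch of filling the template's `{{variable}}` placeholders programmatically; the pipeline values below are purely illustrative, not recommendations:

```python
# Fill the prompt template's placeholders with values describing
# a hypothetical RAG setup (all values here are illustrative).
template = (
    "I'm using RAG (Retrieval-Augmented Generation) for {use_case}. "
    "My current retrieval pipeline returns {current_results_count} chunks "
    "but the answers are often {quality_issue}."
)

values = {
    "use_case": "internal documentation Q&A",
    "current_results_count": 5,
    "quality_issue": "missing key details from the source docs",
}

prompt = template.format(**values)
print(prompt)
```

The same dictionary can hold the remaining variables ({{embedding_model}}, {{chunk_size}}, and so on) once the full template is loaded.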

Variables to customize

- {{use_case}}
- {{current_results_count}}
- {{quality_issue}}
- {{embedding_model}}
- {{chunk_size}}
- {{threshold}}
- {{reranking_method}}

Why this prompt works

Starting from the observed quality issue and working backward through the pipeline produces targeted fixes instead of generic RAG advice.
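As an illustration of point 4 (hybrid search), one common way to merge a semantic ranking with a keyword ranking is reciprocal rank fusion (RRF). The sketch below assumes each retriever returns a ranked list of document IDs; the IDs and rankings are made up for demonstration:

```python
# Reciprocal rank fusion: each document's score is the sum of
# 1 / (k + rank) across every ranked list it appears in.
def rrf(rankings, k=60):
    """Merge ranked lists of doc IDs (best first) into one fused ranking."""
    scores = {}
    for ranking in rankings:
        for rank, doc_id in enumerate(ranking):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank + 1)
    # Highest combined score first.
    return sorted(scores, key=scores.get, reverse=True)

semantic = ["doc_a", "doc_b", "doc_c"]  # e.g. embedding similarity order
keyword = ["doc_c", "doc_a", "doc_d"]   # e.g. BM25 order
fused = rrf([semantic, keyword])
```

Documents ranked highly by both retrievers rise to the top, which is why fusion often helps when either signal alone misses relevant chunks.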
