Research Summary
Research and summarize {{topic}} for a {{audience}} audience.

Structure:

1. **TL;DR** (2-3 sentences maximum)
2. **Background**: Essential context needed to understand the topic (keep under 150 words)
3. **Current state**: What's happening right now — key players, recent developments, numbers
4. **Key debates**: Where experts disagree and why
5. **What to watch**: 3 specific things that will shape this topic in the next {{timeframe}}
6. **Sources to explore**: Suggest 5 specific search queries to find primary sources on this topic

Important: Clearly distinguish between established facts and your analysis/predictions. Use "[Analysis]" tags before opinionated statements.
Variables to customize

- `{{topic}}` — the subject to research
- `{{audience}}` — who the summary is written for (e.g. technical, executive, general)
- `{{timeframe}}` — the horizon for the "What to watch" section
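As a minimal sketch, the `{{variable}}` placeholders can be filled with plain string substitution before sending the prompt. The `fill` helper below is hypothetical, not part of any library, and the example values are illustrative:

```python
# Minimal sketch: fill {{name}} placeholders via str.replace.
# Template abbreviated here; the full prompt above works the same way.
TEMPLATE = (
    "Research and summarize {{topic}} for a {{audience}} audience. "
    "What to watch: 3 things that will shape this topic "
    "in the next {{timeframe}}."
)

def fill(template: str, **variables: str) -> str:
    """Replace each {{name}} placeholder with its value."""
    for name, value in variables.items():
        template = template.replace("{{" + name + "}}", value)
    return template

prompt = fill(
    TEMPLATE,
    topic="solid-state batteries",
    audience="general",
    timeframe="12 months",
)
print(prompt)
```

Anything left in double braces after substitution signals a variable you forgot to supply, which makes a cheap sanity check before the prompt is sent.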
Why this prompt works
The [Analysis] tagging technique works across all models to separate facts from interpretation. Suggesting search queries instead of URLs avoids hallucinated links — a common failure mode.
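One payoff of the tagging convention is that responses become machine-separable. A minimal sketch, assuming the model follows the instruction and prefixes opinionated lines with "[Analysis]" (the sample `response` text is invented for illustration):

```python
import re

# Split a model response into established facts vs. tagged analysis,
# assuming each opinionated statement starts with "[Analysis]".
response = """The company shipped 2M units in Q3.
[Analysis] Demand will likely soften next quarter.
Three competitors entered the market in 2024."""

facts, analysis = [], []
for line in response.splitlines():
    if line.strip().startswith("[Analysis]"):
        # Strip the tag and leading whitespace, keep the statement.
        analysis.append(re.sub(r"^\s*\[Analysis\]\s*", "", line))
    else:
        facts.append(line.strip())

print(facts)     # untagged lines: treated as established facts
print(analysis)  # tagged lines: the model's interpretation
```

Downstream, the two lists can be rendered differently or the analysis lines flagged for extra verification.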
Related prompts
- Get thorough code reviews with actionable feedback tailored to your language, framework, and standards.
- **Context-Aware Code Completion**: Providing the surrounding code and project context lets the model match existing patterns exactly. The constraint against modifying existing code prevents unwanted side effects.
- **Inline Code Suggestion**: Constraining suggestions to match existing style and scope produces insertions that feel native to the codebase. The "no explanation" rule mimics real inline completion behavior.
- **Code Explanation**: The audience level parameter adjusts complexity automatically. Requiring a usage example ensures the explanation is practical, not just theoretical.