AI Prompt Examples

The fastest way to learn prompt engineering is by studying examples that work. Rather than starting from theory, looking at real prompts that produce high-quality outputs teaches you the patterns, structures, and techniques that matter in practice. This collection covers prompts across every major category — software development, marketing, writing, data analysis, education, business operations, and creative work — so you can find examples relevant to your specific needs.

What separates a good prompt example from a mediocre one is transferability. The prompts here are not highly specific one-off instructions — they are templates with clear structure that you can adapt to your own tasks. Each example demonstrates a principle: some show how to use role assignment to shape expertise, others demonstrate chain-of-thought reasoning for complex analysis, and others illustrate how output format specifications eliminate guesswork. Pay attention to how the best examples provide context, set constraints, and define success criteria — these three elements are present in nearly every effective prompt regardless of domain.

Browse the categories below to find examples for your use case, or explore the full template library where the community shares and rates prompts. When you find examples that work, save them to your PromptingBox library so you can customize them for future tasks without searching from scratch. The best prompt engineers are not people who write perfect prompts from memory — they are people who maintain and iterate on a library of proven prompts.

Prompt Examples You Can Copy

Each prompt demonstrates a different technique. Copy, adapt, and save the ones that fit your workflow.

Before/After Comparison

Rewrite the following {{content_type}} to improve clarity, tone, and impact. Show the original and revised versions side by side in a markdown table with columns "Before" and "After". Below the table, list 3 specific changes you made and why each one improves the result.

Original:
{{original_text}}

Why it works: Requesting a side-by-side format forces the model to make deliberate, explainable edits rather than vague rewrites. The explanation of why each change was made builds your intuition for future writing.
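The {{placeholder}} syntax used throughout these templates is plain text, so you can fill it programmatically before sending the prompt. A minimal sketch, assuming a small hypothetical render helper (this is not a PromptingBox API, just an illustration):

```python
import re

def render(template: str, values: dict) -> str:
    """Substitute each {{name}} placeholder with its value.

    A missing variable raises KeyError, so an unfilled placeholder
    fails loudly instead of leaking "{{...}}" into the prompt.
    """
    return re.sub(r"\{\{(\w+)\}\}", lambda m: values[m.group(1)], template)

# Fill the Before/After template with illustrative values.
prompt = render(
    "Rewrite the following {{content_type}} to improve clarity.\n\n"
    "Original:\n{{original_text}}",
    {"content_type": "product description", "original_text": "Our app is good."},
)
```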

Role-Play Expert

You are a {{role}} with 15+ years of experience in {{domain}}. A junior colleague asks you: "{{question}}"

Respond as this expert would — use industry terminology where appropriate, reference real-world patterns you have seen, and give actionable advice rather than theory. If the question has nuance or common misconceptions, address those proactively.

Why it works: Assigning a specific expert role with years of experience primes the model to draw on deeper, more specialized knowledge and adopt a mentorship tone instead of generic answers.

Structured Output

Analyze {{topic}} and return your analysis in the following JSON structure:

{
  "summary": "2-3 sentence overview",
  "key_points": ["point 1", "point 2", "point 3"],
  "pros": ["pro 1", "pro 2"],
  "cons": ["con 1", "con 2"],
  "recommendation": "one clear recommendation",
  "confidence": "high | medium | low"
}

Do not include any text outside the JSON block.

Why it works: Specifying an exact output schema eliminates ambiguity and makes the response programmatically parseable. The confidence field adds useful self-assessment.
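Because the prompt forbids any text outside the JSON block, the response can be consumed directly by code. A minimal parsing sketch, with the key names matching the schema above (the fence-stripping step is a defensive assumption, since models sometimes wrap JSON in a markdown fence despite instructions):

```python
import json

REQUIRED_KEYS = {"summary", "key_points", "pros", "cons", "recommendation", "confidence"}

def parse_analysis(raw: str) -> dict:
    """Parse a model response expected to be a bare JSON object."""
    text = raw.strip()
    if text.startswith("```"):
        # Strip a markdown fence the model may have added anyway.
        text = text.split("\n", 1)[1].rsplit("```", 1)[0]
    data = json.loads(text)
    missing = REQUIRED_KEYS - data.keys()
    if missing:
        raise ValueError(f"response missing keys: {sorted(missing)}")
    return data
```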

Chain-of-Thought Reasoning

I need to decide: {{decision}}

Think through this step by step:
1. What are the key factors to consider?
2. What are the possible options?
3. For each option, what are the likely outcomes (best case, worst case, most likely)?
4. What assumptions am I making that could be wrong?
5. Based on this analysis, what do you recommend and why?

Show your reasoning for each step before giving the final recommendation.

Why it works: Explicit step-by-step instructions activate chain-of-thought reasoning, which dramatically improves accuracy on complex decisions. Asking for assumptions catches blind spots.

Few-Shot Learning

Convert the following customer feedback into a structured tag. Here are examples:

Feedback: "The app crashes every time I try to upload a photo"
Tag: bug:upload, severity:critical, component:media

Feedback: "Would love a dark mode option"
Tag: feature-request:ui, priority:low, component:settings

Feedback: "Checkout is confusing, I almost gave up"
Tag: ux-issue:checkout, severity:high, component:payments

Now classify this feedback:
Feedback: "{{customer_feedback}}"
Tag:

Why it works: Providing 3 labeled examples teaches the model the exact output format and classification logic without needing a long explanation. The model generalizes the pattern to new inputs.
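The labeled examples can also live in code, so adding or swapping examples means editing a list rather than prose. A sketch that assembles the few-shot prompt above from (feedback, tag) pairs:

```python
# The same three labeled examples as in the template above.
EXAMPLES = [
    ("The app crashes every time I try to upload a photo",
     "bug:upload, severity:critical, component:media"),
    ("Would love a dark mode option",
     "feature-request:ui, priority:low, component:settings"),
    ("Checkout is confusing, I almost gave up",
     "ux-issue:checkout, severity:high, component:payments"),
]

def build_prompt(feedback: str) -> str:
    """Assemble the few-shot classification prompt for one new input."""
    lines = ["Convert the following customer feedback into a structured tag. "
             "Here are examples:", ""]
    for fb, tag in EXAMPLES:
        lines += [f'Feedback: "{fb}"', f"Tag: {tag}", ""]
    # End with a bare "Tag:" so the model completes the pattern.
    lines += ["Now classify this feedback:", f'Feedback: "{feedback}"', "Tag:"]
    return "\n".join(lines)
```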

System Prompt Example

You are {{assistant_name}}, an AI assistant for {{company_name}}.

Core rules:
- Always respond in {{tone}} tone
- Never make up information — if unsure, say "I don't have that information, let me connect you with the team"
- Keep responses under 150 words unless the user asks for detail
- If the user is frustrated, acknowledge their feelings before solving the problem

Knowledge base:
- Product: {{product_description}}
- Pricing: {{pricing_summary}}
- Support hours: {{support_hours}}

Respond to the user's next message following these rules exactly.

Why it works: System prompts set persistent behavioral boundaries. This example combines role definition, guardrails, tone control, and domain knowledge — the four pillars of effective system prompts.
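In most chat APIs the system prompt travels as the first message of the conversation and persists across turns. The exact request shape varies by provider, so treat this as a sketch of the common messages pattern rather than any specific SDK (the assistant name, company, and user message are invented for illustration):

```python
# A filled-in system prompt, condensed from the template above.
system_prompt = (
    "You are Ada, an AI assistant for Acme Co.\n\n"
    "Core rules:\n"
    "- Always respond in a friendly tone\n"
    "- Never make up information; if unsure, say so\n"
    "- Keep responses under 150 words unless the user asks for detail\n"
)

# The system message comes first; user and assistant turns are appended after it.
messages = [
    {"role": "system", "content": system_prompt},
    {"role": "user", "content": "Hi, my photo upload keeps failing."},
]
```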