Few-Shot Prompting (Learn by Example)
Few-shot prompting means showing the AI examples of what you want instead of explaining it. Two examples are usually enough for the model to learn the pattern. This works for formatting, classification, tone matching, and anything else with a consistent pattern.
Convert the following items into {{output_format}}. Here are examples of the format I want:

Input: {{example_input_1}}
Output: {{example_output_1}}

Input: {{example_input_2}}
Output: {{example_output_2}}

Now convert these:

Input: {{actual_input_1}}
Output:

Input: {{actual_input_2}}
Output:
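The template above can be assembled programmatically when your examples live in code. Here is a minimal sketch; the function name and the JSON conversion task are illustrative, not part of the original prompt.

```python
def build_few_shot_prompt(task, examples, inputs):
    """Assemble a few-shot prompt: task instruction, worked examples,
    then each new input with a trailing 'Output:' cue for the model
    to complete."""
    lines = [task, "", "Here are examples of the format I want:"]
    for ex_in, ex_out in examples:
        lines += ["", f"Input: {ex_in}", f"Output: {ex_out}"]
    lines += ["", "Now convert these:"]
    for item in inputs:
        lines += ["", f"Input: {item}", "Output:"]
    return "\n".join(lines)

prompt = build_few_shot_prompt(
    "Convert the following items into JSON.",
    [
        ("red apple", '{"item": "apple", "color": "red"}'),
        ("green pear", '{"item": "pear", "color": "green"}'),
    ],
    ["yellow banana"],
)
print(prompt)
```

The resulting string ends with a bare `Output:`, which nudges the model to continue the pattern rather than comment on it.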
Variables to customize

- {{output_format}}: the target format (e.g. JSON, CSV, a bullet list)
- {{example_input_1}} / {{example_output_1}}: first worked example
- {{example_input_2}} / {{example_output_2}}: second worked example
- {{actual_input_1}}, {{actual_input_2}}: the items you actually want converted
Why this prompt works
The worked examples demonstrate the target transformation directly, so the model infers the pattern without a lengthy specification. Repeating the Input/Output structure and ending each new item with a bare "Output:" cues the model to continue the pattern rather than describe it.
Related prompts
Requesting confidence and key phrases forces the model to justify its classification rather than guessing. The structured output format works zero-shot because sentiment analysis is well-understood by LLMs.
Key Information Extraction: Listing the exact fields to extract removes guesswork. The 'Not specified' instruction prevents hallucination when information is missing -- a common failure mode without this guardrail.
Question Answering with Source: Grounding the answer in source material and instructing the model to refuse when information is missing dramatically reduces hallucination -- the biggest risk in zero-shot Q&A.
Math Word Problem Reasoning: Explicit numbered steps force the model to decompose the problem rather than guessing. The verification step catches arithmetic errors before the final answer.