The Evaluation Prompt
Using AI to evaluate content (including AI-generated content) is one of the most underused techniques. The structured rubric prevents vague feedback. You can use this to evaluate your own writing, AI output, or anything you need a second opinion on.
Evaluate the following {{content_type}} on these criteria (score 1-10 for each):

"""
{{content}}
"""

Criteria:
1. Clarity — Is the message clear and easy to understand?
2. Accuracy — Are the claims correct and well-supported?
3. Completeness — Does it cover all important aspects?
4. Tone — Is the tone appropriate for {{audience}}?
5. Actionability — Can the reader act on this?

For each criterion, give the score, one sentence explaining why, and one specific suggestion to improve it.

Overall assessment: [one paragraph summary]
Variables to customize
- {{content_type}} — the kind of content being evaluated (e.g. blog post, email, report)
- {{content}} — the text to evaluate
- {{audience}} — the intended readers, used to judge tone
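The placeholders above can be filled with a few lines of code before the prompt is sent to a model. A minimal sketch, assuming the `{{name}}` placeholder style used in this template; the function name and example values are illustrative:

```python
def fill_prompt(template: str, variables: dict[str, str]) -> str:
    """Replace each {{name}} placeholder with its value."""
    for name, value in variables.items():
        template = template.replace("{{" + name + "}}", value)
    return template


prompt = fill_prompt(
    "Evaluate the following {{content_type}} for {{audience}}.",
    {"content_type": "blog post", "audience": "developers"},
)
# -> "Evaluate the following blog post for developers."
```

A plain `str.replace` loop is enough here because the placeholders are literal; if your template ever needs escaping or missing-variable errors, Python's `string.Template` is a sturdier choice.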
Why this prompt works
Scoring each criterion separately, with a one-sentence justification and a concrete suggestion, forces specific feedback instead of a vague "looks good." The fixed rubric also makes evaluations comparable across drafts, whether you are reviewing your own writing or AI output.
Related prompts
- Requesting confidence and key phrases forces the model to justify its classification rather than guessing. The structured output format works zero-shot because sentiment analysis is well understood by LLMs.
- Key Information Extraction: Listing the exact fields to extract removes guesswork. The "Not specified" instruction prevents hallucination when information is missing, a common failure mode without this guardrail.
- Question Answering with Source: Grounding the answer in source material and instructing the model to refuse when information is missing dramatically reduces hallucination, the biggest risk in zero-shot Q&A.
- Math Word Problem Reasoning: Explicit numbered steps force the model to decompose the problem rather than guessing. The verification step catches arithmetic errors before the final answer.