Quiz & Assessment Generator
Create a {{assessment_type}} assessment for {{grade_level}} {{subject}} on {{topic}}.

Learning objectives being assessed: {{objectives}}
Cognitive levels to include: {{bloom_levels}}
Number of questions: {{num_questions}}

Format requirements:
- {{num_mc}} multiple choice (4 options each, one correct)
- {{num_short}} short answer
- {{num_extended}} extended response

For multiple choice, include common misconceptions as distractors. For extended response, include the scoring criteria. Provide an answer key with brief explanations for each answer. If applicable, note any accommodations for {{accommodation_needs}}.
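Before the prompt is sent to a model, each {{variable}} placeholder must be replaced with a concrete value. A minimal sketch of that substitution step in Python (the regex-based `fill` helper and the sample values are illustrative assumptions, not part of any official tooling):

```python
import re

def fill(template: str, values: dict) -> str:
    """Replace each {{name}} placeholder with its value.

    Placeholders with no matching key are left untouched, so an
    incomplete values dict is easy to spot in the rendered prompt.
    """
    return re.sub(
        r"\{\{(\w+)\}\}",
        lambda m: str(values.get(m.group(1), m.group(0))),
        template,
    )

# Hypothetical example values for a science quiz
values = {
    "assessment_type": "formative quiz",
    "grade_level": "8th grade",
    "subject": "science",
    "topic": "photosynthesis",
}

print(fill("Create a {{assessment_type}} assessment for "
           "{{grade_level}} {{subject}} on {{topic}}.", values))
```

Leaving unmatched placeholders in place, rather than raising an error or inserting an empty string, makes it obvious when a customization was forgotten.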
Variables to customize

- {{assessment_type}}: kind of assessment (e.g., formative quiz, unit test)
- {{grade_level}}: students' grade level
- {{subject}}: subject area
- {{topic}}: topic being assessed
- {{objectives}}: learning objectives the assessment measures
- {{bloom_levels}}: Bloom's taxonomy levels to include
- {{num_questions}}: total number of questions
- {{num_mc}} / {{num_short}} / {{num_extended}}: counts of multiple choice, short answer, and extended response questions
- {{accommodation_needs}}: accommodations to note, if any
Why this prompt works
Specifying Bloom's taxonomy levels and asking for misconception-based distractors produces assessments that actually measure understanding, not just recall.
Related prompts
Requesting confidence and key phrases forces the model to justify its classification rather than guessing. The structured output format works zero-shot because sentiment analysis is well-understood by LLMs.
Key Information Extraction
Listing the exact fields to extract removes guesswork. The 'Not specified' instruction prevents hallucination when information is missing -- a common failure mode without this guardrail.

Question Answering with Source
Grounding the answer in source material and instructing the model to refuse when information is missing dramatically reduces hallucination -- the biggest risk in zero-shot Q&A.

Math Word Problem Reasoning
Explicit numbered steps force the model to decompose the problem rather than guessing. The verification step catches arithmetic errors before the final answer.