Differentiated Instruction Adapter
I have a {{grade_level}} {{subject}} lesson on {{topic}} with this core activity:

{{core_activity}}

Create 3 differentiated versions:

1. **Advanced learners**: Extend the activity with higher-order thinking (analysis, evaluation, creation). Add complexity without just adding more work.
2. **On-level with support**: Keep the core activity but add scaffolding — sentence starters, graphic organizers, word banks, or step-by-step breakdowns.
3. **English Language Learners ({{ell_proficiency_level}})**: Modify for language accessibility — visual supports, simplified instructions, native language cognates where helpful, and reduced linguistic demand while maintaining content rigor.

For each version, explain what you changed and why.
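The `{{variable}}` placeholders can be filled programmatically before the prompt is sent to a model. Below is a minimal sketch in Python; the `fill` helper and the example values are illustrative, not part of the original prompt:

```python
# Minimal sketch: substitute {{variable}} placeholders in the prompt template.
# The variable names match the template above; the values are example data.

template = (
    "I have a {{grade_level}} {{subject}} lesson on {{topic}} "
    "with this core activity: {{core_activity}}"
)

variables = {
    "grade_level": "5th grade",
    "subject": "science",
    "topic": "the water cycle",
    "core_activity": "label a diagram of evaporation, condensation, and precipitation",
}

def fill(template: str, variables: dict) -> str:
    """Replace each {{name}} placeholder with its value."""
    prompt = template
    for name, value in variables.items():
        prompt = prompt.replace("{{" + name + "}}", value)
    return prompt

print(fill(template, variables))
```

Plain `str.replace` keeps the sketch dependency-free; a templating library such as Jinja2 would give the same result with error handling for missing variables.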
Variables to customize

- `{{grade_level}}`: the grade of the class (e.g., 5th grade)
- `{{subject}}`: the subject area (e.g., science)
- `{{topic}}`: the lesson topic
- `{{core_activity}}`: the original activity to be differentiated
- `{{ell_proficiency_level}}`: the English proficiency level of your ELL students (e.g., beginner, intermediate)
Why this prompt works
Asking for the rationale behind each modification helps teachers understand the differentiation strategy, not just get a worksheet. Specifying ELL proficiency level avoids one-size-fits-all language support.
Related prompts
Requesting confidence and key phrases forces the model to justify its classification rather than guessing. The structured output format works zero-shot because sentiment analysis is well-understood by LLMs.
Key Information Extraction
Listing the exact fields to extract removes guesswork. The "Not specified" instruction prevents hallucination when information is missing — a common failure mode without this guardrail.
Question Answering with Source
Grounding the answer in source material and instructing the model to refuse when information is missing dramatically reduces hallucination — the biggest risk in zero-shot Q&A.
Math Word Problem Reasoning
Explicit numbered steps force the model to decompose the problem rather than guessing. The verification step catches arithmetic errors before the final answer.