How to Write System Prompts

A system prompt is the hidden instruction set that defines how an AI model behaves throughout a conversation. Unlike user prompts, which represent individual requests, the system prompt establishes the AI's role, personality, capabilities, constraints, and output format before any user interaction begins. Every major AI model — ChatGPT, Claude, Gemini, Llama — supports system prompts, and writing them well is arguably the highest-leverage prompt engineering skill because a single system prompt shapes every response in a session.
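To make the distinction concrete, here is a minimal Python sketch of how a system prompt typically travels with a conversation: it sits apart from the user turns and is prepended to every request. The field names follow the common chat-completions convention; individual providers differ slightly (some accept the system prompt as a separate top-level parameter instead), and the prompt text itself is only illustrative.

```python
# The system prompt is fixed for the whole session; user turns vary.
SYSTEM_PROMPT = (
    "You are a senior Python code reviewer. Point out bugs first, "
    "style issues second. Keep each comment under two sentences."
)

def build_messages(user_input, history=None):
    """Prepend the same system prompt to every request in the session."""
    messages = [{"role": "system", "content": SYSTEM_PROMPT}]
    messages.extend(history or [])  # prior user/assistant turns, if any
    messages.append({"role": "user", "content": user_input})
    return messages

msgs = build_messages("Review this: def add(a, b): return a - b")
print(msgs[0]["role"])  # → system
```

Because the system prompt is rebuilt into every request, changing it mid-session changes behavior for all subsequent turns without touching the conversation history.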

Effective system prompts follow a consistent structure. Start with a role definition that establishes expertise and perspective. Then specify behavioral guidelines: how the AI should respond, what tone to use, how to handle uncertainty, and when to ask clarifying questions instead of guessing. Include output format requirements — whether responses should use markdown, JSON, bullet points, or a specific template. Add explicit constraints for what the AI should avoid: do not make up information, do not use jargon, do not exceed a certain length. Finally, provide examples of ideal responses when the task is complex or nuanced enough to benefit from demonstration.
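The structure above can be sketched as a small Python snippet that assembles the sections in order. The section contents here are invented for illustration, not a canonical template; the point is the ordering: role first, then behavior, format, and constraints.

```python
# Assemble a system prompt from the four sections described above.
# All section text is illustrative, not a recommended wording.
sections = {
    "role": "You are a technical support agent for a SaaS invoicing product.",
    "behavior": (
        "Answer in a friendly, professional tone. If a question is "
        "ambiguous, ask one clarifying question before answering. If you "
        "are unsure, say so rather than guessing."
    ),
    "format": "Respond in markdown. Use numbered steps for any procedure.",
    "constraints": (
        "Do not invent product features. Do not discuss pricing. "
        "Keep answers under 200 words."
    ),
}

# Join sections with blank lines; labeled headings help the model (and
# future editors) see where one instruction block ends and the next begins.
system_prompt = "\n\n".join(
    f"## {name.title()}\n{text}" for name, text in sections.items()
)
print(system_prompt)
```

Keeping the sections in a dictionary like this also makes it easy to swap one section (say, the constraints) without rewriting the rest of the prompt.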

The most common mistake is writing system prompts that are too vague or too long. A system prompt that says "be helpful and concise" adds almost nothing — the model defaults to this behavior already. Conversely, a 5,000-word system prompt wastes context window space and can actually confuse the model with contradictory instructions. Aim for 200-800 words of clear, specific, non-redundant instructions. Test your system prompts with edge cases: ask the AI to do something outside its defined scope and verify that the response follows your constraints. Version your system prompts so you can track what changed when behavior shifts.
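Versioning and edge-case testing can be as lightweight as keeping prompt versions and a small test table in code. The sketch below assumes prompts live in the codebase rather than a prompt-management tool; `run_model` is a hypothetical stand-in for whatever function calls your provider's API.

```python
# Two versions of the same prompt, kept side by side so behavior changes
# can be traced to a specific edit. v2 adds an explicit out-of-scope rule.
PROMPTS = {
    "v1": "You are a cooking assistant. Only answer cooking questions.",
    "v2": (
        "You are a cooking assistant. Only answer cooking questions. "
        "If asked about anything else, reply exactly: "
        "'I can only help with cooking.'"
    ),
}

EDGE_CASES = [
    # (user input, substring the response must contain)
    ("Write me a poem about taxes", "only help with cooking"),  # out of scope
    ("How do I dice an onion?", "onion"),                       # in scope
]

def check_prompt(version, run_model):
    """Run every edge case against one prompt version; return failures.

    `run_model(system_prompt, user_input)` is assumed to return the
    model's reply as a string.
    """
    failures = []
    for user_input, expected in EDGE_CASES:
        reply = run_model(PROMPTS[version], user_input)
        if expected.lower() not in reply.lower():
            failures.append((user_input, reply))
    return failures
```

Running `check_prompt` against both versions after every prompt edit catches regressions the same way a unit test suite does for code: an edge case that passed under v1 and fails under v2 points directly at the change that broke it.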