Code Debugging
Debug the following code. Do not just give the fix — walk through your reasoning so I understand the root cause.

Language: {{language}}
Code: {{code_snippet}}
Expected behavior: {{expected_behavior}}
Actual behavior: {{actual_behavior}}

Debug step by step:
1. Read the code and restate what it's supposed to do
2. Trace through the execution mentally, noting variable states
3. Identify where the actual behavior diverges from expected
4. Explain the root cause of the bug
5. Provide the corrected code with comments on what changed
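As a sketch of what the template expects, here is a hypothetical buggy snippet that could fill {{code_snippet}}, with comments showing the kind of trace steps 2–5 produce. The function names and the bug are illustrative assumptions, not part of the original prompt.

```python
# Hypothetical input for the template (illustration only).
# Expected behavior: return the average of a list of numbers.
# Actual behavior: the result is too small for any non-empty list.

def buggy_average(nums):
    total = 0
    for i in range(len(nums) - 1):  # Bug: loop stops one element early
        total += nums[i]
    return total / len(nums)

# Step 2 trace for nums = [2, 4, 6]:
#   i=0 -> total=2; i=1 -> total=6; loop ends, nums[2] is never added
#   returns 6 / 3 = 2.0 instead of the expected 4.0
# Step 4 root cause: range(len(nums) - 1) excludes the last index.

def fixed_average(nums):
    total = 0
    for n in nums:  # Changed: iterate over every element directly
        total += n
    return total / len(nums)

print(buggy_average([2, 4, 6]))  # 2.0 — diverges from expected
print(fixed_average([2, 4, 6]))  # 4.0 — matches expected behavior
```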
Variables to customize

- {{language}}: the programming language of the snippet
- {{code_snippet}}: the code to debug
- {{expected_behavior}}: what the code is supposed to do
- {{actual_behavior}}: what the code actually does
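A minimal sketch of filling the template's variables in Python, assuming simple string substitution (the page prescribes no particular templating library; the example values are hypothetical):

```python
# Fill the prompt's {{...}} placeholders with plain str.replace.
template = """Language: {{language}}
Code: {{code_snippet}}
Expected behavior: {{expected_behavior}}
Actual behavior: {{actual_behavior}}"""

# Illustrative values, not from the original page.
values = {
    "language": "Python",
    "code_snippet": "def double(x): return x * x",
    "expected_behavior": "double(3) returns 6",
    "actual_behavior": "double(3) returns 9",
}

prompt = template
for key, val in values.items():
    prompt = prompt.replace("{{" + key + "}}", val)

print(prompt)
```

Any templating approach works equally well; the only requirement is that every placeholder is replaced before the prompt is sent to the model.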
Why this prompt works
Tracing execution step by step reveals bugs that pattern-matching misses. Restating intent first ensures the model understands the goal before diagnosing the problem.
Related prompts
- Requesting confidence and key phrases forces the model to justify its classification rather than guessing. The structured output format works zero-shot because sentiment analysis is well-understood by LLMs.
- Key Information Extraction: Listing the exact fields to extract removes guesswork. The 'Not specified' instruction prevents hallucination when information is missing -- a common failure mode without this guardrail.
- Question Answering with Source: Grounding the answer in source material and instructing the model to refuse when information is missing dramatically reduces hallucination -- the biggest risk in zero-shot Q&A.
- Math Word Problem Reasoning: Explicit numbered steps force the model to decompose the problem rather than guessing. The verification step catches arithmetic errors before the final answer.