Chain of Thought Prompting
Chain of thought (CoT) prompting is a technique that instructs AI models to break down complex problems into intermediate reasoning steps before producing a final answer. Instead of asking "What is the profit margin?" and getting a potentially wrong number, you ask the model to show its work: identify revenue, subtract costs, calculate the ratio, then state the result. Research from Google Brain and others has shown that CoT prompting dramatically improves accuracy on math, logic, coding, and multi-step reasoning tasks, in some cases by 20-40 percentage points over standard prompting. The technique works because it forces the model to allocate more computation to the problem rather than jumping to a pattern-matched answer.
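The profit-margin example above can be made concrete. A minimal sketch of the intermediate steps CoT asks the model to verbalize, using hypothetical figures:

```python
# Hypothetical figures, chosen only to illustrate the intermediate steps.
revenue = 120_000
costs = 90_000

profit = revenue - costs    # step 1: identify revenue, subtract costs
margin = profit / revenue   # step 2: calculate the ratio
print(f"Profit margin: {margin:.0%}")  # step 3: state the result -> Profit margin: 25%
```

A model that skips these steps may pattern-match a wrong number; a model prompted to produce each step has a checkable chain.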
There are several variants of chain-of-thought prompting. Standard CoT provides one or two worked examples showing the reasoning process, then asks the model to follow the same pattern. Zero-shot CoT achieves similar results simply by adding "Let's think step by step" to the prompt — no examples needed. This phrase has become one of the most impactful prompt engineering discoveries because it activates the model's reasoning capabilities with minimal effort. Tree of Thoughts (ToT) extends CoT further by having the model explore multiple reasoning paths in parallel and evaluate which one leads to the best solution. Self-consistency sampling generates multiple CoT paths and takes the majority answer, reducing the chance of a single flawed reasoning chain producing a wrong result.
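Self-consistency sampling amounts to a majority vote over sampled completions. A sketch, assuming a `sample_cot` callable (hypothetical, standing in for any sampled model call at nonzero temperature) that returns one final-answer string per call:

```python
from collections import Counter

def self_consistent_answer(sample_cot, prompt, n=5):
    """Sample n chain-of-thought completions and return the majority final answer."""
    answers = [sample_cot(prompt) for _ in range(n)]
    return Counter(answers).most_common(1)[0][0]

# Stub sampler: one flawed reasoning chain out of five is outvoted by the other four.
fake_samples = iter(["42", "42", "17", "42", "42"])
print(self_consistent_answer(lambda p: next(fake_samples), "What is 6 * 7?"))  # 42
```

In practice you would parse the final answer out of each completion before voting, since full reasoning chains rarely match verbatim.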
Use CoT prompting whenever the task involves reasoning, calculation, multi-step logic, or analysis. It is less necessary for simple retrieval, creative writing, or formatting tasks where the model does not need to "think." Be aware that CoT increases output token usage since the model produces its reasoning alongside the answer — factor this into cost estimates. For production applications, you can instruct the model to put its reasoning in a specific tag and extract only the final answer. The key insight is that better prompts do not just tell the AI what to output — they guide how it should think.
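Separating the reasoning from the final answer, as suggested above, can be done with a tag convention. The tag names here are illustrative, not a standard:

```python
import re

COT_SUFFIX = (
    "Think step by step inside <reasoning> tags, "
    "then give only the final answer inside <answer> tags."
)

def extract_answer(response: str) -> str:
    """Return the text inside <answer>...</answer>, discarding the reasoning."""
    match = re.search(r"<answer>(.*?)</answer>", response, re.DOTALL)
    if match is None:
        raise ValueError("model output contained no <answer> tag")
    return match.group(1).strip()

reply = "<reasoning>120k - 90k = 30k; 30k / 120k = 0.25</reasoning><answer>25%</answer>"
print(extract_answer(reply))  # 25%
```

You still pay for the reasoning tokens, but downstream code only ever sees the extracted answer.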
Chain-of-Thought Prompt Examples
Copy these templates to get step-by-step reasoning from any AI model. Each demonstrates a different CoT pattern.
Math Word Problem Reasoning
Solve the following math problem. Show your reasoning step by step before giving the final answer.
Problem: {{math_problem}}
Work through this step by step:
1. Identify the known quantities
2. Determine what we need to find
3. Set up the equations or calculations
4. Solve each step, showing your work
5. Verify the answer makes sense
Step-by-step solution:
Why it works: Explicit numbered steps force the model to decompose the problem rather than guessing. The verification step catches arithmetic errors before the final answer.
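Templates like the one above use {{placeholder}} slots. A minimal way to fill them before sending the prompt (the `fill` helper is just an illustration):

```python
TEMPLATE = """Solve the following math problem. Show your reasoning step by step before giving the final answer.

Problem: {{math_problem}}

Step-by-step solution:"""

def fill(template: str, **values: str) -> str:
    """Replace each {{name}} placeholder with its value."""
    for name, value in values.items():
        template = template.replace("{{" + name + "}}", value)
    return template

prompt = fill(TEMPLATE, math_problem="A shirt costs $20 after a 20% discount. What was the original price?")
```

The same helper works for every template on this page, since they all share the {{name}} convention.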
Logic Puzzle Solver
Solve this logic puzzle by reasoning through it carefully. Do not jump to conclusions.
Puzzle: {{puzzle_description}}
Think through this systematically:
- First, list all the constraints given in the puzzle
- Then, identify what can be determined directly from the constraints
- Next, use process of elimination to narrow down the remaining possibilities
- Finally, check that your solution satisfies every constraint
Reasoning:
Why it works: Logic puzzles require constraint tracking. Asking the model to list constraints first, then apply elimination, mirrors how humans solve these problems reliably.
Code Debugging
Debug the following code. Do not just give the fix — walk through your reasoning so I understand the root cause.
Language: {{language}}
Code: {{code_snippet}}
Expected behavior: {{expected_behavior}}
Actual behavior: {{actual_behavior}}
Debug step by step:
1. Read the code and restate what it's supposed to do
2. Trace through the execution mentally, noting variable states
3. Identify where the actual behavior diverges from expected
4. Explain the root cause of the bug
5. Provide the corrected code with comments on what changed
Why it works: Tracing execution step by step reveals bugs that pattern-matching misses. Restating intent first ensures the model understands the goal before diagnosing the problem.
Decision Making Framework
Help me make a decision by thinking through it systematically.
Decision: {{decision_description}}
Options: {{options}}
Key priorities: {{priorities}}
Analyze this step by step:
1. Restate the decision and what a good outcome looks like
2. For each option, list the pros and cons relative to my priorities
3. Identify risks and unknowns for each option
4. Consider second-order effects (what happens 6-12 months after choosing each option)
5. Weigh the trade-offs and recommend the best option with your reasoning
Analysis:
Why it works: Multi-criteria decisions benefit from structured decomposition. Forcing second-order thinking and risk assessment prevents the model from anchoring on the most obvious choice.
Root Cause Analysis
Perform a root cause analysis on the following issue. Do not stop at the surface-level cause.
Issue: {{issue_description}}
Context: {{context}}
Apply the 5 Whys method:
1. Why did this happen? (First-level cause)
2. Why did that cause occur? (Second-level cause)
3. Why did that happen? (Third-level cause)
4. Why? (Fourth-level cause)
5. Why? (Root cause)
Then:
- Summarize the root cause in one sentence
- Propose 2-3 corrective actions that address the root cause, not just the symptoms
- Identify what monitoring or process change would prevent recurrence
Analysis:
Why it works: The 5 Whys framework forces the model to dig past symptoms to underlying causes. Without this structure, AI tends to stop at the first plausible explanation.
Ethical Reasoning
Analyze the following scenario from an ethical perspective. Consider multiple viewpoints before reaching a conclusion.
Scenario: {{scenario}}
Reason through this carefully:
1. Identify the stakeholders and their interests
2. Analyze the situation through a utilitarian lens (greatest good for the greatest number)
3. Analyze through a deontological lens (duties, rights, and rules)
4. Analyze through a virtue ethics lens (what would a person of good character do)
5. Identify where these frameworks agree and where they conflict
6. Weigh the perspectives and provide a nuanced recommendation
Ethical analysis:
Why it works: Ethical questions have no single correct framework. Forcing the model to apply multiple lenses produces balanced analysis rather than defaulting to one perspective.
Recommended tools & resources
Explore CoT and other proven prompting structures.
Prompt Tips: Practical techniques to get better reasoning from AI models.
Prompt Builder: Build chain-of-thought prompts interactively.
Few-Shot Prompting: Combine examples with step-by-step reasoning for best results.
Guides: In-depth tutorials on advanced prompting techniques.
Prompt Score: Evaluate whether your prompts use reasoning effectively.