Chain-of-Thought
Intent
Prompt the LLM to show its intermediate reasoning steps before arriving at a final answer, dramatically improving accuracy on complex tasks.
Problem
LLMs asked to answer complex questions directly often skip critical reasoning steps and produce wrong answers. Math problems, multi-step logic, and nuanced analysis all suffer when the model jumps straight to a conclusion. The model has the capability to reason correctly, but it needs to be prompted to actually show its work.
Solution
Include instructions like 'Think step by step' or 'Show your reasoning' in the prompt. This forces the model to decompose its reasoning into explicit intermediate steps, making each step more likely to be correct. The generated reasoning chain serves as a scaffold that guides the model to the right conclusion. Zero-shot CoT simply adds 'Let's think step by step.' Few-shot CoT provides examples of reasoning chains for the model to follow.
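The two variants can be sketched as simple prompt builders. This is a minimal illustration, not any particular library's API; the function names and the `examples` format are assumptions for the sketch.

```python
def zero_shot_cot(question: str) -> str:
    """Zero-shot CoT: append the trigger phrase after the question."""
    return f"Q: {question}\nA: Let's think step by step."

def few_shot_cot(examples: list[tuple[str, str]], question: str) -> str:
    """Few-shot CoT: prepend worked (question, reasoning-chain) pairs,
    then leave the final answer slot open for the model to complete."""
    parts = [f"Q: {q}\nA: {a}" for q, a in examples]
    parts.append(f"Q: {question}\nA:")
    return "\n\n".join(parts)
```

Either string would then be sent to the model as-is; the few-shot version nudges the model to imitate the reasoning format of the worked examples.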
Diagram
Standard: Question → [LLM] → Answer (often wrong)
CoT: Question → [LLM: Step 1 → Step 2 → Step 3 → ...] → Answer (more accurate)
When to Use
- Mathematical reasoning and word problems
- Multi-step logic or analysis tasks
- Tasks where accuracy is more important than speed
- Any complex task where you'd expect a human to think through steps
When NOT to Use
- Simple factual lookups or retrieval tasks
- When latency is critical and the task is straightforward
- Creative tasks where step-by-step reasoning constrains output
Pros & Cons
Pros
- Dramatically improves accuracy on reasoning tasks (often 20-40% gains)
- Makes model reasoning transparent and debuggable
- Works with any LLM — no fine-tuning required
- Zero-shot version requires no examples
Cons
- Increases token usage and latency
- Reasoning chains can be confidently wrong
- May not help on tasks that don't require multi-step reasoning
- Verbose output needs parsing to extract the final answer
Implementation Steps
1. Identify tasks where the model makes errors due to insufficient reasoning
2. Add 'Think step by step' or 'Show your reasoning before answering' to prompts
3. For few-shot CoT, provide 2-3 examples with detailed reasoning chains
4. Parse the output to extract the final answer from the reasoning chain
5. Evaluate whether CoT actually improves accuracy for your specific task
Real-World Example
Math Word Problem
Without CoT: 'Roger has 5 tennis balls. He buys 2 cans of 3. How many does he have?' → the model may jump straight to a wrong answer. With CoT: 'Roger starts with 5 balls. He buys 2 cans × 3 balls = 6 new balls. Total: 5 + 6 = 11.' → '11' (correct). The reasoning makes each step explicit and verifiable.
Solve this problem step by step. Show your reasoning at each stage
before giving the final answer.
Question: A store sells notebooks for $4 each and pens for $1.50 each.
Maria bought 3 notebooks and some pens, spending $19.50 total.
How many pens did she buy?
Let's think step by step:

Solve math problems by showing step-by-step reasoning.
Q: Tom has 3 boxes with 5 apples each. He gives away 7 apples. How many are left?
A: Step 1: Total apples = 3 boxes x 5 apples = 15 apples.
Step 2: After giving away 7: 15 - 7 = 8 apples.
Answer: 8
Q: A train travels at 60 mph for 2.5 hours, then 80 mph for 1.5 hours.
What is the total distance?
A: Step 1: Distance at 60 mph = 60 x 2.5 = 150 miles.
Step 2: Distance at 80 mph = 80 x 1.5 = 120 miles.
Step 3: Total = 150 + 120 = 270 miles.
Answer: 270 miles
Q: A bakery sells cupcakes for $3 and cookies for $2. Sarah bought 5
cupcakes and some cookies, spending $27 total. How many cookies?
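When building few-shot exemplars like those above, it helps to verify the ground-truth answers mechanically so a wrong exemplar doesn't teach the model bad reasoning. A quick check of the final question's arithmetic:

```python
# Sanity-check the bakery question: 5 cupcakes at $3 plus
# some cookies at $2 should total $27.
cupcake_price, cookie_price = 3, 2
cupcakes_bought, total_spent = 5, 27

spent_on_cupcakes = cupcakes_bought * cupcake_price  # 5 * 3 = 15
remaining = total_spent - spent_on_cupcakes          # 27 - 15 = 12
cookies_bought = remaining // cookie_price           # 12 // 2 = 6
print(cookies_bought)  # → 6
```

The model's extracted answer can then be compared against this computed value when evaluating CoT accuracy on the task.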