Prompt Engineering: Chain-of-Thought vs Few-Shot
Two techniques. One goal: make the model think before it speaks.
Most prompts fail not because the model is dumb, but because the instructions are ambiguous. Chain-of-Thought (CoT) and Few-Shot prompting are two of the most reliable ways to remove that ambiguity and ground a model's reasoning.
Chain-of-Thought (CoT)
CoT instructs the model to show its work: rather than just asking "What is 148 × 237?" and taking whatever comes back, you append "Let's think step by step." so the model works through the intermediate steps before committing to an answer.
When to use CoT
- Math and logic: Any problem with intermediate steps.
- Multi-hop reasoning: Questions that require connecting two or more facts.
- Debugging: Asking the model to trace through code execution (sketched just below).
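To make the debugging case concrete, here is a minimal sketch of such a prompt in Python. The buggy average function is a made-up example; nothing here calls a model, it only assembles the prompt text:

```python
# A deliberately buggy function to hand the model (illustrative only).
BUGGY_SNIPPET = '''
def average(xs):
    total = 0
    for x in xs:
        total += x
    return total / len(xs)   # fails when xs is empty
'''

# Ask the model to trace execution line by line before naming the bug.
debug_prompt = (
    "Trace the execution of the following function for the input [] "
    "step by step, stating the value of each variable after every line, "
    "then identify the bug and suggest a fix.\n\n"
    f"{BUGGY_SNIPPET}"
)

print(debug_prompt)
```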
How to write a CoT prompt
Solve the following problem. Show your reasoning step by step before giving the final answer.
Problem: {{problem}}
Step 1:
The key phrase is "step by step." It forces the model to allocate more tokens to intermediate reasoning, which can dramatically improve accuracy on complex tasks.
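As a rough illustration of wiring the template into code, here is a minimal Python sketch. The OpenAI Python SDK and the gpt-4o-mini model name are assumptions, not part of the article; substitute whichever client and model you actually use (the article's {{problem}} slot becomes the {problem} placeholder below):

```python
from openai import OpenAI  # assumes the OpenAI Python SDK, openai>=1.0

COT_TEMPLATE = (
    "Solve the following problem. Show your reasoning step by step "
    "before giving the final answer.\n\n"
    "Problem: {problem}\n\n"
    "Step 1:"
)

def solve_with_cot(problem: str, model: str = "gpt-4o-mini") -> str:
    """Fill the CoT template and return the model's full reasoning trace."""
    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": COT_TEMPLATE.format(problem=problem)}],
        temperature=0,  # keep reasoning output as stable as possible
    )
    return response.choices[0].message.content

print(solve_with_cot("What is 148 × 237?"))
```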
Few-Shot Prompting
Few-Shot prompting gives the model examples of the input/output mapping you want. Instead of describing the task, you demonstrate it and let the model continue the pattern.
When to use Few-Shot
- Format adherence: JSON, XML, or custom schema output.
- Tone calibration: Mimicking a specific writing style.
- Classification: Labeling data with consistent criteria.
How to write a Few-Shot prompt
Classify the sentiment of the following reviews.
Examples:
Review: "The battery died after two hours." → Negative
Review: "Best purchase I've made this year." → Positive
Now classify:
Review: "It works, but the screen is dim." →
The Hybrid Approach
The strongest prompts often combine both. Use Few-Shot to lock the output format, then add "Let's think step by step" to improve reasoning quality; better still, let each example show its reasoning before its answer, so the model imitates the thinking as well as the format.
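One way to combine them, sketched below: keep few-shot examples to lock the format, have each example spell out its reasoning before the label, and end with the step-by-step trigger. The reasoning strings in the examples are illustrative assumptions, not real model output:

```python
# Few-shot CoT: each example demonstrates reasoning *and* the output format.
HYBRID_EXAMPLES = [
    (
        "The battery died after two hours.",
        "The review describes a product failure, so the tone is unfavourable.",
        "Negative",
    ),
    (
        "Best purchase I've made this year.",
        "The reviewer expresses strong satisfaction.",
        "Positive",
    ),
]

def build_hybrid_prompt(review: str) -> str:
    """Few-shot examples lock the format; the trailing phrase triggers CoT."""
    lines = [
        "Classify the sentiment of each review. Explain your reasoning, "
        "then give the label on its own line.",
        "",
        "Examples:",
    ]
    for text, reasoning, label in HYBRID_EXAMPLES:
        lines += [f'Review: "{text}"', f"Reasoning: {reasoning}", f"Label: {label}", ""]
    lines += [f'Review: "{review}"', "Let's think step by step."]
    return "\n".join(lines)

print(build_hybrid_prompt("It works, but the screen is dim."))
```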
Key Takeaway
CoT improves how the model thinks. Few-Shot improves what the model produces. Use CoT for reasoning tasks, Few-Shot for format tasks, and both when you need correct reasoning in a strict format.