
Chain-of-Thought Prompting

Definition

Chain-of-thought (CoT) prompting encourages LLMs to show their reasoning step-by-step before giving a final answer, dramatically improving performance on complex reasoning and math problems.

Why It Matters

For complex problems, asking for the answer directly often leads to errors. Chain-of-thought prompting breaks problems into steps, allowing the model to reason through each part. On math, logic, and multi-step reasoning benchmarks, reported accuracy gains from this technique can reach 20-40 percentage points or more, depending on the model and task.

How It Works

Two main approaches:

  • Zero-shot CoT: Simply add "Let's think step by step" to your prompt
  • Few-shot CoT: Provide examples that show reasoning steps before conclusions
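Both styles reduce to simple prompt templates. The sketch below is illustrative: the question text, the worked example, and the exact trigger phrasing are conventions from the literature, not a fixed API.

```python
def zero_shot_cot(question: str) -> str:
    # Zero-shot CoT: append the canonical trigger phrase to elicit
    # step-by-step reasoning before the final answer.
    return f"Q: {question}\nA: Let's think step by step."

# A worked example whose answer shows the reasoning steps *before*
# the conclusion (hypothetical content, for illustration only).
FEW_SHOT_EXAMPLE = (
    "Q: A shop has 3 boxes with 4 apples each. How many apples in total?\n"
    "A: Each box holds 4 apples and there are 3 boxes, "
    "so 3 * 4 = 12. The answer is 12.\n\n"
)

def few_shot_cot(question: str) -> str:
    # Few-shot CoT: prepend the worked example, then pose the new
    # question and leave the answer open for the model to complete.
    return FEW_SHOT_EXAMPLE + f"Q: {question}\nA:"
```

In practice you would send either string to your model of choice; the few-shot variant typically uses two to eight worked examples rather than one.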

The model then generates explicit reasoning traces, self-corrects along the way, and arrives at more accurate answers.

When to Use

Use chain-of-thought for:

  • Math and calculation problems
  • Multi-step reasoning tasks
  • Problems requiring logical deduction
  • Tasks where you need to verify the reasoning
  • Complex decision-making

For simple tasks (classification, translation), CoT adds overhead without benefit.

Source

Wei et al. (2022), "Chain-of-Thought Prompting Elicits Reasoning in Large Language Models": chain-of-thought prompting enables complex reasoning capabilities in large language models by generating intermediate reasoning steps.

https://arxiv.org/abs/2201.11903