Chain of Thought (CoT) is a prompting and reasoning technique in large language models (LLMs) that encourages the model to generate intermediate reasoning steps before arriving at a final answer. Instead of directly answering a question, the model is guided to "think aloud" — mimicking how humans solve complex problems by breaking them down into smaller, logical steps.
For example, instead of prompting:
What is 27 × 43?
You might prompt:
Let’s think step by step: First, break 27 into 20 and 7…
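The decomposition that the prompt above hints at can be checked directly; this is a minimal sketch of the arithmetic the model is being nudged toward:

```python
# Split 27 into 20 + 7, multiply each part by 43, then sum the
# partial products -- the same steps the CoT prompt elicits.
partials = [20 * 43, 7 * 43]  # [860, 301]
answer = sum(partials)        # 860 + 301 = 1161
assert answer == 27 * 43
print(answer)  # → 1161
```

Each intermediate value (860, 301) corresponds to one reasoning step the model would write out before stating the final answer.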
This structured, step-by-step output improves model accuracy on tasks requiring multi-step reasoning, such as:

- Arithmetic word problems
- Commonsense reasoning
- Symbolic manipulation (e.g., concatenating letters or reversing a list)
CoT prompting can be done in two ways:

- **Zero-shot CoT:** append a trigger phrase such as "Let's think step by step" to the question, without providing any examples.
- **Few-shot CoT:** include a few worked exemplars in the prompt, each showing the reasoning steps before the final answer, so the model imitates that format.
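The two prompting styles can be sketched as simple prompt builders. The question, exemplar text, and helper names below are illustrative, not part of any particular library:

```python
def build_zero_shot_cot(question: str) -> str:
    # Zero-shot CoT: append a trigger phrase that elicits
    # intermediate reasoning steps, with no worked examples.
    return f"{question}\nLet's think step by step."

def build_few_shot_cot(question: str) -> str:
    # Few-shot CoT: prepend a worked exemplar whose answer
    # spells out its reasoning before the final result.
    exemplar = (
        "Q: What is 27 × 43?\n"
        "A: Break 27 into 20 and 7. 20 × 43 = 860 and 7 × 43 = 301. "
        "860 + 301 = 1161. The answer is 1161.\n"
    )
    return f"{exemplar}Q: {question}\nA:"

print(build_zero_shot_cot("What is 14 × 6?"))
print(build_few_shot_cot("What is 14 × 6?"))
```

The only difference between the two is whether the model sees a demonstration of the reasoning format before answering; both leave the actual reasoning to the model.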
By guiding LLMs to articulate their reasoning, CoT helps reduce errors caused by rushed or shallow predictions. It also enhances transparency — allowing humans to inspect the logic behind a model’s answer.