Chain of Thought (CoT) is a prompting technique in which the model is asked to produce intermediate reasoning steps before its final answer. "Let's think step by step" was the original zero-shot trigger phrase for the behaviour; modern frontier models often produce CoT reasoning by default.
CoT improves accuracy on math, multi-step logic, and any problem that benefits from intermediate work. It also makes errors more debuggable: you can see where the reasoning went wrong. The OpenAI o1 model family takes this further with extended internal reasoning before responding.
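As a minimal sketch of the idea, the snippet below wraps a question with a zero-shot CoT trigger and then separates a final answer from the reasoning text. Both helpers (`build_cot_prompt`, `extract_final_answer`) are hypothetical names, not part of any library, and the model completion is simulated rather than produced by an API call:

```python
def build_cot_prompt(question: str) -> str:
    """Wrap a question with the zero-shot CoT trigger phrase."""
    return f"Q: {question}\nA: Let's think step by step."

def extract_final_answer(completion: str) -> str:
    """Return the text after a 'Final answer:' marker, if present."""
    marker = "Final answer:"
    idx = completion.rfind(marker)
    return completion[idx + len(marker):].strip() if idx != -1 else completion.strip()

prompt = build_cot_prompt(
    "A bat and a ball cost $1.10 in total; the bat costs $1.00 more "
    "than the ball. How much does the ball cost?"
)
print(prompt)

# A simulated completion, showing how visible reasoning makes errors
# debuggable: each step can be checked before trusting the answer.
completion = (
    "Let the ball cost x. Then the bat costs x + 1.00, so "
    "2x + 1.00 = 1.10, giving x = 0.05. Final answer: $0.05"
)
print(extract_final_answer(completion))  # $0.05
```

Separating the reasoning from a clearly marked final answer also makes automated evaluation easier, since only the extracted answer needs to be compared against ground truth.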