Chain of Thought
A technique where AI models explain their step-by-step reasoning process, improving both output quality and explainability.
Full Definition
Chain of Thought (CoT) is a prompting and reasoning technique in which AI language models articulate their intermediate reasoning steps before arriving at a final answer. For example, rather than directly answering a math problem, the model shows each computation step. CoT improves both the accuracy of AI outputs (by encouraging systematic reasoning) and the explainability of AI decisions (by making the reasoning process visible and auditable). In governance contexts, chain-of-thought traces provide critical evidence for audit trails: they enable forensic reconstruction of why an AI agent made a particular decision. However, a CoT trace is not guaranteed to reflect the model's actual internal computation, which is why governance platforms combine CoT analysis with behavioral monitoring and output validation.
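As a minimal sketch of how this works in practice, the snippet below builds a CoT-style prompt and extracts the final answer from a step-by-step trace so the intermediate steps remain available for auditing. The `build_cot_prompt` and `extract_final_answer` helpers, the "Answer:" line convention, and the sample trace are all illustrative assumptions, not any particular platform's API.

```python
def build_cot_prompt(question: str) -> str:
    # Append an instruction that elicits step-by-step reasoning
    # and a clearly marked final answer (an assumed convention).
    return (
        f"Q: {question}\n"
        "A: Let's think step by step, then give the final answer "
        "on a line starting with 'Answer:'."
    )

def extract_final_answer(trace: str) -> str:
    # Scan the trace from the end for the final 'Answer:' line;
    # everything before it is the auditable reasoning.
    for line in reversed(trace.splitlines()):
        if line.startswith("Answer:"):
            return line[len("Answer:"):].strip()
    raise ValueError("no final answer found in trace")

# A hypothetical model response in the expected format:
trace = (
    "There are 3 boxes with 4 apples each.\n"
    "3 * 4 = 12.\n"
    "Answer: 12"
)
print(extract_final_answer(trace))  # -> 12
```

In a governance pipeline, the full trace (not just the extracted answer) would be written to the audit trail, since the intermediate steps are what make the decision reconstructible.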
Related Terms
Explainability
The ability to understand and communicate why an AI system made a specific decision or produced a particular output.
Audit Trail
A chronological, immutable record of every decision, action, and data access made by an AI agent.
AI Agent
An autonomous software system that uses AI models to perceive its environment, make decisions, and take actions to achieve goals.