AI Hallucination
When an AI model generates information that appears plausible but is factually incorrect, fabricated, or unsupported by its input data.
Full Definition
An AI hallucination occurs when a language model produces output that sounds confident and coherent but contains fabricated facts, non-existent citations, incorrect statistics, or logical inconsistencies. Hallucinations arise because large language models (LLMs) generate text probabilistically from patterns in their training data rather than by querying a verified knowledge base. In autonomous agent systems, hallucinations are especially dangerous because agents may act on fabricated information: executing trades based on non-existent market data, citing fictional legal precedents, or giving incorrect medical guidance. Common detection techniques include semantic consistency checking, token probability analysis, cross-reference validation, and multi-agent debate.
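One of the detection techniques named above, semantic consistency checking, can be sketched in a few lines: sample the model several times on the same prompt and measure how much the answers agree. The sketch below uses a simple token-level Jaccard similarity as the agreement measure; in practice an embedding-based or NLI-based similarity is typically used, and the hard-coded answer lists stand in for repeated LLM calls at non-zero temperature (both are illustrative assumptions, not a specific library's API).

```python
def jaccard_similarity(a: str, b: str) -> float:
    """Token-level Jaccard similarity between two answers (0.0 to 1.0)."""
    ta, tb = set(a.lower().split()), set(b.lower().split())
    if not ta and not tb:
        return 1.0
    return len(ta & tb) / len(ta | tb)


def consistency_score(answers: list[str]) -> float:
    """Mean pairwise similarity across sampled answers.

    A low score means the model gives a different story on each sample,
    a common signal that the content is fabricated rather than recalled.
    """
    pairs = [(i, j) for i in range(len(answers))
             for j in range(i + 1, len(answers))]
    if not pairs:
        return 1.0
    return sum(jaccard_similarity(answers[i], answers[j])
               for i, j in pairs) / len(pairs)


# Stub samples standing in for repeated LLM calls on the same prompt:
stable = ["Paris is the capital of France."] * 3   # consistent -> high score
unstable = [                                       # divergent -> low score
    "The treaty was signed in 1907.",
    "The treaty was signed in 1923.",
    "No such treaty exists.",
]

print(consistency_score(stable))    # 1.0
print(consistency_score(unstable))  # well below 1.0
```

A guardrail layer would compare the score against a tuned threshold and route low-consistency answers to a fallback, such as retrieval-backed verification or human review.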
Related Terms
Anomaly Detection
The automated identification of unusual patterns or behaviors in AI agent operations that deviate from expected norms.
Cognitive Firewall
A governance layer that intercepts and evaluates AI agent reasoning and outputs before actions are executed.
AI Agent
An autonomous software system that uses AI models to perceive its environment, make decisions, and take actions to achieve goals.