Anomaly Detection
The automated identification of patterns or behaviors in AI agent operations that deviate from expected norms.
Full Definition
Anomaly detection in AI governance refers to the continuous, automated monitoring of AI agent behavior to identify deviations from expected operational patterns. Common anomaly classes include hallucinations (fabricated outputs), bias (unfair decision patterns), prompt injection attacks (malicious manipulation), policy violations (breaches of organizational guardrails), and behavioral drift (gradual changes in agent behavior over time). Modern governance platforms combine statistical methods, semantic analysis, and specialized AI evaluators to score and classify anomalies by severity, enabling automated escalation and human-in-the-loop review for high-risk incidents.
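The statistical scoring described above can be sketched minimally: score a new observation against a history of the same metric, then map the score to a severity band. The metric, thresholds, and severity labels here are illustrative assumptions, not the API of any specific governance platform.

```python
# Minimal sketch of statistical anomaly scoring for an agent behavior metric.
# Thresholds and severity bands are illustrative assumptions.
from statistics import mean, stdev

def z_score(value, history):
    """Standard score of `value` against a window of historical observations."""
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return 0.0
    return (value - mu) / sigma

def classify_severity(score, warn=2.0, critical=3.0):
    """Map an absolute z-score to a severity label (thresholds are assumed)."""
    score = abs(score)
    if score >= critical:
        return "critical"   # escalate to human-in-the-loop review
    if score >= warn:
        return "warning"    # log and monitor
    return "normal"

# Example: an agent's average response confidence over recent runs.
history = [0.91, 0.89, 0.93, 0.90, 0.92, 0.88, 0.91, 0.90]
latest = 0.55  # sudden drop in confidence
print(classify_severity(z_score(latest, history)))  # → critical
```

In practice the scored signal might be a semantic similarity score, a policy-violation count, or an evaluator-model rating rather than a raw confidence value; the scoring and escalation pattern stays the same.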
Related Terms
AI Hallucination
When an AI model generates information that appears plausible but is factually incorrect, fabricated, or unsupported by its input data.
Behavioral Drift
Gradual, often undetected changes in an AI agent's decision patterns or outputs over time.
Prompt Injection
A security attack where malicious instructions are embedded in inputs to manipulate an AI agent's behavior.
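The behavioral drift entry above describes gradual changes that single-observation scoring can miss. One simple way to surface such drift is to compare the mean of a recent window of a metric against an earlier baseline window; the window sizes and tolerance below are illustrative assumptions.

```python
# Hedged sketch: detecting behavioral drift by comparing a recent window of an
# agent metric against a baseline window. Window sizes and tolerance are assumed.
from statistics import mean

def drift_detected(observations, baseline_size=50, recent_size=10, tolerance=0.05):
    """Flag drift when the recent mean deviates from the baseline mean by more than `tolerance`."""
    if len(observations) < baseline_size + recent_size:
        return False  # not enough history to judge
    baseline = observations[:baseline_size]
    recent = observations[-recent_size:]
    return abs(mean(recent) - mean(baseline)) > tolerance

# Example: a refusal-rate metric that slowly creeps upward over ten runs.
stable = [0.10] * 50
drifting = stable + [0.10 + 0.01 * i for i in range(1, 11)]
print(drift_detected(drifting))  # → True (recent mean ≈ 0.155 vs baseline 0.10)
```

Because each individual step is small, a per-observation z-score would likely stay within normal bounds; the window comparison is what makes the cumulative shift visible.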