EU AI Act
The European Union's comprehensive legal framework for regulating AI systems based on risk classification.
Full Definition
The EU AI Act is the world's first comprehensive legal framework for artificial intelligence regulation, adopted by the European Union in 2024 with provisions taking effect in stages through 2026. It classifies AI systems into four risk tiers: unacceptable risk (banned), high risk (strict requirements), limited risk (transparency obligations), and minimal risk (largely unregulated). For organizations deploying autonomous AI agents, the Act mandates continuous risk management, automatic logging of decisions, technical documentation, transparency to users, and meaningful human oversight. Penalties for non-compliance can reach €35 million or 7% of global annual turnover, whichever is higher. The Act also applies extraterritorially: it covers any organization whose AI system's output is used within the EU, regardless of where that organization is established.
Related Terms
Compliance Framework
A structured set of regulations, standards, and guidelines that organizations must adhere to when deploying AI systems.
Audit Trail
A chronological, immutable record of every decision, action, and data access made by an AI agent.
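One common way to make such a record tamper-evident is hash chaining, where each entry embeds a hash of the previous one. The sketch below is illustrative only, not a prescribed implementation; the class and method names (`AuditTrail`, `record`, `verify`) are hypothetical.

```python
import hashlib
import json
import time


class AuditTrail:
    """Append-only log where each entry is chained to its predecessor
    via a SHA-256 hash, so modifying any past entry breaks verification.
    Illustrative sketch; a production system would also need durable,
    access-controlled storage."""

    GENESIS = "0" * 64  # placeholder hash for the first entry

    def __init__(self):
        self._entries = []

    def record(self, agent_id: str, action: str, detail: str) -> None:
        """Append one immutable entry describing an agent decision or action."""
        prev_hash = self._entries[-1]["hash"] if self._entries else self.GENESIS
        body = {
            "timestamp": time.time(),
            "agent_id": agent_id,
            "action": action,
            "detail": detail,
            "prev_hash": prev_hash,
        }
        digest = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()
        ).hexdigest()
        self._entries.append({**body, "hash": digest})

    def verify(self) -> bool:
        """Recompute the hash chain; returns False if any entry was altered."""
        prev_hash = self.GENESIS
        for entry in self._entries:
            body = {k: v for k, v in entry.items() if k != "hash"}
            recomputed = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()
            ).hexdigest()
            if entry["prev_hash"] != prev_hash or recomputed != entry["hash"]:
                return False
            prev_hash = entry["hash"]
        return True
```

Chaining gives tamper *evidence*, not tamper *prevention*: an attacker with write access could rewrite the whole chain, which is why such logs are often anchored to external, write-once storage.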
Human-in-the-Loop
A design pattern where AI systems require human review, approval, or intervention at critical decision points.
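A minimal sketch of this gating pattern, assuming decisions carry a risk score and that `approve_fn` stands in for a real review queue or UI (both are assumptions, not part of any standard API):

```python
from dataclasses import dataclass
from typing import Callable


@dataclass
class Decision:
    """A pending AI-agent decision. `risk_score` (0.0-1.0) is a
    hypothetical field; real systems derive it from their own models."""
    action: str
    risk_score: float
    status: str = "pending"


def human_in_the_loop_gate(
    decision: Decision,
    approve_fn: Callable[[Decision], bool],
    threshold: float = 0.7,
) -> Decision:
    """Route decisions above the risk threshold to a human reviewer;
    let low-risk decisions through automatically."""
    if decision.risk_score >= threshold:
        # Human reviewer decides; approve_fn is a stand-in for that step.
        decision.status = "approved" if approve_fn(decision) else "rejected"
    else:
        decision.status = "auto_approved"
    return decision
```

The design choice here is the threshold: set it too high and risky actions slip through unreviewed; too low and reviewers are flooded, which erodes the quality of oversight.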