Multi-Agent System
An architecture where multiple AI agents collaborate, specialize, and coordinate to accomplish complex tasks.
Full Definition
A Multi-Agent System (MAS) is an architecture in which multiple autonomous AI agents work together, each with specialized capabilities, to accomplish complex tasks that would be difficult or impossible for a single agent. Examples include research pipelines where different agents handle search, analysis, and synthesis; customer service systems with routing, specialist, and escalation agents; and governance platforms where analyst, compliance, and security agents each evaluate incidents from their own domain perspective.

Multi-agent systems introduce distinct governance challenges, including inter-agent communication monitoring, collective behavior analysis, conflict resolution, resource coordination, and cascading failure prevention. Effective governance of a multi-agent system therefore requires holistic monitoring that tracks both individual agent behavior and emergent system-level patterns.
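The research-pipeline pattern described above can be sketched in a few lines. This is a minimal illustration under assumptions, not a reference implementation: the agent functions (`search_agent`, `analysis_agent`, `synthesis_agent`), the `coordinator`, and the `AuditLog` are all hypothetical names chosen for this example. The audit log stands in for the inter-agent communication monitoring the definition calls for.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of a multi-agent research pipeline.
# Each "agent" is a callable with one narrow specialty; a coordinator
# chains them and records every hand-off so inter-agent communication
# can be monitored after the fact.

@dataclass
class AuditLog:
    """Records (agent_name, message) pairs for each hand-off."""
    events: list = field(default_factory=list)

    def record(self, agent: str, message: str) -> None:
        self.events.append((agent, message))

def search_agent(task: str, log: AuditLog) -> list:
    # Stands in for a retrieval-specialized agent.
    log.record("search", f"query: {task}")
    return [f"doc about {task}"]

def analysis_agent(docs: list, log: AuditLog) -> list:
    # Stands in for an analysis-specialized agent (trivial transform here).
    log.record("analysis", f"analyzing {len(docs)} docs")
    return [d.upper() for d in docs]

def synthesis_agent(findings: list, log: AuditLog) -> str:
    # Stands in for a synthesis-specialized agent.
    log.record("synthesis", f"merging {len(findings)} findings")
    return " | ".join(findings)

def coordinator(task: str):
    # System-level orchestration: each agent sees only its own input,
    # while the shared log captures the emergent end-to-end flow.
    log = AuditLog()
    docs = search_agent(task, log)
    findings = analysis_agent(docs, log)
    report = synthesis_agent(findings, log)
    return report, log

if __name__ == "__main__":
    report, log = coordinator("agent governance")
    print(report)
    print([agent for agent, _ in log.events])
```

In a real deployment each callable would wrap a model-backed agent, and the log would feed a monitoring layer that evaluates both per-agent behavior and whole-pipeline patterns.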
Related Terms
AI Agent
An autonomous software system that uses AI models to perceive its environment, make decisions, and take actions to achieve goals.
Autonomous AI
AI systems that can independently perceive, decide, and act without continuous human oversight or approval.
AI Governance
The framework of policies, processes, and technologies used to ensure AI systems operate ethically, transparently, and in compliance with regulations.