Action Blocking
The real-time interception and prevention of an AI agent's tool call or action before it executes, based on policy evaluation.
Full Definition
Action Blocking is a governance capability that intercepts an AI agent's intended action — a tool invocation, API call, database write, external service request, or content output — and prevents execution when the action violates defined policies or exceeds configured risk thresholds. Unlike output filtering (which evaluates text after generation), action blocking operates at the structured tool-call level, evaluating the actual parameters the agent intends to pass to external systems. This distinction is critical: an agent may generate a reasonable-sounding message while simultaneously attempting a policy-violating backend operation that output filtering would never detect. Effective action blocking evaluates authorization (is this agent permitted to call this tool?), parameters (are the values within allowed ranges?), and context (does the full operation make sense given the agent's chartered scope?). Blocked actions are logged with full context, and high-risk blocks can automatically escalate to human reviewers via Guard Mode.
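The evaluation sequence described above (authorization, then parameters, then contextual risk, with blocked calls logged and high-risk calls escalated to human review) can be sketched as follows. This is a minimal illustration, not a reference implementation; all names here (`ToolCall`, `Policy`, `evaluate`, the verdict values) are hypothetical.

```python
from dataclasses import dataclass
from enum import Enum

class Verdict(Enum):
    ALLOW = "allow"
    BLOCK = "block"
    ESCALATE = "escalate"   # pause and route to a human reviewer (Guard Mode)

@dataclass
class ToolCall:
    agent_id: str
    tool: str
    params: dict

@dataclass
class Policy:
    allowed_tools: dict      # agent_id -> set of tools the agent may call
    param_limits: dict       # tool -> {param_name: (min, max)}
    high_risk_tools: set     # tools that always require human approval

def evaluate(call: ToolCall, policy: Policy, audit_log: list) -> Verdict:
    """Evaluate a structured tool call before it executes.

    The call is intercepted at the tool-call level, so the actual
    parameters the agent intends to pass are checked, not just the
    agent's generated text.
    """
    # 1. Authorization: is this agent permitted to call this tool?
    if call.tool not in policy.allowed_tools.get(call.agent_id, set()):
        audit_log.append(("BLOCK", call, "tool not permitted for this agent"))
        return Verdict.BLOCK

    # 2. Parameters: are the values within allowed ranges?
    for name, (lo, hi) in policy.param_limits.get(call.tool, {}).items():
        value = call.params.get(name)
        if value is not None and not (lo <= value <= hi):
            audit_log.append(("BLOCK", call, f"{name}={value} outside [{lo}, {hi}]"))
            return Verdict.BLOCK

    # 3. Context/risk: high-risk operations pause for human review
    if call.tool in policy.high_risk_tools:
        audit_log.append(("ESCALATE", call, "high-risk action awaiting approval"))
        return Verdict.ESCALATE

    audit_log.append(("ALLOW", call, "policy checks passed"))
    return Verdict.ALLOW
```

Note that every outcome, allowed or not, is appended to the audit log with the full call context, matching the requirement that blocked actions be logged rather than silently dropped.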
Related Terms
Cognitive Firewall
A governance layer that intercepts and evaluates AI agent reasoning and outputs before actions are executed.
Guard Mode
An operational mode where high-risk AI agent actions are paused and routed to human reviewers for approval before execution.
Guardrails
Configurable policy constraints that define the boundaries of acceptable AI agent behavior and automatically enforce limits.
AI Agent Assurance
A practice that goes beyond monitoring to actively prevent unsafe, non-compliant, or harmful AI agent behavior before it reaches users or executes on systems.