NIST AI RMF
The National Institute of Standards and Technology's AI Risk Management Framework for identifying and mitigating AI risks.
Full Definition
The NIST AI Risk Management Framework (AI RMF) is a voluntary guidance document published by the U.S. National Institute of Standards and Technology (AI RMF 1.0, released in January 2023) to help organizations identify, assess, and mitigate risks associated with AI systems. The framework is organized around four core functions: Govern (establishing an AI risk management culture and processes), Map (contextualizing AI risks within the organization), Measure (analyzing and tracking AI risks using quantitative and qualitative methods), and Manage (prioritizing and acting on identified risks). While not legally binding like the EU AI Act, the NIST AI RMF is widely adopted as a best-practice standard and is increasingly referenced in U.S. regulatory guidance, procurement requirements, and industry certifications. Organizations often implement the NIST AI RMF alongside other compliance frameworks to demonstrate comprehensive AI risk management.
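To make the four core functions concrete, here is a minimal sketch of how a risk register might be organized around them. This is a hypothetical illustration only, not an official NIST schema; all class names, field names, and the severity scale are assumptions.

```python
from dataclasses import dataclass, field
from enum import Enum

class RmfFunction(Enum):
    """The four core functions of the NIST AI RMF."""
    GOVERN = "Govern"    # establish risk management culture and processes
    MAP = "Map"          # contextualize AI risks within the organization
    MEASURE = "Measure"  # analyze and track identified risks
    MANAGE = "Manage"    # prioritize and act on identified risks

@dataclass
class RiskEntry:
    """One identified AI risk, tagged with the RMF function it falls under."""
    description: str
    function: RmfFunction
    severity: int  # 1 (low) to 5 (high); this scale is an assumption

@dataclass
class RiskRegister:
    """Minimal register that groups risk entries by RMF function."""
    entries: list = field(default_factory=list)

    def add(self, entry: RiskEntry) -> None:
        self.entries.append(entry)

    def by_function(self, fn: RmfFunction) -> list:
        """Map/Measure view: list the risks recorded under one function."""
        return [e for e in self.entries if e.function is fn]

    def highest_severity(self):
        """Manage view: surface the top-priority risk, if any."""
        return max(self.entries, key=lambda e: e.severity, default=None)

# Example: populate a register and pull the highest-priority risk.
register = RiskRegister()
register.add(RiskEntry("No documented AI oversight policy", RmfFunction.GOVERN, 4))
register.add(RiskEntry("Training data provenance unknown", RmfFunction.MAP, 3))
register.add(RiskEntry("Model accuracy drift not monitored", RmfFunction.MEASURE, 5))

print(register.highest_severity().description)
# prints "Model accuracy drift not monitored"
```

In practice, organizations record far richer metadata per risk (owners, mitigations, review dates); the point here is only that tagging each risk with one of the four functions gives the register the same structure the framework uses.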
Related Terms
Compliance Framework
A structured set of regulations, standards, and guidelines that organizations must adhere to when deploying AI systems.
EU AI Act
The European Union's comprehensive legal framework for regulating AI systems based on risk classification.
AI Governance
The framework of policies, processes, and technologies used to ensure AI systems operate ethically, transparently, and in compliance with regulations.