NIST AI Risk Management Framework: A Comprehensive Overview

By Morgan Sullivan

Senior Content Marketing Manager II

September 8, 2023 • 7 min read

At a glance

  • As AI quickly becomes ubiquitous across all industries, it’s critical that companies identify, assess, and manage AI risks appropriately—and the NIST AI Risk Management Framework (RMF) is a powerful tool to do just that.
  • Released on January 26, 2023, the NIST AI Risk Management Framework is a flexible set of guidelines meant to help organizations systematically define and manage the risks posed by using AI systems. 
  • Read this guide to explore how the NIST AI RMF defines a trustworthy AI system, NIST's actionable guidance for safely deploying AI, plus five steps for applying the framework at your organization.

What is the NIST AI Risk Management Framework?

Released by the National Institute of Standards and Technology (NIST) in January 2023, the NIST AI Risk Management Framework is a set of industry-agnostic guidelines meant to help organizations assess and manage the risks associated with the implementation and use of AI systems. 

The framework’s main goal is to promote the responsible development, deployment, and use of AI, while ensuring that ethics, privacy, and security are integral considerations throughout the AI system's lifecycle.

Designed to be flexible and adaptable, the NIST AI RMF can be used across a wide range of industries and AI applications—helping to inform an organization's approach to AI governance. The framework takes a step-by-step approach, including:

  • Identifying and assessing the potential risks associated with AI
  • Implementing controls to mitigate these risks
  • Monitoring and assessing the effectiveness of these controls
  • Continuously improving the system

Applying the NIST AI Risk Management Framework is meant to be cyclical and iterative—helping organizations ensure their AI deployments remain trustworthy as they evolve over time. 

How does the NIST AI Risk Management Framework define a trustworthy AI system?

One important piece of good AI governance is ensuring that an AI system is trustworthy and reliable. The NIST AI Risk Management Framework defines several characteristics that help organizations assess the trustworthiness of an AI system. 

Valid and reliable: The AI system works as intended and consistently produces accurate results—a critical component of whether the AI’s outputs can be trusted for decision-making processes.

Safe, secure, and resilient: Trustworthy AI systems should be designed with robust security measures to prevent unauthorized access, misuse, or malicious attacks. They should also have the ability to quickly recover and maintain functionality during and after any disruptive events.

Accountable and transparent: Accountability refers to a system's auditability, meaning it should be clear who is responsible for the system's actions. Transparency means having a clear view into the system's processes—an AI system’s decisions should be clear and understandable to humans, not hidden in a 'black box'.

Explainable and interpretable: The AI system should be able to provide clear, understandable explanations for its actions or decisions, which is critical for user trust and system accountability.

Privacy enhanced: A trustworthy AI system respects user privacy by implementing measures to protect sensitive data, while ensuring its usage complies with relevant laws and regulations.

Fair, with harmful biases managed: The AI system should be designed to avoid unfair biases or discrimination. This includes actively working to identify and rectify any harmful biases in the system's design or outputs.

Actionable guidance from the NIST AI Risk Management Framework

The NIST AI Risk Management Framework is split into two parts.

The first explains how to frame AI-related risks at an organizational level and defines the characteristics of trustworthy AI systems. The second offers more actionable guidance on how to implement the framework effectively, giving your organization the ability to continually map and mitigate risk throughout an AI system’s life cycle.

This guidance is broken down into four functions: govern, map, measure, and manage. 

Govern 

The govern function is a vital part of AI risk management, permeating every aspect of the process and operating across several layers of the organization.

The first layer is the overarching policies that shape an organization’s mission, culture, values, goals, and risk tolerance. From there, technical teams work to implement and operationalize those policies—providing robust documentation to enhance accountability. Meanwhile, senior leaders help to establish a culture and tone of consistent and responsible risk management. 

Far from being an independent element, governance should be integrated into all other functions of the NIST AI Risk Management Framework, especially those related to compliance or evaluation. Strong governance can enhance internal practices and standards, fostering a robust organizational risk culture.

Map

This function helps define and contextualize the risk associated with an AI system. Given the inherent complexity of most AI deployments, it’s not uncommon for information to be siloed amongst different teams. Teams managing one part of the process may not have oversight or control over other parts. Mapping aims to bridge these knowledge gaps and mitigate potential risks. 

By proactively identifying, assessing, and addressing potential sources of negative risk, all stakeholders can feel better equipped to make decisions. The outcomes of the mapping process also serve as a key foundation for the next two functions: measure and manage.

Measure

The measure function uses different tools, techniques, and methodologies—quantitative, qualitative, or a mixture of both—to assess, analyze, benchmark, and monitor AI risk and its related impacts. It also involves documenting facets of system functionality, social impact, and trustworthiness.

The measure function should adopt or develop processes that include rigorous software testing and performance assessment methodologies, complete with measures of uncertainty, benchmarks for performance comparison, and formalized reporting and documentation of results.

Independent review processes can enhance testing effectiveness to better mitigate internal biases and potential conflicts of interest.
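
As a simple illustration of what a "measure"-style performance check might look like in code, the sketch below evaluates accuracy on a held-out test set and attaches a bootstrap confidence interval as a basic measure of uncertainty, then compares the result against a benchmark. The data, threshold, and metric choice are illustrative assumptions, not values prescribed by the NIST AI RMF.

```python
# A minimal sketch of performance assessment with an uncertainty estimate.
# The benchmark value and toy arrays are illustrative assumptions only.
import numpy as np

def accuracy(y_true: np.ndarray, y_pred: np.ndarray) -> float:
    """Fraction of predictions that match the ground truth."""
    return float(np.mean(y_true == y_pred))

def bootstrap_ci(y_true, y_pred, n_boot=1000, alpha=0.05, seed=0):
    """Percentile bootstrap confidence interval for accuracy."""
    rng = np.random.default_rng(seed)
    n = len(y_true)
    scores = []
    for _ in range(n_boot):
        idx = rng.integers(0, n, n)  # resample with replacement
        scores.append(accuracy(y_true[idx], y_pred[idx]))
    lo, hi = np.quantile(scores, [alpha / 2, 1 - alpha / 2])
    return float(lo), float(hi)

if __name__ == "__main__":
    y_true = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 1])
    y_pred = np.array([1, 0, 1, 0, 0, 1, 0, 1, 1, 1])
    acc = accuracy(y_true, y_pred)
    lo, hi = bootstrap_ci(y_true, y_pred)
    print(f"accuracy={acc:.2f}, 95% CI=({lo:.2f}, {hi:.2f})")
    BENCHMARK = 0.70  # illustrative performance benchmark
    print("meets benchmark" if lo >= BENCHMARK else "review required")
```

Documenting results like these alongside the benchmark used gives the formalized reporting the measure function calls for.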

Manage

The manage function involves regularly allocating resources towards addressing the risks identified and measured in the previous steps. Mitigation measures should include details about how the organization will respond to, recover from, and communicate about an incident. 

The goal of the manage function is to use information gathered throughout the previous steps to reduce the likelihood of system failures or other issues. 

5 steps for applying the NIST AI Risk Management Framework

Organizations can apply the NIST AI Risk Management Framework by following these five steps.

1) Define the AI system's purpose and goals

The first step in building a trustworthy AI system using the NIST AI RMF is to clearly define the system's purpose and goals. This allows an organization to identify the specific risks associated with the system's intended use. For example, the risks associated with an AI system used for credit scoring differ from those posed by one used in autonomous vehicles.
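
As a hypothetical illustration, a team might capture this definition as a small structured artifact that later mapping and measurement work can reference. The fields and example values below are assumptions made for the sake of example, not a schema defined by NIST.

```python
# A minimal sketch of recording an AI system's purpose, intended use, and
# known risk areas. Field names and values are illustrative assumptions.
from dataclasses import dataclass, field

@dataclass
class AISystemProfile:
    name: str
    purpose: str
    intended_users: list[str]
    out_of_scope_uses: list[str]
    risk_categories: list[str] = field(default_factory=list)

credit_scoring = AISystemProfile(
    name="credit-scoring-v1",
    purpose="Rank loan applicants by estimated default risk",
    intended_users=["underwriting team"],
    out_of_scope_uses=["employment screening"],
    risk_categories=["harmful bias", "privacy", "explainability"],
)
print(credit_scoring)
```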

2) Identify the data sources used in the AI system and analyze them for biases

The second step involves identifying all the data sources feeding into the AI system and conducting a comprehensive bias analysis. The NIST AI RMF provides guidance on how to carry out this process effectively, focusing on understanding the context of the data, identifying potential bias, and taking steps to mitigate it.
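
As one example of what a basic bias check might look like in practice, the sketch below computes per-group positive-outcome rates and a simple demographic parity gap on a toy dataset. The column names ("group", "approved"), the data, and the metric are illustrative assumptions; a real analysis should use fairness metrics appropriate to your data and use case.

```python
# A minimal sketch of a bias check on tabular data, assuming a hypothetical
# protected-attribute column ("group") and a binary outcome ("approved").
import pandas as pd

def selection_rates(df: pd.DataFrame, group_col: str, outcome_col: str) -> pd.Series:
    """Positive-outcome rate for each group in the data."""
    return df.groupby(group_col)[outcome_col].mean()

def demographic_parity_gap(rates: pd.Series) -> float:
    """Largest difference in positive-outcome rates between any two groups."""
    return float(rates.max() - rates.min())

if __name__ == "__main__":
    data = pd.DataFrame({
        "group":    ["a", "a", "a", "b", "b", "b", "b", "a"],
        "approved": [1,   0,   1,   0,   0,   1,   0,   1],
    })
    rates = selection_rates(data, "group", "approved")
    print(rates)
    # A large gap flags a potential disparity worth investigating further.
    print(f"Demographic parity gap: {demographic_parity_gap(rates):.2f}")
```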

3) Implement the NIST AI RMF actionable guidelines during development

The third step calls for implementing the AI RMF's actionable guidelines during the development phase of the AI system. This involves incorporating the AI RMF's four functions (govern, map, measure, and manage) into the system's development process, ensuring risks are managed proactively rather than reactively.

4) Monitor and test the system regularly

Regular monitoring and testing of the system are essential to ensure that it functions as expected and continues to meet the defined performance metrics. The AI RMF calls for continuous monitoring as a key part of managing risks in AI systems.
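
One common way to operationalize ongoing monitoring (not a method mandated by the framework) is to compare production inputs or model scores against a training-time baseline. The sketch below uses the Population Stability Index (PSI) as a simple drift signal; the synthetic data, bin count, and alert threshold are rule-of-thumb assumptions.

```python
# A minimal sketch of drift monitoring: compare the distribution of current
# production scores against a baseline captured at validation time.
import numpy as np

def psi(baseline: np.ndarray, current: np.ndarray, bins: int = 10) -> float:
    """Population Stability Index between two score distributions."""
    edges = np.histogram_bin_edges(baseline, bins=bins)
    base_pct = np.histogram(baseline, bins=edges)[0] / len(baseline)
    curr_pct = np.histogram(current, bins=edges)[0] / len(current)
    # Floor the proportions to avoid division by zero / log(0).
    base_pct = np.clip(base_pct, 1e-6, None)
    curr_pct = np.clip(curr_pct, 1e-6, None)
    return float(np.sum((curr_pct - base_pct) * np.log(curr_pct / base_pct)))

if __name__ == "__main__":
    rng = np.random.default_rng(42)
    baseline = rng.normal(0.5, 0.1, 5_000)   # scores at validation time
    current = rng.normal(0.56, 0.12, 1_000)  # scores from recent traffic
    score = psi(baseline, current)
    print(f"PSI={score:.3f}")
    if score > 0.2:  # common rule of thumb for a significant shift
        print("Alert: score drift detected; trigger review")
```

Alerts like this feed directly into the final step: using what monitoring reveals to improve the system.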

5) Continuously improve the system based on findings from monitoring and testing

The final step involves using the insights gained from monitoring and testing to continuously improve the AI system. This reflects the AI RMF's focus on iterative improvement as a key part of managing AI risks effectively, ensuring the system continues to adapt and evolve in line with changing data and operating conditions.

Conclusion

Building strong systems for AI governance is no small feat, but it's essential for ensuring AI systems work safely, ethically, and transparently. The NIST AI Risk Management Framework provides actionable guidance that helps organizations build trustworthy AI systems that function effectively while protecting both company and consumer data.



About Transcend

Transcend is the governance layer for enterprise data—helping companies automate and future-proof their privacy compliance and implement robust AI governance across an entire tech stack.

Transcend Pathfinder gives your company the technical guardrails to adopt new AI technologies with confidence, while Transcend Data Mapping goes beyond observability to power your privacy program with smart governance suggestions.

Ensure nothing is tracked without user consent using Transcend Consent, automate data subject request workflows with Privacy Requests, and mitigate risk with smarter privacy Assessments.

