Senior Content Marketing Manager II
September 8, 2023 • 7 min read
Released by the National Institute of Standards and Technology (NIST) in January 2023, the NIST AI Risk Management Framework is a set of industry-agnostic guidelines meant to help organizations assess and manage the risks associated with the implementation and use of AI systems.
The framework's main goal is to promote the responsible development, deployment, and use of AI, while ensuring that ethics, privacy, and security are integral considerations throughout the AI system's lifecycle.
Designed to be flexible and adaptable, the NIST AI RMF can be used across a wide range of industries and AI applications, helping to inform an organization's approach to AI governance. The framework takes a step-by-step approach to assessing and managing AI risk.
Applying the NIST AI Risk Management Framework is meant to be cyclical and iterative, helping organizations ensure their AI deployments remain trustworthy as they evolve over time.
One important piece of good AI governance is ensuring that an AI system is trustworthy and reliable. The NIST AI Risk Management Framework defines several variables that will help organizations assess the trustworthiness of an AI system.
Valid and reliable: The AI system works as intended and consistently produces accurate results, a critical component of whether the AI's outputs can be trusted for decision-making processes.
Safe, secure, and resilient: Trustworthy AI systems should be designed with robust security measures to prevent unauthorized access, misuse, or malicious attacks. They should also have the ability to quickly recover and maintain functionality during and after any disruptive events.
Accountable and transparent: Accountability refers to a system's auditability, meaning it should be clear who is responsible for the system's actions. Transparency means having a clear view into the system's processes: an AI system's decisions should be clear and understandable to humans, not hidden in a 'black box'.
Explainable and interpretable: The AI system should be able to provide clear, understandable explanations for its actions or decisions, which is critical for user trust and system accountability.
Privacy enhanced: A trustworthy AI system respects user privacy by implementing measures to protect sensitive data, while ensuring its usage complies with relevant laws and regulations.
Fair, with harmful biases managed: The AI system should be designed to avoid unfair biases or discrimination. This includes actively working to identify and rectify any harmful biases in the system's design or outputs.
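To make these characteristics concrete, here is a minimal sketch of how a team might record a trustworthiness review against the six characteristics above. The characteristic names come from the framework, but the `TrustworthinessReview` class, the pass/fail scoring, and the example system name are all hypothetical, not part of the NIST AI RMF itself.

```python
from dataclasses import dataclass, field

# Characteristic names taken from the NIST AI RMF; everything else
# in this sketch (class, scoring scheme, example data) is illustrative.
CHARACTERISTICS = [
    "valid_and_reliable",
    "safe_secure_and_resilient",
    "accountable_and_transparent",
    "explainable_and_interpretable",
    "privacy_enhanced",
    "fair_with_harmful_biases_managed",
]

@dataclass
class TrustworthinessReview:
    system_name: str
    findings: dict = field(default_factory=dict)  # characteristic -> result

    def record(self, characteristic: str, satisfied: bool, notes: str = ""):
        """Record the outcome of reviewing one characteristic."""
        if characteristic not in CHARACTERISTICS:
            raise ValueError(f"Unknown characteristic: {characteristic}")
        self.findings[characteristic] = {"satisfied": satisfied, "notes": notes}

    def gaps(self):
        """Characteristics not yet reviewed, or reviewed and not satisfied."""
        return [c for c in CHARACTERISTICS
                if c not in self.findings or not self.findings[c]["satisfied"]]

# Hypothetical usage for a credit-scoring model.
review = TrustworthinessReview("credit-scoring-model")
review.record("valid_and_reliable", True, "Backtested on 12 months of data")
review.record("privacy_enhanced", False, "PII retention policy still pending")
print(review.gaps())
```

A simple gap list like this makes it easy to see, at a glance, which trustworthiness characteristics still need attention before a system moves forward.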
The NIST AI Risk Management Framework is loosely split into two segments.
The first covers how to frame AI-related risks at an organizational level, while defining the characteristics of reliable AI systems. The second segment offers more actionable guidance on how to implement the framework effectively, giving your organization the ability to continually map and mitigate risk throughout an AI system's life cycle.
This guidance is broken down into four phases: govern, map, measure, and manage.
The govern function is a vital part of AI risk management, permeating every aspect of the process and operating across several layers.
The first layer is the overarching policies that shape an organization's mission, culture, values, goals, and risk tolerance. From there, technical teams work to implement and operationalize those policies, providing robust documentation to enhance accountability. Meanwhile, senior leaders help to establish a culture and tone of consistent and responsible risk management.
Far from being an independent element, governance should be integrated into all other functions of the NIST AI Risk Management Framework, especially those related to compliance or evaluation. Strong governance can enhance internal practices and standards, fostering a robust organizational risk culture.
This function helps define and contextualize the risk associated with an AI system. Given the inherent complexity of most AI deployments, it's not uncommon for information to be siloed amongst different teams. Teams managing one part of the process may not have oversight or control over other parts. Mapping aims to bridge these knowledge gaps and mitigate potential risks.
By proactively identifying, assessing, and addressing potential sources of negative risk, all stakeholders can feel better equipped to make decisions. The outcomes of the mapping process also serve as a key foundation for the next two stages: measure and manage.
The measure function uses different tools, techniques, and methodologies (quantitative, qualitative, or a mixture of both) to assess, analyze, benchmark, and monitor AI risk and its related impacts. It also involves documenting facets of system functionality, social impact, and trustworthiness.
The measure function should adopt or develop processes that include rigorous software testing and performance assessment methodologies, complete with measures of uncertainty, benchmarks for performance comparison, and formalized reporting and documentation of results.
Independent review processes can enhance testing effectiveness to better mitigate internal biases and potential conflicts of interest.
The manage function involves regularly allocating resources towards addressing the risks identified and measured in the previous steps. Mitigation measures should include details about how the organization will respond to, recover from, and communicate about an incident.
The goal of the manage function is to use information gathered throughout the previous steps to reduce the likelihood of system failures or other issues.
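The four functions above can be pictured as one iterative cycle. The sketch below shows that cycle in code: the function names come from the framework, but the implementations are placeholder stubs (the risk names, severity scoring, and example system are all hypothetical) that an organization would replace with its real governance processes.

```python
# Illustrative sketch of the four AI RMF functions as an iterative cycle.
# Only the function names come from the framework; the logic is a stub.

def govern(policies):
    """Set organizational policies, roles, and risk tolerance."""
    return {"risk_tolerance": policies.get("risk_tolerance", "low")}

def map_risks(system_context):
    """Identify and contextualize risks across teams (mapping phase)."""
    return [{"risk": r, "source": system_context["name"]}
            for r in system_context.get("known_risks", [])]

def measure(risks):
    """Score each identified risk (placeholder severity rule)."""
    return [{**r, "severity": "high" if "bias" in r["risk"] else "medium"}
            for r in risks]

def manage(scored_risks, governance):
    """Allocate mitigation effort, highest-severity risks first."""
    return sorted(scored_risks, key=lambda r: r["severity"] != "high")

# One pass through the cycle for a hypothetical chatbot deployment.
governance = govern({"risk_tolerance": "low"})
risks = map_risks({"name": "chatbot",
                   "known_risks": ["bias in training data", "prompt injection"]})
plan = manage(measure(risks), governance)
print([r["risk"] for r in plan])
```

In practice each pass through the cycle feeds back into governance, which is why the framework treats the process as continuous rather than one-and-done.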
Organizations can apply the NIST AI Risk Management Framework by following these five steps.
The first step in building a trustworthy AI system using the NIST AI RMF is to clearly define the system's purpose and goals. This process allows an organization to identify the specific risks associated with the intended use of the AI system. For example, the risk associated with an AI system used for credit scoring differs from that used in autonomous vehicles.
The second step involves identifying all the data sources feeding into the AI system and conducting a comprehensive bias analysis. The NIST AI RMF provides guidance on how to carry out this process effectively, focusing on understanding the context of the data, identifying potential bias, and taking steps to mitigate it.
The third step calls for the implementation of the AI RMF's actionable guidelines during the development phase of the AI system. This involves incorporating the AI RMF's four functions (govern, map, measure, and manage) into the system's development process. This step ensures risks are managed proactively, rather than reactively.
Regular monitoring and testing of the system are essential to ensure that it functions as expected and continues to meet the defined performance metrics. The AI RMF calls for continuous monitoring as a key part of managing risks in AI systems.
The final step in the process involves using the insights gained from monitoring and testing to continuously improve the AI system. This highlights the AI RMF's focus on iterative improvement as a key part of managing AI risks effectively. This step ensures that the system continues to adapt and evolve in line with changing data and environment conditions.
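The monitoring and improvement steps above can be sketched as a simple feedback loop: monitoring results that fall below a performance target trigger the improvement step. The metric, threshold value, and `retrain` callback below are all assumptions made for illustration, not prescribed by the AI RMF.

```python
# Illustrative sketch of continuous monitoring feeding back into
# improvement. The accuracy metric, threshold, and retrain hook
# are hypothetical; the AI RMF does not prescribe specific metrics.

ACCURACY_THRESHOLD = 0.90  # assumed performance target

def evaluate(model_outputs, labels):
    """Fraction of model outputs that match the expected labels."""
    correct = sum(o == l for o, l in zip(model_outputs, labels))
    return correct / len(labels)

def monitor_and_improve(model_outputs, labels, retrain):
    """Run one monitoring cycle; trigger improvement if below target."""
    accuracy = evaluate(model_outputs, labels)
    if accuracy < ACCURACY_THRESHOLD:
        # An insight from monitoring triggers the improvement step.
        retrain(reason=f"accuracy {accuracy:.2f} below threshold")
        return "retrained"
    return "ok"

# Hypothetical monitoring run where performance has drifted.
events = []
status = monitor_and_improve(
    model_outputs=[1, 0, 1, 1],
    labels=[1, 1, 1, 1],
    retrain=lambda reason: events.append(reason),
)
print(status, events)
```

The point of the loop is that monitoring output is never a dead end; every measurement either confirms the system is healthy or produces a concrete reason to improve it.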
Building strong systems for AI governance is no small feat, but it's essential to ensure the system works safely, ethically, and transparently. The NIST AI Risk Management Framework provides actionable guidance that helps organizations build trustworthy AI systems that function effectively while protecting both company and consumer data.
Transcend is the governance layer for enterprise data, helping companies automate and future-proof their privacy compliance and implement robust AI governance across an entire tech stack.
Transcend Pathfinder gives your company the technical guardrails to adopt new AI technologies with confidence, while Transcend Data Mapping goes beyond observability to power your privacy program with smart governance suggestions.
Ensure nothing is tracked without user consent using Transcend Consent, automate data subject request workflows with Privacy Requests, and mitigate risk with smarter privacy Assessments.