
NIST AI Risk Management Framework: A tl;dr

The NIST AI Risk Management Framework (AI RMF) is a guide designed to help organizations manage AI risks at every stage of the AI lifecycle—from development to deployment and even decommissioning.

Wiz Experts Team
6 minute read

The NIST AI Risk Management Framework (AI RMF) provides a structured way to identify, assess, and mitigate AI risks without stifling innovation.

The rapid pace of AI adoption brings serious challenges to organizations looking to tap into the transformational power of AI: How do you ensure AI systems are reliable, ethical, and secure? Can you effectively identify and mitigate risks across the AI lifecycle? And perhaps most importantly, how do you navigate the growing maze of AI regulations while building trust with your users?

Addressing these challenges requires more than intuition—it demands a structured, industry-aligned approach. That’s where the National Institute of Standards and Technology (NIST), a global authority promoting standards for security and innovation, comes in. The NIST AI Risk Management Framework (AI RMF) aims to guide organizations in securing their AI adoption in a compliant and responsible way.

Why is AI risk management essential?

AI risk management is essential for addressing AI risks before they can disrupt businesses or cause harm to users. Now more than ever, you need to adopt AI risk management practices. Here’s why:

  1. AI systems are ubiquitous. AI doesn’t operate in a vacuum. It’s embedded in industries that touch every aspect of our lives, from diagnosing medical conditions to approving loans. Without strong safeguards, even minor oversights could lead to significant fallout. A real-world example: Tesla's Autopilot feature has caused multiple incidents, including crashes and fatalities, due to misinterpretations of the vehicle's surroundings.

  2. AI systems are increasingly complex. Advanced AI models, such as large language models (LLMs), operate in ways that are often opaque to users. This is referred to as the “black-box” problem. Lack of transparency makes it difficult to secure, audit, or trust AI decisions—especially when it comes to GenAI security. A case in point: Deepfake technology has been used to create realistic but completely fake videos, causing public figures to be misrepresented.

  3. AI regulations are becoming more pressing. Governments are stepping in with strict requirements that mandate transparency and risk mitigation for AI systems. The EU's AI Act and the California Consumer Privacy Act (CCPA) are two major regulations affecting AI systems that organizations need to abide by.

  4. Aligning AI with organizational values is a must. Mismanaged AI can lead to ethical missteps that erode public trust. For example, in January 2025, Apple faced backlash over its AI-powered news summarization tool, which misrepresented sensitive topics, prompting the company to pause the feature and work on improvements.

It’s clear that unaddressed AI risks can disrupt businesses and even cause harm to users. Standardizing AI risk management helps organizations by providing clear guidelines that ensure regulatory compliance, maintain ethical standards, and enhance public trust.

A closer look at the NIST AI RMF

You can think of the NIST AI RMF as a playbook for responsible AI adoption. It’s designed to help businesses stay ahead of risks while ensuring AI systems are secure, ethical, and transparent. 

Keep in mind that adoption of the framework is entirely voluntary, so NIST made it adaptable rather than a “one-size-fits-all” rulebook—meaning organizations of all sizes, across any industry and country, can tailor it to their specific needs.

Why was the NIST AI RMF created?

NIST created the AI RMF in response to the increasing complexity of AI systems and the growing need for standards, a need later underscored by the October 30, 2023, AI Executive Order, which directed NIST to build on the framework. By facilitating collaboration between government, industry, and academia, NIST designed the framework with the following key objectives:

  • To establish consistent, actionable standards for managing AI risks

  • To enable organizations to identify and address potential threats before they escalate

  • To build a foundation for ethical, secure, and transparent AI practices that strengthen public trust

When was the NIST AI RMF created?

The first draft of the NIST AI RMF debuted in March 2022, with the final version rolling out in January 2023. 

Figure 1: The official NIST AI RMF timeline (Source: NIST)

Staying ahead of the curve, NIST also introduced the Generative AI Profile in July 2024, catering specifically to the challenges posed by rapidly evolving GenAI systems.

What is the structure of the NIST AI RMF?

The NIST AI RMF is divided into two major parts that together guide your organization in managing AI risks across the entire lifecycle. 

The framework also includes supporting materials to help you apply these guidelines effectively, specifically:

  • AI RMF Playbook: A step-by-step guide for implementing the NIST AI RMF

  • AI RMF Roadmap: A timeline for adopting AI risk management practices

  • AI RMF Crosswalks: Tools to map existing practices to the NIST AI RMF

  • Perspectives: Different views of AI risk management tailored for specific sectors or needs

  • Use Cases: Real-world examples of how organizations are operationalizing the NIST AI RMF

Next, let’s look at the two parts of the NIST AI RMF in some more detail. 

NIST AI RMF - Part 1. Trusted AI Systems and Organizational Risks

Part 1 of the NIST AI RMF focuses on defining what makes an AI system “trusted.” It lays out key principles like reliability, transparency, fairness, accountability, and security, and also introduces common AI risks such as: 

  • Bias: AI algorithms reflecting unintentional discrimination

  • Privacy violations: AI pipelines mishandling sensitive data

  • Security gaps: AI systems’ vulnerabilities that attackers can exploit

  • …and many others.

The goal of Part 1 is to help your organization identify, address, and reduce AI risks by promoting AI solutions that are transparent, auditable, and explainable—ensuring that AI systems are not only effective but also ethical and secure.

NIST AI RMF - Part 2. Framework’s Four Core Functions 

Part 2 of the AI RMF introduces the four core functions under which the framework organizes its actionable guidelines:

Core Function | What it helps you do | Why it matters
Govern | Define governance structures, assign roles, and outline responsibilities for managing AI risks | Helps align AI systems with standards, regulations, and organizational values
Map | Identify and assess risks throughout the AI lifecycle | Fosters proactive identification of risks to promote AI security, and ensures AI aligns with governance practices
Measure | Quantify and assess the performance, effectiveness, and risks of AI systems | Ensures that AI remains stable, efficient, and compliant over time
Manage | Develop strategies for mitigating risks and ensuring AI systems remain compliant and secure | Facilitates continuous monitoring, auditing, and improvement to minimize risk exposure

The goal of Part 2 is to provide you with a structured approach to AI risk management by organizing essential practices under these core functions, enabling your organization to integrate risk management into your AI systems and continuously enhance your AI solutions.
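To make the four core functions concrete, here is a minimal sketch of how an organization might track its risk-management activities under each function. The checklist item names below are illustrative assumptions, not the framework's official categories or subcategories.

```python
# Hypothetical checklist items organized under the four AI RMF core
# functions. The specific activities are illustrative examples only.
CORE_FUNCTIONS = {
    "Govern": ["roles_assigned", "ai_policy_documented"],
    "Map": ["ai_inventory_complete", "risks_identified"],
    "Measure": ["bias_metrics_tracked", "performance_monitored"],
    "Manage": ["mitigations_planned", "incident_response_defined"],
}

def coverage_report(completed: set[str]) -> dict[str, float]:
    """Return the fraction of illustrative checklist items completed
    per core function."""
    report = {}
    for function, items in CORE_FUNCTIONS.items():
        done = sum(1 for item in items if item in completed)
        report[function] = done / len(items)
    return report

# Example: an organization with governance in place and a partial AI
# inventory, but no measurement or mitigation practices yet.
print(coverage_report({"roles_assigned", "ai_policy_documented",
                       "ai_inventory_complete"}))
# → {'Govern': 1.0, 'Map': 0.5, 'Measure': 0.0, 'Manage': 0.0}
```

A report like this makes gaps visible at a glance: here, "Measure" and "Manage" are untouched, so that is where the next round of work should go.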

How can you adopt the NIST AI RMF?

Adopting the NIST AI RMF isn’t a one-size-fits-all journey, but with a systematic approach, you can make it work for your organization.

With the knowledge you’ve gathered about what the NIST AI RMF is and why it exists, you can use this practical 5-step approach to adopting the framework:

  • 1. Understand your AI ecosystem: Start by creating an AI bill of materials (AI-BOM) to gain visibility into your AI assets. This inventory helps ensure you understand the full scope of your AI systems and where potential vulnerabilities might be.

  • 2. Assess and prioritize risks: Use the framework’s “Map” function to identify risks across your AI systems. For example, does your customer-facing AI chatbot align with privacy laws? Are your models aligned with ethical guidelines?

  • 3. Determine your tier of maturity: NIST defines four tiers of AI maturity:

    • Tier 1 - Partial: Limited risk awareness

    • Tier 2 - Risk-Informed: A baseline understanding of risks and mitigations

    • Tier 3 - Repeatable: Risk management is systematic and documented

    • Tier 4 - Adaptive: AI risk management is fully integrated and evolving

By knowing your organization’s maturity level, you can benchmark your current AI security capabilities and prioritize future improvements.

  • 4. Integrate and act: Align the NIST AI RMF with your AI lifecycle. For example:

    • Apply the “Govern” function to establish clear accountability for your GenAI tools.

    • Use the “Measure” function to continuously test and refine AI outputs for fairness and accuracy.

You can tailor (and expand) the AI RMF Playbook to create your own set of policies and standards with clear responsibilities and prioritization.

  • 5. Monitor, learn, and iterate: AI systems evolve, and so should your approach to managing them. Regular updates—like incorporating NIST’s Generative AI Profile for GenAI security and compliance—ensure your risk management strategy stays ahead of the curve.
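The AI-BOM from step 1 can be sketched as a simple structured inventory that also supports the “Map” step. The schema, field names, and example assets below are illustrative assumptions, not a standard AI-BOM format.

```python
from dataclasses import dataclass, asdict
import json

# Hypothetical AI-BOM entry; fields and values are illustrative only.
@dataclass
class AIAsset:
    name: str
    asset_type: str        # e.g. "model", "dataset", "inference endpoint"
    owner: str             # team accountable under the "Govern" function
    data_sensitivity: str  # e.g. "public", "internal", "PII"
    internet_facing: bool

ASSETS = [
    AIAsset("support-chatbot", "inference endpoint", "cx-team", "PII", True),
    AIAsset("churn-model-v3", "model", "data-science", "internal", False),
]

def flag_exposed_sensitive(assets):
    """Surface assets worth prioritizing in the 'Map' step:
    internet-facing services that handle sensitive data."""
    return [a.name for a in assets
            if a.internet_facing and a.data_sensitivity == "PII"]

# Export the inventory and list the highest-priority assets.
print(json.dumps([asdict(a) for a in ASSETS], indent=2))
print(flag_exposed_sensitive(ASSETS))  # ['support-chatbot']
```

Even a toy inventory like this shows the point of the exercise: once assets are recorded with their exposure and data sensitivity, prioritizing risk assessments becomes a query rather than guesswork.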

For organizations seeking to accelerate this process, AI security posture management (AI-SPM) offers a comprehensive, proactive solution for AI risk management in cloud environments. Wiz AI-SPM simplifies AI risk management with features like an agentless AI-BOM, automated risk assessments, and attack path analysis, all designed to align with the NIST AI RMF.

What’s next?

The NIST AI RMF offers a clear path for managing the complex landscape of AI risks and regulations. By adopting this framework, organizations can ensure their AI systems remain ethical, secure, and compliant with evolving global regulations.

Wiz is committed to helping organizations enhance their AI security posture and operationalize AI risk management frameworks effectively, including NIST compliance, through our AI-SPM offering.

Ready to learn more? Visit the Wiz for AI webpage, or if you prefer a live demo, we would love to connect with you.
