AI Security Posture Assessment

Get visibility into your AI pipelines, detect pipeline misconfigurations, and uncover attack paths to your AI services so you can securely introduce AI into your environment.

AI-BOM: Building an AI Bill of Materials

Wiz Experts Team

What is an AI bill of materials (AI-BOM)?

An AI bill of materials (AI-BOM) is a complete inventory of all the assets in your organization’s AI ecosystem. It documents datasets, models, software, hardware, and dependencies across the entire lifecycle of AI systems—from initial development to deployment and monitoring. By cataloging these details, an AI-BOM provides the visibility you need to secure your AI systems effectively.

If you’re familiar with a software bill of materials (SBOM), think of the AI-BOM as a tailored version specifically for AI systems. Unlike traditional software, AI involves non-deterministic models, evolving algorithms, and data dependencies, all of which require a more expansive and detailed approach. The AI-BOM captures these complexities, providing the foundation for effective AI security operations (AI SecOps).

With an AI-BOM in place, you can identify vulnerabilities, mitigate risks, and build a secure and trustworthy AI environment that supports your organization’s goals.

In this article, we’ll take a closer look at what an AI-BOM is, why it matters, and how you can operationalize it to protect your organization effectively.

Benefits of AI-BOMs

AI is transforming how organizations operate, driving innovation and efficiency across technical and non-technical teams alike. Whether your organization is developing AI models in-house, embedding third-party tools like ChatGPT, or leveraging AI-powered tools like Grammarly and Asana, AI is becoming integral to daily workflows. 

But with AI’s widespread adoption comes a new security challenge: How can you protect your organization against an ever-expanding AI attack surface? Sensitive data, code, and machine learning models are all potential targets, requiring security teams to adopt a structured approach to mitigate AI security risks, with a special focus on GenAI security.

As organizations increasingly adopt hybrid tech stacks, the first challenge in securing AI is achieving full visibility into the AI ecosystem. The solution? Implementing an AI/ML BOM. An AI-BOM isn’t just a “nice-to-have”—it’s essential if you want to secure your AI systems and manage them effectively. By providing a structured and complete view of your AI assets, the AI-BOM ensures:

  • Risk mitigation: An AI-BOM helps you identify weak spots like outdated libraries, insecure dependencies, or issues with your datasets or models. It’s also a critical tool for managing risks throughout the lifecycle of your AI systems and responding quickly to incidents.

  • Compliance: With regulations like the EU AI Act and frameworks like the NIST AI RMF taking shape, staying compliant is becoming more complex. An AI-BOM ensures you have the documentation you need to meet new requirements and prove your AI systems are secure and responsible.

  • Building trust: Transparency matters. Whether you’re addressing bias, avoiding data misuse, or simply demonstrating sound security practices, an AI-BOM builds trust by documenting how your systems work and how they’re safeguarded, assuring stakeholders that your organization follows AI security best practices.

An AI-BOM is your first line of defense, but it is not just about security: It also supports responsible AI by tracking modifications, documenting usage, and flagging ethical concerns. This level of detail helps you stay accountable and ensures your AI is reproducible and auditable, creating a solid foundation for secure and trustworthy AI governance across your organization.

AI-BOMs for AI Supply Chain Risk Management

AI models are rarely built from scratch. Instead, they inherit dependencies from open-source repositories, third-party model providers, and pre-trained architectures (e.g., Hugging Face, TensorFlow Hub, OpenAI APIs). Just as traditional software supply chains face vulnerabilities through third-party libraries, AI systems introduce new risks via external datasets, model weights, and training pipelines.

An AI-BOM plays a crucial role in AI supply chain security by:

  • Tracking third-party models: Documenting pre-trained models, their sources, and any modifications applied to them.

  • Identifying data lineage: Capturing details on how datasets were collected, labeled, and whether they introduce biases or security risks.

  • Managing model dependencies: Logging AI frameworks (e.g., PyTorch, TensorFlow, scikit-learn) and ensuring they are updated to secure versions.

  • Monitoring for adversarial risks: Recognizing backdoors, model drift, or data poisoning attempts that can compromise AI integrity.

Example: Suppose your AI system uses a pre-trained NLP model from an open-source repository. If that model contains undocumented vulnerabilities or was trained on biased or sensitive data, your organization could unknowingly introduce risks into production. With an AI-BOM, security teams can flag such dependencies, enforce version controls, and monitor for known vulnerabilities in AI supply chains.
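One way to enforce such version controls is to record a checksum for every third-party artifact in the AI-BOM and verify it before the model is loaded. The sketch below shows this hash-pinning pattern in Python; the file name and its contents are illustrative stand-ins for a real downloaded model.

```python
# Sketch: pin and verify the checksum of a downloaded model artifact before use.
# "model.bin" and its contents are placeholders, not a real model download.
import hashlib
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Stream the file in chunks so large model weights don't exhaust memory."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_artifact(path: Path, pinned_sha256: str) -> bool:
    """Return True only if the artifact matches the hash recorded in the AI-BOM."""
    return sha256_of(path) == pinned_sha256

# Record the hash at download time; verify it again at load time.
model_path = Path("model.bin")
model_path.write_bytes(b"fake model weights")  # stand-in for a real download
pinned = sha256_of(model_path)                 # value you would store in the AI-BOM
assert verify_artifact(model_path, pinned)
```

If the artifact is later swapped or tampered with, the hash comparison fails and the load can be blocked before the model reaches production.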

Example of how Wiz maps the visibility of an AI-BOM to the Security Graph

How does an AI-BOM help with GenAI security?

GenAI solutions introduce specialized security risks that your organization can only begin to track once it has full visibility into its GenAI adoption. Unlike traditional predictive AI, GenAI dynamically generates text, code, and media, making your ecosystem vulnerable to data leakage, adversarial manipulation, and unmonitored dependencies.

An AI-BOM helps mitigate these risks by:

  • Detecting potential exposure of proprietary or sensitive information in GenAI inputs and outputs

  • Tracking external APIs and libraries to notify you about well-known security gaps and detect new ones

  • Identifying unauthorized model versions or bypassed model safeguards to detect model tampering

  • Monitoring for model drift and compliance gaps to ensure ongoing performance and regulatory alignment

As GenAI adoption grows, embedding GenAI-specific security controls into your AI-BOM is critical for mitigating GenAI risks and maintaining trust and compliance across your AI ecosystem.
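The first control above, detecting sensitive information in GenAI inputs and outputs, can be sketched as a simple pattern scan over prompts and completions. The patterns below are illustrative examples, not a complete PII or secrets ruleset.

```python
# Sketch: a minimal filter that flags likely sensitive strings in GenAI
# prompts or completions. The patterns are illustrative, not exhaustive.
import re

SENSITIVE_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "aws_access_key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
}

def scan_text(text: str) -> list[str]:
    """Return the names of every pattern that matches the given text."""
    return [name for name, pat in SENSITIVE_PATTERNS.items() if pat.search(text)]

# Example: flag a completion that leaks an email address.
findings = scan_text("Contact jane.doe@example.com for the API key.")
print(findings)  # ['email']
```

In practice such a scanner would sit in front of the model API, blocking or redacting flagged content before it leaves the organization.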

How AI-BOMs Fit into AI Security & Compliance Frameworks

AI security and compliance are rapidly evolving, with frameworks emerging to guide risk management and governance. An AI-BOM can serve as the foundation for meeting industry standards and regulatory requirements, ensuring that organizations can trace, audit, and secure their AI assets effectively.

Regulatory & Industry Alignment

  • NIST AI Risk Management Framework (NIST AI RMF)

    • AI-BOMs align with NIST AI RMF's emphasis on model governance, transparency, and continuous monitoring.

    • Helps organizations document risks across the AI lifecycle, ensuring AI deployments adhere to Trustworthy AI principles.

  • EU AI Act

    • Establishes strict transparency and risk assessment requirements for AI models.

    • An AI-BOM enables compliance by tracking high-risk AI systems, identifying proprietary components, and ensuring proper documentation for audits.

  • ISO/IEC 42001 (AI Management System Standard)

    • Introduces AI lifecycle governance similar to ISO 27001 for cybersecurity.

    • AI-BOMs support ISO 42001 compliance by providing structured documentation on model provenance, training data sources, and security controls.

  • Model Risk Management (MRM) in Financial Services

    • Banks and financial institutions follow MRM frameworks to govern AI models used in lending, fraud detection, and risk assessments.

    • AI-BOMs improve MRM by tracking model lineage, ensuring transparency, and flagging unauthorized model modifications.

Example of an AI-BOM filtered for the Azure AI services

Key components of an AI-BOM

To be effective, an AI-BOM must support different AI use cases and technologies in relevant detail while still providing an easy-to-navigate interface. Below is a broad, though non-exhaustive, breakdown of the key components that an AI-BOM should include:

  • Datasets: Names, versions, formats, and any associated privacy requirements

  • Models: Information such as model names, types, algorithms, hyperparameters, training methods, and versioning details

  • Software components: Third-party libraries, frameworks, runtime environments, proprietary code, and integration points within the AI stack

  • Hardware requirements: Details about servers, GPUs, cloud infrastructure, and networking devices

  • Security specifications: Encryption methods, access controls, and governance measures

  • People and processes: Team roles, developer and owner information, and details about operational workflows like CI/CD pipelines

  • Usage documentation: Input/output specifications, intended and potential misuse cases, bias considerations, ethical implications, and licensing details

Most importantly, your AI-BOM should account for any custom extensions or modifications specific to your organization’s needs and include a digital signature to ensure authenticity and integrity.

By acting as a central hub of information, your AI-BOM makes it easier to secure, manage, and adapt your AI systems as they evolve.
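To make the components above concrete, here is one way a single AI-BOM entry could be represented as structured data. The field names loosely follow the list above; they are an illustrative schema, not the SPDX AI or any vendor format.

```python
# Sketch: one possible schema for a single AI-BOM entry.
# Field names are illustrative, not a standard format.
import json
from dataclasses import dataclass, field, asdict

@dataclass
class AIBOMEntry:
    name: str
    asset_type: str                                # "dataset", "model", "library", ...
    version: str
    source: str                                    # where the asset came from
    owner: str                                     # accountable team or person
    tags: list[str] = field(default_factory=list)  # e.g. "approved", "restricted"

entry = AIBOMEntry(
    name="sentiment-classifier",   # hypothetical model name
    asset_type="model",
    version="1.4.0",
    source="internal",
    owner="ml-platform-team",
    tags=["approved"],
)

# Serialize for storage or exchange with other tooling.
record = json.dumps(asdict(entry), indent=2)
print(record)
```

Keeping entries in a machine-readable format like this is what enables the tagging, filtering, and automated documentation discussed later in this article.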

How do you create an AI-BOM?

Developing an AI-BOM might seem complex, but it’s a manageable process if approached strategically. While you can create one from scratch, most organizations benefit from leveraging existing frameworks and tools.

To create your AI-BOM, you can follow these steps:

  1. Start with a proven template: Use established frameworks like SPDX AI to capture key details across datasets, models, software, and hardware; these templates provide a strong foundation and save time and effort. While there are different ways to structure an AI-BOM, the most practical and user-friendly option is an interactive catalog, which allows easy navigation while documenting all the essential elements—software, hardware, and associated people and processes—for each AI system in your ecosystem.

  2. Automate where possible: Implement tools that track updates in real time, ensuring your AI-BOM stays accurate and up-to-date.

  3. Introduce advanced management: Add features like tagging (e.g., “approved,” “unreviewed,” “restricted”) and automated documentation generation for seamless updates and insights.
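The digital signature mentioned earlier can be added during step 3. The sketch below signs a serialized AI-BOM with HMAC-SHA256 so tampering is detectable; the shared-secret scheme is for simplicity, and a real deployment would more likely use asymmetric signing (e.g., via Sigstore).

```python
# Sketch: attach an integrity signature to a serialized AI-BOM so any
# tampering can be detected. HMAC with a shared secret is shown for
# simplicity; production systems would favor asymmetric signatures.
import hashlib
import hmac
import json

SECRET_KEY = b"demo-key-not-for-production"  # placeholder secret

def sign_bom(bom: dict) -> str:
    """Sign a canonical (sorted-keys) JSON serialization of the AI-BOM."""
    payload = json.dumps(bom, sort_keys=True).encode()
    return hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()

def verify_bom(bom: dict, signature: str) -> bool:
    """Constant-time comparison guards against timing attacks."""
    return hmac.compare_digest(sign_bom(bom), signature)

bom = {"models": [{"name": "sentiment-classifier", "version": "1.4.0"}]}
sig = sign_bom(bom)
assert verify_bom(bom, sig)

bom["models"][0]["version"] = "1.4.1"  # any change invalidates the signature
assert not verify_bom(bom, sig)
```

Sorting the keys before signing matters: it gives a canonical serialization, so two logically identical BOMs always produce the same signature.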

When managing your own AI-BOM, challenges can arise in maintaining accuracy in dynamic AI environments, managing third-party components, and keeping pace with evolving regulations. To address these challenges, consider advanced solutions like the AI-BOM offered as part of Wiz AI security posture management (AI-SPM).

Building an AI-BOM with Wiz

Creating and maintaining an AI-BOM can be a complex task, especially as AI ecosystems grow in scale and complexity. Wiz’s AI Security Posture Management (AI-SPM) platform automates and simplifies this process, providing continuous visibility, real-time risk monitoring, and compliance management for your AI assets.

An example interactive AI-BOM catalog autogenerated by Wiz AI-SPM

With agentless, cloud-native AI security, Wiz helps organizations operationalize their AI-BOM effectively through:

1. Automated Discovery and Inventory

Wiz automatically scans your AI ecosystem across cloud, on-premises, and third-party services, identifying and cataloging all AI assets. This includes:

  • Datasets: Capturing dataset names, versions, formats, lineage, and associated privacy requirements.

  • Models: Documenting model types, architectures, training datasets, hyperparameters, and fine-tuning details.

  • Software & Hardware: Identifying third-party AI frameworks, libraries, runtime dependencies, and infrastructure (GPUs, cloud compute instances).

  • Dependencies & Supply Chain Risks: Mapping model dependencies (e.g., pre-trained models from Hugging Face, OpenAI APIs) to track potential vulnerabilities.

2. Real-Time AI Risk Monitoring

Wiz continuously monitors your AI-BOM for security risks, identifying threats such as:

  • Outdated or Insecure AI Libraries: Detecting deprecated AI frameworks or vulnerable third-party packages.

  • Data Leakage Risks: Scanning GenAI inputs and outputs for exposure of sensitive data, PII, or proprietary information.

  • Model Tampering & Unauthorized Modifications: Flagging unapproved AI model versions, bypassed security safeguards, or drift in model behavior.

  • Regulatory Compliance Gaps: Highlighting AI systems that deviate from NIST AI RMF, the EU AI Act, or ISO/IEC 42001 AI governance standards.

3. Interactive and User-Friendly Interface

Wiz’s AI-BOM is designed as an interactive catalog, making it easy to navigate and manage. Key features include:

  • Tagging and Filtering: Tag assets with labels like “approved,” “unreviewed,” or “restricted” for streamlined management.

  • Search and Visualization: Quickly locate specific assets or visualize dependencies across your AI ecosystem.

  • Digital Signatures: Ensure the authenticity and integrity of your AI-BOM with built-in digital signatures.

This intuitive interface reduces the complexity of managing AI-BOMs, enabling both technical and non-technical stakeholders to collaborate effectively.

4. Seamless Integration with AI Security Posture Management (AI-SPM)

An AI-BOM is just one piece of a comprehensive AI security strategy—and Wiz integrates it into full AI security posture management (AI-SPM).

With AI-SPM, Wiz enables:

  • AI Model Drift Detection: Continuously monitor performance deviations, bias shifts, or adversarial manipulations.

  • Governance & Policy Enforcement: Apply role-based access control (RBAC), encryption, and security baselines to protect AI assets.

  • Automated Compliance Reporting: Generate audit-ready documentation for AI risk management frameworks and regulations.

Whether you’re dealing with AI-specific vulnerabilities or preparing for future regulations, Wiz has the tools to keep your organization confidently innovating with AI while maintaining robust security and compliance. 

Ready to learn more? Visit the Wiz for AI webpage, or if you prefer a live demo, we would love to connect with you.

Accelerate AI Innovation, Securely

Learn why CISOs at the fastest growing companies choose Wiz to secure their organization's AI infrastructure.

Get a demo