
AI Security Tools: The Open-Source Toolkit

We’ll take a deep dive into the MLSecOps tools landscape by reviewing the five foundational areas of MLSecOps, exploring the growing importance of MLSecOps for organizations, and introducing six interesting open-source tools to check out.


What are the best tools to secure AI? If you search for the answer online, you’ll notice a sizable knowledge gap. While plenty of information exists about SecOps tools powered by AI, much less is available about security tools that protect AI and machine learning (ML) applications themselves. That’s where this article comes in.

In this blog post, we’ll take a deep dive into the MLSecOps tools landscape by reviewing the five foundational areas of MLSecOps, exploring the growing importance of MLSecOps for organizations, and introducing six interesting open-source tools to check out—plus the AI security posture management solution offered by Wiz.

An intro to MLSecOps

MLSecOps, or AISecOps, is a new discipline that aims to define SecOps processes and best practices tailored specifically to securing ML and AI pipelines at scale. Combining key elements of SecOps and MLOps, MLSecOps emerged as a necessary augmentation to traditional DevSecOps, addressing the distinctive challenges of AI applications and MLOps.

What is MLSecOps?

Securing AI applications at scale is the goal of MLSecOps, a field that first gained a wider audience in 2023.

MLSecOps is still very new (hence the lack of information online!), but many resources are being championed by the MLSecOps Community. One of these contributions is the definition of five foundational areas of MLSecOps, each with objectives that security teams can focus their efforts on:

  1. Supply chain vulnerability: Assess the security of the entire AI supply chain.

  2. Model provenance: Trace the origin, lineage, and evolution of AI models throughout their life cycle.

  3. Governance, risk, and compliance (GRC): Establish policies, procedures, and controls for adherence to internal and external regulatory standards. 

  4. Trusted AI: Offer AI systems that are transparent, fair, and accountable to both internal stakeholders and external users. 

  5. Adversarial machine learning: Secure AI systems by testing them against adversarial attacks, especially those aimed at influencing the model’s behavior.

(Keep in mind that this is the first iteration of MLSecOps objectives, and you can expect the list to grow over time.)

The MLSecOps Community’s list addresses two fundamental aspects of AI applications: the reliance on open-source or third-party providers for datasets, libraries, frameworks, models, infrastructure, and solutions; and the non-deterministic nature of AI models, which makes them difficult to understand and protect as well as impossible to fully control. By focusing on these five areas of AI security, businesses can release AI applications that are safe for both the organization and users. 

How are organizations embracing MLSecOps?

MLSecOps is a specialized discipline that requires purpose-built solutions.

While some large organizations have already embraced MLSecOps, most organizations are just beginning—or considering beginning—their journey. Given the technical knowledge and resources necessary to get started with MLSecOps, it makes sense that most businesses are starting from square one. 

Still, considering that MLSecOps is expected to become indispensable for all organizations as AI adoption keeps increasing—and considering that new regulations, such as the EU AI Act and the 2023 Executive Order on AI security, are coming into force—security teams need to make protecting AI applications a priority.

The top 6 open-source AI security tools

Choosing the right tools is the best way to bolster your AI security posture. Below, you can learn about some of the most interesting open-source AI security tools available. These tools were selected because they apply broadly across AI models and frameworks; more specialized open-source tools, such as TensorFlow Model Analysis, also exist and are worth researching to meet your organization’s unique needs.

When evaluating the adoption of these open-source tools, keep in mind that they may have limited maintenance and support. Your safest choice is to rely on third-party security providers—you can learn about Wiz’s AI security posture management at the end of the article.

  1. NB Defense

  2. Adversarial Robustness Toolbox

  3. Garak

  4. Privacy Meter

  5. Audit AI

  6. ai-exploits

1. NB Defense 

NB Defense is a JupyterLab extension and CLI tool for AI vulnerability management, offered by Protect AI. 

Figure 1: View of NB Defense's contextual guidance (Source: nbdefense.ai)

JupyterLab is one of the most widely used model development environments among data science teams worldwide. By providing vulnerability management directly at the source of model development, NB Defense allows teams to embed security early in the ML life cycle. It also enables non-security personnel to introduce reliable security controls in a straightforward, easy-to-use way.

  • Main functionality: Detecting vulnerabilities early—from secrets and PII data to common vulnerabilities and exposures (CVEs) and third-party licenses—by providing contextual guidance for data scientists within JupyterLab and automated advanced repository scanning for security operators 

  • MLSecOps area of focus: Trusted AI, via DevSecOps and vulnerability analysis
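
If you want to try NB Defense outside of JupyterLab, the project also ships a CLI. Below is a minimal sketch of driving that CLI from Python; it assumes NB Defense has been installed (for example with pip install nbdefense) and that the documented scan subcommand is available, and the repository path is a placeholder for your own project.

    # Minimal sketch: run an NB Defense scan over a repository of notebooks.
    # Assumes `pip install nbdefense` and the documented `scan` subcommand;
    # the target path is a placeholder for your own repository.
    import subprocess
    import sys

    def scan_notebooks(repo_path: str = ".") -> int:
        """Run an NB Defense scan and return its exit code."""
        result = subprocess.run(
            ["nbdefense", "scan", repo_path],
            capture_output=True,
            text=True,
        )
        print(result.stdout)  # findings: secrets, PII, CVEs, licenses
        if result.returncode != 0:
            print(result.stderr, file=sys.stderr)
        return result.returncode

    if __name__ == "__main__":
        raise SystemExit(scan_notebooks("."))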

2. Adversarial Robustness Toolbox 

The Adversarial Robustness Toolbox (ART) is a Python library for ML defense against adversarial threats, hosted by the Linux AI & Data Foundation.

Figure 2: A Computer Vision adversarial patch with ART (Source: github.com/Trusted-AI/adversarial-robustness-toolbox)

AI opens many avenues for adversarial actors, from extracting users’ data to creating deepfakes or spreading misinformation. ART was created with both developers and researchers in mind: the library supports evaluating a wide variety of models and applications built on any data type and defending them against the most common adversarial threats to AI.

  • Main functionality: Defends against adversarial evasion, poisoning, inference, and extraction attacks through a wide catalog of pre-built attacks, estimators, defenses, evaluations, and metrics

  • MLSecOps area of focus: Adversarial machine learning, via red and blue teaming

Foolbox and CleverHans are two libraries similar to ART that are worth checking out too. 
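
To make this concrete, here is a minimal sketch of an ART evasion test against a scikit-learn classifier: wrap the model in an ART estimator, craft adversarial examples with the Fast Gradient Method, and compare clean versus adversarial accuracy. The dataset, model, and eps value are arbitrary choices for illustration, not recommendations.

    # Minimal ART evasion sketch: wrap a scikit-learn model, generate
    # adversarial examples with the Fast Gradient Method, and compare
    # accuracy on clean vs. adversarial inputs.
    from sklearn.datasets import load_iris
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    from art.attacks.evasion import FastGradientMethod
    from art.estimators.classification import SklearnClassifier

    X, y = load_iris(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
    classifier = SklearnClassifier(model=model)

    attack = FastGradientMethod(estimator=classifier, eps=0.5)
    X_adv = attack.generate(x=X_test)

    print("clean accuracy:      ", model.score(X_test, y_test))
    print("adversarial accuracy:", model.score(X_adv, y_test))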

3. Garak 

Garak is a Python package for the vulnerability scanning of large language models (LLMs), created by Leon Derczynski. 

Figure 3: A vulnerability scan of ChatGPT by Garak (Source: github.com/leondz/garak)

The current wave of AI adoption began with the commercialization of LLMs like ChatGPT, and organizations are quickly adopting LLMs to unlock business potential, often via third-party integrations. Garak can scan the most popular LLM providers and frameworks, from OpenAI to HuggingFace and LangChain, and probe them for weaknesses.

  • Main functionality: Provides pre-defined vulnerability scanners for LLMs to probe for hallucinations, misinformation, harmful language, jailbreaks, vulnerability to various types of prompt injection, and more 

  • MLSecOps area of focus: Adversarial machine learning, via vulnerability analysis of LLMs for red teaming
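
Garak is driven from the command line; the sketch below simply invokes it from Python. It assumes garak has been installed with pip and that an OpenAI API key is present in the environment. The model name and probe family are examples only; the flag names follow the project’s README, and the available probes can be listed with --list_probes.

    # Minimal sketch: run a garak probe against an OpenAI-hosted model.
    # Assumes `pip install garak` and OPENAI_API_KEY in the environment;
    # the model name and probe family are examples only.
    import subprocess
    import sys

    cmd = [
        sys.executable, "-m", "garak",
        "--model_type", "openai",
        "--model_name", "gpt-3.5-turbo",
        "--probes", "promptinject",
    ]
    result = subprocess.run(cmd)
    print("garak exit code:", result.returncode)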

4. Privacy Meter 

Privacy Meter is a Python library to audit the data privacy of ML models, developed by the NUS Data Privacy and Trustworthy Machine Learning Lab.

Figure 4: How to run an attack with Privacy Meter (Source: github.com/privacytrustlab/ml_privacy_meter/)

AI models are trained on large amounts of data. Training data leakage is one of the most common and expensive threats to AI models. Privacy Meter provides a quantitative analysis of the fundamental privacy risks of (almost) any statistical and ML model, collected into ready-to-use reports with extensive information about both the individual and aggregate risks of data records. Privacy scores allow you to easily identify training data records at high risk of being leaked through model parameters or predictions. 

  • Main functionality: Performs state-of-the-art membership inference attacks, customizable via configuration files to use a variety of predefined privacy games, algorithms, and signals 

  • MLSecOps area of focus: Trusted AI, via risk assessment—especially as part of the data protection impact assessment process 
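
Because the Privacy Meter API has changed across releases, the sketch below does not use the library at all; it only illustrates the signal a membership inference audit relies on, namely that models tend to be more confident on records they were trained on. Privacy Meter automates far more rigorous versions of this idea and packages the results into reports.

    # Library-free illustration of a simple membership inference test.
    # This is NOT the Privacy Meter API; it only shows the underlying signal:
    # an overfit model is more confident on training members than non-members.
    import numpy as np
    from sklearn.datasets import load_breast_cancer
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split

    X, y = load_breast_cancer(return_X_y=True)
    X_mem, X_non, y_mem, y_non = train_test_split(X, y, test_size=0.5, random_state=0)

    model = RandomForestClassifier(random_state=0).fit(X_mem, y_mem)

    def true_label_confidence(model, X, y):
        """Return the model's predicted probability for each record's true label."""
        proba = model.predict_proba(X)
        return proba[np.arange(len(y)), y]

    member_conf = true_label_confidence(model, X_mem, y_mem)
    nonmember_conf = true_label_confidence(model, X_non, y_non)

    # Simple threshold attack: guess "member" when confidence exceeds a cutoff.
    threshold = 0.9
    tpr = (member_conf > threshold).mean()     # members correctly flagged
    fpr = (nonmember_conf > threshold).mean()  # non-members wrongly flagged
    print(f"attack TPR: {tpr:.2f}, FPR: {fpr:.2f} (a large gap signals leakage risk)")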

5. Audit AI

Audit AI is a Python library for ML bias testing, offered by pymetrics.

Figure 5: Bias analysis against gender discrimination with Audit AI (Source: github.com/pymetrics/audit-ai/)

AI models learn from the patterns provided in training data and may perpetuate biases and discrimination present within it. Audit AI provides ways to measure bias for statistical and ML models via a user-friendly package built on two libraries that data scientists are very familiar with: pandas and sklearn. Data scientists can use the bias results provided by Audit AI to drive changes in the model development pipeline that can mitigate bias. 

  • Main functionality: Provides implementations of bias tests and algorithm-auditing techniques for classification and regression tasks, such as Fisher’s exact test, z-tests, and the Bayes factor

  • MLSecOps area of focus: Trusted AI, via manual testing and auditing
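
For a flavor of what these tests look like, here is a small, self-contained example of the kind of check Audit AI performs: comparing pass rates between two groups with Fisher’s exact test and the 4/5ths (80%) rule. It uses scipy directly rather than Audit AI’s own functions, and the counts are made-up numbers for demonstration.

    # Illustration of a group-fairness check similar to those in Audit AI:
    # compare pass rates between two groups using Fisher's exact test and
    # the 4/5ths rule. Uses scipy, not Audit AI's API; counts are made up.
    from scipy.stats import fisher_exact

    #          passed  failed
    group_a = [180, 20]    # e.g., majority group
    group_b = [130, 70]    # e.g., protected group

    odds_ratio, p_value = fisher_exact([group_a, group_b])

    pass_rate_a = group_a[0] / sum(group_a)
    pass_rate_b = group_b[0] / sum(group_b)
    impact_ratio = pass_rate_b / pass_rate_a  # flag if below 0.80

    print(f"pass rates: A={pass_rate_a:.2f}, B={pass_rate_b:.2f}")
    print(f"adverse-impact ratio: {impact_ratio:.2f} (4/5ths rule flags < 0.80)")
    print(f"Fisher's exact test p-value: {p_value:.4f}")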

6. ai-exploits 

ai-exploits is a collection of exploits and scanning templates for real-world vulnerabilities, maintained by Protect AI. 

Figure 6: Public vulnerabilities listed on Huntr (Source: huntr.com)

Security teams can extend the AI expertise provided by internal SMEs by testing AI applications against the exploits collected in ai-exploits. Built on research performed by Protect AI and vulnerabilities discovered on the Huntr Bug Bounty Platform, this collection of real-world exploits aims to help you protect your current AI systems. ai-exploits also helps you vet third-party providers.

  • Main functionality: Scans for a variety of vulnerabilities via pre-built tools; each tool combines modules to exploit a vulnerability with templates to automatically scan for it. Coverage currently includes H2O, MLflow, and Ray, targeting remote code execution, local file inclusion, arbitrary file writes, cross-site request forgery, and server-side request forgery.

  • MLSecOps area of focus: Supply chain vulnerability and adversarial machine learning, via vulnerability scanning and red teaming
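
A common way to run scanning templates like these is with the open-source nuclei scanner. The sketch below shows that workflow in heavily hedged form: it assumes nuclei is installed and the repository has been cloned, the template directory path is hypothetical (check the repo’s actual layout), and the target must be an MLflow server you are authorized to test.

    # Heavily hedged sketch: run scanning templates from ai-exploits with
    # nuclei against an MLflow tracking server you are authorized to test.
    # Assumes nuclei is installed and the repo is cloned locally; the
    # TEMPLATE_DIR path is hypothetical -- check the repository layout.
    import subprocess

    TARGET = "http://localhost:5000"      # an MLflow server you own/operate
    TEMPLATE_DIR = "ai-exploits/mlflow"   # hypothetical path to the templates

    subprocess.run(["nuclei", "-u", TARGET, "-t", TEMPLATE_DIR], check=False)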

Boost your AI security with Wiz

For your cloud-native AI applications, you can rely on Wiz’s AI security posture management (AI-SPM).

Figure 7: The AI Security Dashboard offered as part of Wiz’s AI-SPM

AI-SPM offers full-stack visibility into your AI pipelines by automatically discovering and documenting AI services and technologies to produce an AI-BOM, protecting you against shadow AI. AI-SPM also enforces secure configuration baselines with built-in rules that can detect misconfigurations in AI services as well as in your IaC, and it allows you to proactively discover and remove critical attack paths related to AI models and training data with accurate risk prioritization.

With Wiz, you can rely on state-of-the-art managed infrastructure and services for your AI security, which can help you set up a strong security layer for your AI pipelines—right now. Learn more by visiting the Wiz for AI webpage. If you prefer a live demo, we would love to connect with you.

