AI Security Posture Assessment

Get visibility into your AI pipelines, detects pipeline misconfigurations, and uncovers attack paths to your AI services, allowing you to securely introduce AI into your environment.

Dark AI Explained

Dark AI involves the malicious use of artificial intelligence (AI) technologies to facilitate cyberattacks and data breaches. Dark AI includes both accidental and strategic weaponization of AI tools.

Wiz Experts Team
7 minutes read

What is dark AI?

Dark AI involves the malicious use of artificial intelligence (AI) technologies to facilitate cyberattacks and data breaches. Dark AI includes both accidental and strategic weaponization of AI tools. 

The positive benefits of AI include enhanced automation, boosted data-driven decision-making, streamlined workflows, and optimized tech and IT costs. To understand dark AI, imagine these capabilities and advantages in the hands of adversaries.

While threat actors use AI for myriad purposes, the primary objective typically involves compromising enterprise IT ecosystems and accessing sensitive data. Dark AI provides cybercriminals with new and unique ways to exploit an enterprise’s IT infrastructure. Using AI, cybercriminals can also accelerate the deployment of more commonly known cyber threats like malware, ransomware, and social engineering attacks like phishing.

According to Gartner, 8 out of 10 senior enterprise risk executives claimed that AI-powered cyberattacks were the top emerging risk in 2024.

According to Gartner, 8 out of 10 senior enterprise risk executives claimed that AI-powered cyberattacks were the top emerging risk in 2024. Sixty-six percent claimed that AI-driven misinformation was 2024’s most potent threat. As AI continues to grow, develop, and disrupt diverse industries, enterprises must be aware of the looming threat of dark AI.

Why is dark AI so dangerous? 

Before delving into how and why dark AI threatens enterprises unlike any other security risk, let’s delve deeper into the business applications of AI.

AI plays a major role in contemporary cybersecurity. Many of today’s leading businesses include AI tools in their cybersecurity stack. According to IBM, in 2023 businesses with AI-powered cybersecurity capabilities resolved data breaches 108 days faster than businesses without. These businesses also saved $1.8 million in data breach costs.

Aside from cybersecurity, businesses also use AI (especially GenAI) for various mission-critical cloud operations. Our research reveals that 70% of enterprises use cloud-based managed AI services such as Azure AI services, Azure OpenAI, Amazon SageMaker, the Azure ML studio, Google Cloud’s Vertex AI, and GCP AI Platform. Forty-two percent of respondents said they self-hosted AI to some degree with models such as BERT, DistilBERT, RoBERTa, T5, Llama, and MPNet. All of these AI models, workflows, and pipelines are vulnerable to dark AI attacks. 

Here’s a closer look at why dark AI should be every enterprise’s top cybersecurity priority: 

  • Enhanced attack frequency: Dark AI tools can deploy malicious code and other kinds of attacks at previously unseen speeds, constantly challenging an enterprise’s defenses. 

  • Higher attack volume: Dark AI tools allow threat actors to dispatch malicious code in bulk and without manual intervention. In the past, this would have been too time-consuming and resource-intensive for most threat actors. 

  • More realistic social engineering: Dark AI can help adversaries automate the deployment of highly realistic and believable emails and communications that trick employees into divulging sensitive data and granting access to enterprise networks.

  • Higher likelihood of prompt injections: With dark AI, adversaries can manipulate and corrupt the training data of various mission-critical GenAI applications. Successful prompt injection attacks hand over power and control of enterprise IT ecosystems to adversaries. 

  • Improved ability to bypass cybersecurity tools: Since dark AI tools can continuously analyze data, self-optimize, and automate attacks, adversaries can constantly exert pressure on an enterprise’s cybersecurity posture. If enterprises don’t continuously improve their fortifications, data breaches become even more of an imminent threat. 

  • Increased use of multimedia fraud: Threat actors can use dark AI to produce malicious and realistic multi-media artifacts. Cyberattackers use these media artifacts to cause reputational damage, spread fake news using deepfakes, and trick enterprise cybersecurity and authentication systems via voice cloning and biometric manipulation. Real-life example: In April 2024, a hacker attempted to trick a LastPass employee by impersonating the company’s CEO on a WhatsApp call. This was done by using an AI-generated audio deepfake, and this attack is just one of many examples where adversaries generate realistic audio (and/or photos) to bypass security mechanisms and trick employees. 

  • Larger number of cybercriminals: Dark AI empowers cybercriminals of all backgrounds, even those without technical acumen. Before the rise of AI, businesses only had to deal with threat actors with immense technical knowledge and resources. Today, with dark AI, anyone with a laptop and malicious intent can cause large-scale damage.

Real-world tools for dark AI attacks

This section highlights five real-world tools that threat actors can leverage for dark AI attacks. 

Remember that not all dark AI tools are inherently malicious. In many cases, threat actors may reverse engineer legitimate tools or repurpose tools for dark AI purposes.

ToolDescription
FraudGPTFraudGPT is a malicious mirror of ChatGPT that’s available through dark web marketplaces and social media platforms like Telegram. FraudGPT can help adversaries write malicious code, compose phishing messages, design hacking tools, create undetectable malware, and identify the most viewed or used websites and services.
AutoGPTAutoGPT is an open-source tool that hackers use for malicious purposes. While not inherently destructive, AutoGPT allows threat actors to establish malicious end goals and train models to self-learn to achieve those goals. With tools like AutoGPT, threat actors can attempt thousands of potentially destructive prompts that involve breaching an enterprise’s defenses, accessing sensitive data, or poisoning GenAI tools and training data.
WormGPTAnother nefarious cousin of ChatGPT, WormGPT has none of the guardrails and security measures that its GPT-3-based architecture originally featured. Hackers trained WormGPT with a vast amount of cyberattack- and hacker-related training data, making it a powerful weapon against unsuspecting enterprises.
PoisonGPTPoisonGPT is a unique dark AI tool because threat actors didn’t create it. Instead, PoisonGPT was an educational initiative conducted by researchers to reveal the vulnerabilities of large language models (LLMs) and the potential repercussions of poisoned LLMs and a compromised AI supply chain. By poisoning the training data of LLMs leveraged by enterprises, governments, and other institutions with PoisonGPT-esque tools and techniques, threat actors can cause unimaginable damage.
FreedomGPTFreedomGPT is an open-source tool that anyone can download and use offline. Because it doesn’t feature any of the guardrails or filters that its more mainstream cousins have, FreedomGPT is unique. Without these filters, threat actors can weaponize FreedomGPT with malicious training data. This makes it easy for threat actors to spread or inject misinformation, biases, dangerous prompts, or explicit content into an enterprise’s IT environment.

Best practices to mitigate dark AI threats

Even though dark AI threats loom large, there are steps you can take to mitigate cyber threats and secure your AI ecosystems. 

Leverage MLSecOps tools

MLSecOps, also known as AISecOps, is a field of cybersecurity that involves securing AI and ML pipelines. While a unified cloud security platform is the ideal solution to battle dark AI and other cyber threats, businesses should also explore augmenting their security stack with MLSecOps tools like NB Defense, Adversarial Robustness Toolbox, Garak, Privacy Meter, and Audit AI.

Figure 1: Bias analysis with Audit AI (Source: GitHub)

Ensure that your DSPM solution includes AI security

For businesses that leverage data security posture management (DSPM), it’s vital to ensure the solution encompasses AI training data. Securing AI training data from dark AI tools keeps AI ecosystems like ChatGPT secure, uncorrupted, and efficient. Businesses that don’t have a DSPM solution must choose a solution that can protect their cloud-based AI training data.

Figure 2: Identifying AI data leakage with Wiz’s DSPM capabilities

Empower developers with self-service AI security tools

To secure, surveil, and manage AI-powered DevOps pipelines, provide devs with powerful security tools and capabilities. With an AI security dashboard or an attack path analyzer to view, maintain, and optimize AI pipelines, businesses don’t have to rely on a centralized security model. 

Optimize tenant architectures for GenAI services

For their GenAI-incorporating services, businesses should carefully choose and intricately configure tenant architecture models. For example, a shared multiple-tenant architecture is ideal for foundational models or base-fine-tuned models. A dedicated tenant architecture works best for user-fine-tuned models. For AI components like indexes, prompt and response histories, and API endpoints, businesses should assess the intricacies of their use case before deciding on tenant architecture. 

(Source: Wiz's State of AI in the Cloud Report)

Shine a light on shadow AI 

Shadow AI refers to any AI tool within a business’s IT environment that the IT or security team is unaware of. Hidden AI tools can be an attack vector for threat actors. From an adversary’s perspective, it’s easier to use dark AI tools to exploit hidden AI infrastructure because they don’t have to bypass any security controls. That’s why it’s critical to focus on identifying and addressing shadow AI. 

Test AI applications in sandbox environments

To ensure that threat actors don’t infect AI-based applications with malicious code, meticulously test those applications in sandbox environments. With testing, enterprises can get a more accurate understanding of what security vulnerabilities, weaknesses, and gaps threat actors might exploit.

Battle dark AI with a unified cloud security solution

There are countless security tools to choose from for protection against dark AI. However, the most comprehensive and efficient way enterprises can keep AI pipelines safe from dark AI threats is by commissioning a unified cloud security solution: A unified platform empowers you to view AI security and AI risk management from a larger cybersecurity perspective and unify your cybersecurity fortifications. 

How Wiz can protect you from dark AI

Wiz AI-SPM (AI Security Posture Management) is a comprehensive security solution designed to help organizations manage and secure their AI environments. It provides full visibility into AI pipelines, identifies misconfigurations, and protects against various AI-related risks.

Wiz AI-SPM can help mitigate the threat of dark AI in several key ways:

  1. Full-Stack Visibility: AI-SPM provides comprehensive visibility into AI pipelines through its AI-BOM (Bill of Materials) capabilities. This allows security teams to:

    • Identify all AI services, technologies, libraries, and SDKs in the environment without using agents.

    • Detect new AI services introduced into the environment immediately.

    • Flag different technologies as approved, unwanted, or unreviewed.

    This visibility is crucial for uncovering shadow AI and potentially malicious AI systems that may be operating without authorization.

  2. Misconfigurations Detection: Wiz AI-SPM helps enforce AI security baselines by identifying misconfigurations in AI services. It provides built-in configuration rules to assess AI services for security issues, such as:

    • SageMaker notebooks with excessive permissions

    • Vertex AI Workbench notebooks with public IP addresses

    By detecting these misconfigurations, organizations can reduce vulnerabilities that could be exploited by dark AI.

  3. Attack Path Analysis: Wiz extends its attack path analysis to AI, assessing risks across vulnerabilities, identities, internet exposures, data, misconfigurations, and secrets. This allows organizations to:

    • Proactively remove critical AI attack paths

    • Understand the full context of risks across cloud and workload

    • Prioritize and address the most critical AI security issues

  4. Data Security: Wiz AI-SPM extends Data Security Posture Management (DSPM) capabilities to AI. This helps:

    • Automatically detect sensitive training data.

    • Ensure the security of AI training data with out-of-the-box DSPM AI controls

    • Identify and remove attack paths that could lead to data leakage or poisoning

  5. AI Security Dashboard: Wiz offers an AI security dashboard that provides a prioritized queue of AI security issues. This helps AI developers and data scientists quickly understand their AI security posture and focus on the most critical risks.

Example of Wiz's AI security dashboard

By implementing these capabilities, Wiz AI-SPM helps organizations maintain a strong security posture for their AI systems, making it much more difficult for dark AI to operate undetected within the environment. The comprehensive visibility, continuous monitoring, and proactive risk mitigation features work together to reduce the attack surface and minimize the potential for unauthorized or malicious AI activities.

Orange builds many Generative AI services using OpenAI. Wiz’s support for Azure OpenAI Service gives us significantly improved visibility into our AI pipelines and allows us to proactively identify and mitigate the risks facing our AI development teams.

Steve Jarrett, Chief AI Officer, Orange
Accelerate AI Innovation, Securely

Learn why CISOs at the fastest growing companies choose Wiz to secure their organization's AI infrastructure.

Get a demo 

Continue reading

Data access governance (DAG) explained

Wiz Experts Team

Data access governance (DAG) is a structured approach to creating and enforcing policies that control access to data. It’s an essential component of an enterprise’s overall data governance strategy.

13 Essential Data Security Best Practices in the Cloud

Cloud data security is the practice of safeguarding sensitive data, intellectual property, and secrets from unauthorized access, tampering, and data breaches. It involves implementing security policies, applying controls, and adopting technologies to secure all data in cloud environments.

Unpacking Data Security Policies

Wiz Experts Team

A data security policy is a document outlining an organization's guidelines, rules, and standards for managing and protecting sensitive data assets.

What is Data Risk Management?

Wiz Experts Team

Data risk management involves detecting, assessing, and remediating critical risks associated with data. We're talking about risks like exposure, misconfigurations, leakage, and a general lack of visibility.

8 Essential Cloud Governance Best Practices

Wiz Experts Team

Cloud governance best practices are guidelines and strategies designed to effectively manage and optimize cloud resources, ensure security, and align cloud operations with business objectives. In this post, we'll the discuss the essential best practices that every organization should consider.