Generative AI Security: Risks & Best Practices

Wiz Experts Team
10 minute read
Main takeaways from Generative AI Security:
  • Generative AI (GenAI) security is a branch of cybersecurity that focuses on securing GenAI applications and ecosystems. 

  • Some key types of GenAI security risks include model vulnerabilities, data risks, misuse scenarios, and compliance and governance risks.

  • Frameworks like the OWASP Top 10 for LLM Applications, Gartner’s AI TRiSM, and the NIST AI Risk Management Framework (RMF) are resources that businesses can use to secure their GenAI applications. 

  • Some best practices to protect GenAI applications include implementing zero-trust security, introducing data protection measures, understanding AI compliance obligations, and getting strong incident response plans in order. 

  • A comprehensive AI-SPM tool should be a part of every enterprise’s security stack to tackle GenAI security challenges.

What is generative AI security? 

Generative AI (GenAI) security is an area of enterprise cybersecurity that zeroes in on the risks and threats posed by GenAI applications. To reduce your GenAI attack surface, you need a mix of technical controls, policies, teams, and AI security tools. 

Before we start untangling GenAI security, let's level set on GenAI itself. GenAI is basically any algorithm that's able to create multimedia content (like images, text, videos, and sound), using the data it’s been trained on. ChatGPT, GenAI's poster child, is a prime example of the kind of application we're talking about.

McKinsey says that GenAI’s impact on productivity could add up to $4.4 trillion in value every year. Those are serious numbers, but keep in mind that GenAI provides value only if potent AI security risks are nipped in the bud.

AI's rapid advancement comes a host of new security challenges. The research team at Wiz have been at the forefront of identifying and analyzing these emerging threats. Their research has uncovered several critical vulnerabilities and risks specific to AI systems and infrastructure, including:

These findings highlight the urgent need for enhanced security measures in AI development and deployment, emphasizing the importance of continuous vigilance in this rapidly evolving field.

Figure 1: Wiz: The first CNAPP to provide AI security for OpenAI customers

What are the main risks associated with GenAI?

To understand them better, let's break GenAI security risks down into four broad categories. 

Model vulnerabilities 

GenAI applications are built on large language models (LLMs). LLMs give these applications natural language processing (NLP) capabilities to sound and act more human. While LLMs are pretty much the backbone of GenAI applications, they are rife with security risks: 

  • LLM security risks are often exploited via adversarial attacks, where cybercriminals manipulate input data to mess with the model’s output. 

  • Data poisoning is a common technique used to breach LLM security. Data poisoning attacks involve corrupting the training data of AI and machine learning (ML) models.

  • Another dangerous LLM security risk is model theft. Model theft occurs when threat actors breach unsecured GenAI models to use them for malicious purposes or even steal them. An example? TPUXtract, a recently unveiled attack method that can help criminals steal AI models.

Data-related risks

Data is the key ingredient in pretty much every GenAI use case. Data is what businesses use to train AI models, and it’s what AI models use to make inferences when prompted. A lot of the data used for GenAI applications is sensitive, which basically means that if threat actors take advantage of data-related AI security risks, the repercussions will be severe. 

Sensitive data exposure is perhaps the most potent data-related AI security risk. If businesses fail to anonymize their training data, it can be intercepted or exposed. And if businesses fail to secure their APIs or third-party sharing protocols, it could lead to similar disasters. 

Don’t forget about data breaches, either. If threat actors breach GenAI applications and tools, we’re talking about millions in potential damages.

Misuse scenarios

GenAI is a powerful tool in the hands of responsible businesses. But in the hands of adversaries, it’s a dangerous weapon. GenAI misuse scenarios, which include the generation of malicious content, deepfakes, or biased outputs, impact businesses, individuals, and governments. As GenAI capabilities grow exponentially, threat actors can generate all manner of believable malicious content and wreak havoc. 

Malicious products of GenAI, such as deepfakes, can do more than harm the reputation of individuals and organizations, though. Consider this particularly problematic scenario: criminals using deepfakes to bypass biometric security systems. When threat actors bypass these systems, they can access even deeper vaults of sensitive enterprise data. 

Compliance and governance risks

With every passing year, businesses have more and more AI-related compliance obligations to fulfill at federal, state, and industry levels. When incorporating GenAI into their plans, businesses have to not only tackle numerous new AI compliance regulations but also deal with how existing regulations like GDPR, CCPA, and HIPAA interact with those new regulations. 

Spearheading this influx of new AI compliance regulations and frameworks is the EU AI Act. Some regulations (like the EU AI Act) are mandatory, while others are more like guidelines—which is why organizations need to pay close attention to untangle the web of AI compliance. 

What are some frameworks and principles that can help secure GenAI?

Tackling AI security risks can overwhelm even the most resilient and well-equipped businesses. Here are some frameworks and resources that can help secure your GenAI ecosystem. (Keep in mind that the following list is useful for businesses of all sizes and types!):

  • OWASP Top 10 for LLM Applications: This list from OWASP acknowledges that the proliferation of LLM applications brings numerous AI security risks. It provides 10 major LLM security risks, including training data poisoning and prompt injection, and offers suggestions and strategies for how to avoid them or keep them under control.

  • Gartner’s AI TRiSM: AI TRiSM is a framework designed to help you stay on top of AI security risks and build a strong AI governance posture. It has four main components: explainability / model monitoring, ModelOps, AI application security, and privacy. By using AI TRiSM, you can cultivate trust among customers and peers, fortify GenAI pipelines, and comply with AI laws and regulations.

Figure 2: Gartner’s AI TRiSM framework (Source: Gartner)
  • NIST AI RMF: The NIST AI RMF provides a step-by-step approach to securing the AI lifecycle. The four main steps of the NIST AI framework are Govern, Map, Measure, and Manage. Basically, if you use this framework, you can make sure that GenAI security is at the forefront of everything you do. The NIST AI RMF also weaves in ethical and social considerations, which are crucial aspects of GenAI security.

  • FAIR-AIR Approach Playbook: A product of the FAIR Institute, the FAIR-AIR playbook tackles five attack vectors associated with GenAI, including shadow GenAI, managed LLMs, and active cyberattacks. The playbook also has five main steps, starting with contextualizing GenAI risks and ending with making decisions regarding mitigation.

  • Architectural Risk Analysis of LLM: Published by the Berryville Institute for Machine Learning, this document is a comprehensive look into LLM security risks, with a whopping 81 LLM security risks. It's a great resource for everyone from CISOs to policymakers. And don’t worry about getting lost in this long list: The document also magnifies the top 10 LLM security risks you need to look out for.

  • AWS Generative AI Security Scoping Matrix: This unique security resource from AWS breaks down GenAI security into distinct use cases. The five use cases or “scopes” include consumer apps, enterprise apps, pre-trained models, fine-tuned models, and self-trained models. So no matter what kind of GenAI applications you’re working with, you’ll find specific ways to address AI security risks.

  • MITRE ATLAS: Introduced as a supporting resource to the MITRE ATT&CK framework, MITRE ATLAS is a knowledge base that includes the latest information on attack techniques used against AI applications. It includes 91 attack techniques across 14 different kinds of tactics. Crucially, MITRE ATLAS also suggests detailed mitigation guidelines and strategies for each of these attack types. If you're looking for specific ways to address adversarial AI attacks, MITRE ATLAS is a good bet.

  • Secure AI Framework (SAIF): A Google Cloud initiative, SAIF is a conceptual framework that can help you keep your AI systems out of harm's way. SAIF highlights pressing AI security risks and also includes controls to mitigate them. If you want to understand AI security specific to your organization, consider using SAIF's Risk Self-Assessment Report. 

GenAI security best practices

Now that you're up to speed on the risks posed by your GenAI applications, it's time to get proactive with your security measures. These best practices are organized by priority to help you build a comprehensive GenAI security program.

Prioritize your AI bill of materials (AI-BOM)

Example of an AI-BOM filtered for the Azure AI services

Your AI-BOM is an exhaustive list of all your AI components, including LLMs, training data, tools, and other GenAI assets and resources. By getting your AI-BOM in order, you'll achieve complete visibility of your GenAI infrastructure and dependencies, which is a great starting point for eradicating GenAI and LLM security risks. It'll also help you unveil shadow AI, the root of many AI threats.

Implementation example:

  • Document all AI models in use, including vendor models (e.g., OpenAI, Anthropic), custom models, and embedded models in third-party applications

  • Map data flows to understand where training and inference data comes from and how it's processed

  • Use automated discovery tools to identify undocumented AI systems (shadow AI)

Potential Success metrics:

  • 100% inventory of production GenAI systems

  • Reduce unknown AI assets by 90% within the first two inventory cycles

  • Complete data lineage for all training datasets

Key stakeholders:

  • Security Teams: Lead the inventory process

  • Data Science/ML Teams: Provide information about models and data

  • IT: Assist with infrastructure mapping

  • Business Units: Disclose departmental AI use

Implement zero-trust controls

Since GenAI is susceptible to a never-ending parade of security risks, you have to tighten up access. Zero-trust security introduces pillars like least privilege, continuous authentication, real-time monitoring, role-based access controls, and micro-segmentation to help you sidestep even the most poisonous GenAI security risks.

Implementation example:

  • Apply identity-based access controls to all GenAI endpoints (e.g., require SSO for access to model APIs)

  • Implement context-based API rate limiting (e.g., per user, per endpoint, or per model access level).

  • Set up continuous monitoring of user interactions with models to detect anomalies

  • Segment your GenAI environment from other production systems

Potential Success metrics:

  • 100% of GenAI systems accessible only through authenticated and authorized channels

  • 90%+ reduction in privileged account access to training data

  • No major data leakage incidents with regulatory impact; detection and remediation of minor incidents within 24 hours

  • Complete audit logs for all model interactions

Key stakeholders:

  • CISO Office: Strategy and oversight

  • Security Engineering: Implementation of controls

  • ML Operations: Integration with AI pipelines

  • IAM Team: User access management

Secure your GenAI data

Data is the lifeblood of your GenAI-incorporating applications, so securing it is paramount. Your first move should be mapping where your GenAI data comes from, how it's used, and what storage practices are in play. Other important data security measures include encryption, tokenization, masking, and input sanitization.

Implementation example:

  • Encrypt all training data at rest using AES-256

  • Use regex filtering in combination with machine-learning-based anomaly detection to detect prompt injection.

  • Apply differential privacy techniques to protect sensitive information in training data

  • Create role-based access controls for different datasets based on sensitivity

Potential Success metrics:

  • 100% of sensitive data encrypted or anonymized

  • Zero data leakage incidents from GenAI models

  • Complete validation of all user inputs before processing

  • Comprehensive data protection applied to all stages (training, fine-tuning, inference)

Key stakeholders:

  • Data Security Team: Lead implementation

  • Data Science/ML Teams: Adapt models to work with protected data

  • Privacy Office: Ensure compliance with data protection regulations

  • Development Teams: Implement input validation controls

Untangle your AI compliance requirements

No matter what industry or country you operate in, you'll have a long list of AI compliance obligations to adhere to. Before you start trying to obey these laws, make sure you understand them clearly. Identify AI-specific laws like the EU AI Act and then figure out how existing laws like CCPA and GDPR apply to your GenAI security.

Implementation example:

  • Create a regulatory mapping matrix specific to your GenAI use cases

  • Implement technical controls required by regulations (e.g., GDPR's right to explanation for AI decisions)

  • Develop a compliance calendar for upcoming AI regulations

  • Establish a data sovereignty framework to ensure local processing when required

Potential Success metrics:

  • 100% documentation of applicable regulations for each GenAI use case

  • Zero compliance violations in quarterly reviews

  • Complete impact assessments for high-risk AI applications

  • Successful demonstration of controls during regulatory audits

Key stakeholders:

  • Legal/Compliance: Primary owners for regulatory mapping and documentation

  • Data Science Teams: Responsible for implementing technical controls

  • CISO Office: Accountable for overall compliance strategy

  • Procurement: Ensuring vendor AI systems meet compliance standards

Kickstart GenAI-specific incident response plans

No matter how strong your AI security measures are, you're still going to face incidents. By developing GenAI-specific incident response plans, especially those with a dash of automation to support your incident response teams, you can catch and contain incidents early.

Implementation example:

  • Create playbooks for common GenAI incidents (data leakage, model manipulation, harmful outputs)

  • Implement automated detection and response for known patterns (e.g., automatic shutdown for prompt injection attempts)

  • Establish clear escalation paths for different types of AI incidents

  • Conduct tabletop exercises simulating GenAI security breaches

Success metrics:

  • Containment of critical GenAI incidents within 30 minutes, with full resolution within defined SLAs

  • 100% of incident responders trained on AI-specific scenarios

  • Quarterly testing and updating of response playbooks

  • Post-incident analysis completed for all AI security events

Key stakeholders:

  • Incident Response Team: Plan development and execution

  • AI/ML Operations: Technical response capabilities

  • Communications Team: Managing external communications during incidents

  • Legal: Addressing compliance implications of incidents

Get visibility into security posture with AI security tools

To deal with AI security risks, you need specialized AI security tools. An AI-SPM (Security Posture Management) solution can help tackle your GenAI risks comprehensively by providing visibility, continuous monitoring, and automated remediation.

Potential Success metrics:

  • Security tools deployed to cover all mission-critical GenAI applications, with continuous monitoring for coverage gaps

  • 90%+ of critical vulnerabilities remediated within 30 days

  • Automated detection of model drift and anomalous behavior

  • Complete visibility into AI attack paths and security posture

Key stakeholders:

  • Security Engineering: Tool selection and implementation

  • DevSecOps: Integration into development pipelines

  • ML Operations: Day-to-day management

  • Risk Management: Use of tool data for risk assessments

How Wiz can help you with GenAI security

GenAI security can be a lot to deal with, which is why you need a strong AI-SPM tool with AI-BOM capabilities like Wiz AI-SPM. With Wiz AI-SPM, you get a straightforward way to deal with even the most dangerous AI security risks.

Figure 3: Wiz AI-SPM provides unparalleled visibility into GenAI security risks

So what does Wiz AI-SPM offer? Full-stack visibility into GenAI pipelines? Check. Continuous detection of GenAI risks and misconfigurations? Check. Analyses of AI attack paths? Check. A light on shadow AI? Check. Wiz is the pioneer of AI-SPM, so we’ve always been one step ahead of AI security risks.

Get a demo now to see how Wiz can secure GenAI in your organization.

Accelerate AI Innovation

Securely Learn why CISOs at the fastest growing companies choose Wiz to secure their organization's AI infrastructure.

Get a demo