How to leverage generative AI in cloud apps without putting user data at risk

Learn security best practices to deploy generative AI models as part of your multi-tenant cloud applications and avoid putting your customers’ data at risk.

14 minutes read

TL;DR

  1. Given the rapid advancements in generative AI (GenAI) over the past year, many organizations are beginning to develop solutions utilizing GenAI solutions such as large language models (LLMs)

  2. As developers begin to incorporate GenAI models as part of their cloud apps, this raises new security concerns : how do you securely build multi-tenant services that leverage these models?

  3. This blog post sets out to apply the principles of the PEACH tenant isolation framework to analyze the new types of risk that GenAI introduces into cloud applications and give recommendations on how best to secure your customers’ sensitive data

  4. Stay tuned for our next blog post in this new series on AI security, where we’ll delve into our data on cloud customer use of this technology

Introduction

Given the rapid advancements in generative AI (GenAI) over the past year, many organizations are beginning to develop applications with GenAI solutions including large language models (LLMs) like GPT-4, Alpaca, and PaLM. In fact, our data shows that this is currently the fastest growing category in cloud, with half of cloud customers making use of these technologies.

This change has opened up a wide range of new applications perfectly fit for the advantages of the cloud, and cloud service providers have begun to offer practical API services to streamline this development, such as Azure Cognitive Service, GCP’s Vertex AI, and Amazon Bedrock.

However, incorporating GenAI models in cloud applications requires adhering to security best practices, otherwise end users may be at risk of data exposure. Although there has already been much discussion about the potential commercial risks associated with the use of AI services (e.g.  employees leaking proprietary data through the public ChatGPT API), we believe there is still room to address some important security concerns regarding the development of services that incorporate GenAI, such as tenant isolation.

Why focus specifically on isolation? By design, GenAI models can both contain and be enriched by data sourced from multiple end users, which puts multi-tenant services that incorporate these models at a high risk of cross-tenant vulnerabilities. Based on our past experience with such vulnerabilities in cloud-based applications—such as ChaosDB, ExtraReplica, and Hell’s Keychain—we hope to reduce the likelihood of these issues and mitigate their impact before they ever become an industry-wide problem.

Last year we published PEACH, a tenant isolation framework for cloud applications. As we shall demonstrate in this blog post, its principles are just as relevant when deploying cloud-based services that utilize GenAI. The following content has been written from the vantage point of a GenAI consumer in order to help organizations correctly implement isolation in their GenAI-incorporating cloud services, as well as enable end users to better understand the risks associated with improperly isolated services.

We’re looking forward to seeing how cloud customers will leverage generative AI, and this blog post is the first in a series intended to help cloud customers securely use GenAI models.

Stay tuned for our next post where we’ll delve into our data on cloud customer use of generative AI. If you would like to reach out with questions or feedback on this topic, please email us.

Terminology

If you’re already familiar with generative AI models and the attack vectors they can expose to malicious actors, you can skip ahead to the next section.

The who and what of GenAI

For consistency’s sake, throughout this blog post we will be using the following terms to describe the components of a GenAI-incorporating service and the roles that different companies play in their development:

  • Foundation model – a pretrained model (e.g. GPT4) developed by a “producer” (e.g. OpenAI, Google, Amazon, Meta) that is usually deployed as-a-service by an intermediate “supplier” like a CSP (e.g. Azure) to a “consumer” (e.g. Wiz).

  • Finetuned model – an instance of a foundation model that a consumer has modified to make it better suited to a specific task and then implemented as part of a larger service provided by the consumer to its end user customers.

  • Index – data referenced during inference when responding to prompts, such as documents (e.g. indexes in Azure Cognitive Search).

The generative AI supply chain

The attack surface of GenAI models

There are several known attack vectors with which malicious actors can take advantage of a GenAI model’s functionality or influence it to harm its end users.

  • Data poisoning is when an attacker manages to influence a GenAI model during its training or finetuning phases by injecting malicious content into untrusted data collected from an external source and thereby affecting weights and introducing bias. Alternatively, an attacker could achieve a similar result by modifying trusted data (e.g. a company’s internal documentation) or untrusted data (e.g. information retrieved from the Internet) if the service references that data in real-time when responding to prompts (e.g. when using an OpenAI plugin or Azure Cognitive Service index).

  • Hallucination abuse is when an attacker predicts a hallucination of the model such as a non-existent website or software component and preemptively registers it (similar to typo-squatting) in order to deliver malicious content to end users if the model’s response references it and a user follows through on the model’s suggestion.

  • Indirect prompt injection is when an attacker manages to influence a model’s output by injecting an overriding prompt into untrusted data that the model looks up from an external source in real-time, like an attacker-controlled website or advertisement. This consequently controls output presented to an end user and executes malicious code. Additionally, prompt injection can be utilized to target an end-user or consumer by extracting proprietary data from a model (i.e. data leakage) or from its local environment, and in some cases an attacker may even be able to cause the model to execute malicious code on the attacker’s behalf. This is especially dangerous in the case of inadequate sandboxing.

  • Direct prompt injection is when a threat actor with access to the GenAI-incorporating service writes a specially crafted prompt to compromise the consumer, exfiltrate proprietary data, or simply abuse the model for their own purposes. Under certain circumstances, this could also lead to code execution or information disclosure targeted against other end users, as we shall explain below.

An attacker using direct prompt injection to exploit a shared index and gain access to another user’s data

Assessing interface complexity in GenAI usage

Let’s say you run an online store that sells a wide variety of items, and you decide to incorporate a GenAI model as part of your service, making it available to end users through a customer-facing interface. The service is meant to assist customers browsing your store by leveraging their past purchase information. For example, a customer might ask a question like, “What digital camera model is compatible with the memory card I bought last year?" For this to work, the model inference engine needs to have access to customers’ purchase history.

In this scenario, each customer should be able to trust that your service is secure enough to ensure that:

  • The questions they ask, and the model’s responses, won’t be exposed to any other customer.

  • The responses they receive are indeed sourced from the model and have not been tampered with or directly influenced by other customers.

  • Their purchase history won’t be exposed to any other customer—this should obviously apply to your online store as a whole regardless of the new AI feature, and the incorporation of a GenAI model shouldn’t break that assumption.

Each of the assumptions above might be broken as a result of a cross-tenant vulnerability, wherein a malicious end user may compromise the integrity or confidentiality of data belonging to other end users if insufficient security boundaries have been put in place between neighboring tenants. This could occur whether the malicious user has been granted access to the API or has managed to gain illicit access due to insufficient access controls.

Additionally, just like any other multi-tenant service, GenAI-incorporating applications can suffer from traditional API vulnerabilities (such as AttachMe) as well as unintentional cross-tenant data leakage like the caching issue which temporarily affected OpenAI’s ChatGPT earlier this year.

Regardless of how an attacker gains initial access to a consumer’s service environment—be it via direct prompt injection or by exploiting an API vulnerability—they might attempt to escape isolation and target other users. As explained in our PEACH whitepaper, their success in this endeavor would depend entirely on how well the consumer has isolated their customer-facing interfaces. If the attacker is faced with minimal security boundaries or none at all, they may be in a position to access other users’ data (such as prompt and response history, configuration, etc.), read or modify the consumer’s finetuned model, read or modify indexes, or hijack the model’s output to present malicious content to end users.

Potential cross-tenant attacks enabled by isolation escape in a GenAI-incorporating app

The PEACH framework can be useful for building a threat model to better understand how these interfaces might be abused by malicious actors and how they should be secured. The first steps outlined by the framework are (1) mapping customer-facing interfaces and (2) assessing their complexity. In the context of tenant isolation, we define complexity to mean the degree of control granted to a would-be attacker over how their input is processed by the system, with arbitrary code execution being highly complex, and a dropdown field being very simple, for example.

For GenAI-incorporating services, we can identify three such customer-facing interfaces and determine which factors might contribute to their complexity depending on how they are implemented:

Inference (prompt handling)

At worst, prompt input and processing may be compared to queries executed by a database management system (DMBS) in a Database-as-a-Service (DBaaS), or code executed by an interactive computing platform (e.g. Jupyter Notebook). Examples of risky implementations can be observed in any service that executes code based on end-user prompts, or validates code included in prompts or responses. Therefore, if nothing is known about the implementation details, prompt input should be assumed to be a high-complexity interface and isolated accordingly.

However, this complexity depends on how the GenAI-incorporating service has been implemented and how end-user prompts are handled internally. Since models are immutable, unlike a database in a typical DBaaS, the effective complexity of prompt input should be determined by the type of interaction allowed between the GenAI model and other components of the cloud application, as facilitated by APIs made available to the inference engine. Such components may include core GenAI components like an index and prompt history database, alongside traditional components involved in enrichment such as plugin API endpoints and compute workloads running code execution environments.

In other words, the complexity of an inference interface of a model with no access or read-only access to its local environment can be said to be medium-level, and certainly lower than that of a typical DBMS, interactive computing platform, or inference interface of a model with full write access to other components (once more demonstrating the importance of sandboxing).

Regardless, consumers are likely to store each end user’s prompt and response history for future lookup, in which case end users would have certain (indirect) write permissions to whatever database is used for this purpose. If end users can only indirectly read from this database when retrieving their history through an API endpoint, then row-level security may suffice. However, if they can somehow interact with the database directly then this should be considered a high-complexity interface and therefore allocated per tenant. Similarly, if end users can interact with the database indirectly by prompting the model (meaning that the inference engine must have at least read-only access to the database), then the inference interface itself should be considered medium-complexity, as explained above.

A high-complexity inference interface, due to the model having broad access to its environment

Customization

Consumers often make use of indexes to enrich their model responses. Normally, an index will include only consumer-provided data, and therefore can be safely shared between end users. However, if the consumer grants end users any control over the contents of the index, whether directly (by submitting data) or indirectly (by prompting the model), then this could lead to the ingestion of untrusted data which might introduce strong bias by effectively hijacking responses presented to all other end users, or even allow indirect prompt injection. Such customization must therefore be considered a high-complexity interface, in which case indexes must be allocated per tenant. This isolation is also (more obviously) important for preventing data leakage between tenants, where one tenant inadvertently submits sensitive information that the model might leak to other tenants.

If consumers also offer end users the ability to further finetune the consumer’s base model and adapt it to their precise needs (a “user-finetuned” model), the model temporarily transitions from being immutable (“read-only”) to mutable and therefore may be exposed to untrusted data in the process. This may include the introduction of bias or perhaps even the exploitation of vulnerabilities in data scraping, data decoding, or machine learning software utilized as part of the training process. Therefore, training data ingestion should be considered a high-complexity interface, meaning that it must be isolated accordingly by ensuring per-tenant model allocation. In practical terms, each customer should not share any control with other customers over their finetuning process or their customized models.

Exploiting a data scraper vulnerability to maliciously influence a GenAI model during training or finetuning

Each end user may also be allowed to configure certain aspects of model behavior to their liking, which will require storing their configuration choices somewhere. Such data may be stored client-side (e.g. as a browser cookie), in a dedicated server-side database (allocated per end user) or in a single server-side database shared among multiple end users (while using row-level security). In most cases, this interface can be deemed very low-complexity since configuration will mostly involve toggling Booleans or integers. However, more complex configuration options can raise interface complexity and therefore require stronger tenant isolation.

Feedback & share-back

If consumers allow end users to submit feedback on output (e.g. through a star rating or feedback form) to be utilized for future retuning, then a user could submit malicious feedback which might indirectly influence the model and introduce bias. However, this interface can generally be considered very low-complexity and therefore of least concern, unless a feedback form allows the end user to submit free text. In that case, complexity should be considered slightly higher due to the possibility of sanitation bugs (but the impact in this case wouldn’t be on the model itself but rather on the traditional service that handles user feedback, so this scenario is out of scope for this blog post).

Similarly, if consumers utilize end-user prompts and responses to occasionally retrain or finetune their model (known as “share-back”), malicious end users could also abuse this to introduce bias, such as by “encouraging” the model to prefer the content of an attacker-controlled website when responding to prompts on a certain subject. This is nearly equivalent to accepting untrusted data during training or finetuning and should therefore be considered a medium-complexity interface and handled with appropriate care. Regardless, while sourcing data from customers for training purposes (assuming they have granted their approval), great care must be taken to sanitize and anonymize the data, given the inherent risk of cross-tenant information disclosure.

Architecture archetypes

There are several components that should either be shared between multiple tenants or dedicated per tenant (i.e. duplicated and separated via security boundaries) depending on the specific use-case and implementation details of the GenAI-incorporating service.

ComponentSharedDedicated
Foundation modelAlways.Never (this would be overkill—there’s no reason to duplicate per tenant).
Training/finetuning dataNever (proprietary consumer-owned training/finetuning data should not be available to tenants, as it could enable an attacker to abuse the model).[BS1]If training/finetuning on each end user’s data.
Base finetuned modelAlways (assuming the base model has been finetuned exclusively on consumer-provided data).Never (this would be overkill—there’s no reason to duplicate per tenant).
User-finetuned modelNever (since the model has been finetuned on potentially sensitive data provided by an end user).Always.
IndexIf the index doesn’t contain end-user data and end users cannot influence its contents in any way.If the index contains end-user data or if end users can influence its contents.
User configurationIf the model cannot interact with it (but be sure to use row-level security).If the model can directly interact with it.
Prompt and response historyIf the model cannot interact with it (but be sure to use row-level security).If the model can directly interact with it.
API endpoints (configuration, finetuning, indexing, prompt-invoking)Acceptable for the vast majority of use-cases, but API security and application-level tenant isolation must be prioritized (as always).Required for only the most sensitive of use-cases, as this compensates for most API vulnerabilities.
Traditional service componentsAppropriate for lower complexity components (such as a text entry form).Appropriate for higher complexity components (such as a DBMS, file scanner, or code execution environment).

Based on the table above, we can describe three basic architectural archetypes for services incorporating GenAI models: (pure) multi-tenant, (pure) single-tenant, and a hybrid approach:

Multi-tenant

This architecture usually boils down to one index per tenant used in conjunction with a single shared finetuned base model (that has not been customized per tenant), while utilizing shared storage. For all the reasons outlined above, it is important to ensure that the shared model does not have (facilitated) access to a shared prompt history database, customer indexes, or any other traditional component containing data that belongs to multiple tenants. Moreover, any shared databases must have row-level security enabled (or another comparable mechanism) to serve as an additional security boundary.

Note that in this architecture, the tenant isolation of each separate index is dependent on the particular implementation of the cloud service provider, and dedicated indexes may therefore actually be running on shared compute resources (e.g. containers running on a shared VM).

Multi-tenant service incorporating a shared GenAI model

Another multi-tenant option would be to not use an index at all (at the cost of functionality), or to share a single index among all tenants. However, this would require ensuring that the index doesn’t contain any sensitive data (which isn’t suitable for many use-cases) and that tenants cannot directly or indirectly control the index’s content (which would probably make the service less useful to them).

Single-tenant

In this architecture, models only serve one tenant, and if a base model requires further finetuning, this is performed based exclusively on data trusted and controlled by that tenant. This essentially entails duplicating every component per tenant, other than perhaps API endpoints. Each tenant also has their own index running on a dedicated compute instance (e.g. a single virtual machine) and utilizing dedicated storage. Note that there is normally no security-related reason for duplicating the foundation or base models per tenant since they are immutable and do not contain sensitive user data.

In comparison to a multi-tenant architecture, a single-tenant architecture will be less susceptible to cross-tenant vulnerabilities by virtue of maximizing security boundaries between tenants, but it will likely be less efficient, harder to maintain, and more expensive. Therefore, a single-tenant architecture should be reserved for cases where data sensitivity is of the utmost importance, or when you need to compensate for a highly complex interface, such as in services involving code execution.

Hybrid

The architectures proposed above do not have to be mutually exclusive, and there are other possible permutations of sharing or dedicating each of the above components. You may therefore choose to deploy a hybrid system in which some tenants will be better isolated than others depending on their characteristics, such as data volume, data sensitivity, or compute requirements. However, it is important to be transparent about this aspect when communicating with your users.

Recommendations

We recommend conducting an isolation review of your cloud application as part of your threat modelling process by analyzing the risks associated with customer-facing interfaces, determining which security boundaries are in use, and then measuring their strength. When choosing to incorporate a GenAI model into your service, you can use the analysis presented in this blog post to help you assess the risks that the model and its interfaces may introduce into your system.

For the time being, until they have proven to be effective mitigation solutions against cross-tenant vulnerabilities, we suggest treating prompt and response filters as weak security boundaries, that should be implemented in addition to (and not instead of) stronger boundaries such as dedicated user-finetuned models and indexes.

Additionally, the complexity of inference interfaces can be reduced by constraining user input at the cost of functionality through many of the same methods that would apply to any other SaaS, such as ensuring proper input sanitization, preferring a limited API over directly prompting the model, and preferring dropdown fields to textboxes. The inference complexity can also be reduced by limiting output tokens, curbing the model’s read- and write-level access to other components, and restricting index content to trusted internal data (i.e. avoiding Internet lookups unless absolutely necessary). As always, good isolation is about finding the balance between competing factors and compensating for risks introduced by functionality-enabling features.

Finally, as with any other multi-tenant cloud application, hygiene plays a critical role in ensuring tenant isolation. For GenAI-incorporating services, this is especially true in the initial training and finetuning phases: exposing a model to a consumer’s secrets during training can result in those secrets being leaked to a malicious tenant, in spite of any filters or guardrails placed to prevent such a scenario.

See for yourself...

Learn what makes Wiz the platform to enable your cloud security operation

Get a demo 

References

  1. Mitigate the risk of isolation escape with the PEACH framework

  2. Architectural approaches for AI and ML in multitenant solutions

  3. Design patterns for multitenant SaaS applications and Azure Cognitive Search

  4. Securely build generative AI apps & control data with Amazon Bedrock

  5. Threat Modeling AI/ML Systems and Dependencies

  6. OpenAI safety best practices

  7. OWASP AI Security and Privacy Guide

  8. OWASP Top 10 for Large Language Model Applications

  9. Introducing Google’s Secure AI Framework

  10. The AI Attack Surface Map v1.0

  11. The tactics and techniques of adversarial AI

  12. Indirect prompt injection threats

Continue reading

Wiz at Re:Inforce 2023

See what is new with Wiz at Re:Inforce and learn about how Wiz and AWS continue to strengthen a strategic relationship to secure customers’ AWS environments.

Get a personalized demo

Ready to see Wiz in action?

“Best User Experience I have ever seen, provides full visibility to cloud workloads.”
David EstlickCISO
“Wiz provides a single pane of glass to see what is going on in our cloud environments.”
Adam FletcherChief Security Officer
“We know that if Wiz identifies something as critical, it actually is.”
Greg PoniatowskiHead of Threat and Vulnerability Management