
What is shadow AI? 

Shadow AI is the unauthorized use or implementation of AI that is not controlled by, or visible to, an organization’s IT department. According to IBM, 30% of IT professionals report that new AI and automation tools have already been adopted by employees at their organizations. But are they doing so within sanctioned governance processes?

AI is a fast-moving technology field built on open-source principles. Datasets for AI, AI models, and AI products are released every day for anyone to use—no deep expertise required.

This is especially true for generative AI (GenAI), the application of AI that can create and process content at unprecedented speeds and volumes. Increasingly, people are adopting GenAI in the form of personal assistants, and many have come to rely on the variety of tailored experiences and optimized processes offered by AI. 

Let’s take ChatGPT, one of the most popular AI tools, which grew to 100 million weekly users within a year of launch. OpenAI’s terms and conditions state that conversations can be used for future model training unless users explicitly opt out.

The downside? Users may unknowingly share private and sensitive information that can surface at any time for others to see and, possibly, exploit. In response, CISOs and CIOs all over the world have already begun to draft policies for ChatGPT and enact ChatGPT security policies. 

And they’re on to something. Banning AI use can cause a huge opportunity loss and also result in the proliferation of shadow AI. In order to harness the business potential of AI while mitigating AI security risks, organizations should encourage safe AI adoption. Let’s take a closer look at shadow AI and discuss how to balance AI’s opportunities and risks.

Shadow AI vs. shadow IT

The first step in tackling shadow AI is to understand how it differs from shadow IT and examine how those distinctions require unique security considerations. 

Differences in governance: AI involves data, code, and models 

It’s possible to tackle shadow IT’s code and data risks using reliable mechanisms such as encryption keys, software development life cycle (SDLC) policies, and automated monitoring of access and device/network usage. On the other hand, AI processes involve models that are non-deterministic in nature, which makes them harder to secure. Governance mechanisms for AI are still under active research. 
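
To see why this matters for governance, consider a toy comparison (illustrative only): deterministic code can be gated with exact tests, while a sampled model response has no single expected output to assert against.

```python
import random

# Deterministic code: the same input always yields the same output,
# so an exact-match test can gate every release.
def tax(amount: float) -> float:
    return round(amount * 0.2, 2)

assert tax(100.0) == 20.0  # holds on every run

# A generative model sampled at temperature > 0 is non-deterministic:
# the same prompt can yield different outputs, so there is no single
# expected value to assert against. (Toy stand-in for a real LLM call.)
def toy_llm(prompt: str) -> str:
    return random.choice([
        "Shadow AI is unsanctioned AI use.",
        "Shadow AI means AI adopted outside IT oversight.",
    ])

print(toy_llm("Define shadow AI"))  # output varies between runs
```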

Differences in adoption: Everyone can use AI

With shadow IT, developers are essentially the only point of failure: security risks stem from unsafe technology use by a homogeneous, restricted group of people who are experts in the technology they’re adopting. By contrast, AI users are mostly unaware of security best practices when they adopt AI technology. This difference in who is adopting shadow AI vs. shadow IT makes the attack surface for shadow AI wider and less well-defined.

Risks of shadow AI

Shadow AI comes with risks that are as far-reaching as its attack surface. Let’s delve deeper into the top three risks: 

1. Data protection

Shadow AI users may unintentionally leak private user data as well as intellectual property when interacting with AI models. AI models can be trained on users’ interactions, such as prompts for large language models, and that user-provided data can become accessible to third parties who haven’t signed NDAs or non-compete agreements. To put it simply, confidential data could end up in the hands of malicious actors who can exploit it in undesirable ways.

Here’s one real-world example: Multiple Samsung employees pasted lines of proprietary code into ChatGPT in an effort to streamline their work. Because ChatGPT can be trained on user input unless users opt out, there’s a chance that Samsung’s code could be included in future model releases.
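
One practical mitigation is to redact obvious secrets before a prompt ever leaves the organization. Below is a minimal Python sketch; the patterns are illustrative assumptions, not a complete data loss prevention solution.

```python
import re

# Illustrative patterns only; a real DLP setup would cover many more
# secret and PII formats and pair regexes with entity detection.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "AWS_KEY": re.compile(r"AKIA[0-9A-Z]{16}"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact(prompt: str) -> str:
    """Replace matches of known sensitive patterns with placeholders."""
    for label, pattern in PATTERNS.items():
        prompt = pattern.sub(f"[REDACTED_{label}]", prompt)
    return prompt

print(redact("Summarize this ticket from jane.doe@example.com, key AKIA0123456789ABCDEF"))
# -> Summarize this ticket from [REDACTED_EMAIL], key [REDACTED_AWS_KEY]
```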

2. Information integrity

Users of shadow AI may act on misinformation generated by their interactions with AI models. GenAI models are known to hallucinate information when they’re uncertain about how to answer. One prominent example? Two New York lawyers submitted fictitious case citations generated by ChatGPT, resulting in a $5,000 fine and loss of credibility.

Bias is another pressing issue with AI’s information integrity. GenAI models are trained on data that is often biased, leading to equally biased responses. For instance, when prompted to generate images of housekeepers, Stable Diffusion demonstrates racial and gender bias by almost always generating images of black women.

As we’ve seen, if users rely on the output of AI models without fact-checking responses, the consequences can include financial and reputational hits that are difficult to bounce back from.

3. Regulatory compliance

Shadow AI is not yet protected by the auditing and monitoring processes that ensure regulatory standards are met. Around the world, regulators are extending data-protection frameworks like the GDPR to cover AI and drafting new AI-specific regulations, such as the EU AI Act. Organizations doing business in Europe must be ready to comply with these new standards. And future compliance requirements are one of the “known unknowns” of AI security that add to the complexity of the field.

Failing to meet regulatory compliance standards for AI models poses legal risks as well as risks to brand image: The public’s opinion on the use of AI can change quickly, after all. And when it comes to costs, it’s fair to estimate that, due to its complexity and unpredictability, the financial costs of shadow AI will surpass those of shadow IT.

The benefits of mitigating shadow AI

AI doesn’t only pose risks, though; leveraging it can provide huge benefits. The key is to bring shadow AI into the light. Employees can unlock the potential of AI for improved process efficiency, personal productivity, and customer engagement. This is true for all teams, including security teams and governance, risk, and compliance (GRC) teams. For example, a security analyst could ask a large language model for insights on how to deal with a security incident that’s not covered in the incident response plan.

Pro tip

Shadow AI can even be a benefit: It highlights places where current GRC policies are failing, so organizations can better evaluate and enhance existing governance processes.

Best practices for mitigating shadow AI 

Every organization has to decide its risk appetite before shaping its plan to address shadow AI. As we’ve seen, it’s all about the balance between governance effort and access to AI’s benefits.

Banning AI is the only strategy that ensures no risks are taken. Though no governance effort is required under this approach, no benefits are unlocked either. But there are other choices along the adoption spectrum.

Allowing the adoption of on-prem AI solutions ensures that an organization has full control of security. In this model, governance efforts are reduced since third-party systems are out of scope, but the benefits of AI adoption are also reduced given the time and resource investment necessary to implement on-prem AI solutions. 

Alternatively, opening up AI adoption to everyone without governance in place means that individuals are forced to self-govern to mitigate security risks, which can erode the gains in speed and agility. While putting a high degree of trust in employees’ behavior can work, it does not provide safety guarantees.

What’s the best bet? Incremental AI adoption within agile governance processes is the only option that ensures the benefits of AI can be enjoyed quickly but securely.

Actionable best practices

Let’s look at the step-by-step process you can follow to establish smart governance processes that also unlock the benefits of AI:

  1. Leadership, ideally the C-suite, thoroughly reviews the benefits and risks of fast AI adoption for the organization. 

  2. Leadership appoints an ad-hoc committee that has clear responsibilities and priorities for governing AI adoption in the company.

  3. The committee drafts and releases a first iteration of a Responsible AI policy, with supporting processes, that everyone in the organization can refer to when adopting AI.

This first iteration should maximize simplicity to ensure fast, but safe, AI adoption with the guarantees provided by a centralized strategy. For example, it could simply define areas of data that are permitted or strictly off limits, as in the sketch after this list.

  4. The committee iterates on the Responsible AI policy in an agile way, ensuring communication and visibility across the organization.
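
To make that first iteration concrete, here’s a minimal sketch of what permitted vs. off-limits data areas could look like as machine-readable policy. The categories and the approved endpoint are hypothetical examples:

```python
# Hypothetical data categories and endpoint, for illustration only.
POLICY = {
    "permitted": [
        "public marketing copy",
        "published documentation",
        "synthetic test data",
    ],
    "off_limits": [
        "customer PII",
        "proprietary source code",
        "financial records",
        "credentials and keys",
    ],
    "approved_tools": ["internal-llm.example.com"],
}

def is_permitted(data_category: str) -> bool:
    """Allow a use case only if its data category is explicitly permitted."""
    return data_category in POLICY["permitted"]

print(is_permitted("synthetic test data"))      # True
print(is_permitted("proprietary source code"))  # False
```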

Best practices for balancing turnaround and risk

One strategy is to prioritize AI solutions based on turnaround time and likelihood of risk. To define AI solutions of interest, the committee should solicit feedback from employees through workshops and surveys.

First, introduce AI solutions of interest that offer fast turnaround and come with low risk. These can be on-prem or third-party solutions that do not keep conversation logs, do not have access to queries, and do not use user interactions for model training unless explicit consent is given. Next, start planning for fast-turnaround AI solutions that carry high risk, while developing the lower-risk solutions in the meantime.
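
Here’s a minimal sketch of that prioritization logic; the candidate solutions and their ratings are illustrative assumptions:

```python
# Candidate solutions and their ratings are illustrative assumptions.
candidates = [
    {"name": "third-party chat app (no logging)", "fast": True,  "high_risk": False},
    {"name": "gated API to external LLM",         "fast": True,  "high_risk": True},
    {"name": "on-prem model near the data",       "fast": False, "high_risk": False},
]

# Introduce fast, low-risk options first; plan fast, high-risk options
# next; build the slower low-risk options in parallel.
rollout = sorted(candidates, key=lambda c: (not c["fast"], c["high_risk"]))
for rank, c in enumerate(rollout, start=1):
    print(rank, c["name"])
```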

For less sensitive workflows, a good solution is to provide gated API access to existing third-party AI systems that can introduce guarantees for data confidentiality and privacy requirements for both inputs and outputs. For more sensitive workflows, the safest approach is to develop AI solutions where the data lives since there is no risk of transferring data to external systems. 
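
As a rough illustration of gated API access, the sketch below puts a simple gateway between employees and a third-party model, checking inputs for credential-like strings and outputs for policy violations. The rules and the call_provider stub are assumptions; a production gateway would also handle authentication, logging, and rate limiting.

```python
import re

SECRET = re.compile(r"AKIA[0-9A-Z]{16}")              # example input rule
BANNED_OUTPUT = re.compile(r"(?i)internal use only")  # example output rule

def call_provider(prompt: str) -> str:
    # Stand-in for a real provider SDK call (assumption, not a real API).
    return "Here is a draft answer based on your prompt."

def gated_completion(prompt: str) -> str:
    """Check the input before it leaves and the output before it returns."""
    if SECRET.search(prompt):
        raise ValueError("blocked: prompt contains a credential-like string")
    answer = call_provider(prompt)
    if BANNED_OUTPUT.search(answer):
        return "[response withheld by gateway policy]"
    return answer

print(gated_completion("Summarize our public changelog"))
```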

To complete support for a new AI offering, share relevant information in a digital adoption platform that can gather usage insights and put walkthroughs, workflows, and contextual help in place to ensure correct usage.

The mitigation approach for shadow AI described above ensures that employees can start using inherently secure AI solutions right away, with a plan for introducing harder-to-secure solutions over time. If processes are set up correctly, you can move toward an ideal state where AI applications enter your SaaS asset management portfolio as soon as employees discover and share their value.

Uncover shadow AI with Wiz

Organizations can’t protect themselves from what they don’t know about. To uncover shadow AI, encouraging and supporting transparency within and across teams is the first step. The next step is to set up an automated solution that can detect unauthorized implementation and usage of AI solutions.
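
As a toy example of one such detection signal (not a description of any particular product), the sketch below scans dependency manifests for well-known AI SDKs that were never approved. Real discovery tools also correlate network egress, cloud resources, and SaaS usage.

```python
from pathlib import Path

# Package list is an assumption; tune it to your environment.
KNOWN_AI_PACKAGES = {"openai", "anthropic", "transformers", "langchain"}

def find_unsanctioned_ai(repo_root: str, approved: set[str]) -> list[str]:
    """Flag AI packages in requirements.txt files that were never approved."""
    findings = []
    for req in Path(repo_root).rglob("requirements.txt"):
        for line in req.read_text().splitlines():
            pkg = line.split("==")[0].strip().lower()
            if pkg in KNOWN_AI_PACKAGES and pkg not in approved:
                findings.append(f"{req}: {pkg}")
    return findings

print(find_unsanctioned_ai(".", approved={"openai"}))
```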

Wiz is the first cloud native application protection platform (CNAPP) to offer AI risk mitigation with our AI Security Posture Management (AI-SPM) solution. With AI-SPM, organizations gain full visibility into AI pipelines, can detect AI misconfigurations to enforce secure configuration baselines, and are empowered to proactively remove attack paths to AI models and data. 

Learn more by visiting our Wiz docs (login required), or see for yourself by scheduling a live demo today!

