AI Governance: 85% of Orgs Use AI, but Security Lags

AI governance main takeaways:
  • Your team establishes AI governance through the policies, processes, and controls that guide how your organization develops, deploys, and monitors AI, including how it defines AI ethics.

  • According to Wiz’s 2025 State of AI in the Cloud, 85% of today’s companies use AI solutions, and as use increases, so do risks along the expanded attack surface.

  • Shadow AI breaches cost organizations an estimated $670,000 more than breaches in organizations without shadow AI.

  • Multi-tenant cloud environments increase risks like data leakage and unauthorized model access because multiple organizations share the same underlying infrastructure.

What is AI governance?

AI governance is the policies, processes, and controls that guide how an organization designs, deploys, and monitors AI. It ensures AI is used responsibly, remains compliant, and aligns with business risk priorities. 

In addition to technical safeguards, AI governance includes oversight, accountability, and transparency mechanisms to keep AI systems consistent with company values and stakeholder expectations.

Organizations use effective AI governance to manage potential risks associated with AI technologies—such as data misuse and model bias—while strengthening broader enterprise risk management. As AI use matures, more threats present themselves—for example, shadow AI. Breaches involving shadow AI cost an estimated $670,000 more than those in organizations without shadow AI.

Here are four important reasons to implement AI governance practices:

1. Companies are rapidly adopting AI solutions: According to Wiz’s State of AI in the Cloud report, 85% of today’s companies are using AI. As adoption increases, so do risks across the expanded attack surface.

2. AI systems are becoming more complex and risky: Modern AI systems can operate autonomously and make critical decisions, often leading to unpredictable outcomes. Issues like model drift or unintended bias also create significant risks. The number of AI tools companies use also contributes to this complexity, making visibility even more challenging. 

Tools and AI adoption by organization from the State of AI in the Cloud

3. Regulatory pressure is heating up: Globally, governments and regulatory bodies are introducing new rules to protect consumer rights and promote ethical AI usage. Organizations must stay ahead of these requirements, as noncompliance brings more than hefty fines—it can damage your reputation and erode trust.

4. Organizations must align AI with their values and ethics: Governance isn’t just about avoiding risks. It’s also about making sure AI use matches your company’s values. By embedding ethical considerations into your AI practices, you can respect human rights, meet societal expectations, and strengthen brand trust.

Below, we break down why AI governance has become so crucial for organizations, highlight key principles and regulations shaping this space, and provide actionable steps for building your own governance framework. We also explore the unique challenges of governing AI in the cloud and how AI-SPM can simplify governance for modern enterprises.

AI governance: Key principles, standards, and regulations 

These five core principles, inspired by global standards like the OECD AI Principles, serve as a roadmap for responsible AI governance:

| Core principle | What it involves | How Wiz AI-SPM helps |
| --- | --- | --- |
| Accountability and ownership | Clearly defining who’s responsible for AI systems across their lifecycle to reduce the likelihood of oversight or mismanagement | Maps AI resources and ownership across your teams for visibility and accountability |
| Transparency and explainability | Making AI decisions understandable—transparency builds trust and keeps you compliant with regulations | Provides detailed AI-BOM insights and lineage tracking so your team can audit, trace, and make informed decisions |
| Fairness and privacy | Mitigating biases in your AI models and prioritizing privacy protections | Identifies shadow AI and enforces policy controls to protect sensitive data |
| Security and safety | Protecting AI systems against security vulnerabilities to make sure operations run reliably under expected conditions | Detects misconfigurations, secret exposures, and attack paths that could affect your AI systems for quick remediation |
| Sustainable and inclusive development | Aiming to create responsible AI systems that are both environmentally conscious and beneficial to everyone | Strengthens teams with a shared security model to scale AI safely across your organization while preventing waste and tool sprawl |

Because these principles are becoming benchmarks for regulators, adopting them isn’t just a best practice—it’s essential. 

Let’s take a closer look at some key points on regulatory AI compliance.

Global AI regulations

The following regulations apply within their respective jurisdictions but have become key standards globally. As a result, cloud-based companies and online organizations that serve international stakeholders can face responsibility and liability for their AI practices in each country where they operate.

Below are the top global AI regulations:

  • The EU AI Act and GDPR: The EU AI Act uses a risk-based approach, categorizing AI systems by risk level and introducing strict compliance measures for high-risk applications, such as biometric recognition. GDPR imposes strict data privacy protections for AI systems that handle personal data, such as health records or customer profiles.

  • Canada’s Directive on Automated Decision-Making: This directive focuses on the ethical use of AI in public sector decision-making in Canada. It ensures AI systems are transparent, fair, and accountable, with specific attention to managing risks related to algorithmic decisions.

  • China’s Ethical Norms for New Generation Artificial Intelligence: China’s guidelines stress the importance of aligning AI systems with ethical norms, national security, and fairness. The focus is on promoting human-centric, socially responsible AI while addressing risks like bias and misuse.

  • Brazil’s AI Act: Approved by the Brazilian Senate in December 2024, the country's AI bill focuses on human rights, risk management, and transparency in AI. It takes a risk-based approach to evaluating AI systems, applying strict rules to high-risk systems that could affect public safety or human rights. 

Pro tip

Use Wiz to map controls to EU AI Act requirements so you can maintain a healthy compliance posture.



Industry standards and frameworks

The following frameworks may come from a particular country or governing body, but they have become leading best practices for organizations worldwide: 

  • ISO/IEC standards: Standards like ISO 42001 offer structured methods for managing AI risks across applications and industries. These frameworks help organizations develop consistent risk management strategies that meet global standards, ensuring safer AI integration.

  • NIST AI Risk Management Framework: This US-based framework focuses on managing AI risks across key areas, such as security, transparency, and fairness. It provides a structured way to assess AI's impact on security and bias, which is essential for companies developing AI systems for critical sectors like healthcare or finance.

  • The UK AI Standards Hub: The UK is developing a flexible framework to regulate AI through industry collaboration by emphasizing innovation, accountability, and region-specific ethical standards. This approach complements global frameworks, like the OECD AI Principles.

  • The OECD AI Principles: These international principles provide a foundation for responsible AI stewardship and human-centered values based on five pillars: 

    • AI should benefit people and the planet.

    • AI systems should respect human rights and democratic values.

    • AI systems should be transparent and explainable.

    • AI systems should function robustly, securely, and safely.

    • Those who develop and deploy AI should be accountable and subject to human oversight.

  • The US Blueprint for an AI Bill of Rights: This framework outlines five principles for ethical AI development and deployment in the US, including protections against bias, data privacy, algorithmic transparency, and user rights to opt out or contest automated decisions. (Note: While the Blueprint no longer guides official federal policy, it serves as a good foundation. The current administration has released its own AI policy roadmap.)

Pro tip

Use Wiz to automatically assess your AI systems against standards like ISO 42001 and the NIST AI Risk Management Framework—the platform supports over 100 frameworks. 

Common pitfalls and challenges of AI governance

A visualization in Wiz’s AI Security Readiness report based on data from the 2024 Pulse Report by Gatepoint Research

AI governance is essential for ensuring safety, compliance, and accountability. But putting it into practice can present several challenges, including the following:

  • Industry skill gaps: Many companies lack the in-house expertise to handle the complexities of AI governance. This gap creates friction, slows implementation of governance practices, and produces blind spots.

  • Limited visibility into AI systems and siloed ownership: Many teams fail to track all the models, datasets, and tools used across their organization. Shadow AI, tool sprawl, and fragmented development make oversight difficult.

  • Incomplete documentation and traceability: Without consistent tracking of model inputs, training data, and deployment history, it becomes hard to explain decisions, reproduce results, and pass audits.

  • Inconsistent AI policy enforcement: AI development often spans multiple teams, including data science, engineering, legal, and security. Aligning these groups around shared governance practices is a common struggle.

  • Rapidly evolving regulations: Laws like the EU AI Act and new state-level rules in the US continue to evolve. Because of this, staying up to date and adapting governance processes requires constant effort.

  • Balancing oversight and innovation: Too much control can slow progress, but too little introduces risk. Striking the right balance remains a core challenge.

Real-world example: Lenovo’s OpenAI-powered chatbot, Lena, was designed to provide customer support on the company’s website. Researchers discovered that the bot failed to properly sanitize inputs and outputs, a security flaw that opened the door to cross-site scripting (XSS) vulnerabilities. By injecting malicious code, researchers demonstrated that they could trick the AI into sending users a damaging HTML response that could steal session cookies or expose sensitive data.

Who is responsible for AI governance?

AI governance is a shared responsibility across multiple functions within an organization. While ultimate accountability varies by company size and industry, the following key stakeholders typically play central roles in overseeing AI governance frameworks, policies, and compliance efforts:

  • Chief AI Officer (CAIO) or Chief Technology Officer (CTO): In organizations with mature AI programs, a CAIO may lead the development and execution of AI governance strategies. Where no dedicated AI leadership exists, the CTO often assumes responsibility for establishing technical governance practices, overseeing model development, and managing AI risk.

  • Chief Information Security Officer (CISO): The CISO is responsible for the security aspects of AI governance, including safeguarding training data, securing models from adversarial threats, and enforcing controls around sensitive information used in AI systems.

  • Chief Data Officer (CDO): The CDO ensures that data for AI training and inference is accurate, ethical, and compliant with data privacy regulations, which contributes directly to broader AI governance objectives.

Along with these leadership roles, your organization can establish the following teams:

  • Legal, compliance, and risk teams: These teams ensure AI systems comply with evolving regulatory requirements (such as GDPR or the EU AI Act), manage ethical considerations, and oversee risk frameworks for AI usage.

  • AI governance committees or cross-functional councils: Larger organizations often establish formal governance bodies that bring together leadership from technology, security, legal, data, and compliance teams to oversee AI deployment holistically.

  • Wiz’s AI-SPM admin and DevSecOps teams: For organizations that use Wiz for full visibility, governance, and security, the admin configures and enforces guardrails around AI workloads in the cloud. They also work with DevSecOps teams to provide continuous visibility into AI-related risks and ensure alignment with compliance requirements, from development to deployment and maintenance.

In smaller organizations, AI governance typically falls under the joint oversight of the CTO and legal and compliance teams. However, in highly regulated industries, like healthcare and financial services, AI governance may extend to board-level oversight due to regulatory scrutiny.

Best practices for AI governance in the cloud

Modern AI governance requires a shift-left approach to proactively secure an AI-powered cloud

Unlike on-premises environments, cloud infrastructure is dynamic, distributed, and often multi-tenant, which creates specific governance challenges. Addressing these considerations ensures AI systems in the cloud are secure, scalable, and responsibly managed while leveraging the full potential of cloud infrastructure.

Below are six best practices for AI governance and why they matter:

1. Assess your current posture and define clear roles

Before you can improve your AI governance, you need a clear picture of where it stands today. Once you’ve defined your current state and roles, you can create a risk-based strategy by adopting solutions like Wiz for prioritized and contextualized risks.

🛠️ Action step: Start by evaluating your AI risks, regulatory compliance gaps, and ethical challenges. This provides a baseline for enhancing your governance practices. You can then use these findings to track progress over time, including model performance, fairness indicators, and data privacy compliance. 

Pro tip

Make sure your evaluation focuses on measurable metrics for easy benchmarking.
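For instance, here’s a minimal sketch of what benchmarking against a baseline could look like in practice. The metric names, values, and the two-point accuracy tolerance are hypothetical choices for illustration, not values any framework prescribes:

```python
from dataclasses import dataclass

@dataclass
class GovernanceMetrics:
    """A snapshot of measurable governance indicators for one AI system."""
    model_accuracy: float  # model performance (e.g., validation accuracy)
    fairness_gap: float    # e.g., outcome-rate difference across user groups
    privacy_findings: int  # count of unresolved data-privacy findings

def compare_to_baseline(baseline: GovernanceMetrics,
                        current: GovernanceMetrics) -> list[str]:
    """Flag regressions against the baseline captured in the initial assessment."""
    issues = []
    if current.model_accuracy < baseline.model_accuracy - 0.02:  # hypothetical tolerance
        issues.append("Model performance degraded beyond tolerance")
    if current.fairness_gap > baseline.fairness_gap:
        issues.append("Fairness gap widened since baseline")
    if current.privacy_findings > baseline.privacy_findings:
        issues.append("New unresolved data-privacy findings")
    return issues

if __name__ == "__main__":
    baseline = GovernanceMetrics(model_accuracy=0.91, fairness_gap=0.03, privacy_findings=0)
    current = GovernanceMetrics(model_accuracy=0.88, fairness_gap=0.05, privacy_findings=2)
    for issue in compare_to_baseline(baseline, current):
        print("Issue:", issue)
```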

2. Practice distributed data management

Cloud environments store and process data across multiple geographic locations, often automatically. This increases the complexity of ensuring compliance with regulations like GDPR or data localization laws.

🛠️ Action step: Implement geo-fencing and data residency controls to ensure sensitive data stays within compliant regions. Tools from cloud providers, such as AWS Macie or Microsoft Purview, can help you manage data residency and classify sensitive data.
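As a hedged illustration, the sketch below uses boto3 (the AWS SDK for Python) to flag S3 buckets that live outside an allowed set of regions. The allowed-region list is an assumption; substitute the regions your residency policy actually permits:

```python
import boto3

# Regions a hypothetical data residency policy permits
ALLOWED_REGIONS = {"eu-west-1", "eu-central-1"}

def audit_bucket_residency() -> list[str]:
    """Return S3 buckets located outside the allowed regions."""
    s3 = boto3.client("s3")
    violations = []
    for bucket in s3.list_buckets()["Buckets"]:
        location = s3.get_bucket_location(Bucket=bucket["Name"])["LocationConstraint"]
        region = location or "us-east-1"  # AWS reports us-east-1 as None
        if region not in ALLOWED_REGIONS:
            violations.append(f"{bucket['Name']} ({region})")
    return violations

if __name__ == "__main__":
    for violation in audit_bucket_residency():
        print("Residency violation:", violation)
```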

3. Safeguard AI assets in multi-tenant environments

In multi-tenant cloud environments, multiple organizations share the same underlying infrastructure. This increases risks like data leakage or model access by unauthorized parties.

🛠️ Action step: Use containerization and encryption for AI deployment to isolate workloads. Then, apply role-based access control to ensure only authorized users or systems can interact with sensitive AI assets.
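Here’s a minimal, deny-by-default RBAC sketch in plain Python. The roles, permissions, and model names are illustrative only; in practice, you’d enforce this through your cloud provider’s IAM or your ML platform’s access controls:

```python
# Hypothetical role-to-permission mapping for AI assets
ROLE_PERMISSIONS = {
    "ml-engineer": {"model:read", "model:deploy"},
    "data-scientist": {"model:read", "dataset:read"},
    "auditor": {"model:read", "audit-log:read"},
}

def authorize(role: str, action: str) -> bool:
    """Allow an action only if the role explicitly grants it (deny by default)."""
    return action in ROLE_PERMISSIONS.get(role, set())

def load_model_weights(role: str, model_name: str) -> None:
    if not authorize(role, "model:read"):
        raise PermissionError(f"Role '{role}' may not read model '{model_name}'")
    print(f"Loading weights for {model_name}...")  # placeholder for real loading logic

if __name__ == "__main__":
    load_model_weights("ml-engineer", "support-chatbot-v2")  # allowed
    try:
        load_model_weights("intern", "support-chatbot-v2")   # unknown role: denied
    except PermissionError as err:
        print(err)
```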

4. Mitigate cloud-specific threats with automated compliance

Misconfigured storage buckets, insecure APIs, and identity sprawl are frequent cloud vulnerabilities that directly impact AI governance by exposing models and data to breaches.

🛠️ Action step: Automate compliance checks with tools like Wiz, Google Cloud Security Command Center, or AWS Config to continuously audit configurations and detect vulnerabilities in real time.
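As a small example of what automated checks look like under the hood, this sketch queries AWS Config through boto3 for rules currently reporting noncompliant resources. It assumes AWS credentials are configured, and a production version would also paginate with NextToken:

```python
import boto3

def list_noncompliant_rules() -> list[str]:
    """Return AWS Config rules that currently report NON_COMPLIANT resources."""
    config = boto3.client("config")
    response = config.describe_compliance_by_config_rule(
        ComplianceTypes=["NON_COMPLIANT"]  # only fetch failing rules
    )
    return [rule["ConfigRuleName"] for rule in response["ComplianceByConfigRules"]]

if __name__ == "__main__":
    for rule_name in list_noncompliant_rules():
        print("Non-compliant rule:", rule_name)
```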

Wiz’s compliance capabilities measure your security against over 100 frameworks

5. Embed responsibility into AI processes

Use automated tools for continuous monitoring, real-time alerts, and compliance audits to address issues proactively—not when it’s already too late.

Pro tip

Tools like Fiddler AI or TensorFlow Fairness Indicators can help teams monitor model performance and fairness in real time. Additionally, platforms like NannyML and WhyLabs offer continuous model monitoring, auditing, and anomaly detection. Educating your teams on governance policies and risks, as well as their roles in ensuring responsible AI use, is also essential.
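Those platforms handle the details for you, but the core idea behind drift monitoring is straightforward to sketch: compare recent production inputs against the training distribution with a statistical test. The example below uses SciPy's two-sample Kolmogorov-Smirnov test on a single synthetic feature; the 0.05 significance threshold is a common but arbitrary choice:

```python
import numpy as np
from scipy.stats import ks_2samp

def detect_drift(training_sample: np.ndarray,
                 production_sample: np.ndarray,
                 alpha: float = 0.05) -> bool:
    """Flag drift when a two-sample KS test rejects 'same distribution' at level alpha."""
    _, p_value = ks_2samp(training_sample, production_sample)
    return p_value < alpha

if __name__ == "__main__":
    rng = np.random.default_rng(seed=42)
    training = rng.normal(loc=0.0, scale=1.0, size=5_000)    # feature values at training time
    production = rng.normal(loc=0.4, scale=1.0, size=5_000)  # shifted production distribution
    print("Drift detected:", detect_drift(training, production))  # True for this shift
```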

6. Gain full-stack visibility into AI pipelines

AI pipelines in cloud environments involve multiple stages, from data ingestion and preprocessing to model training, deployment, and monitoring. Ensuring end-to-end visibility is essential for identifying bottlenecks, mitigating risks, and maintaining compliance across the entire lifecycle.

🛠️ Action step: Use tools like Wiz’s AI-SPM to achieve full-stack visibility with the following actions (a simplified AI-BOM sketch follows this list):

  • Mapping dependencies: Wiz’s AI Bill of Materials (AI-BOM) helps your team track and document all data sources, models, and third-party integrations in your pipeline.

  • Real-time monitoring: Continuously tracking AI workflows allows you to detect anomalies, compliance gaps, and vulnerabilities at any stage.

  • Risk prioritization: Automated insights highlight the most critical issues, from data security breaches to model drift or fairness concerns.
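To make the AI-BOM idea concrete, here’s a simplified sketch of the kind of inventory such a tool maintains and how a lineage query works against it. The schema and entries are hypothetical, not Wiz’s actual format:

```python
from dataclasses import dataclass, field

@dataclass
class AIBOMEntry:
    """One inventoried AI asset: a model, dataset, or third-party service."""
    name: str
    asset_type: str  # "model", "dataset", or "service"
    owner: str       # accountable team, supporting the ownership principle
    data_sources: list[str] = field(default_factory=list)
    dependencies: list[str] = field(default_factory=list)

# Illustrative inventory for a single pipeline
ai_bom = [
    AIBOMEntry("support-chatbot-v2", "model", "ml-platform",
               data_sources=["tickets-2024"], dependencies=["openai-api"]),
    AIBOMEntry("tickets-2024", "dataset", "data-eng",
               data_sources=["crm-export"]),
]

def assets_depending_on(source: str) -> list[str]:
    """Trace which assets use a given data source, for audits and lineage questions."""
    return [e.name for e in ai_bom
            if source in e.data_sources or source in e.dependencies]

if __name__ == "__main__":
    print(assets_depending_on("tickets-2024"))  # ['support-chatbot-v2']
```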

Real-life example: How Konverso uses Wiz to lead AI governance

Konverso, a GenAI-powered virtual assistant platform, needed a way to responsibly protect customer data in an emerging market. Its leaders sought a cloud native security solution to replace the company’s legacy tool and provide the cybersecurity, governance, and compliance capabilities needed to scale securely.

The company selected Wiz for its extensive visibility across cloud environments and its context-based risk prioritization. Konverso’s dev team now uses Wiz to find, prioritize, and remediate security vulnerabilities, and the company can confidently grow in the AI space, knowing it has unified security tools for top governance practices.

Simplify AI governance with Wiz AI-SPM

Wiz is a cloud native application protection platform (CNAPP) that secures everything you build and run in the cloud, including your AI systems. In particular, our AI-SPM offers specialized security and governance for your AI ecosystem through the following posture management capabilities:

  • AI visibility with an AI-BOM: Use Wiz’s AI-BOM to gain total visibility into your AI ecosystem and pinpoint risks and compliance gaps in your data, models, and dependencies.

  • Risk assessment and prioritization: Leverage automated monitoring and compliance tools to track risks like model drift and security vulnerabilities. These operational data security tools help your teams prioritize risks by severity so they stay focused on what matters most.

  • Attack path analysis and risk mitigation: Identify potential vulnerabilities with our automated attack path analysis, which allows you to take proactive steps to close security gaps and ensure your AI systems are secure and compliant.

Wiz’s AI security features in action

With centralized AI security dashboards, Wiz’s AI-SPM enables cross-functional collaboration, continuous monitoring, real-time alerts, and prioritized actions, empowering your team to perform rapid remediation. Our industry-leading platform also provides actionable insights into potential threats, compliance gaps, and security posture to support responsible AI governance and proactive security by design.

Want to see Wiz in action? Try a live demo today to see how our platform can help you secure your cloud environment and accelerate your business.