What is the AI Bill of Rights?
The AI Bill of Rights (formally, the Blueprint for an AI Bill of Rights) is a framework for developing and using artificial intelligence (AI) technologies in a way that puts people's basic civil rights first. It was published by the White House Office of Science and Technology Policy (OSTP) in October 2022 in response to the rapid spread of automated systems.
In a lot of ways, the AI Bill of Rights is like America’s version of the EU’s AI Act. Along with other recently introduced AI legislation and guidance like the AI Act and the NIST AI Risk Management Framework, it’s part of a push to make sure that AI’s success doesn't come at the cost of citizens’ safety and privacy. However, it’s worth pointing out that the AI Bill of Rights serves as a set of best practices for AI governance in the U.S., whereas the EU AI Act imposes legally binding obligations on AI developers and users.
The AI Bill of Rights reflects input from many stakeholders across the AI landscape. Besides OSTP, we’re talking about multinational corporations like Google and Microsoft, academic scholars, private citizens, policymakers, and human rights organizations. The list might be diverse, but everyone involved has the same goal: ensuring the safe, responsible, and democratic use of one of the world’s most transformative technologies.
Consider how Gartner emphasizes the importance of responsible AI use: it predicts that by 2026, one in two governments around the world will have introduced AI-related policies, ethical standards, and information privacy requirements.
State of AI in the Cloud [2025]
AI data security is critical, but staying ahead of emerging threats requires up-to-date insights. Wiz’s State of AI Security Report 2025 reveals how organizations are managing data exposure risks in AI systems, including vulnerabilities in AI-as-a-service providers.
Get the report

What automated systems does the AI Bill of Rights apply to?
Everywhere you look, you’ll see AI—it’s permeated almost every aspect of our work and personal lives. The AI Bill of Rights discusses automated systems that may affect the basic rights of citizens or act as a gateway to critical services. This includes everything from electrical power grid controls to AI-based credit scoring software, hiring algorithms to plagiarism detection tools, and surveillance mechanisms to voting systems. That is, it includes any automated system that could potentially interfere with basic rights like equal opportunity, freedom of speech, or data privacy.
Take a hiring algorithm, for example. Biases in those algorithms can potentially cause organizations to choose candidates based on factors unrelated to the job, such as gender, race, or age. The May 2024 Federal Trade Commission complaint against hiring service vendor Aon is a real-world example of the kind of unethical or irresponsible use of AI that the AI Bill of Rights aims to eliminate.
What are the key principles of the AI Bill of Rights?
The AI Bill of Rights has five core principles. So even though the scope of the AI Bill of Rights might be broad, there’s a pretty clear way to navigate it. Let’s break it down one principle at a time.
1. Safe and effective systems
Let’s say you’re building an automated system for your organization. The first principle of the AI Bill of Rights states that you need to work with a diverse group of stakeholders, communities, and experts to understand the ins and outs of potential AI security risks, ethical concerns, and other risk factors.
2. Algorithmic discrimination protections
Automated systems can’t function without a core set of instructions. We know these as AI algorithms, which is exactly what the second principle of the AI Bill of Rights zeroes in on. Basically, this principle states that anyone who builds and deploys AI algorithms should try to stay one step ahead of any AI security risks and ethical hurdles. The best way to do this is to engage in proactive protection measures to keep people safe from AI-enabled discrimination and inequity.
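One common proactive protection measure is auditing a system's outcomes for disparity across demographic groups. Below is a minimal, hypothetical sketch of the widely used "four-fifths rule" screen: the data, group labels, and function names are illustrative, not part of the AI Bill of Rights itself.

```python
from collections import Counter

def selection_rates(outcomes):
    """Compute per-group selection rates from (group, selected) pairs."""
    totals, selected = Counter(), Counter()
    for group, was_selected in outcomes:
        totals[group] += 1
        if was_selected:
            selected[group] += 1
    return {g: selected[g] / totals[g] for g in totals}

def disparate_impact_ratio(outcomes):
    """Ratio of the lowest to the highest group selection rate.
    Values below 0.8 fail the common 'four-fifths rule' screen."""
    rates = selection_rates(outcomes)
    return min(rates.values()) / max(rates.values())

# Hypothetical audit data: (group label, was the candidate selected?)
audit = [("A", True), ("A", True), ("A", False), ("A", True),
         ("B", True), ("B", False), ("B", False), ("B", False)]
print(disparate_impact_ratio(audit))  # 0.25 / 0.75 -> ~0.33, a red flag
```

A screen like this is only a starting point; a failing ratio calls for deeper investigation of the features and training data driving the disparity.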
3. Data privacy
A Gartner stat to kick off the third principle: 42% of survey respondents cite GenAI-related data privacy as their top concern. With respect to data privacy, the AI Bill of Rights says that anyone who designs and develops automated systems must prioritize and respect individuals’ decisions about how their data is collected, stored, managed, processed, and deleted in AI use cases. No exceptions.
4. Notice and explanation
The fourth principle of the AI Bill of Rights is all about transparency and letting people know when an automated system is in place. For someone working on automated systems, this means disclosing to individuals details about the system, describing the role that AI plays in it, and explaining how it might affect people. An important detail in this principle is that this information shouldn’t be hidden in complicated jargon; instead, it should be presented clearly and in layman’s terms.
5. Human alternatives, consideration, and fallback
The fifth principle of the AI Bill of Rights is about ensuring that individuals always have the option of opting out of automated systems and choosing to interact with a human. If you’re wondering when an individual can do this, the AI Bill of Rights says “where appropriate” and as long as there are “reasonable expectations.” This simply means that the fifth principle depends on the context of the automated system. But the basics remain: If an individual needs to communicate with a human instead of a machine, the request should be granted ASAP.
How can following the AI Bill of Rights help organizations?
So far, we’ve looked at what the AI Bill of Rights includes and to whom it applies. In this section, let’s look at how the AI Bill of Rights can benefit you.
1. Increased trust: With AI, the one thing you can be sure of is that the whole world’s watching. Whether you like it or not, your customers, vendors, peers, and competitors will evaluate how well and responsibly you use AI. By following the guiding principles of the AI Bill of Rights, you can build a healthy reputation as an ethical AI innovator and cultivate trust.
2. Stronger compliance: It’s no secret that regulatory compliance can be a massive headache for businesses, especially with criss-crossing data sovereignty requirements and the rise of AI. Keep in mind that AI compliance involves more than just new regulations: your obligations under existing regulations like the CCPA, GDPR, and HIPAA will also shift as you weave automated systems into your operations. By using the AI Bill of Rights, you can more easily navigate this web of AI regulatory obligations.
Real-world example: Want a peek into how devastating AI compliance failures can be? Take a look at this AI privacy violation, where a city government in Italy was fined for breaking rules around AI-powered street surveillance initiatives.
3. Improved risk reduction: Automated systems are rife with AI security risks, and organizations need a methodical approach to deal with them. By using principles like those in the AI Bill of Rights or the NIST AI Framework, you can prune down AI risks before they escalate into serious incidents. The benefits? Potentially millions saved by preventing data breaches and avoiding regulatory penalties, not to mention shielding yourself from reputational damage.
What challenges did the AI Bill of Rights introduce?
The AI Bill of Rights was subject to a fair share of criticism. So after exploring how useful it can be, it’s equally important to look at the other side of the coin. Specifically, what are people worried about?
One of the main headaches for businesses when dealing with a framework is figuring out how it interacts with existing frameworks and obligations. For example, the AI Bill of Rights had quite a bit of crossover with existing directives like Executive Order 13960 and Executive Order 13985.
In specific industries like healthcare, it was crucial to understand how frameworks like the AI Bill of Rights interacted with requirements like HIPAA. Similarly, if you operate in the EU, Africa, Australia, China, or India, you’d have had to consider how the AI Bill of Rights differed from existing and upcoming country-specific laws and policies. The question posed by many is this: Do we need yet another AI guideline on top of what exists? Growing AI security risks and ethical concerns suggest that we do.
Many organizations are also worried about keeping track of their compliance posture, especially in AI-heavy cloud-based environments. Gaining a unified view of AI resources in fast-moving cloud environments is tough, which makes AI compliance tougher.
The debate over AI governance
In January 2025, U.S. AI policy underwent a significant shift when President Donald J. Trump signed an executive order titled "Removing Barriers to American Leadership in Artificial Intelligence." This order revoked several AI-related policies from the previous administration, including Executive Order 14110, which was signed by President Joe Biden in 2023 to promote safe, secure, and trustworthy AI development.
The 2025 executive order aimed to eliminate perceived regulatory burdens that could slow AI innovation in the U.S. While it removed specific federal mandates related to AI governance, it did not repeal the AI Bill of Rights, as it is a voluntary framework for AI safety, fairness, and accountability.
This shift in AI policy reflects the ongoing debate in the U.S. over how much regulation AI should have. While some advocate for strict oversight similar to the EU AI Act, others emphasize deregulation to maintain U.S. AI leadership.
How does AI-SPM help with AI compliance?
Ensuring compliance with AI governance frameworks like the AI Bill of Rights (AIBoR) requires a multifaceted approach—one that combines ethical principles, legal considerations, and security best practices. While AI-SPM (AI Security Posture Management) plays a critical role in managing AI-related risks, it is just one component of a broader AI governance strategy.
The AI Bill of Rights outlines five key principles—including fairness, transparency, and privacy—that organizations should follow when developing AI systems. While some of these principles require policy and legal considerations, others—particularly those related to security and data protection—can be addressed with AI-SPM.
AI-SPM solutions help by:
Identifying security risks in AI models, data pipelines, and automated decision-making systems
Preventing AI-related data exposure and ensuring compliance with data protection requirements
Providing visibility into AI assets and monitoring for security misconfigurations
Helping enforce security controls that align with AIBoR’s privacy and safety principles
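To make the misconfiguration-detection idea concrete, here is a minimal sketch of the kind of check an AI-SPM tool automates. The asset inventory, field names, and rules are hypothetical illustrations, not any particular product's schema or API.

```python
# Hypothetical AI asset inventory; field names are illustrative only.
ASSETS = [
    {"name": "training-bucket", "type": "data_store",
     "encrypted": False, "contains_pii": True},
    {"name": "churn-model-api", "type": "model_endpoint",
     "public": True, "auth_required": False},
    {"name": "feature-store", "type": "data_store",
     "encrypted": True, "contains_pii": True},
]

def find_misconfigurations(assets):
    """Flag common AI security misconfigurations: unencrypted stores
    holding personal data and unauthenticated public model endpoints."""
    findings = []
    for a in assets:
        if (a["type"] == "data_store"
                and a.get("contains_pii") and not a.get("encrypted")):
            findings.append((a["name"], "PII stored without encryption"))
        if (a["type"] == "model_endpoint"
                and a.get("public") and not a.get("auth_required")):
            findings.append((a["name"], "public endpoint without authentication"))
    return findings

for name, issue in find_misconfigurations(ASSETS):
    print(f"{name}: {issue}")
```

In practice, an AI-SPM platform runs checks like these continuously across discovered cloud assets rather than against a hand-written inventory, and maps each finding back to the relevant governance principle.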
However, full compliance with AI governance frameworks requires more than just security measures. Organizations must also ensure that AI decision-making processes are explainable, free from bias, and aligned with legal and ethical guidelines.
By integrating AI-SPM with broader AI governance initiatives, businesses can strengthen their AI security posture while also advancing their commitment to responsible AI development.
How Wiz AI-SPM Supports the AI Security Aspects of AI Governance Frameworks
As AI adoption accelerates, organizations must balance innovation with security and governance. AI governance frameworks like the AI Bill of Rights, NIST AI Framework, and emerging global regulations emphasize the need for secure, transparent, and responsible AI. But ensuring compliance with these frameworks isn’t just about policy—it requires a strong AI security foundation to prevent risks such as data exposure, AI model tampering, and algorithmic vulnerabilities.
That’s where Wiz AI-SPM comes in.
How Wiz AI-SPM Strengthens AI Security Governance
AI Bill of Materials (AI-BOM):
Automatically discovers all AI-related assets across cloud environments, including models, data, APIs, and dependencies.
Provides complete visibility into AI pipelines, reducing blind spots and hidden risks that could impact governance.
Comprehensive AI Risk Visibility:
Maps AI risks across your entire stack, helping organizations proactively detect security misconfigurations and attack paths.
Aligns AI security findings with governance frameworks like the AI Bill of Rights and NIST AI Framework.
AI Data Protection & Compliance:
Identifies and prevents sensitive data leaks in AI training and inference pipelines.
Ensures data privacy compliance by monitoring AI data access and encryption policies.
Continuous AI Security Posture Management:
Provides real-time security assessments to track AI-related misconfigurations and policy violations.
Helps organizations maintain an ongoing AI compliance posture rather than relying on one-time audits.
While governance frameworks set the guiding principles for ethical and secure AI, Wiz AI-SPM provides the security tooling needed to enforce those principles. By integrating AI security posture management with broader AI governance initiatives, organizations can build trustworthy AI systems that meet compliance standards while staying resilient against evolving security threats.
Want to see how Wiz AI-SPM can help secure your AI stack? Get a demo today.
Accelerate AI Innovation Securely

Learn why CISOs at the fastest-growing companies choose Wiz to secure their organization's AI infrastructure.