
The EU Artificial Intelligence Act: A tl;dr


Wiz Experts Team

The EU Artificial Intelligence (AI) Act establishes rules for the development, marketing, and use of AI systems within the European Union in order to promote responsible, human-centric AI. It is the world’s first regulation of its kind, ensuring a pioneering role for the EU in creating global standards for AI governance. It focuses on the broader impacts of AI on society and individual rights.

In this post, we’ll bring you up to speed on why the EU put this law in place, what it involves, and what you need to know as an AI developer or vendor, including best practices to simplify compliance. We’ll also touch on what the future holds for AI regulation (spoiler alert: there could be many similar laws soon).

Why did the EU introduce the AI Act?

You’ve probably seen science fiction movies about out-of-control technology. The technology takes over and runs amok, causing all kinds of devastation.

AI can’t do that—at least not yet. But it does already have the potential to do a lot of harm. 

For AI to work properly, it relies on two crucial things: models and data. Developers build models to deliver the results you want to achieve, then train those models on data that’s as close as possible to the data the model will encounter in the real world.

But if the model or the data isn’t reliable, or if someone tampers with either, your AI won’t work properly. Data and models that don’t satisfy the CIA triad (confidentiality, integrity, and availability) could lead to catastrophic real-world consequences:

  • Picture a self-driving car powered by an AI model trained on insufficient or biased data. The car could misinterpret a traffic situation, leading to a serious accident with injuries or fatalities.

  • Or maybe an AI system diagnoses diseases based on medical images. If this model is trained on biased or incomplete data, or if someone tampers with the model, it could lead to misdiagnosis, delayed treatment, or even fatalities.

Of course, a less catastrophic scenario is more likely. Imagine if your sales chatbot starts lying to site visitors. That could harm sales, operations, or even your reputation. (Like if it starts recommending a competitor’s product!)
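Every one of these failure modes traces back to the integrity of your models and data, and a surprisingly effective first line of defense is basic artifact hygiene: record a cryptographic hash of each model and dataset when it’s produced, and verify it before loading. Here’s a minimal sketch in Python; the file paths and digests are hypothetical placeholders, not anything the Act prescribes:

```python
import hashlib

def sha256_of(path: str) -> str:
    """Compute the SHA-256 digest of a file, reading it in 1 MB chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Hypothetical artifacts and the digests recorded when they were produced.
# (The paths and digests here are placeholders, not real values.)
EXPECTED_DIGESTS = {
    "models/credit_scorer.onnx": "9f2c...",
    "data/training_set_v3.parquet": "4ab1...",
}

def verify_artifacts(expected: dict) -> bool:
    """Return True only if every artifact still matches its recorded digest."""
    all_ok = True
    for path, want in expected.items():
        got = sha256_of(path)
        if got != want:
            print(f"INTEGRITY FAILURE: {path}: expected {want}, got {got}")
            all_ok = False
    return all_ok

# Run before loading any model or dataset:
# assert verify_artifacts(EXPECTED_DIGESTS), "refusing to load tampered artifacts"
```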

AI risk also takes a serious bite out of AI’s ROI, driving up costs and driving down revenue.

It’s long been acknowledged that we need safety standards for AI. Now, the EU has decided to take action.

What were the reasons behind the EU AI Act?

The EU AI Act aims to help people creating AI applications use the technology ethically. It keeps people and businesses safe from unauthorized data collection, surveillance, and manipulation, preventing discrimination and ensuring that AI is used transparently. It also aims to minimize “systemic risk,” meaning the potential for widespread and severe impact on society if something goes wrong with an AI model. On top of all that, these steps should increase trust in AI, which could be a very good thing for developers and AI providers.

The Act also targets the use of AI for malicious purposes, such as deepfakes and other AI-generated misinformation, by making it mandatory to disclose when content is AI-generated. If you don’t notify users, you could be fined or face other penalties. The Act additionally requires each member state to establish a National Competent Authority to oversee its local rollout.

Most importantly, the EU AI Act classifies AI use into four risk levels (which we’ll discuss later), banning all uses with an “unacceptable risk.” 

What was the background to the EU AI Act?

The timeline leading up to the Act’s adoption was fairly short: the European Commission proposed the Act in April 2021, EU lawmakers reached political agreement in December 2023, and the Act entered into force on August 1, 2024.

Implementation timeline

While the EU AI Act has already come into force, businesses have a three-year grace period, starting in August 2024, to ramp up to full compliance.

Adjustments may occur along the road as EU regulators and businesses work to implement this regulation, but most AI systems will need to be in compliance by August 2026.

Here’s the anticipated timeline:

  • February 2025: Bans on unacceptable-risk AI systems take effect

  • August 2025: Obligations for general-purpose AI (GPAI) models apply

  • August 2026: Most remaining provisions apply, including requirements for most high-risk systems

  • August 2027: Extended deadline for high-risk AI embedded in regulated products

What this means for you is actually pretty simple. Even if you think the EU AI Act doesn’t apply to you, you need to make sure… and time is running out.

What does the EU AI Act include?

The first and most important thing to know about the EU AI Act is that it has extraterritorial reach. 

That means anyone providing AI systems that will be used or affect consumers or businesses inside the EU probably needs to comply.

The EU AI Act defines a number of different types of AI systems, including:

  • General-purpose AI models (GPAI): Large language models and image and video generators

  • Specific-purpose AI models: AI designed for specific tasks like medical diagnosis, autonomous vehicles, or financial risk assessment

  • Embedded AI systems: AI-powered devices like smart home appliances or industrial robots

The EU AI Act’s four risk levels for AI

As we mentioned above, the EU AI Act takes a risk-based approach, assigning AI applications one of four standard risk levels:

  • Unacceptable risk: Activities that pose too great a threat and are prohibited outright

  • High risk: Activities that could negatively affect safety or fundamental rights

  • Limited risk: Activities that are not overly risky but still carry transparency requirements (meaning that users must be informed they are interacting with an AI)

  • Minimal risk: Generally benign activities that don’t need to be regulated

The EU AI Act doesn’t spell out requirements for “unacceptable risk” applications of AI for a simple reason: they are banned outright in Europe. This includes real-time facial recognition and other biometric identification in public spaces for law enforcement purposes (with narrow exceptions), “social scoring,” which involves classifying individuals or groups for unequal treatment, and predictive policing based solely on profiling.

It also doesn’t regulate “minimal-risk” activities like spam filters or AI-enabled video games, which covers the majority of AI applications currently available on the EU market.

A small section of the EU AI Act deals with “limited risk” systems, such as chatbots and deepfakes. Developers and deployers of these systems must meet transparency obligations, which usually just means letting end users know that they are interacting with AI.

The overwhelming majority of the EU AI Act addresses “high-risk” AI systems and the organizations and individuals who provide them. High-risk applications include:

  • Assessing credit eligibility

  • Evaluating applications for health or life insurance or public benefits

  • Analyzing job applications (ATS) or evaluating candidates

Another major category of high-risk AI systems is “product safety components”: AI systems that are embedded in products and are crucial to those products’ safety. For example, AI embedded in autonomous vehicles or in industrial or medical devices.
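To make the four tiers concrete, here’s a toy sketch of how you might tag an internal AI inventory by risk tier. The use cases and assignments below mirror the examples above and are illustrative only; classifying a real system requires reading the Act (and usually legal counsel), not a lookup table:

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "prohibited outright"
    HIGH = "strict obligations (the bulk of the Act)"
    LIMITED = "transparency obligations"
    MINIMAL = "largely unregulated"

# Illustrative assignments only; these mirror the examples in this post.
USE_CASE_TIERS = {
    "social scoring of citizens": RiskTier.UNACCEPTABLE,
    "real-time facial recognition in public spaces": RiskTier.UNACCEPTABLE,
    "credit eligibility scoring": RiskTier.HIGH,
    "resume screening (ATS)": RiskTier.HIGH,
    "customer-facing chatbot": RiskTier.LIMITED,
    "spam filter": RiskTier.MINIMAL,
}

def tier_for(use_case: str) -> RiskTier:
    """Default unknown use cases to HIGH so they get the most
    scrutiny rather than the least."""
    return USE_CASE_TIERS.get(use_case, RiskTier.HIGH)

for case, tier in USE_CASE_TIERS.items():
    print(f"{case}: {tier.name} ({tier.value})")
```

Note the design choice in `tier_for`: anything not explicitly classified defaults to high risk, so new systems enter your inventory under the strictest assumptions until someone reviews them.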

The EU AI Act’s eight essential rules

Developers and vendors of AI applications are known as “providers” under the EU AI Act. Any legal or natural persons that use AI in a professional capacity are considered “users” or “deployers.”

The EU AI Act outlines eight core requirements for high-risk systems:

  • Risk management throughout the high-risk AI system’s lifecycle

  • Data governance to verify all training, validation, and testing datasets

  • Technical documentation to demonstrate and assess compliance

  • Record-keeping to determine risk and update risk level throughout the lifecycle (see the logging sketch after this list)

  • Instructions for use so downstream deployers can ensure total compliance across the supply chain

  • Human oversight capabilities so that users (deployers) can supervise the system

  • Accuracy, robustness, and cybersecurity designed in from the start

  • Quality management to ensure and report on compliance
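What record-keeping looks like in practice is automatic logging of what the system did and when, kept in a form auditors can review. Here’s a minimal sketch of the idea: a wrapper that appends every prediction to an audit log. The function names, log format, and model version string are my own assumptions for illustration; the Act doesn’t prescribe a particular format:

```python
import hashlib
import json
import time
from typing import Any, Callable

def with_audit_log(model_version: str,
                   predict: Callable[[Any], Any],
                   log_path: str = "audit.jsonl") -> Callable[[Any], Any]:
    """Wrap a predict function so every call is appended to a JSON-lines
    audit log: timestamp, model version, a hash of the input (so raw,
    possibly personal data stays out of the log), and the output."""
    def logged_predict(features: Any) -> Any:
        result = predict(features)
        record = {
            "ts": time.time(),
            "model_version": model_version,
            "input_sha256": hashlib.sha256(
                json.dumps(features, sort_keys=True, default=str).encode()
            ).hexdigest(),
            "output": result,
        }
        with open(log_path, "a") as f:
            f.write(json.dumps(record, default=str) + "\n")
        return result
    return logged_predict

# Usage with a stand-in model:
score = with_audit_log("credit-scorer-1.4.2", lambda features: 0.42)
print(score({"income": 52000, "age": 34}))
```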

Failure to meet these requirements could lead to being cut off from the European market as well as steep fines. Fines scale with the severity of the violation (and with company size), from 7.5 million euros or 1% of annual turnover for supplying incorrect information to regulators, up to 35 million euros or 7% of annual turnover for violating the ban on prohibited AI practices.

Despite the extra work the EU AI Act creates, it comes with benefits as well. For example, it provides for the creation of regulatory sandboxes: controlled environments where you can develop and test AI systems under regulatory supervision before bringing them to market.

And getting back to first principles, the EU AI Act aims to make AI less vulnerable, protecting your business, your clients, and the public. It does this by mandating secure AI development practices, regular security assessments, and transparency and accountability in AI systems. With the complexity of today’s multi-cloud environments, though, that’s easier said than done.

What are some best practices for compliance with the EU AI Act?

Here’s what you need to do to make sure you’re following the requirements of the EU AI Act:

  • Conduct thorough risk assessments, including mapping your entire environment to flag shadow data and especially shadow AI (see the discovery sketch after this list).

  • Put in place solid data protection measures such as a data security posture management (DSPM) solution to protect any data being used by the AI.

  • Ensure transparency and explainability, meaning it should be possible to interpret and understand the results of AI systems.

  • Update and maintain technical documentation as part of a commitment to transparency.

  • Establish effective governance and oversight, including automated compliance checkers that will quickly flag potential problems.
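On the discovery point: you can’t assess what you can’t see, so the first step is an inventory of the AI services actually running in your cloud. As a taste of what that involves, here’s a minimal sketch using boto3 that enumerates Amazon SageMaker endpoints across regions. It assumes AWS credentials are configured, and a real inventory would span many more services, accounts, and clouds:

```python
import boto3

def find_sagemaker_endpoints() -> list:
    """Enumerate SageMaker endpoints in every region visible to this
    account: a starting point for spotting AI services nobody registered."""
    findings = []
    ec2 = boto3.client("ec2", region_name="us-east-1")
    regions = [r["RegionName"] for r in ec2.describe_regions()["Regions"]]
    for region in regions:
        sagemaker = boto3.client("sagemaker", region_name=region)
        for page in sagemaker.get_paginator("list_endpoints").paginate():
            for endpoint in page["Endpoints"]:
                findings.append({
                    "region": region,
                    "endpoint": endpoint["EndpointName"],
                    "status": endpoint["EndpointStatus"],
                })
    return findings

if __name__ == "__main__":
    for finding in find_sagemaker_endpoints():
        print(finding)
```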

According to a KPMG report, one of the best ways to drastically cut the work involved in testing and documentation is “leveraging automated threat detection, analysis, and intelligence solutions.” They recommend an automated solution to handle “compliance mapping, obligations tracking, and workflow management.”

Those kinds of tools and more can be found in a cloud native application protection platform, or CNAPP. That makes finding a CNAPP that works for your organization one of the best decisions you can make when it comes to simplifying EU AI Act compliance.

The future of AI regulation

Other jurisdictions beyond the EU are likely to introduce similar laws. In the U.S., for example, states and federal agencies are exploring AI regulation, especially around autonomous vehicles and healthcare. China, Canada, Japan, the UK, and others are also considering regulatory measures. The good news is that once you’re compliant with the EU AI Act, meeting other standards will probably be easier.

If you’re already using AI solutions, or moving in that direction, the responsibility for ethical, trustworthy AI use lies on your shoulders. You need to mitigate potential harms to end users, to your own organization, and to third parties.

At a time when most tools are scrambling to catch up with the leading edge of AI, there is one platform that already has your back: Wiz. The Wiz platform offers an AI-SPM solution that provides a unified approach to AI security risk and compliance under a single, easy-to-manage umbrella. 

Wiz rolls out quickly and agentlessly to give you:

  • Full-stack visibility into AI pipelines

  • Quick detection of AI misconfigurations

  • Protection for sensitive AI training data

Wiz quickly discovers all your AI services and technologies, like Amazon SageMaker, OpenAI, TensorFlow Hub, and more, and continuously monitors them for security and compliance issues.

The EU AI Act aims to ensure that the nightmare scenario—technology run amok, like in those science fiction movies—never comes true. And Wiz keeps your business secure from today’s biggest risks with comprehensive visibility, risk assessment, and security measures all behind a single pane of glass.

Don’t let risk or regulation stop you from achieving all the benefits that AI offers your business. Get a demo of Wiz to see how easy it is to secure AI across your entire SDLC.

