Making Sense of Kubernetes Initial Access Vectors Part 1 – Control Plane
Explore Kubernetes control plane access vectors, risks, and security strategies to prevent unauthorized access and protect your clusters from potential threats.
Kubernetes (K8s) is a complex distributed system designed to run containerized workloads in a scalable and manageable way. It’s now the default method for deploying workloads in cloud-native environments. Thanks to its flexibility and extensibility, Kubernetes has proven effective in handling a variety of workloads (e.g., Batch, HPC, GPU-based), which has contributed to its widespread adoption.
Kubernetes security has been racing to keep up with the rapid evolution of the ecosystem. Common threat types for Kubernetes include crypto-mining, resource-jacking, data theft, cloud pivots, intellectual property theft, and more. However, all of these attacks rely on one thing: successful initial access.
Another, less obvious, motivating factor: our 2023 Kubernetes Security Report showed that once attackers gain initial access, there’s ample opportunity for lateral movement and privilege escalation within a cluster. That makes securing initial access even more critical.
Understanding the potential access vectors to your clusters and knowing how to detect the initial attack is crucial. Interested? Keep reading.
Taxonomy
There’s a gap in systematized practices for Kubernetes initial access. Frameworks like the MITRE Containers Matrix and Microsoft Threat Matrix for Kubernetes list some techniques for initial access, but don’t dive into deeper analysis or risk prioritization. In this blog series, we introduce a taxonomy of initial access vectors.
The main pillars represent four major access domains: control plane, data plane, cloud access, and CI/CD. Cloud access and CI/CD are outside the scope of this article; here, we’ll focus on Kubernetes itself. Each domain is further divided into smaller vectors, each annotated with its most prominent risks. For example, misconfiguration is the main risk tied to exposing Kubelet API access. Below, we’ll focus on control plane access, breaking down initial access vectors, associated risks, and suggested protection and detection techniques. In the next post, we’ll dive into data plane access.
Kubernetes API Access
The K8s API server isn’t just the communication hub between control plane components; it’s also the frontend for external user interaction with the cluster. This makes it the primary method for accessing and managing K8s clusters.
Unauthenticated Access
Access to the K8s API is managed through authentication and Role-Based Access Control (RBAC). The key concept here is how requests are mapped to identities. By default, K8s assigns every unauthenticated request the username system:anonymous, which belongs to the system:unauthenticated group. When anonymous access is disabled (as in AKS), unauthenticated users receive a 401 error when attempting to access API endpoints. When it’s enabled (as in EKS and GKE), unauthenticated users get only the minimal permissions bound to that group, such as retrieving version and health information.
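For illustration, this is what such an anonymous probe looks like (the API server address is a placeholder); with anonymous access enabled these non-resource endpoints typically answer with 200, while a cluster that disables it returns 401:

    # Probe the API server without presenting any credentials (placeholder address).
    curl -sk https://203.0.113.10:6443/version
    curl -sk https://203.0.113.10:6443/healthz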
However, this opens the door to potential RBAC misconfigurations. For instance, a lazy cluster admin might temporarily assign excessive permissions to the system:unauthenticated group because a dev team needs access but doesn’t have the proper credentials. Unauthenticated access has been the cause of multiple incidents.
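A sketch of what such an over-permissive binding could look like (the binding name is hypothetical, and this is an anti-pattern shown only to illustrate the risk):

    # Grants the built-in "view" ClusterRole to every unauthenticated request.
    # Do not apply this; it is shown only to make the misconfiguration concrete.
    cat <<'EOF' | kubectl apply -f -
    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRoleBinding
    metadata:
      name: temp-unauthenticated-view   # hypothetical name
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: ClusterRole
      name: view
    subjects:
    - apiGroup: rbac.authorization.k8s.io
      kind: Group
      name: system:unauthenticated
    EOF

Periodically auditing ClusterRoleBindings for subjects that reference system:unauthenticated or system:anonymous is a cheap check that catches this class of mistake.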
Kubeconfig
The Kubeconfig file defines how kubectl authenticates against the API server. It has three main sections: clusters (cluster endpoint and certificate authority), users (authentication data), and contexts (cluster/user pairs). We recommend treating the Kubeconfig file as a file containing secrets, especially the users section, since it holds the authentication data.
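A minimal sketch of that layout (every value below is a placeholder) shows why the users section deserves the same handling as any other secret:

    # Illustrative kubeconfig with the three sections; all values are placeholders.
    cat <<'EOF' > kubeconfig-example.yaml
    apiVersion: v1
    kind: Config
    clusters:
    - name: demo
      cluster:
        server: https://203.0.113.10:6443
        certificate-authority-data: <base64 CA certificate>
    contexts:
    - name: demo-admin@demo
      context:
        cluster: demo
        user: demo-admin
    users:
    - name: demo-admin
      user:
        client-certificate-data: <base64 client certificate>   # secret material
        client-key-data: <base64 private key>                  # secret material
    EOF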
In EKS and GKE, the users section instead holds instructions to run a local kubectl exec plugin (the aws CLI, aws-iam-authenticator, or gke-gcloud-auth-plugin) that fetches the necessary cloud credentials on demand, so the credentials themselves are never stored in the kubeconfig file. This has two advantages: (1) it reduces the risk of file exposure, and (2) it moves authentication and authorization into the cloud identity domain, which is easier to audit and monitor for threats.
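As a rough sketch of that flow on EKS (the cluster name is a placeholder and the generated entry is abridged):

    # Writes a kubeconfig entry that defers to the AWS CLI at connection time,
    # so no long-lived credential lands in the file.
    aws eks update-kubeconfig --name demo-cluster

    # Resulting users entry (abridged): kubectl runs "aws eks get-token" per request.
    #   users:
    #   - name: arn:aws:eks:...:cluster/demo-cluster
    #     user:
    #       exec:
    #         apiVersion: client.authentication.k8s.io/v1beta1
    #         command: aws
    #         args: ["eks", "get-token", "--cluster-name", "demo-cluster"]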
In AKS, however, the authentication material depends on the cluster’s authentication/authorization method. When a new AKS cluster is created, the user must choose the method, and the default option is “Local accounts with Kubernetes RBAC”. With that default, the Kubeconfig file includes a user token and client certificate data that can be used to authenticate. With a certificate valid for two years, a malicious actor could retain long-term access to the cluster if the certificate isn’t rotated.
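To check your own exposure, one option (assuming openssl is available and the certificate sits in the first users entry) is to decode the embedded certificate and inspect its validity window:

    # Print the subject and expiry date of the client certificate embedded in the
    # active kubeconfig; adjust the index if the file holds multiple users entries.
    kubectl config view --raw \
      -o jsonpath='{.users[0].user.client-certificate-data}' \
      | base64 -d | openssl x509 -noout -subject -enddate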
Note: Of course, leaked cloud credentials by themselves pose a very serious risk to K8s environments regardless of the cloud provider. However, this is out of scope for this article and is marked generally as “Cloud access” in the matrix above.
This risk also applies to self-hosted clusters, which support a broader range of authentication options (e.g., token files, SSH credentials, custom commands). Regardless of the setup, treat your Kubeconfig file like a sensitive document. Never check it into public repositories; this remains one of the most common ways credentials are leaked. A simple GitHub search can reveal access credentials in plain text.
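One practical mitigation is to scan working trees for this kind of material before pushing. A minimal sketch, assuming a generic secret scanner such as gitleaks is installed (v8 syntax):

    # Scan the current repository for committed secrets; kubeconfig fields such as
    # client-key-data should never reach a public repository.
    gitleaks detect --source . --verbose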
Kubectl proxy
Kubectl proxy is a lesser-known way to access the K8s API server, typically used for temporary diagnostics or debugging. Running kubectl proxy --port=8080 creates a temporary proxy server between localhost and the K8s API server. Any API call to localhost:8080 is forwarded as an HTTP request authorized as the user who ran the command. Attackers can leverage this unauthenticated connection if they have local or network access (even SSRF will do) to the machine running the proxy – a developer laptop or a jump host VM.
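A quick sketch of why this matters (the port and the queried resource are just examples): once the proxy is up, anything that can reach the port inherits the proxy owner's permissions without presenting any credential of its own.

    # Start a local, unauthenticated HTTP door to the API server; every forwarded
    # request is authorized as whoever ran this command.
    kubectl proxy --port=8080 &

    # Any local process (or an SSRF reaching this port) now acts with those permissions:
    curl -s http://localhost:8080/api/v1/namespaces/kube-system/secrets | head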
Fortunately, this is not a common misconfiguration. Shodan reports fewer than 100 endpoints returning status 200 with the relevant Kubernetes header:
Kubelet API Access
The Kubelet is the node agent that connects each worker node to the control plane, and by default it’s only accessible to internal components on the same node. However, it’s possible to expose the Kubelet API externally for debugging purposes. This behavior is controlled with the --anonymous-auth and --authorization-mode parameters (or the equivalent fields in the kubelet configuration file). One of the worst misconfigurations is the combination --anonymous-auth=true / --authorization-mode=AlwaysAllow, which leaves the Kubelet API open to anonymous access. Initial access via the Kubelet API is not visible in the K8s audit log; detecting it requires a node-level sensor or VPC flow logs. This is one of the misconfigurations targeted by TeamTNT, but it’s now rare in production systems and is typically associated with honeypot clusters.
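A simple way to check a node for this condition (the IP is a placeholder; 10250 is the Kubelet's default authenticated port) is to probe it without credentials:

    # A 401/403 means anonymous access is rejected; a 200 indicates the
    # anonymous-auth / AlwaysAllow misconfiguration described above
    # (drop -o /dev/null to see the returned pod specs).
    curl -sk -o /dev/null -w '%{http_code}\n' https://203.0.113.20:10250/pods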
Management Interfaces
Management interfaces like K8s Dashboard, Kubeflow, Argo Workflows, and others offer additional cluster access. A typical misconfiguration is leaving the dashboard unauthenticated and exposed to the public internet. The risk here depends on the dashboard’s capabilities and permissions.
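A cheap self-audit (illustrative; the name patterns are just common examples) is to look for Services and Ingresses that publish such UIs outside the cluster:

    # LoadBalancer/NodePort Services and public Ingresses matching management UIs
    # deserve a closer look.
    kubectl get svc,ingress --all-namespaces | grep -Ei 'dashboard|kubeflow|argo'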
These misconfigurations were more common several years ago, when the defaults were not secure (the most notorious example remains the Tesla dashboard compromise disclosed in 2018). Nowadays, dashboards must be explicitly installed, with authentication enabled by default. For example, Shodan reports around 4,000 exposed Kubernetes dashboards. However, the default installation mode requires authentication, so attackers expecting an easy win will face a sign-in screen:
In summary: Kubernetes access via control plane
Kubernetes is a complex, distributed system with multiple access vectors. Each one, if left unsecured, can lead to full cluster compromise. In this post, we shared a taxonomy of Kubernetes control plane initial access vectors, aiming to help operators and security professionals better secure their clusters. We’ve also outlined detection and prevention strategies for each vector.
We’ve tried to balance depth and breadth in our coverage. In the next post, we’ll do the same for data plane access vectors.