Lateral movement risks in the cloud and how to prevent them – Part 3: from compromised cloud resource to Kubernetes cluster takeover

In this third blog post, we will discuss lateral movement risks from the cloud to Kubernetes. We will explain attacker TTPs and outline best practices for cloud builders and defenders to help secure their cloud environments and mitigate risk.

7-minute read

In our previous blog post in this series covering lateral movement in the cloud, we introduced lateral movement techniques from the Kubernetes to the cloud domain and examined how they differ between the major CSPs depending on their default cluster configurations and their integrations with IAM/AAD identities. 

In this third blog post, we will analyze lateral movement in the opposite direction, from the cloud to Kubernetes, and examine how potential attack vectors vary between CSPs. Finally, we will suggest best practices that organizations can utilize to significantly reduce or prevent critical lateral movement risks.

Cloud-to-Kubernetes attacker TTPs

Adversaries in the cloud can leverage several techniques and functionalities to conduct lateral movement attacks from cloud environments to managed Kubernetes clusters. These include—but are not limited to—exploiting IAM cloud keys, kubeconfig files, and container registry images.

1. IAM cloud keys

Wiz data shows that approximately 15% of cloud environments that utilize managed Kubernetes clusters have at least one cloud workload (e.g. a VM, serverless function, bucket, or web app) storing a cleartext long-term cloud key associated with an IAM/AAD user that holds highly privileged K8s permissions. 

Malicious actors with compromised IAM cloud keys for an identity with managed cluster access could easily generate a working kubeconfig file: a YAML file containing the cluster details, namespaces, users, and authentication parameters for a specific Kubernetes cluster. Armed with this file and the cluster’s name and region, they could successfully authenticate to the cluster and access its resources. 

However, the scope of their access would of course be contingent on the identity’s permissions, the nature of which varies between CSPs:

  • AWS cloud keys

An attacker who has compromised AWS cloud keys (e.g. user access keys) might be able to generate a kubeconfig file and authenticate to an EKS cluster, depending on the identity with which the keys are associated (a minimal sketch of this flow follows the list below): 

- The identity that created the cluster is automatically granted system:masters permissions in the cluster’s role-based access control (RBAC) configuration in the EKS control plane. In other words, this identity has full administrative cluster privileges that could lead to total EKS cluster takeover if compromised. 

- Any other IAM identity must be manually added to the cluster’s aws-auth ConfigMap, where it is granted access to resources according to the RBAC permissions mapped to it. 
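To illustrate the flow described above, here is a minimal sketch of how compromised access keys could be turned into a working kubeconfig, assuming boto3 and PyYAML are available; the cluster name and region are illustrative.

```python
# Minimal sketch: build a kubeconfig for an EKS cluster from IAM access keys.
# Assumes boto3 and PyYAML are installed; cluster name and region are illustrative.
import boto3
import yaml

CLUSTER_NAME = "prod-cluster"   # hypothetical cluster name
REGION = "us-east-1"

eks = boto3.client("eks", region_name=REGION)  # uses the compromised/ambient keys
cluster = eks.describe_cluster(name=CLUSTER_NAME)["cluster"]

kubeconfig = {
    "apiVersion": "v1",
    "kind": "Config",
    "clusters": [{
        "name": CLUSTER_NAME,
        "cluster": {
            "server": cluster["endpoint"],
            "certificate-authority-data": cluster["certificateAuthority"]["data"],
        },
    }],
    "users": [{
        "name": CLUSTER_NAME,
        "user": {
            # kubectl shells out to the AWS CLI to mint a short-lived token;
            # authorization is then decided by the cluster's aws-auth RBAC mapping.
            "exec": {
                "apiVersion": "client.authentication.k8s.io/v1beta1",
                "command": "aws",
                "args": ["eks", "get-token", "--cluster-name", CLUSTER_NAME],
            },
        },
    }],
    "contexts": [{
        "name": CLUSTER_NAME,
        "context": {"cluster": CLUSTER_NAME, "user": CLUSTER_NAME},
    }],
    "current-context": CLUSTER_NAME,
}

print(yaml.safe_dump(kubeconfig))
```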

  • GCP cloud keys

An attacker that has compromised GCP user cloud keys (i.e. OAuth access and refresh tokens generated during the initial authentication) might be able to generate a kubeconfig file and authenticate to a GKE cluster in the tenant, depending on the user’s permissions. 

Similarly, compromised long-term service account private keys may also allow attackers to authenticate to a GKE cluster. 
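As an illustration, the sketch below uses a leaked service account key to retrieve the two values a kubeconfig needs: the cluster endpoint and its CA certificate. It assumes the google-cloud-container package is installed; the project, location, cluster, and key-file names are all illustrative.

```python
# Minimal sketch: use a (compromised) service account key to look up a GKE
# cluster's endpoint and CA certificate, the two values needed for a kubeconfig.
from google.oauth2 import service_account
from google.cloud import container_v1

creds = service_account.Credentials.from_service_account_file(
    "sa-key.json",  # hypothetical long-term private key found on a workload
    scopes=["https://www.googleapis.com/auth/cloud-platform"],
)

client = container_v1.ClusterManagerClient(credentials=creds)
cluster = client.get_cluster(
    name="projects/my-project/locations/us-central1/clusters/prod-cluster"
)

# With the endpoint and CA data, an attacker can assemble a kubeconfig and
# authenticate; what they can then do is governed by IAM and in-cluster RBAC.
print("server:", f"https://{cluster.endpoint}")
print("certificate-authority-data:", cluster.master_auth.cluster_ca_certificate)
```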

Inside a GKE cluster, GCP manages authorization through two integrated mechanisms: the Kubernetes-native RBAC model and IAM. 

Whereas RBAC controls access at the cluster and namespace levels, IAM operates at the project level. An identity therefore needs sufficient permissions on at least one of these levels to access specific resources in the cluster. Consequently, a compromised identity with admin IAM privileges at the organization/folder/project level could potentially have full admin privileges in every cluster at that level, unless RBAC restrictions have been applied. 

  • Azure cloud keys

As in GCP, an adversary who has acquired AAD user cloud keys or long-term service principal credentials could abuse their permissions to obtain a kubeconfig file and authenticate to an AKS cluster in the tenant. The impact of such a compromise, however, depends on whether the cluster uses local accounts or AAD for authentication and authorization.

- Local accounts with Kubernetes RBAC: 

Since the AKS cluster is not integrated with AAD, users and admins receive a client certificate that is local to the cluster. This certificate is created with a common name (CN) of masterclient and belongs to the system:masters group, which is bound to the cluster-admin role cluster-wide. As a result, even if the compromised AAD identity only has the minimum permissions required to list the cluster user credentials, the attacker would still be able to generate the cluster’s kubeconfig file and authenticate with full cluster-admin access (a detection sketch follows below). 

- AAD authentication with Kubernetes RBAC and AAD authentication with Azure RBAC:  

Because these two methods use AAD for authentication, the generated kubeconfig file contains no credentials; the first API call to the cluster’s API server prompts the user to log in and authenticate via a browser, significantly reducing the risk of cluster compromise. 

Our data shows that approximately 87% of cloud environments that utilize AKS clusters are using local accounts with Kubernetes RBAC, the least secure authentication and authorization method.
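Given how common this configuration is, defenders may want to flag kubeconfig files that embed local-account certificates. Below is a minimal sketch, assuming PyYAML and the cryptography package are installed; the file path is illustrative.

```python
# Minimal sketch: flag an AKS kubeconfig that uses local accounts by checking
# for embedded client certificates and the telltale "masterclient" common name.
import base64
import yaml
from cryptography import x509
from cryptography.x509.oid import NameOID

with open("kubeconfig.yaml") as f:   # hypothetical file found on a workload
    config = yaml.safe_load(f)

for entry in config.get("users", []):
    cert_b64 = entry.get("user", {}).get("client-certificate-data")
    if not cert_b64:
        continue  # no embedded certificate; likely AAD-based authentication
    cert = x509.load_pem_x509_certificate(base64.b64decode(cert_b64))
    cn = cert.subject.get_attributes_for_oid(NameOID.COMMON_NAME)[0].value
    print(f"user {entry['name']!r}: embedded client certificate, CN={cn}")
    if cn == "masterclient":
        print("  -> local-account credential; cluster-admin via system:masters")
```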

2. Kubeconfig files

If an attacker finds a kubeconfig file for an AKS managed cluster, that file alone may suffice to compromise the cluster, even without access to AAD credentials. 

As previously mentioned, Azure offers cluster authentication via local accounts or AAD. If a cluster has been configured to authenticate through local accounts, the kubeconfig file contains plaintext certificate material (client-certificate-data and client-key-data), allowing a malicious actor to authenticate with full cluster-admin access by default. 

On the other hand, if an AKS cluster has been configured to authenticate through AAD, the kubeconfig file is updated after the user first logs in to include a client ID and a pair of access and refresh tokens in plaintext. The refresh token is valid by default for 90 days, allowing an adversary with access to the kubeconfig file to generate new access tokens and authenticate to the cluster throughout this period. 

Unlike AKS, kubeconfig files for EKS and GKE managed clusters require the use of cloud keys associated with IAM identities in order to authenticate to the cluster, so an attacker hoping to compromise a managed cluster would also require access to the relevant IAM cloud keys.
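Because kubeconfig files can carry these long-lived secrets, it is worth sweeping workloads for them. The sketch below is a minimal audit, assuming PyYAML is installed; it looks for both embedded certificate material and cached tokens under the legacy "azure" auth-provider layout described above.

```python
# Minimal sketch: scan kubeconfig files for embedded long-lived credentials --
# plaintext client certificates/keys or cached AAD access/refresh tokens.
import sys
import yaml

SENSITIVE_USER_FIELDS = ("client-certificate-data", "client-key-data", "token")
SENSITIVE_PROVIDER_FIELDS = ("access-token", "refresh-token")

def audit(path: str) -> None:
    with open(path) as f:
        config = yaml.safe_load(f) or {}
    for entry in config.get("users", []):
        user = entry.get("user") or {}
        hits = [k for k in SENSITIVE_USER_FIELDS if user.get(k)]
        provider = (user.get("auth-provider") or {}).get("config") or {}
        hits += [k for k in SENSITIVE_PROVIDER_FIELDS if provider.get(k)]
        if hits:
            print(f"{path}: user {entry.get('name')!r} embeds {', '.join(hits)}")

if __name__ == "__main__":
    for p in sys.argv[1:]:   # e.g. python audit_kubeconfig.py ~/.kube/config
        audit(p)
```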

3. Container registry images

Many containers running in managed Kubernetes clusters use images pulled from a cloud container registry service such as ECR, GCR, GAR (the successor of GCR), or ACR.

If an attacker has successfully compromised one of these registries and has the ability to push and overwrite existing images, they could inject a malicious payload with a backdoor into a legitimate Docker image. Then, once a new pod is created that references the same image name and tag, the now-malicious image is pulled from the compromised registry, and the resulting container executes the payload, installing a backdoor that gives the attacker a foothold inside the cluster.
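One way to catch this class of tampering is to record an image's digest at build time and verify it before rollout, since overwriting a tag changes the digest. Below is a minimal detection sketch using the Docker SDK for Python, assuming the local daemon is authenticated to the registry; the image name and digest are illustrative.

```python
# Minimal sketch: detect tag tampering by comparing a registry tag's current
# digest against a known-good digest recorded at build/deploy time.
import docker

IMAGE = "123456789012.dkr.ecr.us-east-1.amazonaws.com/web-app:v1.4"  # illustrative
EXPECTED_DIGEST = "sha256:9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08"

client = docker.from_env()
registry_data = client.images.get_registry_data(IMAGE)  # queries the registry

if registry_data.id != EXPECTED_DIGEST:
    print(f"ALERT: {IMAGE} was overwritten; digest is now {registry_data.id}")
else:
    print("digest matches the pinned value")
```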

Recommended best practices

Here are 3 key cloud-to-K8s best practices that any organization should implement in its environment to mitigate the risk of a lateral movement attack:

  1. Avoid storing long-term cloud keys in workloads

    Instead of using long-term cloud keys inside your cloud workloads, consider attaching IAM roles/service accounts/managed identities to these workloads and defining the minimum required permissions for them. This will ensure that only temporary credentials are being used (generated and rotated automatically by the IMDS), reducing the risk of persistence. 
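As an illustration of the AWS flavor of this advice, the following sketch shows a workload calling EKS with no stored keys at all: boto3 resolves short-lived credentials from the attached role via IMDS. The region is illustrative.

```python
# Minimal sketch (AWS flavor): with an IAM role attached to the instance, the
# SDK resolves short-lived credentials from IMDS automatically -- no access
# keys on disk or in code.
import boto3

# No credentials passed anywhere: boto3 walks its provider chain and ends at
# the instance metadata service, which serves auto-rotated temporary keys.
eks = boto3.client("eks", region_name="us-east-1")
print(eks.list_clusters()["clusters"])

# The temporary credentials themselves can be inspected (and seen to rotate):
session = boto3.Session()
frozen = session.get_credentials().get_frozen_credentials()
print("access key:", frozen.access_key[:4] + "...", "(temporary, auto-rotated)")
```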

  2. Remove kubeconfig files from publicly exposed workloads

    To apply strict security procedures, remove any kubeconfig files from publicly exposed cloud workloads such as VMs, serverless functions, and containers. This is especially important in AKS (where a compromised kubeconfig file is usually enough to authenticate to the cluster) and less so in EKS/GKE; note, however, that any cloud workload used to authenticate to a managed K8s cluster will hold both the kubeconfig file and the relevant cloud keys on its disk. 

    Consider configuring your K8s API server endpoint as private—making it accessible only through a VPC/VNet—and ensure that the workload in the VPC/VNet that is used for authentication (and containing the kubeconfig file) has a strictly configured security group that restricts access to specific IP addresses. This will mitigate the risk of workload and kubeconfig compromise. 
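For example, on EKS the API endpoint can be switched to private-only access with a single configuration update. The sketch below uses boto3 with an illustrative cluster name and region; AKS and GKE expose equivalent private-cluster settings.

```python
# Minimal sketch: switch an EKS cluster's API endpoint to private-only access
# so it is reachable solely from inside the VPC.
import boto3

eks = boto3.client("eks", region_name="us-east-1")
resp = eks.update_cluster_config(
    name="prod-cluster",                 # illustrative cluster name
    resourcesVpcConfig={
        "endpointPublicAccess": False,   # drop the public API endpoint
        "endpointPrivateAccess": True,   # keep in-VPC access working
    },
)
print("update status:", resp["update"]["status"])
```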

  3. Restrict access to container registries

    Make sure to restrict access to your container registries to prevent unauthenticated or untrusted principals from pulling your container images and, more importantly, from deleting or overwriting existing images being used by your cluster’s pods. 

    • In ECR, define a strict resource-based policy for each repository and avoid wildcards in the “Principal” field. In addition, consider enabling the “Tag immutability” flag for each repository to prevent existing images from being overwritten under the same tag (see the sketch after this list). 

    • In ACR, the container registry accepts connections from hosts on any network by default. Consider limiting network access by applying strict firewall rules or a private endpoint connection.

    • In GCR, a bucket named “artifacts.[PROJECT-ID].appspot.com” stores the images pushed to the domain “gcr.io”. This registry storage bucket is only created when a Docker image is pushed. Ensure this bucket is not publicly exposed by never granting access to the allUsers or allAuthenticatedUsers principals. If you would like to block public access to all storage buckets in an organization/folder/project, consider applying the `storage.publicAccessPrevention` organization policy at the relevant level. 

    • In GAR, each repository is a separate resource, so you can apply a different IAM policy to each one using Artifact Registry roles (rather than the Cloud Storage roles used by GCR), with a clear separation between repository administrator and repository user roles. Make sure to apply strict IAM policies to each repository and avoid exposing it to the internet by not granting access to the allUsers and allAuthenticatedUsers principals.
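As referenced in the ECR bullet above, here is a minimal sketch of both recommendations, assuming boto3; the account ID, role name, and repository name are illustrative.

```python
# Minimal sketch: pin an ECR repository policy to explicit principals -- no
# wildcards -- and turn on tag immutability.
import json
import boto3

ecr = boto3.client("ecr", region_name="us-east-1")
REPO = "web-app"   # illustrative repository name

policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "AllowPullFromClusterNodesOnly",
        "Effect": "Allow",
        # Explicit principal instead of "*": only the node role may pull.
        "Principal": {"AWS": "arn:aws:iam::123456789012:role/eks-node-role"},
        "Action": [
            "ecr:GetDownloadUrlForLayer",
            "ecr:BatchGetImage",
            "ecr:BatchCheckLayerAvailability",
        ],
    }],
}

ecr.set_repository_policy(repositoryName=REPO, policyText=json.dumps(policy))

# Prevent existing tags from being silently overwritten with malicious images.
ecr.put_image_tag_mutability(repositoryName=REPO, imageTagMutability="IMMUTABLE")
```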

Summary

In this third blog post, we covered various lateral movement strategies from cloud environments to managed Kubernetes clusters, including the use of compromised cloud keys, misconfigured cloud container registries, and kubeconfig files. We presented our research findings and recommended three best practices, such as adopting secure storage practices for kubeconfig files and limiting access to cloud container registries, to reduce the potential attack surface in cloud environments. 

In the next post in this series, we will delve into the topic of lateral movement tactics from traditional on-prem environments to cloud domains. We will outline common attacker tactics and techniques, and provide additional best practices to enhance organizational security and minimize the impact of potential security breaches. 

This blog post was written by Wiz Research, as part of our ongoing mission to analyze threats to the cloud, build mechanisms that prevent and detect them, and fortify cloud security strategies. 
