A Quick Refresher on Kubernetes Security Best Practices

Key takeaways from Kubernetes security best practices:
  • Kubernetes security requires proactive risk management. Workloads are dynamic and ephemeral, which makes traditional security approaches ineffective. To counter this, implement security controls at every layer, from container images to runtime monitoring, to minimize exposure.

  • Enforcing security policies prevents misconfigurations and vulnerabilities. You can use Kubernetes-native tools like admission controllers, RBAC, and network policies to control access, validate workloads, and secure network traffic.

  • Integrating security into the development pipeline strengthens protection. Be sure to scan container images for vulnerabilities before deployment, enforce image signing, and use IaC security checks to maintain a secure environment from build to runtime.

Kubernetes security protects your clusters from cyber threats, unauthorized access, and misconfigurations that could expose your workloads. Since businesses rely on Kubernetes to deploy and manage containers at scale, they gain flexibility and automation but also face new security risks. Unlike traditional infrastructure, where security measures were more static, Kubernetes environments are dynamic and thus require a different approach to keep workloads safe.

Without proper security measures, attackers can exploit misconfigured role-based access control (RBAC), exposed API servers, or unpatched container images to compromise your nodes, containers, or the control plane. 

Kubernetes security isn't just about firewalls or access controls—it involves securing the entire container lifecycle, from image scanning and network policies to runtime monitoring and incident response. To strengthen your Kubernetes environment, you’ll need a layered approach that combines proactive defenses, continuous monitoring, and strict access management to minimize risks.

12 Kubernetes security best practices to swear by

Securing Kubernetes isn’t a one-step process—it requires locking down every part of your cluster. 

Kubernetes is made up of different components, including the API server and worker nodes, and each needs its own security measures. If you leave any part exposed, attackers can find a way in.

Here are 12 best practices to keep your Kubernetes environment secure:

  1. Enable RBAC

  2. Use namespaces properly

  3. Use proper, verified container images

  4. Implement runtime container forensics

  5. Perform continuous upgrades

  6. Practice proper logging

  7. Practice isolation

  8. Secure your Kubernetes Secrets

  9. Enable audit logging

  10. Apply CIS benchmarks

  11. Integrate Kubernetes with a third-party authentication provider

  12. Apply a security context to a pod or container

1. Enable RBAC

RBAC is enabled by default in Kubernetes, but it's easy to grant overly permissive roles when setting up a cluster. 

For example, imagine if someone is trying to test a few features in a local Kubernetes cluster, but they accidentally run those commands in production because they didn’t change the cluster’s context. This could destroy your entire production environment.
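One simple guardrail is to check, and explicitly switch, the kubectl context before running anything destructive. A sketch (the context name kind-local-dev is illustrative):

```shell
# Show which cluster kubectl currently points at
kubectl config current-context

# Explicitly switch to the local test cluster before experimenting
kubectl config use-context kind-local-dev
```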

You should always follow the minimum-permission model for service accounts and users. Kubernetes makes this straightforward, down to the level of individual resources and actions (verbs):

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: default
  name: pod-reader
rules:
- apiGroups: [""] # "" indicates the core API group
  resources: ["pods"]
  verbs: ["get", "watch", "list"]

A subject bound to the “pod-reader” role can perform only the “get,” “watch,” and “list” actions on pods, and nothing else. Here is a broader role scoped to an application namespace:

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: app1
  name: app1-developer
rules:
- apiGroups: ["apps"]
  resources: ["deployments", "replicasets"]
  verbs: ["get", "list", "watch", "create", "update", "patch", "delete"] # All actions in app tier
- apiGroups: [""] # Core API group for basic objects
  resources: ["pods", "services", "configmaps"]
  verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
- apiGroups: ["networking.k8s.io"] # Ingress Rules
  resources: ["ingresses"]
  verbs: ["get", "list", "watch"] # Limited permissions, prevent modifications for security

This grants developers in the app1 namespace full access to deployments, replicasets, pods, services, and configmaps. For security reasons, ingresses are read-only, which limits the attack surface and prevents attackers from maliciously rerouting traffic or causing a denial of service.

2. Use namespaces properly

You can use Kubernetes namespaces to isolate developers from each other's workloads and give them their own space to explore. This looks somewhat like multi-tenancy from a developer’s perspective.

Pro tip

User namespaces help prevent container escape by mapping container root (UID 0) to an unprivileged UID on the host. This limits the impact of a compromised container, reducing security risks. When enabled, user namespaces can allocate a separate range of UIDs (often 64K) per container to avoid overlap with the host.

Learn more → How user namespaces improve workload security
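As a sketch, on clusters where the user-namespaces feature is available (beta in recent Kubernetes releases), a pod can opt in via the hostUsers field:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: userns-demo
spec:
  hostUsers: false   # run this pod's containers in a separate user namespace
  containers:
  - name: app
    image: nginx
```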

When you implement namespaces and RBAC properly, you gain granular control over which resources a developer can access in specific environments. This setup easily maps a user to a designated namespace, allowing them to run applications and make changes only within that namespace.

Roles and role binding allow you to control access. The role defines what a user can do, while role binding maps the role to the groups the user belongs to, like this:

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: app1-developer
  namespace: app1
rules:
  # Allow full management of deployments and replicasets
  - apiGroups: ["apps"]
    resources: ["deployments", "replicasets"]
    verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]


  # Allow full control over pods, services, and configmaps
  - apiGroups: [""]
    resources: ["pods", "services", "configmaps"]
    verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]


  # Read-only access to ingress (preventing security risks)
  - apiGroups: ["networking.k8s.io"]
    resources: ["ingresses"]
    verbs: ["get", "list", "watch"]


Note: Remove any wildcard (`*`) permissions that grant unrestricted access.

The role above grants developers the ability to manage workloads (deployments, replicasets, pods, services, and configmaps) within the app1 namespace while enforcing the principle of least privilege (PoLP). Developers have read-only access to ingress resources to prevent unintended networking modifications.
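The role binding mentioned above is a separate object that attaches the role to a user or group. A sketch (the group name app1-devs is illustrative and would come from your identity provider):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: app1-developer-binding
  namespace: app1
subjects:
- kind: Group
  name: app1-devs               # illustrative group name
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: app1-developer
  apiGroup: rbac.authorization.k8s.io
```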

3. Use verified container images

Kubernetes lets you run your workloads without the hassle of managing underlying machines, autoscaling, and configuration management. But what if a workload has security flaws? This can cause multiple issues from inside the cluster, so it’s critical to make sure that you get your images from verified sources and always use updated images.

Adding image scanning in your build phase will help with this. Additionally, make sure to deprecate older image versions and always use the latest base images to build your container image. 

You can also leverage a scanner like Trivy to check images in your build phase:

trivy image --severity HIGH,CRITICAL myregistry.azurecr.io/myapp:v1.0

4. Implement runtime container forensics

Scanning runtime containers is tricky, but most issues happen during runtime and often slip past the image scanning process. Wiz container forensics helps by scanning your containers for these problems and detecting privilege escalations, lateral movement, and signs that an attacker gained persistence in your environment.

Pro tip

Taking a workload snapshot is not always sufficient. Some runtime events, such as fileless malware, leave little or no trace on disk. Furthermore, threat actors often attempt to erase any traces from the disk after carrying out malicious activity. Tracking runtime events on containers, nodes, and VMs therefore enables a comprehensive workload-related investigation.

Learn more → Intro to cloud container forensics

5. Perform continuous upgrades

Continuously upgrading your Kubernetes clusters to the newest version patches them against newly discovered security threats. You can upgrade one node pool at a time with zero downtime and without impacting the entire cluster, which greatly improves workload availability during the process.

6. Practice proper logging

Logs help you see what’s happening behind the scenes and track the actions you take. With logs, you can trace any incident back to the events that caused it.

This is key during security breaches since you can track down exactly what an attacker did to exploit your system and block their actions or system access. To do this, simply look at your API server logs, kubelet logs, and other object logs. 

To proactively detect security threats, route Kubernetes audit logs and system events into Falco for real-time analysis using Helm. Falco monitors logs and detects anomalous behavior, such as privilege escalation attempts or unexpected network activity, providing security teams with critical alerts:

helm repo add falcosecurity https://falcosecurity.github.io/charts
helm install falco falcosecurity/falco --namespace falco --create-namespace

Beyond simple log collection, you can:

  • Monitor API server, kubelet, and object logs to track suspicious activity.

  • Use Falco to generate security alerts when it detects unusual behavior.

  • Integrate log pipelines with SIEM tools (like Splunk, Elastic Security, and AWS GuardDuty) for deeper threat analysis.

You can take logging a step further by making sure you’re not just collecting data but also actively monitoring it for threats. Here are a few ways to make your logs more effective:

How to look at API server logs and events

In general, these logs are available at /var/log/kube-apiserver.log, but the exact location varies between clouds, and each cloud provider has its own way to access logs and events. In AWS, logs are available via CloudTrail, and in GKE, API server logs are available through Cloud Logging.

How to look at node logs and events

You'll find node logs or kubelet logs in /var/log. To access them, connect directly to the node and use a simple tail command.

Starting in Kubernetes version 1.27, node log query APIs let you view node logs. Below is an example of how to do this:

kubectl get --raw "/api/v1/nodes/node-1.example/proxy/logs/?query=kubelet"

How to look at pod logs and events

Pod logs and events are easy to view, as they have long been available directly via kubectl commands. Here’s an example:

kubectl logs -f pod_name -n namespace_name

7. Practice isolation 

One of the best ways to add an extra layer of security is by isolating Kubernetes worker nodes and the API server from public networks. With Kubernetes, you can achieve this by keeping all components within your private network and limiting external access.

Here’s how to tighten security:

  • Use private networks for worker nodes and the API server.

  • Implement network policies to control pod-to-pod communication.

  • Use bastion hosts or a VPN for secure cluster access.

  • Block east-west traffic by enforcing a default deny policy in production.

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-all-ingress
  namespace: <your-namespace>
spec:
  podSelector: {}
  policyTypes:
  - Ingress

This blocks all traffic by default, reinforcing PoLP and ensuring that only explicitly allowed communication occurs.

8. Secure your Kubernetes Secrets

Kubernetes Secrets include sensitive data like passwords, tokens, and encryption keys that are essential for applications to function properly. If an attacker compromises these secrets, they can access sensitive data, gain unauthorized entry, cause data breaches, and trigger other security incidents. 

These are a few Kubernetes security best practices for secrets:

  • Use external secret managers: Store and manage secrets securely with tools like HashiCorp Vault or AWS Secrets Manager.

  • Encrypt secrets at rest: Enable Kubernetes Encryption at Rest to protect stored secrets.

  • Control access with RBAC: Restrict who can view or modify secrets using Kubernetes RBAC.

  • Avoid hardcoding secrets: Never embed secrets directly in code. Instead, reference them securely from secret objects.

  • Rotate secrets regularly: Change secrets periodically to limit exposure from leaked credentials, ideally automating rotation through an external secrets manager.
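To illustrate the points above, a pod can reference a Secret at runtime instead of hardcoding the value (the image, Secret, and key names below are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: app-pod
spec:
  containers:
  - name: app
    image: myapp:1.0                # illustrative image
    env:
    - name: DB_PASSWORD
      valueFrom:
        secretKeyRef:
          name: db-credentials      # illustrative Secret name
          key: password
```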

9. Enable audit logging

Kubernetes audit logging records all API requests, which helps you track changes and detect security threats. By default, Kubernetes stores audit logs in files on the API server:

  • On-premises clusters: Kubernetes saves audit logs in /var/log/kubernetes/audit.log, allowing administrators to review API activity directly on the server.

  • Cloud providers: AWS uses CloudTrail, while GKE logs audits in Cloud Logging.

Here are some ways you can configure your centralized audit logging:

Enable audit logging

Add these flags to the API server:

kube-apiserver \
   --audit-policy-file=/etc/kubernetes/audit-policy.yaml \
   --audit-log-path=/var/log/kubernetes/audit.log

Define an audit policy

Create an audit-policy.yaml file to specify log levels:

apiVersion: audit.k8s.io/v1
kind: Policy
rules:
- level: Metadata
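A single-rule policy like this logs metadata for every request. As a sketch, a slightly richer policy can cut noise and keep secret payloads out of the log (rules are evaluated top to bottom, and the first match wins):

```yaml
apiVersion: audit.k8s.io/v1
kind: Policy
rules:
# Skip read-only requests to reduce log volume
- level: None
  verbs: ["get", "list", "watch"]
# Record only metadata for Secrets so secret values never land in the log
- level: Metadata
  resources:
  - group: ""
    resources: ["secrets"]
# Log full request and response bodies for everything else
- level: RequestResponse
```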

Send logs to a central system

Use tools like:

  • Fluentd or Filebeat to collect logs

  • Elasticsearch and Kibana for searching and visualization

  • SIEM solutions (like Splunk or AWS GuardDuty) for security analysis

10. Apply CIS benchmarks

Kubernetes hardening strengthens security by enforcing proper controls to prevent vulnerabilities and leaks. Multiple certifications, including PCI DSS, HIPAA, and NIST, require these rules and guidelines. Organizations must maintain these controls at the network, access, and configuration levels, but they typically manage them at the agent or node level.

You can enhance your Kubernetes security by following CIS benchmarks, which provide industry-standard guidelines for securing your cluster. To simplify compliance, Wiz offers a CIS benchmarking solution that automatically assesses your cloud environment—just connect your cloud to Wiz, and it takes care of the rest.

11. Integrate Kubernetes with a third-party authentication provider

Using a third-party authentication provider strengthens your Kubernetes security by centralizing access control and enforcing stricter authentication policies.

Instead of managing credentials within Kubernetes, you can integrate it with an identity provider like OAuth, OpenID Connect (OIDC), or LDAP. This allows you to apply multi-factor authentication, single sign-on, and RBAC at scale. For example, with OIDC, you can connect Kubernetes to an existing identity provider like Google, Okta, or Azure AD, which reduces the risk of exposed credentials while simplifying user management.

To set up OIDC authentication, configure your Kubernetes API server with the --oidc-issuer-url, --oidc-client-id, and other relevant flags:

--oidc-issuer-url=https://accounts.google.com
--oidc-client-id=<your-client-id>
--oidc-username-claim=email

You should also ensure that your provider supports token-based authentication and properly validates users before granting access. This integration improves security and simplifies user management across multiple clusters, making it easier to enforce policies and audit access logs.
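Once OIDC is configured, standard RBAC objects can refer to identities from the provider. A sketch (the user email is illustrative; `view` is the built-in read-only ClusterRole):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: oidc-cluster-viewer
subjects:
- kind: User
  name: alice@example.com          # matches the --oidc-username-claim (email) value
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: view                       # built-in read-only ClusterRole
  apiGroup: rbac.authorization.k8s.io
```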

12. Apply a security context to a pod or container

A security context in Kubernetes controls a container’s privileges and access, which reduces security risks. Always configure it to limit what a container can do. 

For example, setting runAsNonRoot: true prevents containers from running as the root user and reduces the risk of privilege escalation. Similarly, readOnlyRootFilesystem: true blocks unauthorized file modifications, which makes it harder for attackers to exploit vulnerabilities.

apiVersion: v1
kind: Pod
metadata:
  name: secure-pod
spec:
  securityContext:
    runAsNonRoot: true
    fsGroup: 2000
  containers:
  - name: secure-container
    image: nginx:1.19.1
    securityContext:
      allowPrivilegeEscalation: false
      readOnlyRootFilesystem: true
      capabilities:
        drop:
        - ALL

This configuration prevents privilege escalation, runs the container as a non-root user, and makes the root filesystem read-only, significantly reducing the attack surface.

If you're running containers on Linux nodes, enforce user namespaces where your container runtime supports them and drop unnecessary privileges with capabilities (drop: ["ALL"]). Additionally, set allowPrivilegeEscalation: false to prevent containers from gaining higher permissions inside the cluster. These settings will help you secure endpoints and minimize your attack surface without affecting workload performance.

How to secure Kubernetes architecture

Understanding how to enforce Kubernetes security and which components require testing is crucial. Here are the key components to address to maintain your system’s security:

Control-plane components

The control plane is the brain of Kubernetes and is responsible for cluster management and orchestration.

The control plane consists of a few components, the first of which is etcd, a consistent database that stores all configurations and secrets. The API server can talk to etcd, while every other component talks to the API server for information or updates—no other component can talk to the etcd servers. Other key components include the scheduler, which assigns pods to nodes, and the controllers, which reconcile cluster state and, in cloud environments, create resources like disks and nodes.

 Hardening the API server and etcd is critical for preventing unauthorized access and data breaches:

# Secure API server configuration
kube-apiserver \
  --anonymous-auth=false \
  --audit-log-path=/var/log/kubernetes/audit.log \
  --audit-log-maxage=30 \
  --authorization-mode=Node,RBAC \
  --client-ca-file=/etc/kubernetes/pki/ca.crt \
  --enable-admission-plugins=NodeRestriction,PodSecurity \
  --encryption-provider-config=/etc/kubernetes/encryption-config.yaml \
  --tls-cert-file=/etc/kubernetes/pki/apiserver.crt \
  --tls-private-key-file=/etc/kubernetes/pki/apiserver.key \
  --service-account-key-file=/etc/kubernetes/pki/sa.pub

This configuration enforces strict authentication, RBAC authorization, and audit logging to track access. Next, secure etcd with TLS encryption and client authentication to prevent unauthorized access:

# Secure etcd configuration
etcd \
  --cert-file=/etc/kubernetes/pki/etcd/server.crt \
  --key-file=/etc/kubernetes/pki/etcd/server.key \
  --client-cert-auth=true \
  --trusted-ca-file=/etc/kubernetes/pki/etcd/ca.crt \
  --auto-tls=false \
  --peer-cert-file=/etc/kubernetes/pki/etcd/peer.crt \
  --peer-key-file=/etc/kubernetes/pki/etcd/peer.key \
  --peer-client-cert-auth=true \
  --peer-trusted-ca-file=/etc/kubernetes/pki/etcd/ca.crt \
  --data-dir=/var/lib/etcd

At the network level, you can restrict traffic with a NetworkPolicy that only allows communication from trusted pods and namespaces:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-internal
spec:
  podSelector: {} # Applies to all pods in the namespace
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector: {} # Allow from all pods in the same namespace
    - namespaceSelector:
        matchLabels:
          purpose: production # And from all pods in namespaces labeled "purpose: production"

Kubernetes admission controllers enforce security policies before deploying workloads. The PodSecurity admission plug-in applies these policies to restrict pod configurations, preventing unsafe workloads from running.
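For example, with Pod Security Admission (stable since Kubernetes 1.25), labeling a namespace is enough to enforce a policy level:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: production
  labels:
    pod-security.kubernetes.io/enforce: restricted  # reject pods that violate the Restricted profile
    pod-security.kubernetes.io/warn: restricted     # also surface warnings at admission time
```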

Worker-node components 

These components include the kubelet, which launches workloads, makes sure they're running, and restarts containers that die. The kube-proxy sets the proper iptables rules to route traffic to services. Lastly, container network interface plug-ins connect pods to the network and make traffic routable.

It’s important to first secure the API server and etcd. If exposed, these two can wreak havoc on your system because they provide direct access to your Kubernetes configurations.

After this, you should ensure that your network-level configurations are in good shape so that no one can access your Kubernetes network. This includes using network policies to control which pods can communicate with each other.

Shared responsibility 

Shared responsibility is an important step in maintaining your whole system’s security. 

Building software is a collaborative effort that involves many individuals from different teams, each of whom handles different aspects of software engineering. Because of this, it’s important for teams to understand their role and responsibilities concerning Kubernetes infrastructure security:

| Team | Responsibilities | Key security actions |
| --- | --- | --- |
| Cloud provider | Physical infrastructure security, Kubernetes control plane management, and API server security | Enable and configure audit logging for the EKS cluster |
| DevOps/platform team | Cluster configuration, network policies, RBAC setup, node security, and upgrade management | Implement network policies to control pod-to-pod communication |
| Security team | Vulnerability scanning, audit logging, incident response, and compliance monitoring | Integrate vulnerability scanners into the CI/CD pipeline for container images |
| Developers | Secure coding practices, container image security, application-level security, and secret management | Use AWS KMS for encrypting data at rest and enforce TLS for data in transit |

Clearly understanding how different teams’ changes to containers, images, networking, and deployment can impact security will help you improve your overall security posture.

Kubernetes security challenges

Kubernetes introduces unique security challenges due to its dynamic and ephemeral nature. Because of this, traditional security methods that rely on static infrastructure no longer apply, which makes it essential to adapt security strategies to Kubernetes environments.

Here are a few challenges to focus on:

Managing security in volatile workloads

Most Kubernetes workloads are volatile—pods can be deleted and replaced at any time. This makes it difficult to apply persistent security patches using traditional methods like shell scripts or Ansible playbooks, which work well for VMs but not for Kubernetes.

To address this, you should:

  • Apply security patches when designing pod specifications and building container images.

  • Implement security gating in your CI/CD pipeline to enforce security checks before deploying workloads.

  • Store Kubernetes configurations in IaC pipelines with proper security validations to ensure consistency and compliance.
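As a sketch of the gating step (the registry, tag variable, and use of Trivy are illustrative), a CI job can fail the build when the scanner finds serious vulnerabilities:

```shell
# Fail the pipeline (non-zero exit code) on HIGH or CRITICAL findings
trivy image --exit-code 1 --severity HIGH,CRITICAL \
  myregistry.example.com/myapp:${CI_COMMIT_TAG}
```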

Container security challenges

Containers introduce specific security risks because they use the same kernel as the host system. If an attacker escapes a compromised container, they can then gain access to the underlying node and other running workloads. 

To reduce this risk, isolate workloads using separate namespaces, apply network policies to restrict unnecessary communication between containers, and use seccomp profiles to limit system calls. Additionally, avoid running containers with --privileged mode, as this grants unnecessary access to the host system.
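A minimal sketch of those pod-level mitigations, using the runtime's default seccomp profile:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: seccomp-demo
spec:
  securityContext:
    seccompProfile:
      type: RuntimeDefault    # apply the container runtime's default syscall filter
  containers:
  - name: app
    image: nginx
    securityContext:
      privileged: false       # never grant --privileged-style host access
```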

Another challenge is securing container images since a vulnerable or outdated image can expose your entire workload to attacks. Here are some ways to combat this issue:

  • Always scan images for vulnerabilities before deploying them and pull only from trusted registries.

  • Use immutable tags to control updates and prevent unverified deployments.

  • Sign images with tools like Cosign or Docker Content Trust to verify their integrity before running them in your cluster.
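With Cosign, signing and verification are each a single command (the key file and image names are illustrative; keyless signing is also supported):

```shell
# Sign the image with a private key generated by `cosign generate-key-pair`
cosign sign --key cosign.key myregistry.example.com/myapp:v1.0

# Verify the signature before admitting the image into the cluster
cosign verify --key cosign.pub myregistry.example.com/myapp:v1.0
```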

Network security challenges

Kubernetes networks are highly dynamic, which makes it difficult to enforce strict security controls. By default, all pods can communicate freely, but this increases the risk of lateral movement if an attacker gains access. 

To reduce this risk, implement NetworkPolicies to restrict traffic between pods based on specific rules, such as namespace isolation or port restrictions: 

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-to-backend
spec:
  podSelector:
    matchLabels:
      app: backend
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: frontend
    ports:
    - protocol: TCP
      port: 8080

You can also encrypt in-transit data with mutual TLS to prevent traffic interception and conduct regular audits of network endpoints to identify potential vulnerabilities. Applying strict firewall rules further minimizes exposure to external threats, creating a more secure environment.

How Wiz helps enforce Kubernetes security best practices

Kubernetes security isn’t just about following best practices—it’s about having the right tools to enforce them effectively. Managing access, securing images, controlling network traffic, and monitoring for threats all require a proactive approach. 

That’s where Wiz comes in.

Wiz’s cloud-native solution simplifies Kubernetes security by scanning containers, hosts, and clusters for vulnerabilities while enforcing critical safeguards. It also detects misconfigurations, excessive permissions, and exposed secrets before they become serious threats. With real-time monitoring and automated risk prioritization, Wiz helps you stay ahead of attacks and maintain a strong security posture.

Ready to take the next step? Download the Kubernetes Security Best Practices Cheat Sheet today for practical insights to protect your workloads.

Detect real-time malicious behavior in Kubernetes clusters

Learn why CISOs at the fastest growing companies choose Wiz to secure their Kubernetes workloads.

Get a demo 
