Kubernetes as a service

Kubernetes as a service (KaaS) is a model in which hyperscalers like AWS, GCP, and Azure run the Kubernetes control plane for you, so you can spin up a cluster and start deploying workloads within minutes. In this article, we’ll explore the major providers and discuss the pros and cons of leveraging KaaS.

Let’s get started. 

What is Kubernetes as a service?

With Kubernetes as a service, you get all of Kubernetes’ features (like running containerized applications, configuration management, and load balancing) without worrying about running and managing its critical components.

In other words, you don’t have to make sure the core Kubernetes services are up and running all the time. Instead, most day-2 operations, such as control plane management, upgrades, and patching, are handled by your service provider.

Still, there are KaaS components that you have to manage. So when it comes to Kubernetes as a service, who is responsible for what?

When you look at Kubernetes architecture, components fall into two major groups: the control plane and the nodes. The control plane consists of the API server, etcd, the controllers, and the scheduler. Node components are the kubelet, kube-proxy, and the network plugins that implement the Container Network Interface (CNI).

Out of these, all the control plane components are managed by your Kubernetes provider. Node components can be managed by the service provider or by you, depending on whether you choose to run your own nodes. And while patches are provided to you, you still have to apply them and take care of worker node upgrades.
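
To make that split concrete, here is a minimal sketch using the official Kubernetes Python client (an assumption; any client or kubectl works) that lists the node-level details, such as kubelet and OS versions, which typically remain your responsibility on a managed cluster. The control plane, by contrast, never shows up as machines you can log into or patch.

```python
# Sketch: list the worker nodes you may still be responsible for patching.
# Assumes `pip install kubernetes` and a kubeconfig generated by your provider.
from kubernetes import client, config

config.load_kube_config()  # e.g., the kubeconfig written by gcloud/eksctl/az

for node in client.CoreV1Api().list_node().items:
    info = node.status.node_info
    print(f"{node.metadata.name}: kubelet {info.kubelet_version}, "
          f"runtime {info.container_runtime_version}, OS {info.os_image}")
```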

Next, let’s discuss the advantages and disadvantages of Kubernetes as a service.

Advantages of Kubernetes as a service

Reduction in management and developer time

With KaaS, you spawn a Kubernetes cluster and start deploying your workload instantly. By handing over the responsibility of managing major components of Kubernetes, you save a lot of engineering time and effort that can be spent on product development instead. This is a massive benefit: After all, products are the crux of your business.
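
As a rough illustration of how little stands between “cluster created” and “workload running,” here is a minimal sketch using the Kubernetes Python client; the deployment name and nginx image are hypothetical placeholders, and it assumes your provider-generated kubeconfig is already in place.

```python
# Sketch: deploy a (hypothetical) workload to a freshly provisioned managed
# cluster. Assumes `pip install kubernetes` and a working kubeconfig.
from kubernetes import client, config

config.load_kube_config()

deployment = client.V1Deployment(
    metadata=client.V1ObjectMeta(name="hello-web"),
    spec=client.V1DeploymentSpec(
        replicas=2,
        selector=client.V1LabelSelector(match_labels={"app": "hello-web"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "hello-web"}),
            spec=client.V1PodSpec(
                containers=[
                    client.V1Container(
                        name="web",
                        image="nginx:1.25",
                        ports=[client.V1ContainerPort(container_port=80)],
                    )
                ]
            ),
        ),
    ),
)

client.AppsV1Api().create_namespaced_deployment(namespace="default", body=deployment)
print("Deployment created; no control plane setup required on your side.")
```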

Faster time to production

Setting up a Kubernetes cluster from scratch can be a daunting task, and installing all the different components takes time. After installation, you have to make sure the components are available and reliable, back up your data and configurations, and continuously scan the whole cluster to make sure it’s secure (which we’ll discuss below). On top of that, you have to take special precautions with etcd, because that’s where all the cluster state is stored. With Kubernetes as a service, by contrast, you can bring up a working cluster in minutes to hours without worrying about any of that complexity.

Better security and access control

With so many components, you have to work hard to make sure a self-managed Kubernetes cluster is secure. Tasks like ensuring that only the API server can talk to etcd, making the API server private, keeping all the cluster nodes private, and continuously patching the servers for security risks and access control all take time and energy that are better spent elsewhere. 

Offloading these tasks to a cloud provider makes a lot of sense, especially for small teams. For the cloud provider, these practices are standard, and you get a lot of security features out of the box. With KaaS, access control is typically integrated with the provider’s identity and access management (IAM) rather than something you wire up yourself (which can introduce errors and risk). Audit logging is yet another feature you get out of the box.
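
Because cluster access is wired into the provider’s IAM, you can ask the cluster’s own RBAC layer what your current identity is allowed to do. Below is a minimal sketch with the Kubernetes Python client; the verb and resource are just example values, not a prescribed check.

```python
# Sketch: ask the cluster whether the current identity may perform an action.
# Assumes `pip install kubernetes` and kubeconfig access to the cluster.
from kubernetes import client, config

config.load_kube_config()
authz = client.AuthorizationV1Api()

review = client.V1SelfSubjectAccessReview(
    spec=client.V1SelfSubjectAccessReviewSpec(
        resource_attributes=client.V1ResourceAttributes(
            verb="delete", resource="pods", namespace="default"
        )
    )
)
result = authz.create_self_subject_access_review(body=review)
print("Allowed to delete pods in 'default':", result.status.allowed)
```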

Scalability

With the power of the cloud, you can scale Kubernetes clusters just by adding new nodes and virtual machines. Yet there are a few things to consider: Are your API server and etcd getting more requests than they can handle? As clusters grow, they eventually will, and with KaaS your cloud provider scales those control plane components automatically; without it, you would need to scale them yourself. Another scalability concern is DNS: with a lot of services, you can end up overloading kube-dns (or CoreDNS, depending on your cluster).

Keep in mind that cloud provider quotas can be a bottleneck to scaling. To stay ahead of this, track your usage and request quota increases before you hit the limits.
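
A few of these signals are easy to watch from the cluster itself. The sketch below (Kubernetes Python client; the cluster DNS deployment name varies by provider, so “coredns” is an assumption) counts nodes, pending pods, and DNS replicas as rough scaling indicators.

```python
# Sketch: quick scaling signals. Assumes kubeconfig access; the cluster DNS
# deployment is named "coredns" on EKS/AKS but "kube-dns" on GKE, so adjust.
from kubernetes import client, config
from kubernetes.client.rest import ApiException

config.load_kube_config()
core, apps = client.CoreV1Api(), client.AppsV1Api()

nodes = core.list_node().items
pending = core.list_pod_for_all_namespaces(field_selector="status.phase=Pending").items
print(f"nodes: {len(nodes)}, pending pods (a rough capacity signal): {len(pending)}")

try:
    dns = apps.read_namespaced_deployment("coredns", "kube-system")
    print(f"cluster DNS replicas ready: {dns.status.ready_replicas}/{dns.spec.replicas}")
except ApiException:
    print("DNS deployment not found under that name; check your provider's naming.")
```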

Faster upgrades

The Kubernetes community is very active and ships a new minor version roughly every four months. In practice, there are two kinds of upgrades you’ll deal with: patch releases and minor version upgrades. Patch releases don’t change the APIs, which makes them fairly easy for application owners to roll out.

That’s not the case with minor version upgrades. Between minor versions, Kubernetes APIs can be deprecated and removed, meaning app owners have to spend time and energy on migration, whether you are using Kubernetes as a service or not. Before triggering any such upgrade, make sure your applications are ready for the new Kubernetes APIs.

To prepare for these upgrades on a self-managed cluster, back up etcd, upgrade the API servers first, and then upgrade the other control plane components (the controller manager and the scheduler). Finally, update the node components and the underlying VMs. (In some cases, you may also have to update the CNI plugin along with the kubelet and kube-proxy.)

With managed Kubernetes, the vendor is responsible for testing the release and taking care of the control plane when you trigger the upgrade, saving a lot of engineering effort. Simply put, the upgrade process with Kubernetes as a service is far simpler: you don’t have to worry about control plane backups or availability while upgrading.
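
One piece that stays with you even on KaaS is version skew: after the provider upgrades the control plane, your node pools may lag behind. Here is a minimal sketch with the Kubernetes Python client; the allowed skew value is an assumption to adjust against the skew policy for your Kubernetes version.

```python
# Sketch: flag nodes whose kubelet lags too far behind the (managed) control
# plane after an upgrade. MAX_SKEW is an assumption; check the Kubernetes
# version skew policy for your release.
import re
from kubernetes import client, config

config.load_kube_config()

server = client.VersionApi().get_code()                  # e.g. minor == "29+"
server_minor = int(re.match(r"\d+", server.minor).group())

MAX_SKEW = 2

for node in client.CoreV1Api().list_node().items:
    kubelet = node.status.node_info.kubelet_version      # e.g. "v1.27.9-eks-..."
    kubelet_minor = int(kubelet.split(".")[1])
    skew = server_minor - kubelet_minor
    note = "  <-- schedule a node upgrade" if skew > MAX_SKEW else ""
    print(f"{node.metadata.name}: kubelet {kubelet}, skew {skew}{note}")
```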

Lower cost of ownership and predictable cost

With most of the responsibility offloaded to vendors, the total cost of ownership decreases, and the cost of running the Kubernetes cluster is predictable with managed offerings, considering you don’t have to deal with surprises like hardware failures. 

Disadvantages of Kubernetes as a service

Less control compared to self-managed clusters

With KaaS, you have less control over components like API servers, etcd, and controllers. This means you can’t make changes to these components when you want to fine-tune them for performance or scalability—or when you want to patch them for better security. 

Efficiency and cost

When you run Kubernetes as a service in the cloud, with your worker nodes also in the cloud, you don’t get the underlying machine to yourself: other customers’ virtual machines can run on the same hardware, and a hypervisor sits on top of the machine to abstract it and present each customer with their own VMs. That virtualization layer and the shared tenancy consume resources, so you’ll never get quite the same performance as you would from a bare-metal server. With the extra efficiency of bare metal, you may be able to save money on infrastructure.

Bare-metal machines can be customized to boost performance, and they can deliver noticeably better performance than a comparable cloud VM. Still, giving up that performance may be worth it when weighed against the management overhead: running bare-metal worker nodes is not a straightforward task, so you have to decide how to balance speed, performance, and cost.

Dependence on vendors for patches

Software needs patching to keep it secure from vulnerabilities. There are organizations and groups that regularly publish information about newly found vulnerabilities, like the Common Vulnerabilities and Exposures (CVE) List. Whenever there’s a new vulnerability identified related to Kubernetes, you’ll likely have to patch your Kubernetes services. But since your services are managed by vendors, you are now dependent on them for these patches. 

Vendor lock-in

It’s always an option to switch to another cloud provider because containers can run anywhere. However, one problem is the cost of migration. When leveraging a cloud provider, it’s normal to use tools and services native to that cloud provider. If you have to spend a lot of engineering effort migrating from one vendor to another, the process becomes much less appealing, and you might get stuck with one vendor.

Major KaaS providers

Google Kubernetes Engine (GKE)

Google Cloud Platform was the first vendor to introduce Kubernetes as a service, so GKE is the most mature of these platforms. Google Kubernetes Engine (GKE) provides features like deployment, management, and monitoring of your workloads out of the box. It also has mature cluster autoscaling, a capability that is still missing or less polished on some other platforms. Another upside? GKE charges you based on your compute usage.

Amazon Elastic Kubernetes Service (EKS)

Although GKE came first, Amazon’s Kubernetes service is the most widely used Kubernetes offering. Because AWS already had a lot of market share, many organizations seamlessly adopted EKS. EKS is a very reliable platform and offers most of the native Kubernetes features. 

EKS integrates natively with the broader AWS ecosystem (IAM, VPC networking, load balancing, CloudWatch, and so on), which makes it very easy to adopt. Just like GKE, EKS charges you based on compute usage.

Azure Kubernetes Service (AKS)

You’d expect the fastest-growing cloud platform to have a Kubernetes offering, and it does. Beyond providing users with all of Kubernetes’ capabilities, AKS has native integrations for deployment using the Azure stack. AKS also offers monitoring that integrates with Azure Monitor, so you can keep an eye on your Kubernetes clusters. As with GKE and EKS, pricing is based on compute usage.

VMware Tanzu

Tanzu is one of VMware’s premium offerings. With Tanzu, you get all the features of Kubernetes, but what stands out is that Tanzu supports running your Kubernetes clusters on multiple cloud platforms, thus reducing the vendor lock-in problem. And, of course, Tanzu integrates very well with VMware native networking services. Tanzu is the most expensive of all the options we’ll discuss, with pricing based on the number of CPUs. 

Red Hat OpenShift

Red Hat is well known for its security-focused development. OpenShift is not vanilla Kubernetes but a distribution that adds a lot of extra features on top of it, especially from a security standpoint. It can be deployed on any cloud and includes a built-in image registry, multi-tenancy features, and strong support for CI/CD. Like Tanzu, pricing is based on the number of CPUs.

Beyond what we’ve discussed, there are many other vendors that provide Kubernetes as a service, and almost all of them offer all of Kubernetes’ robust features. To decide what works best for you, consider each vendor’s integration with native cloud services, the implementation of access control, the ease of deployment, integration with CI/CD pipelines, and cost.

Conclusion

Kubernetes as a service is a boon for organizations with small DevOps teams because it lets you run many Kubernetes clusters with minimal management overhead. At the same time, it can work against you if you are running huge clusters, because of the costs and performance overhead. If performance and customization are your main concerns and you have a good-sized team, go for self-managed deployments. In all other cases, rely on KaaS providers for easy and secure Kubernetes deployments.

One last thing to bear in mind: Even with KaaS, Kubernetes security remains paramount. That's where Wiz comes in. Protect the complete life cycle of your containers, from build to runtime, while having complete visibility of your Kubernetes and cloud environments to prevent potential risks. Use the Wiz Sensor and CDR for a defense-in-depth strategy, with real-time detection of container, host, Kubernetes, and cloud events to quickly identify and react to malicious behavior. Want to see for yourself? Schedule a demo of our industry-leading platform today.

