It goes without saying that Kubernetes has become the de facto container orchestrator, and it is evolving to the point where people are calling it the operating system of the cloud. Some say this is already the case, and not just for the cloud but also for on-premises and data center environments.
Over the last couple of years, a race has been going on involving all major cloud providers, including AWS, Azure, GCP, IBM, and even DigitalOcean. The main goal for all of them? To run Kubernetes workloads on their own managed Kubernetes service. The winner is yet to be declared, and the race itself is tight and dominated by the big players in the market.
While most managed Kubernetes services have been around for fewer than three years, one offering was well ahead of the pack. Given that Kubernetes was originally developed at Google, it's no surprise that Google Kubernetes Engine (GKE), released in 2015, is ahead of its competitors by roughly three years; its largest rivals, AKS and EKS, both launched in 2018. Thanks to this, GKE has a head start that is still noticeable today in the platform's maturity and in all the features supported within the GCP ecosystem.
As the author of the fantastic book “Cloud Native with Kubernetes” put it in one of the chapters: “The smartest thing to do when running Kubernetes workloads is to pay a bit extra and let the smartest people handle and manage the entire complexity of Kubernetes.”
Well, that's not precisely what he said, but it was something along those lines.
What the author was trying to say is that implementing Kubernetes is tough. Trust me on this one. It's not just the implementation, but also the maintenance and the work of achieving stability and reliability for all the running components, as well as for the applications running in the different types of Kubernetes objects (Deployments, DaemonSets, etc.).
OK, but what is Kubernetes as a Service?
It's a method of managing the infrastructure, networking, storage, and even updates of all the Kubernetes components, ensuring rapid delivery, scalability, and accessibility. In the simplest terms, it's an evolution of Kubernetes technology that delivers easy deployment, optimized operations, and “infinite” scalability of Kubernetes, out of the box.
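To make “out of the box” concrete, here is a sketch of what standing up a managed cluster looks like with a single CLI call, using `eksctl` for EKS (the cluster name, region, and sizes below are hypothetical placeholders):

```shell
# Provision a managed EKS cluster; the control plane, worker nodes,
# VPC, and IAM roles are all created for you.
eksctl create cluster \
  --name demo-cluster \
  --region us-east-1 \
  --nodes 2 \
  --node-type t3.medium

# When it finishes, kubectl is already configured against the new cluster:
kubectl get nodes
```

Compare that with a self-managed install, where the control plane, etcd, networking, and node bootstrapping are all on you.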
As a Certified Kubernetes Administrator, I've had the opportunity to work on multiple Kubernetes setups in different flavors, from local Raspberry Pis and on-premises servers to multi-cloud federated clusters and multi-region clusters. Let me tell you, there are a lot of benefits to using KaaS, but in my experience these are my “favorites”:
Improved security
Deploying a Kubernetes cluster can be easy once we understand the service delivery ecosystem and data center configuration. However, a naive setup can leave open doors for external malicious attacks.
With KaaS, we get policy-based user management, so infrastructure users receive only the permissions their business needs require. A typical self-managed Kubernetes installation in the cloud exposes the API server to the internet, inviting attackers to break in. With KaaS, some vendors provide options to hide the Kubernetes API server and restrict who can actually reach it.
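As an illustration of hiding the API server, here is a sketch using GKE (cluster name and CIDR ranges are placeholders; other providers expose similar private-cluster options):

```shell
# Create a GKE cluster whose API server has no public endpoint
# and whose nodes receive no public IPs.
gcloud container clusters create private-demo \
  --enable-ip-alias \
  --enable-private-nodes \
  --enable-private-endpoint \
  --master-ipv4-cidr 172.16.0.0/28

# Alternatively, keep a public endpoint but allow only known
# source ranges to reach the API server.
gcloud container clusters update private-demo \
  --enable-master-authorized-networks \
  --master-authorized-networks 203.0.113.0/24
```

Either way, the control plane stops being an internet-facing target, which is exactly the kind of hardening that is tedious to build yourself on a self-managed cluster.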
Scaling of infrastructure
With KaaS in place, IT infrastructure can scale rapidly thanks to the high level of automation the service provides. This saves a lot of time and bandwidth for the admin (DevOps) teams. Autoscaling in the cloud is one of the best things that ever happened to the IT world!
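Scaling in a managed cluster typically happens at two levels: the provider adds or removes worker nodes, and Kubernetes adds or removes pod replicas. A sketch of both, with GKE as the example provider (names and bounds are placeholders):

```shell
# Node-level autoscaling: let the provider grow and shrink the
# node pool between a minimum and maximum size.
gcloud container clusters create scaling-demo \
  --num-nodes 3 \
  --enable-autoscaling \
  --min-nodes 1 \
  --max-nodes 10

# Pod-level autoscaling: scale a Deployment named "web" based on
# average CPU utilization across its pods.
kubectl autoscale deployment web \
  --cpu-percent=70 --min=2 --max=10
```

The pod-level part is standard Kubernetes; the node-level part is where KaaS shines, since the provider handles provisioning the new machines for you.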
Serverless computing
Major cloud providers are now moving towards serverless services, and Kubernetes is not far behind. In a nutshell, this is the ability to run serverless compute for containers: it removes the need to provision and manage servers (in Kubernetes lingo, worker nodes), lets you specify and pay for resources per application, and improves security through application isolation by design.
Increased operational efficiency
For me, this is the most important one: the entire Kubernetes system is managed, updated, and provisioned by the cloud provider. You, as the DevOps engineer or the person in charge of the Kubernetes clusters, can rely on built-in automated provisioning, repair, monitoring, and scaling, which at the end of the day minimizes infrastructure maintenance of all sorts.
As previously mentioned, there are already several well-established vendors in the KaaS market, and competition among their managed Kubernetes services is fierce.
#1 Amazon Elastic Kubernetes Service (EKS) is a managed Kubernetes offering from AWS. It is based on key AWS building blocks such as Amazon Elastic Compute Cloud (EC2), Amazon EBS, Amazon Virtual Private Cloud (VPC), and Identity and Access Management (IAM). AWS also has an integrated container registry in Amazon Elastic Container Registry (ECR), which provides secure, low-latency access to container images.
#2 Azure Kubernetes Service (AKS) is a managed container orchestration platform available in Microsoft Azure. AKS is built on top of Azure VMs, Azure Storage, Virtual Networking, and Azure Monitoring. Azure Container Registry (ACR) may be provisioned in the same resource group as the AKS cluster for private access to container images.
#3 Google Kubernetes Engine (GKE) takes advantage of Google Cloud Platform’s core services, such as Compute Engine, Persistent Disks, VPC, and Stackdriver. Google has made Kubernetes available in on-premises environments and other public cloud platforms through the Anthos service. Anthos is a control plane that runs in GCP but manages the life cycle of clusters launched in hybrid and multi-cloud environments. Google extended GKE with a managed Istio service for service mesh capabilities. It also offers Cloud Run, a serverless platform — based on the Knative open source project — to run containers without launching clusters.
#4 IBM Cloud Kubernetes Service (IKS) is a managed offering to create Kubernetes clusters of compute hosts to deploy and manage containerized applications on IBM Cloud. As a certified provider, IKS provides intelligent scheduling, self-healing, horizontal scaling, service discovery and load balancing, automated rollouts and rollbacks, and secret and configuration management for modern applications. IBM is one of the few cloud providers to offer a managed Kubernetes service on bare metal. Via its acquisition of Red Hat, IBM offers a choice of community Kubernetes or OpenShift clusters through IKS.
I wanted to take the time to build a table comparing all these KaaS offerings. However, I remembered what I said earlier about letting the smart people do the work while we, the final consumers, consume it. Applying that here, below is an image shared on my LinkedIn feed by Pavan Belagatti.
The table packs a lot of important information. One of the most important points is the price of having a cloud provider manage your Kubernetes components: interestingly, AWS and GCP charge the same, while AKS is actually free, at least at the time of this writing. Another important topic is integrated monitoring. AWS does not offer an out-of-the-box monitoring/logging solution for EKS; instead, you have to set up CloudWatch custom metrics and then pay for them, whereas GKE and AKS already ship a pretty decent solution.
Serverless computing is another great feature being implemented for running and configuring Kubernetes workloads. I had the chance to play around (on production workloads) with Fargate profiles on EKS, and it's an important feature and a small revolution in how you run your pods in a serverless way. I will explore this in future posts.
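For a taste of how this works on EKS, here is a sketch of creating a Fargate profile with `eksctl` (cluster, profile, and namespace names are hypothetical):

```shell
# Create a Fargate profile: pods created in the selected namespace
# are scheduled onto AWS-managed serverless capacity instead of
# your own EC2 worker nodes.
eksctl create fargateprofile \
  --cluster demo-cluster \
  --name fp-serverless \
  --namespace serverless-apps

# From then on, any pod in the "serverless-apps" namespace runs on
# Fargate; there are no worker nodes to provision, patch, or scale.
```

The selector-based design is what makes it practical: you can move one namespace at a time to serverless capacity while the rest of the cluster keeps running on regular nodes.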
As you would expect after all the previous information, there is no single winner in the KaaS market. Most offerings are vendor-locked to their specific cloud provider's services and infrastructure. But to talk real data: in “The State of the Kubernetes Ecosystem, Second Edition,” released in mid-2020 by The New Stack, a survey included a specific section on KaaS, and the final data came as a surprise: EKS was now ahead of GKE, which had been the preferred option for the previous couple of years.
The adoption of KaaS is real, and people are now moving to it for all the previously mentioned benefits. AWS now leads, but not by much, over its main competitor, GKE. Azure's service is also getting a lot of attention for its integrations with existing Azure services.
KaaS suits pretty much every organization. No matter the size or the project, it should be the preferred platform for running containerized workloads in any cloud. On top of the efficiency of Kubernetes clusters themselves, KaaS brings an overall improvement to the entire container infrastructure and ecosystem, allowing DevOps, SRE, developer, operations, and security teams to focus on applications and business matters rather than on Kubernetes maintenance itself.
Use managed Kubernetes if you can
With the “run less software” principle in mind, I would highly recommend outsourcing the operation of your Kubernetes clusters to one of the cloud providers discussed above. Installing, maintaining, securing, configuring, upgrading, and keeping a Kubernetes cluster reliable involves a lot of heavy lifting. So it makes sense for almost all companies not to do it themselves, as it is something that doesn't differentiate your business (95% of the time).
You should use managed Kubernetes if you can. It is the best option for most requirements in terms of cost, overhead, and quality. If, in the end, you do self-host your clusters, don't underestimate the engineering time involved, both for the initial setup and for the ongoing maintenance and support overhead.
About the author
Andrés graduated with a Bachelor's degree in Information Technology from Universidad Dr. José Matías Delgado and has over five years of experience as a DevOps engineer. He is currently a Senior DevOps Engineer at Applaudo Studios.