Bold predictions for Kubernetes in 2019 (Part 1)

Louise | 22 January 2019

Having recovered from one too many sherries over the Christmas period, we’re now back bright-eyed and bushy-tailed, and ready for whatever exciting things 2019 will bring us. KubeCon North America in Seattle granted me, and ~8,000 other attendees, the opportunity to reflect on the current state of Kubernetes and to catch a glimpse of what we can expect to become best practice in the near future.

Now that we’re fully in the swing of 2019, we thought we’d offer some commentary on eight predictions we have for the Kubeverse this year.

Kubernetes Hacking - and More Focus on Security

Kubernetes is now leaving the domain of early adopters of cloud native infrastructure and entering the early phases of mainstream adoption, as organisations look for assurance that it is a stable product set to remain the de facto choice for container orchestration for the foreseeable future. With that broader adoption, we will now see just how fit for purpose Kubernetes is to set up and maintain.

Up until now, reports of Kubernetes hacking have mainly focused on organisations running unsecured Kubernetes Dashboard UIs publicly accessible to anyone on the Internet, with the nature of these incidents being almost comical. That was until December gave us CVE-2018-1002105, the first critical security flaw to warrant serious attention and cluster remediation from the tech community, as it left every Kubernetes cluster vulnerable to privilege escalation via its API server. The decentralised architecture of Kubernetes’ control and worker planes means more attack vectors for hackers, and as Kubernetes is trusted with more production workloads in industry, we will start to see what holes lie in the architecture.

The Rise of VMs

One of the keys to successful early mainstream adoption is Kubernetes’ ability to adapt to and support many different kinds of production workload. This includes answering the question of how to ensure there is sufficient isolation between workloads to meet security compliance requirements. We saw several developments in 2018 that attempt to answer this question, with work towards mature mechanisms for running workloads in lightweight virtual machines rather than in containers. Kata Containers and Google’s gVisor provide complementary implementations of sandboxed containers, each with their own pros and cons, and AWS’ up-and-coming Firecracker microVM runtime should also prove to be an exciting development.

The introduction of RuntimeClass in Kubernetes 1.12 allows fine-grained selection of the container runtime at the pod level and enables a single cluster to run pods with a mixture of container and VM workloads. Once it leaves alpha and there is a level of confidence in the maturity of the API spec, we should start to see some interesting projects and use cases emerge that push the viability of choosing a VM runtime.
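To make that more concrete, below is a minimal sketch of what runtime selection could look like with the 1.12 alpha API. Treat it as illustrative only: the RuntimeClass feature gate has to be enabled, the “kata” handler name is a hypothetical entry that would need to be configured in the node’s CRI runtime, and the schema may well change before the API graduates.

```yaml
# Sketch only: RuntimeClass is alpha in Kubernetes 1.12, so this schema may change.
# The "kata" handler is a hypothetical name that must map to a sandboxed runtime
# configured in the node's CRI implementation (e.g. containerd or CRI-O).
apiVersion: node.k8s.io/v1alpha1
kind: RuntimeClass
metadata:
  name: kata
spec:
  runtimeHandler: kata
---
# A pod opting into the VM-based runtime; pods that omit runtimeClassName
# keep the cluster's default container runtime.
apiVersion: v1
kind: Pod
metadata:
  name: sandboxed-nginx
spec:
  runtimeClassName: kata
  containers:
    - name: nginx
      image: nginx:1.15
```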

Cluster API

Among the highlights of KubeCon North America 2018 was an introduction to the Cluster API, a work-in-progress framework that seeks to extend the Kubernetes API and operator paradigm to cover management of the Kubernetes cluster itself. Imagine tooling that sits underneath kubeadm, allowing cluster maintenance to be performed declaratively rather than imperatively. A good example use case is the Cluster Autoscaler: instead of having to write and feed in your own scripts to spin up or tear down a Kubernetes node, it could talk to the Cluster API and build a new node according to the declarative node state we desire. Now that the project has had something of an introduction to the community at KubeCon, the first use cases of this API should start to appear as the specification matures. If anything, it will be exciting to see how it can reduce the technical debt created by maintaining infrastructure-as-code configuration, such as Terraform.
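As a rough illustration of what that declarative node management could look like, here is a sketch using the project’s current v1alpha1 types. The group, kinds and field names are still in flux, and the providerSpec contents are entirely provider-specific, so read the manifest below as an assumption about the shape of things rather than a finished API.

```yaml
# Sketch only: the Cluster API is a work in progress and these v1alpha1
# types and fields may change. A MachineSet declares how many worker nodes
# should exist, much as a ReplicaSet does for pods.
apiVersion: cluster.k8s.io/v1alpha1
kind: MachineSet
metadata:
  name: workers
spec:
  replicas: 3
  selector:
    matchLabels:
      node-pool: workers
  template:
    metadata:
      labels:
        node-pool: workers
    spec:
      versions:
        kubelet: 1.13.1
      providerSpec:
        value: {}   # provider-specific machine details (instance type, image, etc.)
```

Scaling the worker pool then becomes a matter of editing replicas and letting a controller reconcile the actual machines against that desired state, rather than running an imperative provisioning script.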

Helm 3

In the space of deployable application configurations for Kubernetes, Helm 3 is a much anticipated revamp of what is probably the most widely used Kubernetes package management utility. With no indication of a release period as of January 2019, and a number of quirks in Helm 2 that can make it difficult to automate the deployment of applications with a multi-tiered architecture, it will be interesting to see how the final specification of Helm 3 is received within the automated application deployment ecosystem. With several other choices, such as Ksonnet, available in this space, how successful the launch of Helm 3 is may well shape “best practice” for continuous deployment pipelines that target Kubernetes clusters.
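To give a flavour of the multi-tiered deployments in question, here is a hypothetical Helm 2 umbrella chart’s requirements.yaml; the chart names and versions are purely illustrative. It is the ordering and cross-chart value sharing between dependencies like these that can make such deployments fiddly to automate today, and that we hope Helm 3 will smooth over.

```yaml
# requirements.yaml for a hypothetical Helm 2 umbrella chart.
# Chart names and versions are illustrative only.
dependencies:
  - name: postgresql                 # data tier, pulled from the public stable repo
    version: 3.9.0
    repository: https://kubernetes-charts.storage.googleapis.com
  - name: my-backend                 # hypothetical in-house chart for the application tier
    version: 0.1.0
    repository: file://../my-backend
```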

As you can see, there’s plenty here to (hopefully) look forward to from Kubernetes! Too much for one blog post, even, so check back next week for a few more of our predictions.

Thank you for reading

Do you need help with a Cloud Native or Kubernetes implementation? Get in touch and let's work together.


At LiveWyer Labs we innovate through research and development; see what else we’ve been working on lately.

If you want to stay up to date and be notified when we post new and exciting content, make sure to follow us on LinkedIn and Medium.