Bold predictions for Kubernetes in 2019 (Part 2)

Louise | 25 February 2019

In the last post, we took a look at some things that might happen in the world of cloud-native in 2019. Kubernetes has matured to the point where it is now the assumed means of deploying greenfield and re-architected applications, both in the cloud and on-premises.

The next frontier for Kubernetes, therefore, is how we integrate existing brownfield legacy architectures into this new world – how do we support use cases and software architectures which don’t fit so neatly into our existing Kubernetes clusters?

This will be the subject of this blog post, and we’ll take a look at some of the developments in these areas and our predictions for where these will get to in 2019.

Kube on the Edge

This model of deployment seems very interesting. Where applications require low latency and high availability, one solution is to run a distributed architecture composed of multiple “on-site” clusters. If an application generates a lot of data at different sites, instead of sending that data to a central cluster to be processed, we can process it on a smaller cluster closer to its point of origin and send only the results to a central point.

A growing number of companies consider this a viable use case for Kubernetes. Since September 2018 there has been a Kubernetes IoT Edge working group – a cross-collaboration between the Networking and Multicluster SIGs – that works to promote and improve the usability of Kubernetes in IoT and edge environments.

Storage Solutions

Storage management has always been a bit of a sticking point in the world of Kubernetes. It’s easy to create persistent volumes for our containers, even on the cloud – Kubernetes’ cloud controller manager can provision and delete block storage on whatever backing infrastructure we’re using, such as AWS, GCP or vSphere – the controller manager just needs to know how to “talk” to the underlying infrastructure. But what happens when we need to manage where our workloads and the volumes they depend on end up on the cluster? Kubernetes also presents new challenges in how we manage the data itself, such as backing up and restoring data on PVs.
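As a rough sketch of that dynamic provisioning flow, a claim like the one below is all an application needs to declare – the `app-data` name and the `gp2` storage class are assumptions for an AWS-backed cluster and will differ on other providers:

```yaml
# Minimal PersistentVolumeClaim: the cluster's provisioner (e.g. the AWS cloud
# provider creating an EBS volume) satisfies it with a dynamically created PV.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-data          # hypothetical claim name
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 20Gi
  storageClassName: gp2   # assumed AWS storage class; adjust for your provider
```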

The graduation of the Container Storage Interface (CSI) to GA gives a stable foundation for the development of drivers that support volume types which aren’t built into Kube itself. While individual companies are writing drivers for their own volume types (with differing levels of support for features like replication, snapshotting or data locality), these tend to target specific use cases only. So far, the closest thing to a general-purpose cloud-native storage implementation is Rook, which is hosted by the CNCF. While it supports an increasing number of storage backends, administration of these volumes is still left to a human operator to perform manually. In practice, this makes cluster administration more difficult than it needs to be – this is an area where open source implementations for Kubernetes can definitely see some improvements in 2019.
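To make the CSI point a little more concrete, here is a sketch of a StorageClass that hands provisioning off to an out-of-tree CSI driver; the `ebs.csi.aws.com` provisioner and its `type` parameter are taken from the AWS EBS CSI driver as one example, and other drivers define their own names and parameters:

```yaml
# StorageClass backed by a CSI driver rather than an in-tree volume plugin.
# Provisioner name and parameters assume the AWS EBS CSI driver; swap in the
# driver and parameters appropriate to your storage backend.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: csi-ebs
provisioner: ebs.csi.aws.com
parameters:
  type: gp2
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer
```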

Federation

This has been bubbling in the pipeline for a long time now, and seemed to fall by the wayside in favour of other mechanisms that attempt to answer the question of how we deploy workloads and perform service discovery across multiple clusters (such as service meshes).

While the initial version of Federation hit a couple of stumbling blocks (mainly how to maintain state for an overall control plane layered on top of multiple Kubernetes control planes, using only the native Kubernetes API), version 2 of Federation has been rearchitected to introduce a new, additional Federation API which can better represent and maintain state about federated resources, both on individual member clusters and at the overall federation level.
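As an illustration of what that looks like in practice (the API was still alpha at the time of writing, so the group, version and field names below may well have shifted), a federated Deployment wraps an ordinary Deployment in a template, a placement that lists the member clusters it should land on, and optional per-cluster overrides:

```yaml
# Illustrative only: Federation v2 was alpha when this was written, so treat
# the apiVersion and field names as indicative rather than definitive.
apiVersion: types.federation.k8s.io/v1alpha1
kind: FederatedDeployment
metadata:
  name: hello
  namespace: demo
spec:
  template:                  # the Deployment to propagate to member clusters
    metadata:
      labels:
        app: hello
    spec:
      replicas: 3
      selector:
        matchLabels:
          app: hello
      template:
        metadata:
          labels:
            app: hello
        spec:
          containers:
            - name: hello
              image: nginx:1.15
  placement:
    clusterNames:            # hypothetical names of registered member clusters
      - cluster-eu
      - cluster-us
  overrides:                 # per-cluster tweaks on top of the template
    - clusterName: cluster-us
      clusterOverrides:
        - path: spec.replicas
          value: 5
```

The Federation control plane then creates a plain Deployment in each listed cluster, applying the override for `cluster-us`.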

This project appears to be picking up some momentum, and could achieve GA in 2019. As the Federation API achieves greater stability, it will be interesting to see whether more case studies emerge of Federation being used to propagate workloads across multiple clusters.

Better Kubernetes Support

As Kubernetes picks up steam as a mainstream platform for deploying containerised workloads, a by-product is, of course, an ever-increasing number of security alerts for vulnerabilities discovered in its codebase. Part of Kubernetes’ success story is the sheer amount of development effort that has been directed towards it, which delivers a new minor release roughly every three months.

The expedited evolution of Kubernetes has also meant that the lifetime of a minor release is limited to around nine months – only the last three minor releases are supported at any one time, so with the arrival of 1.14 in the next month, clusters that started running 1.11 from July 2018 will need to upgrade to a newer minor release to keep receiving security updates for the Kubernetes control plane and kubelet.

As more enterprises come to trust Kubernetes with more and more production workloads, I believe this will necessitate the introduction of an additional release cycle: a long-term support version of Kubernetes that lets cluster operators keep applying patch releases to older production Kubernetes clusters.

Well, that’s everything for now – we can’t wait to see how Kubernetes progresses over the next few months, and we’ll be curious at the end of the year to see which (if any) of these predictions came true.

Thank you for reading

Do you need help with a Cloud Native or Kubernetes implementation? Get in touch and let's work together.


At LiveWyer Labs we innovate through research and development – see what else we've been working on lately.

If you want to stay up to date and be notified when we post new and exciting content, make sure to follow us on LinkedIn and Medium.