At LiveWyer we’re thrilled to see the velocity of new open source projects that contribute to the Kubernetes / Cloud Native ecosystem. In a new series of blog posts we’re going to be taking a deep dive into the CNCF Sandbox. Each month we’ll be taking on a new project from the sandbox - installing it, having a look around and seeing if we can’t manage to do something cool with it - and posting our results. We’re starting with KubeVirt, a tool that can be used to create and manage Virtual Machines (VMs) within a Kubernetes cluster.

KubeVirt is intended to be used for VM-based workloads that are difficult to containerise. However, I want to demonstrate using KubeVirt to create a Kubernetes Cluster within a Kubernetes Cluster.

This demonstration will show the manual process for doing so, but in practice we’d want to automate it, so that we can create disposable, customisable Kubernetes clusters on demand and run automated tests against them. In particular, we’d be able to automatically test potential changes at the Kubernetes level/layer (i.e. cluster-wide changes) before they are applied to a live Kubernetes cluster.

Setup

For the demonstration I have:

  1. Deployed KubeVirt v0.38.1 onto my Kubernetes cluster. You can find instructions on how to do so here
  2. Deployed KubeVirt’s Containerized-Data-Importer (CDI) v1.30.0 using the steps detailed in the repository
  3. Installed the kubectl virt plugin to allow me to perform operations on the VMs powered by KubeVirt
  4. Set up a repository containing all resources used in this demonstration
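For reference, my setup commands looked roughly like the following. The release asset names are the standard ones published by each project and the virt plugin assumes you have krew installed, but double-check everything against the linked instructions:

# Deploy the KubeVirt operator and custom resource (v0.38.1)
kubectl apply -f https://github.com/kubevirt/kubevirt/releases/download/v0.38.1/kubevirt-operator.yaml
kubectl apply -f https://github.com/kubevirt/kubevirt/releases/download/v0.38.1/kubevirt-cr.yaml

# Deploy the Containerized-Data-Importer (v1.30.0)
kubectl apply -f https://github.com/kubevirt/containerized-data-importer/releases/download/v1.30.0/cdi-operator.yaml
kubectl apply -f https://github.com/kubevirt/containerized-data-importer/releases/download/v1.30.0/cdi-cr.yaml

# Install the virt plugin (a wrapper around virtctl) via krew
kubectl krew install virt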

With the above setup, I’ll first demonstrate how to create a VM using two Kubernetes custom resources: VirtualMachine and DataVolume. Then, I’ll combine both resources with other tools to deploy a “nested” Kubernetes cluster.

Container Disk Images & DataVolumes

Before we can create a VM, we need a container disk image for our Operating System (OS) of choice. Therefore, I first need to create a DataVolume. Deploying this object will import our chosen disk image into our Kubernetes cluster. You can find the technical explanation in the documentation for the CDI here.

My OS of choice is Ubuntu 20.04. To use this OS, we need a source disk image to reference in the DataVolume manifest. The URL for the official Ubuntu 20.04 cloud image can be found here, and that URL is what goes into the source field of the manifest. The manifest file itself can be found here.
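As a rough sketch (the resource name and storage size here are placeholders of my own; see the linked manifest for the exact values I used), the DataVolume looks something like this:

apiVersion: cdi.kubevirt.io/v1beta1
kind: DataVolume
metadata:
  name: ubuntu-datavolume
spec:
  source:
    http:
      # URL of the Ubuntu 20.04 (Focal) cloud image
      url: https://cloud-images.ubuntu.com/focal/current/focal-server-cloudimg-amd64.img
  pvc:
    accessModes:
    - ReadWriteOnce
    resources:
      requests:
        storage: 10Gi

Applying this manifest causes CDI to create a PVC and an importer pod that downloads the image into it.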

The video below will show what you’ll see once you deploy a DataVolume.

Once the container disk image has been imported successfully, we can use the DataVolume to create a single VM and, from that, a single node Kubernetes cluster.

A Single Node Kubernetes Cluster Running in a Kubernetes Cluster

With a DataVolume available we can create a manifest file for a VirtualMachine that references it. Below is a snippet that shows how to reference a created DataVolume.

apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: ubuntuvm
spec:
  template:
    ...
    spec:
      domain:
        devices:
          disks:
          # Expose the imported image to the VM as a virtio disk
          - disk:
              bus: virtio
            name: containerdisk
      ...
      volumes:
      # Back the containerdisk with the DataVolume created earlier
      - name: containerdisk
        dataVolume:
          name: DATA_VOLUME_NAME

An example of an actual manifest file for the VirtualMachine can be found here. Note: the manifest file contains a placeholder value for a public SSH key, which you’ll need to replace if you want to SSH into the VM. The last video in this blog will showcase a method to SSH into a KubeVirt VM in a Kubernetes cluster.

The video below demonstrates deploying the VM and connecting to the console of the newly created machine.
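If you’d rather follow along on the command line, the steps amount to roughly the following (ubuntuvm is the name from the snippet above, the manifest filename is just an example, and kubectl virt is the plugin installed during setup):

# Create the VirtualMachine object
kubectl apply -f ubuntuvm.yaml

# Start it if the manifest doesn't set spec.running: true
kubectl virt start ubuntuvm

# Wait for the VirtualMachineInstance to become ready, then attach to its serial console
kubectl get vmi
kubectl virt console ubuntuvm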

Once we have a VM, we can use it to create a single node Kubernetes cluster using k3s.
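Installing k3s inside the VM is a one-liner using the official install script; a minimal sketch:

# From the VM's console (or over SSH), install k3s;
# this single node acts as both control plane and worker
curl -sfL https://get.k3s.io | sh -

# Verify that the nested single node cluster is up
sudo k3s kubectl get nodes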

Dynamic DataVolumes & VirtualMachine “ReplicaSets”

Now that we’re able to create a single node Kubernetes cluster within a Kubernetes cluster, the next step is to create a multi-node Kubernetes cluster within a Kubernetes cluster.

Unfortunately, we cannot use a single DataVolume for multiple VMs, so we’re going to create DataVolumes dynamically whenever a new VM is created. We can do this by adding a DataVolumeTemplate to the manifest file for the VirtualMachine.
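A minimal sketch of a VirtualMachine with a DataVolumeTemplate (again, the names, memory and storage sizes below are placeholders of my own):

apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: ubuntuvm-0
spec:
  running: true
  # A fresh DataVolume is created from this template for each VM
  dataVolumeTemplates:
  - metadata:
      name: ubuntuvm-0-dv
    spec:
      source:
        http:
          url: https://cloud-images.ubuntu.com/focal/current/focal-server-cloudimg-amd64.img
      pvc:
        accessModes:
        - ReadWriteOnce
        resources:
          requests:
            storage: 10Gi
  template:
    spec:
      domain:
        resources:
          requests:
            memory: 2Gi
        devices:
          disks:
          - disk:
              bus: virtio
            name: containerdisk
      volumes:
      - name: containerdisk
        dataVolume:
          name: ubuntuvm-0-dv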

Ideally, we would want to use a replica set to create multiple VMs. However, as of version v0.38.1 (and the time of writing) a VirtualMachineReplicaSet custom resource does not exist (you can find the specification for v0.38.1 here). There is a VirtualMachineInstanceReplicaSet but it does not currently support DataVolumeTemplates.

As a workaround, I’ve created a Helm chart that can deploy multiple identical VMs, each using a DataVolumeTemplate to dynamically create its own DataVolume.

The video below shows the deployment of this Helm chart. Note: it may take roughly 15 minutes for all of the DataVolumes to finish importing the image.
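While you wait, you can keep an eye on the imports and the resulting VM instances with standard kubectl commands, for example:

# Watch the dynamically created DataVolumes until they all report Succeeded
kubectl get dv -w

# Once the imports finish, the VMs boot and their instances show up here
kubectl get vmi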

A Multi-Node Kubernetes Cluster Running in a Kubernetes Cluster

Now that we have multiple VMs running inside our Kubernetes cluster, we can use them to create a multi-node Kubernetes cluster. To do so, we’ll be using the k3sup tool, which requires SSH access to every VM.

To SSH into these VMs, I’ll be using a pod running the linuxserver/openssh-server image. This pod needs to be scheduled on a node that has no VM pods running on it; otherwise, you won’t be able to SSH into all of the VMs from within the openssh pod.
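As a rough sketch (with a placeholder node name; the manifest I actually used is linked below), the pod can be as simple as:

apiVersion: v1
kind: Pod
metadata:
  name: openssh-client
spec:
  # Pin the pod to a node that isn't running any of the VM pods
  nodeName: NODE_NAME
  containers:
  - name: openssh
    image: linuxserver/openssh-server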

You can find an example manifest file for the pod I used here (you’ll need to replace the placeholder value for the node name), and the process for using these VMs to create the Kubernetes cluster is shown in the video below. Before recording, I exec’d into the openssh pod, installed k3sup and added the required SSH key.
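For reference, the k3sup commands run from inside the openssh pod look roughly like this. The VM IP addresses are the ones reported by kubectl get vmi, and I’m assuming the default ubuntu user set up by the cloud image:

# Bootstrap a k3s server on the first VM
k3sup install --ip <server-vm-ip> --user ubuntu --ssh-key ~/.ssh/id_rsa

# Join each remaining VM to the cluster as an agent
k3sup join --ip <agent-vm-ip> --server-ip <server-vm-ip> --user ubuntu --ssh-key ~/.ssh/id_rsa

k3sup install should also write a kubeconfig for the new cluster into the current directory, which you can use to check that all three nodes have joined.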

As showcased in the video above, we were able to successfully create a three node Kubernetes cluster within a Kubernetes cluster. It may seem like a novelty, but we believe this approach can be used effectively to test potential changes at the Kubernetes level/layer.

Unfortunately, changes with a cluster-wide scope have the potential to negatively impact all (or most) workloads running in the cluster. For example, a change that misconfigures:

  • the Container Network Interface (CNI) plugin will result in pods not being able to communicate with each other. Workloads that require this functionality will crash or cease to function properly as a result
  • the logging stack may result in logs no longer being stored
  • Istio (if the cluster uses it) may result in traffic not being routed to any service. As a result, no one will be able to access any application running within the cluster

With the ability to automatically create Kubernetes clusters and run automated tests against them, we’ll be able to effectively test changes at the Kubernetes level/layer and significantly reduce the risk of applying changes that break the functionality of a live cluster.

This is as far as we’re going to go with KubeVirt today, but we’ll be trying to make use of KubeVirt for our internal projects and keenly following the project to see how it matures. If you try this out yourselves, or there’s anything you feel we should have done differently, then let us know in the comments.

We’re looking forward to getting familiar with some more CNCF Sandbox projects as part of this blog series, and we’ll be updating this post in due course with links to the others. Though as a sneak preview I can say that the next one will be about LitmusChaos.
