Engineering • 3 min read

Technical Demo: Going Cloud-native after VMware

A technical demonstration of how to evolve workloads from VMs running on Kubernetes into cloud native container deployments

Written by:

Louise Champ


Technical Demo Objective

A technical demonstration of how to evolve workloads from VMs running on Kubernetes into native container deployments. Starting from a KubeVirt environment, it walks through selecting a service to migrate, containerising it to production standards, and switching live traffic with zero downtime using Helm and Kubernetes service discovery.

TLDR

You’ve moved your VMs onto Kubernetes. Now what?

Getting your VMs running on KubeVirt achieves a cost and infrastructure objective. Containerising those workloads is where the real operational gains come from: autoscaling, health management, rolling updates, and resource efficiency that a VM-on-Kubernetes setup simply doesn't give you.

Pick your first service carefully

Not everything is a sensible early candidate. We use four criteria:

  • Technical simplicity
  • Low blast radius
  • Measurable impact
  • Team & operational readiness

As you will see in the demo, a stateless, non-critical-path service that is easy to instrument is the perfect choice. If it breaks during migration, checkout slows slightly, but the business keeps running.

Your first migration is less about the service itself and more about proving the process works and building confidence for all stakeholders.

What actually changes

Startup times drop from minutes to seconds. Resource allocation happens at the service level. Kubernetes handles placement, scaling, and recovery automatically. We also updated the service for production readiness, including structured logging, Prometheus metrics, gRPC health checks, and a multi-stage container build that reduced the image size from over 1 GB to 165 MB.
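The demo doesn't reproduce the full build here, but a multi-stage container build of the kind described typically looks like the sketch below. This assumes a Go service; the paths, module layout, and stage names are illustrative, not taken from the demo itself:

```dockerfile
# Build stage: full toolchain, with module downloads cached as a layer
FROM golang:1.22 AS build
WORKDIR /src
COPY go.mod go.sum ./
RUN go mod download
COPY . .
# Static binary so the runtime stage needs no libc
RUN CGO_ENABLED=0 go build -o /out/service ./cmd/service

# Runtime stage: minimal distroless base, runs as non-root
FROM gcr.io/distroless/static-debian12:nonroot
COPY --from=build /out/service /service
EXPOSE 8080
ENTRYPOINT ["/service"]
```

Only the second stage ships, which is how image sizes drop from a full-OS gigabyte-plus VM-style image to the low hundreds of megabytes or less.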

Zero-downtime switchover

The traffic migration is a single Helm values change. Kubernetes services act as internal load balancers, and updating the selector to point at the pod instead of the VM is instantaneous. Front-end services keep calling the same DNS name without reconfiguration. Our GitOps pipeline remained untouched throughout.
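As a sketch of what that single values change can look like: a Helm-templated Service keeps its DNS name while a values flag drives the selector. The label scheme and names here are illustrative, assuming both the KubeVirt VM pod and the container pod carry an `app` label plus a `runtime` label distinguishing the two:

```yaml
# values.yaml — one flag controls where traffic goes
service:
  backend: container   # "vm" while KubeVirt serves traffic, "container" after cutover
```

```yaml
# templates/service.yaml — same DNS name throughout; only the selector changes
apiVersion: v1
kind: Service
metadata:
  name: checkout
spec:
  selector:
    app: checkout
    runtime: {{ .Values.service.backend }}
  ports:
    - port: 80
      targetPort: 8080
```

Because Kubernetes updates the Service's endpoints as soon as the selector changes, callers resolving `checkout` see the new backend without any client-side reconfiguration.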

We ran load tests before and after to establish clear baselines. Response times improved significantly at the 95th percentile, which gave us defined rollback criteria that take the human element out of the important rollback decision process.

The Pilot Timeline

Three to four weeks of real project work for a single service and developing a modernisation and migration pattern that scales. Each subsequent migration teaches you something about your environment that makes the next one faster, and those organisational learnings are often worth more than the infrastructure change itself.

LiveWyer VMware Migration Content

This video is part two of our Technical Demos relating to VMware Migration and Modernisation. If this was of interest, you can find part one below, which addresses the immediate business licensing risk:

  1. Technical Demo: Migrate from VMware to Kubernetes

  2. Technical Demo: Going Cloud-native after VMware

This video is related to our VMware Migration series with our partners Kubermatic & Portworx. We recommend watching the rest of the videos in the series:

  1. Modernising VMs in a Cloud Native world: Real-world advice from LiveWyer, Kubermatic & Portworx

  2. VMs in Kubernetes: What it really takes to move beyond VMware

  3. From VMware to Kubernetes: Practical Demos & Strategic Roadmaps

Get in touch to chat about your VMware challenges

LiveWyer has worked with numerous global enterprises to help implement robust, sustainable, and elegant solutions, enabling their infrastructure to withstand the test of time.

Chat to one of our engineers about your infrastructure challenges by booking a slot on our website or using the details below to get in touch.