
The Next Evolution of Virtualization Infrastructure

Developers can use features like monitoring, pipelines, GitOps, serverless, service mesh and more, whether the target workload is a container or a VM.
Nov 1st, 2022 6:29am
Featured image via Pixabay.

Virtualization is entering a new age, a fourth evolutionary epoch in which the benefits of data center consolidation and workload standardization are accelerating the move to cloud native environments. Like any evolving organism, virtualization must adapt.
The first age of virtualization was the data center's great leap into the concept. VMware pioneered virtualization with the ESX hypervisor and dominated the marketplace, quickly bringing the many benefits of virtual machines to enterprises around the world. Many challengers arose, but none was able to loosen VMware's stranglehold on the traditional virtualization market.

The second age of virtualization was the move to the cloud. It was no longer about running VMs only on premises; it was now about running them in public clouds like Amazon Web Services, Azure, Google and more. Users could ship their virtual machines to the cloud on demand instead of filing a ticket and waiting for IT to provision their VMs on data center infrastructure.

Entire teams of developers broke ranks with internal IT and began to flee to the “I-want-it-right-now” world of cloud-provisioned infrastructure.

For applications that were still tied to the data center, alternatives like OpenStack emerged to enable scale-out virtualization infrastructure for both private cloud and many public cloud Infrastructure as a Service deployments. Red Hat OpenStack continues to provide a leading distribution in this space, powering enterprise private cloud and telco 4G network function virtualization environments.

The third age of virtualization was actually a move away from the hypervisor and traditional VMs: the age of containers. Just as virtualization had leveraged the power of the hypervisor to break physical servers into many individual virtual servers, each running its own OS, containerization divided a single Linux OS (running on those virtual machines, or directly on bare metal servers) into even smaller application sandboxes using namespaces, cgroups and the Docker packaging format, now standardized through the Open Container Initiative.

This enabled pioneering developers to build and provision containerized microservices on their local machines and promote them to test, stage and production environments, consistently and on demand. Kubernetes became the industry standard platform for container orchestration and management and enabled this age to flourish.

The continued evolution of these technologies, coupled with the announced acquisition of VMware by Broadcom, has many customers assessing the future of their existing virtualization infrastructure and wondering what comes next. We believe we have entered a fourth age of virtualization: the age of evolution and convergence on cloud native platforms.

The Fourth Age

Change is constant throughout all ages of human history, and especially throughout all ages of computing history. Even before virtualization and containerization, giant all-in-one time-sharing UNIX systems were replaced with smaller, cheaper x86 servers. And while predicting the future can be difficult, navigating it always requires moving forward.

We have been getting a lot of questions recently about this evolving virtualization landscape and what it all means for customers’ existing virtualization estate. As it relates to Red Hat’s virtualization portfolio, we made a decision more than four years ago to chart a new cloud native path, built around Linux, KVM and Kubernetes.

Our guiding north star was our belief that Kubernetes had established itself as a key enabler for cloud native application development and was becoming pervasive across enterprise data centers and all major public clouds.

We also knew that while container adoption, enabled by Kubernetes, was accelerating, the vast majority of enterprise applications still ran in VMs, and that containers and VMs would coexist for a very long time.

Over four years ago now, we launched the KubeVirt project to manage virtual machines alongside containers in Kubernetes. Leveraging the fact that the KVM hypervisor is itself a Linux process that could be containerized, KubeVirt enables KVM-based virtual machine workloads to be managed as pods in Kubernetes.

But what does that mean from an architectural perspective? It means that you can bring your virtual machines into a modern Kubernetes-based cloud native environment, without requiring the actual application to make the jump to containers itself.
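To make this concrete, here is a minimal sketch of a KubeVirt VirtualMachine manifest. The name, image and sizing below are illustrative assumptions, not from the article; the shape follows the `kubevirt.io/v1` API.

```yaml
# Minimal KubeVirt VirtualMachine sketch (illustrative values)
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: demo-vm              # hypothetical name
spec:
  running: true              # start the VM as soon as the object is created
  template:
    spec:
      domain:
        devices:
          disks:
            - name: rootdisk
              disk:
                bus: virtio
        resources:
          requests:
            memory: 1Gi
      volumes:
        - name: rootdisk
          containerDisk:     # VM disk image shipped as a container image
            image: quay.io/containerdisks/fedora:latest
```

Applying this with `kubectl apply -f` causes KubeVirt to launch the KVM guest inside an ordinary pod, which is exactly why the rest of the Kubernetes machinery (scheduling, networking, RBAC) applies to it unchanged.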

While many VM-based applications have already migrated to containers, not every application that runs in a VM has made the move or is even suitable to run in a containerized environment. While the third age of virtualization was all about moving applications out of VMs and into containers, this new age is about bringing the benefits of Kubernetes and cloud native platforms to all applications, regardless of where they live.

While being able to manage both containers and VMs together on a common Kubernetes platform is powerful, what’s even more powerful is the way this enables VM workloads to take advantage of all the new capabilities being built around Kubernetes in the CNCF cloud native landscape. Innovative projects like Prometheus, Istio, Knative, Tekton, ArgoCD and more have emerged from this ecosystem and work in Kubernetes for both container-based and VM-based applications.

That means developers can use features like monitoring, pipelines, GitOps, serverless, service mesh and more, whether the target workload is a container or a virtual machine. This enables you to bring your VMs to a cloud native platform, together with your containers.
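As one sketch of what "monitoring applies to VMs too" means in practice: because a KubeVirt VM sits behind an ordinary Kubernetes Service, the same Prometheus Operator ServiceMonitor used for containers can scrape it. The names and labels below are hypothetical.

```yaml
# Prometheus Operator ServiceMonitor scraping a Service
# that fronts a VM workload (hypothetical names/labels)
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: vm-app-metrics
spec:
  selector:
    matchLabels:
      app: legacy-vm-app   # label on the Service in front of the VM
  endpoints:
    - port: metrics        # named port exposing the app's metrics endpoint
      interval: 30s
```

Nothing in this manifest knows or cares that the backend is a virtual machine rather than a container, which is the point of converging both on one platform.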

Where to Go?

There are many inroads to the cloud native way of thinking, but perhaps the best way to get your company started on the transition is to understand how containers and Kubernetes are being used in your environments today. A lot of companies may already be running containers in Red Hat OpenShift and/or other Kubernetes services. What are you doing today?

Understanding how containers and Kubernetes are already being used inside of your organization may help you locate a good starting point for transitioning some existing virtual machines. For workloads running in your data center, Red Hat OpenShift can be installed on bare metal server environments to run both container and VM workloads.

To help with that transition, we’ve created the Migration Toolkit for Virtualization, which provides a path for moving existing virtual machines to OpenShift Virtualization. Once brought forward into cloud native infrastructure, these virtual machines can be linked into existing Kubernetes and OpenShift capabilities, such as cloud native platform services, cluster management, storage and more.
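The Migration Toolkit for Virtualization is itself driven by Kubernetes custom resources. As a rough, hedged sketch based on the upstream Forklift project's API group, a migration Plan looks something like the following; all names, namespaces and mapping resources here are hypothetical and would be created separately.

```yaml
# Sketch of a Migration Toolkit for Virtualization Plan
# (API group from the upstream Forklift project; names are hypothetical)
apiVersion: forklift.konveyor.io/v1beta1
kind: Plan
metadata:
  name: vmware-to-openshift
  namespace: openshift-mtv
spec:
  provider:
    source:
      name: vmware-prod      # a previously registered source Provider
      namespace: openshift-mtv
    destination:
      name: host             # the local OpenShift cluster as destination
      namespace: openshift-mtv
  map:
    network:
      name: network-map      # maps source networks to cluster networks
      namespace: openshift-mtv
    storage:
      name: storage-map      # maps source datastores to storage classes
      namespace: openshift-mtv
  vms:
    - name: legacy-app-vm    # the VM selected for migration
```

Because the plan is a declarative object, the migration itself fits the same GitOps workflows used for everything else on the cluster.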

We’re always happy to answer questions when the path forward is unclear. Generally, when changes and questions arise, we’ve been of the opinion that moving forward is always a better choice than moving sideways.

While virtual machines are by no means dead, they are increasingly a symbol of existing traditional applications rather than the landing place for new cloud native applications. Like mail shifting from the postal system to email, a lot of information exchange has moved to newer, technologically superior channels. And we look forward to the fifth age of virtualization and beyond.

TNS owner Insight Partners is an investor in: Docker.