With the launch of Project Pacific at VMworld last month, VMware is taking the bold step of integrating Kubernetes, the popular open source container orchestrator, with vSphere, its flagship virtual machine management platform. Even though VMware has yet to make the product available or share commercial details, executives discussed the technical architecture at length at the conference and in other forums.
The design of Project Pacific is complex. Veteran Kubernetes users may be confused by vSphere-specific terminology, while traditional vSphere admins may not be familiar with the core Kubernetes concepts infused into vSphere.
Here is an attempt to demystify Project Pacific based on the information available in the public domain.
vSphere Admins and Kubernetes Developers use the same control plane
With Project Pacific, vSphere admins and Kubernetes developers get to deal with the same control plane presented in a different form.
vSphere APIs are augmented to support Kubernetes nomenclature like Namespaces and Pods. Kubernetes users will see quite a bit of customization in the form of custom objects backed by custom resource definitions (CRDs), custom controllers, and Operators. But, in the end, a resource deployed through either API will be visible in vCenter.
vSphere administrators and Kubernetes developers use the same control plane but from a different set of tools. Kubectl is emerging as a tool that packs a lot of punch. With Project Pacific, you can technically manage the entire vSphere stack with YAML files and Kubectl.
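For instance, a Project Pacific namespace maps team workloads onto familiar vSphere resource-management constructs, and it could be declared with plain Kubernetes objects like these (the idea that admins attach quotas this way is an assumption based on standard Kubernetes practice, not a published Project Pacific schema):

```yaml
# A standard Kubernetes Namespace plus a ResourceQuota; in Project
# Pacific, a namespace groups the VMs, Pods, and clusters of a team.
apiVersion: v1
kind: Namespace
metadata:
  name: demo-apps
---
apiVersion: v1
kind: ResourceQuota
metadata:
  name: demo-apps-quota
  namespace: demo-apps
spec:
  hard:
    requests.cpu: "16"
    requests.memory: 64Gi
```

Applying this file with `kubectl apply -f namespace.yaml` is the kind of workflow that lets one tool drive both Kubernetes and vSphere resources.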
VMware is trying its best to ensure that the terminology and workflow are as close as possible to traditional vSphere operations and modern Kubernetes operations.
Project Pacific can provision three types of deployment units from a single control plane
What can you request from the Project Pacific control plane? Well, out of the box, there are three major deployment units that can be launched:
- Virtual Machines and clusters of VMs
- Kubernetes Clusters
- Native Pods
Project Pacific comes with a Kubernetes controller for vSphere virtual machines. Similar to how a Pod is defined and submitted to the Kubernetes master, a YAML file with the definition of a VM is submitted to the control plane, which can spin up a single VM or a collection of VMs as a cluster. When Kubernetes receives a request to provision a VM, it simply passes control to vSphere through the controller, which manages the entire lifecycle of the VM.
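Though VMware has not published the final schema, such a VM definition could look roughly like the sketch below. The API group, kind, and field names here are illustrative assumptions, not the shipping specification:

```yaml
# Hypothetical manifest for a VM managed by the Project Pacific
# VM controller. API group, kind, and fields are assumptions.
apiVersion: vmware.com/v1alpha1
kind: VirtualMachine
metadata:
  name: web-vm
  namespace: demo-apps
spec:
  imageName: ubuntu-18.04      # VM template/image to clone from
  className: small             # CPU/memory sizing class defined by the admin
  networkInterfaces:
    - networkName: app-network
  storageClass: fast-ssd
```

Submitting a file like this through kubectl would hand control to vSphere, which provisions the VM and reports its status back through the Kubernetes API.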
Project Pacific is actually a meta cluster in the sense that it can launch multiple Kubernetes clusters. VMware treats a Kubernetes cluster as a single unit of deployment. Imagine creating a long YAML file that has all the parameters used with Kubeadm and submitting it to Kubernetes, which can spawn another cluster. That’s exactly what the control plane does through the open source Kubernetes Cluster API. With a bit of customization, you would be able to launch different flavors of clusters based on Pivotal Container Service (PKS), Red Hat OpenShift, and plain vanilla clusters based on the upstream codebase. This is the most interesting aspect of Project Pacific.
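The Cluster API project is public, so the general shape of a cluster-as-a-resource manifest is known; the sketch below follows the v1alpha2 API current at the time of writing, though the exact infrastructure kinds Project Pacific will wire in have not been announced:

```yaml
# A Kubernetes cluster declared as a resource via the upstream
# Cluster API. The infrastructure reference kind is provider-specific
# and shown here as an illustrative placeholder.
apiVersion: cluster.x-k8s.io/v1alpha2
kind: Cluster
metadata:
  name: team-a-cluster
spec:
  clusterNetwork:
    pods:
      cidrBlocks: ["192.168.0.0/16"]
  infrastructureRef:
    apiVersion: infrastructure.cluster.x-k8s.io/v1alpha2
    kind: VSphereCluster
    name: team-a-cluster
```

A controller watching this resource reconciles it by creating the machines and control plane that make up the new cluster.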
Apart from VMs and Kubernetes clusters, you can also launch standalone Pods. I call them standalone Pods because you don’t need to schedule them inside a Kubernetes cluster. They live on top of vSphere. These native pods are actually containers that comply with the Kubernetes Pod specification.
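Because native Pods comply with the standard Pod specification, an ordinary manifest like the following should work unchanged; the difference is that Project Pacific places it directly on an ESXi host rather than inside a guest Kubernetes cluster:

```yaml
# A standard Kubernetes Pod manifest; on Project Pacific this would
# run as a native Pod directly on ESXi.
apiVersion: v1
kind: Pod
metadata:
  name: nginx-native
spec:
  containers:
    - name: nginx
      image: nginx:1.17
      ports:
        - containerPort: 80
```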
So, whether it is VMs, Kubernetes clusters, or Pods, you always use a YAML file to define the resource and provision it through the Project Pacific control plane.
Serverless Pods can be deployed through Project Pacific
Serverless container services are getting a lot of attention. AWS Fargate, Azure Container Instances, and Google Cloud Run are examples of cluster-less, serverless container platforms. You can package an existing Docker container in an expected format and launch it in the cloud in seconds. Knative is one of the open source projects to implement serverless containers on Kubernetes.
ESXi Native Pods closely resemble Azure Container Instances and Fargate, but the key difference is that those services don’t follow the Kubernetes Pod specification.
ESXi Native Pods lay a solid foundation for a serverless implementation on Project Pacific. By extending Knative-like semantics, VMware could easily launch scale-to-zero Pods on vSphere.
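Knative’s own API hints at what such semantics look like when layered on Pods; the manifest below is a standard Knative Serving definition, and whether VMware adopts Knative itself on Project Pacific is an assumption, not an announcement:

```yaml
# A standard Knative Service: its revisions scale to zero when idle
# and back up on demand. Running this on Project Pacific native Pods
# is a hypothetical scenario.
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: hello
spec:
  template:
    spec:
      containers:
        - image: gcr.io/knative-samples/helloworld-go
          env:
            - name: TARGET
              value: "Project Pacific"
```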
Native Pods use Supervisor Cluster and Spherelet – an equivalent of Kubernetes Scheduler and Kubelet
To enable ESXi Native Pods on vSphere, VMware created a default cluster called the Supervisor Cluster that’s embedded into the core of vSphere. For Native Pods to talk to the Supervisor Cluster, VMware also created Spherelet, a proprietary implementation of the Kubelet that runs on every ESXi host, making each host act as a Kubernetes worker node.
Native Pods run on ESXi directly instead of inside a Linux VM. VMware has created a lightweight, Pod-optimized VM exclusively for these resources.
The combination of the Supervisor Cluster and Spherelet elegantly mimics Kubernetes master and worker nodes. This is one of the best design decisions from VMware.
The future of vSphere Integrated Containers and Photon OS is ESXi Native Pods
When containers started to become popular, VMware invested in two projects: vSphere Integrated Containers and Photon OS.
vSphere Integrated Containers took the first step in bridging the gap between VMs and containers. Like Project Pacific, it exposed the Docker API for vSphere to manage the lifecycle of VMs running containers. The architecture uses one VM per container to achieve isolation and control.
Photon OS is a lightweight operating system that’s highly optimized for containers. It is modeled after CoreOS, Intel Clear Linux, Red Hat Atomic Host, and Ubuntu Core. Photon OS powers the VMs running the containers in the vSphere Integrated Containers platform.
Project Pacific supersedes these two approaches by replacing the Docker API with the Kubernetes API and evolving the one-VM-per-container model of vSphere Integrated Containers into the Kubernetes Pod.
For more information on Project Pacific, check out this post by Frank Denneman, a VMware chief technologist in the Office of CTO of the Cloud Platform.
Janakiram MSV’s Webinar series, “Machine Intelligence and Modern Infrastructure (MI2)” offers informative and insightful sessions covering cutting-edge technologies. Sign up for the upcoming MI2 webinar at http://mi2.live.
VMware is a sponsor of The New Stack.