Pivotal Container Service Hardwires Cloud Foundry, Kubo to Google Cloud

In a move that could help cement Kubernetes’ position at the heart of the container deployment market, VMware, Pivotal and Google jointly announced the availability of Pivotal Container Service (PKS), a commercial container deployment and management system based on the Kubo project.
The service promises to make it easy for developers, especially those working with Cloud Foundry, to deploy containerized applications at scale almost instantly, with little more than a YAML file and a kubectl command.
“PKS is always going to contain the latest stable release of Kubernetes,” announced Pivotal CEO Rob Mee [pictured above], during the Day two keynotes at VMware’s VMworld in Las Vegas, “keeping constant compatibility with Google Cloud Engine. It’s engineered to be incredibly efficient to operate. It’s got NSX built-in, so it has a strong focus on application security. And it comes out of the box with integration with Google Cloud and Google Cloud Platform Services.”
Kubernetes is an open source container orchestration engine developed by Google and now managed by the Cloud Native Computing Foundation; NSX is VMware’s virtual networking and security platform.
“Being able to do this constant compatibility” is what matters, said Sam Ramji, vice president of product management for Google Cloud Platform. “All of you running data centers know that inconsistency is the enemy. So if we can deliver you the same service, at the same time, every time this incredibly fast-moving community updates its software, we will give you a common service that runs the same way, that will make your lives easier. But every application needs services to run. So we’re also putting Google Cloud Services directly into PKS.”
As Ramji [pictured above, seated to the left of Pivotal senior product director Richard Seroter] later told The New Stack, PKS will also feature the Open Service Broker API for connecting multiple back-end systems in a consistent way. This inclusion will come by virtue of being built on top of the BOSH release management tool — which he called “a gift from the Cloud Foundry community.”
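The Open Service Broker API itself is a small REST contract. As a rough sketch (the broker address and credentials below are placeholders, not part of PKS), a platform discovers what a broker offers by fetching its catalog:

curl -u admin:secret \
  -H "X-Broker-API-Version: 2.13" \
  https://broker.example.com/v2/catalog

The broker answers with a JSON list of services and plans, which the platform can then provision and bind on a developer’s behalf through the same API.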
The Special Virtues of Tight Integration
Tuesday’s developments certainly present the appearance of a clear preference for Kubernetes as the way to deploy containers at scale. As VMware representatives demonstrated on stage Tuesday, an operator can now instantiate a Kubernetes cluster directly from vSphere, the company’s long-standing virtualization management platform. Once the proper credentials are shared with developers, any one of them can use kubectl to deploy an application from a YAML template, just as they would on any Kubernetes cluster, and have it land on that cluster automatically, with Pivotal’s Kubo handling the deployment in the background.
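To give a sense of what that developer-side step looks like, here is a hypothetical sketch, not the workload VMware demonstrated; the image name, labels and replica count are placeholders:

apiVersion: apps/v1beta1          # the Deployments API version current in Kubernetes 1.7
kind: Deployment
metadata:
  name: sample-app
spec:
  replicas: 3                     # ask the scheduler for three pod copies across the cluster
  template:
    metadata:
      labels:
        app: sample-app
    spec:
      containers:
      - name: web
        image: nginx:1.13         # any container image the developer has already pushed
        ports:
        - containerPort: 80

With the cluster credentials the operator shared, the developer hands that template to Kubernetes in the usual way:

kubectl apply -f sample-app.yaml
kubectl get pods -l app=sample-app    # watch the pods come up on the Kubo-managed cluster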
What’s critically important here is that NSX is in place to spread available infrastructure across on-premises and Google Cloud resources. Last year at this same show, VMware pitched vSphere Integrated Containers, Photon Platform and its new partnership with IBM Cloud as its triple threat for delivering NSX into the enterprise. This year, VMware is making an entirely different case for NSX delivery, starting with Monday’s announcement of a tight partnership with Amazon, followed Tuesday by an equally strong partnership with Google, facilitated by Pivotal (a sister company of VMware within Dell Technologies).
“The important thing here is that we’re articulating a unique, hybrid partnership,” James Watters, Pivotal’s senior vice president for the PKS product [pictured right], told a news conference Tuesday in response to a question from The New Stack. “The important acknowledgment between Google, Pivotal, and VMware is that Google has some expertise of how to provision and manage what Kubernetes consumption should look like. And I think that’s the really big thing they bring to this partnership: As the creators of Kubernetes, they’re validating our design, and have engineers on PKS to make sure we continue to be in lock-step. And that’s why I think it’ll be the most compelling way to run Kubernetes on-premises, with Google’s guidance — and then of course, with everything VMware’s baking into this, making sure that there’s no better way of running this on VMware, because VMware’s involved.”
“You have to hold two things distinct: There’s a community activity around interoperability, purity of thought, open source, and open systems that are multi-cloud and hybrid. Then there’s the commercial application of that.” — Sam Ramji, Vice President for Product Management, Google
Watters reminded reporters and analysts that Kubernetes’ developers are continually innovating the orchestrator’s APIs, expanding the vocabulary of functions that may be invoked through kubectl. “Every quarter, that kubectl API surface has all these new features that developers want to consume very directly. As an example, if you want to update your code, there’s a command, kubectl rolling-update, which allows you to roll your code out gradually across containers. There’s a whole ecosystem of custom resource definitions that are showing up in that… API. Once you realize that has network effects, and is the locus of where developers want to consume Kubernetes, you realize that just layering something arbitrary on top of it that blocks it from accessing Kubernetes’ APIs is probably not what the market wants.”
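On a Deployment like the hypothetical one sketched above, that gradual rollout is driven entirely through kubectl (Watters was referring to the older kubectl rolling-update command for replication controllers; Deployments expose the same idea through kubectl rollout):

kubectl set image deployment/sample-app web=nginx:1.13.5   # push the new image tag
kubectl rollout status deployment/sample-app               # watch pods get replaced a few at a time
kubectl rollout undo deployment/sample-app                 # step back if the new version misbehaves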
PKS promises to present developers and operators with a continually updated Kubernetes platform, itself kept current by means of Kubo. This way, any new API features that are officially released become available in PKS right away.
Where Interoperability Ends and Certification Begins
This level of functionality may give one commercial Kubernetes platform an automatic advantage over the others, a fact not lost on Google’s Sam Ramji, who responded Tuesday to a question from The New Stack.
“As we get to commercialization of a set of products from Google and Pivotal — and thereby VMware and Dell Technologies — you have to hold two things distinct: There’s a community activity around interoperability, purity of thought, open source, and open systems that are multi-cloud and hybrid. Then there’s the commercial application of that, which takes dedicated engineering, professional services, and an ecosystem of a size which can actually land those — and if you’re going to go on-premises, it also includes an ecosystem of OEMs, whether you’re looking at [Google Cloud Engine] for infrastructure or something else.”
Ramji continued that the commercial agreement between the four companies specifically enables the engineering to take place to move workloads between different classes of data center environments, or — as he said may eventually become more common — clone those workloads to assist in scalability.
A commercial agreement is specifically necessary, he said, to enable Google Cloud Services — for example, its Bigtable NoSQL database service for massive workloads — to extend across Google’s boundaries into customer premises and customer-owned or -leased environments.
Yet when pressed for clarification on his statement about the “community activity around interoperability,” Ramji agreed that interoperability is effectively the job of the open source community — a job that transforms once the products of those interoperability efforts are incorporated into commercial products.
“Standards are a practice of the community that creates the necessary conditions for interoperability,” Google’s Ramji told The New Stack. “Commercial agreements give you the sufficiency because you actually have to pay people to do all the testing. So it should be enabled, but we’re going to improve it and we’re going to stand behind it, as much as we improve and stand behind Pivotal Cloud Foundry with Google’s site reliability engineering organization.” (Prior to joining Google, Ramji was the first CEO of the Cloud Foundry Foundation.)
Put another way, open source interoperability initiatives are all well and good, up until the point where they need positive cash flow to fund the engineering necessary to make platforms workable at massive scale across multiple clouds. It takes deals of this magnitude to bring Kubernetes out of its infancy, enabling it to become part of products that can be certified, and that IT operators can be professionally trained to support.
Whither Photon
Even vSphere Integrated Containers (VIC) — one of the highlights of last year’s VMworld — is not really enough to bring Kubernetes to fruition at that scale, as VMware’s own cloud-native apps general manager Paul Fazzone admitted during Tuesday’s press conference.
VIC, as Fazzone [pictured above, seated to the right of Pivotal’s James Watters] told analyst Kurt Marko, “is suitable for basic application repackaging use cases, where container orchestration and scheduling is not required. As soon as you get into any more complex use cases or scenarios, PKS includes the Kubernetes scheduler. So you’ve got a full-blown container orchestration and scheduling mechanism, now on top of vSphere, with deep integration into NSX, so that as you’re scheduling those collections of pods that support your specific application, not only is it getting deployed on your vSphere-based infrastructure, but the NSX technology is automatically setting up the network connectivity and the security policy to tie all those microservices together in an enterprise-compliant way.”
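For a sense of what that automation replaces: in plain Kubernetes, tying microservices together securely means hand-writing network policies along the lines of the hypothetical one below, which admits traffic to the api pods only from the web pods on a single port. The NSX integration Fazzone describes is meant to derive equivalent connectivity and security rules as the pods are scheduled.

apiVersion: networking.k8s.io/v1      # NetworkPolicy went stable in Kubernetes 1.7
kind: NetworkPolicy
metadata:
  name: api-allow-web                 # placeholder names, for illustration only
spec:
  podSelector:
    matchLabels:
      app: api                        # the pods being protected
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: web                    # the only pods allowed to call them
    ports:
    - protocol: TCP
      port: 8080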
So does VIC play a role in PKS? And what happens to VMware’s Photon Platform, which as recently as last February appeared to be the company’s next-generation container platform, for enabling NSX-based deployments outside of vSphere?
“As we get into this new age of container-based applications,” Fazzone responded to The New Stack, “vSphere will continue to support a wide range of traditional and more modernized operating systems, like some of these container frameworks. And we’re working with a number of partners on that front.
“vSphere Integrated Containers, specifically, was developed to start to bring native container host and instance functionality to vSphere specifically,” the general manager continued. “So you will see that continue to develop as a core feature set of vSphere itself. It’s included in vSphere… you get that basic capability built-in. [But] it does not include a higher-level container scheduler or orchestration mechanism. That’s where PKS comes in. PKS is a full-blown, Kubernetes-based solution, focused on the Day 1 deployment and development experience, as well as the Day 2 capabilities necessary to operate a Kubernetes application in a production environment.”
In a later interview for an upcoming appearance on The New Stack Makers podcast, Fazzone notably referred to Photon Platform in the past tense. Although he could not provide specifics at this time, he singled out two of its components, the Photon Controller multitenant control plane and the minimal, container-focused Photon OS Linux, as sources of “learnings” that could potentially carry over into what may be a later transition to the Pivotal Cloud Foundry platform.
Rather than a single open-source ecosystem around container orchestration, what we’re seeing develop — and rather suddenly — is a competitive market around scalable application deployments.
Cloud Foundry Foundation, Google Cloud and VMware are sponsors of The New Stack.
Images from Scott Fulton.