Still up in the air (if you’ll pardon the metaphor) is the matter of whether a preferred public platform for container deployment will emerge, out of the melee that is today’s market. Wednesday, Google took its next step in its bid to produce “the” container ecosystem, lifting the “beta” tag from its Google Container Engine service.
Container Engine is effectively Google’s public productization of Kubernetes as a service. Google gives full credit to Docker for popularizing the concept of containers, but it builds its business model not so much around containers as around orchestration.
Beginning November 1, Container Engine’s pricing model kicks in, with a simple charge of 15¢ per hour for each cluster after the fifth one on an account. The first five clusters remain free, giving developers time to grow accustomed to the system before deploying production-ready applications.
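That pricing lends itself to a quick back-of-the-envelope estimate. The sketch below uses the announced rate and free tier; the 730-hour month and the `monthly_cluster_cost` helper are illustrative assumptions, not part of Google’s published tooling:

```python
HOURLY_RATE = 0.15     # USD per billable cluster-hour, per the announced pricing
FREE_CLUSTERS = 5      # the first five clusters on an account remain free
HOURS_PER_MONTH = 730  # assumed average hours in a month

def monthly_cluster_cost(num_clusters: int) -> float:
    """Estimate a month's Container Engine charge for a given cluster count."""
    billable = max(0, num_clusters - FREE_CLUSTERS)
    return round(billable * HOURLY_RATE * HOURS_PER_MONTH, 2)

print(monthly_cluster_cost(5))  # within the free tier -> 0.0
print(monthly_cluster_cost(8))  # three billable clusters -> 328.5
```

In other words, an account running eight clusters around the clock would pay roughly $330 a month for the three clusters beyond the free tier.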
But let’s be honest: Since the whole point of Kubernetes is to enable multiple applications to be deployed simultaneously and with high scalability, those first five clusters will be absorbed rather quickly, and will remain absorbed even as developers seed their container environments with new test applications. This “first five free” deal is just for getting acquainted.
Container Engine is the container deployment vehicle attached to the Google Developers Console. Through the console, a developer creates a cluster by declaring its specifications, including its preferred resource zone, its machine type (a VM size defined by its number of virtual CPUs), and the number of machines assigned to it. Once the order is given to Google, the cluster creation process takes a few minutes.
For now, clusters on Container Engine are composed of conventional virtual machines. In the modern world, this still seems pretty practical. Cloud infrastructure platforms are, at their core, mainly a pool of virtual machines. But whether this model is analogous to what Google manages for its own internal use is something the company has only glossed over. We do know that the company launches millions of containers per week for its own purposes, but it does not yet appear that those containers are hosted on anything like a private version of Container Engine (which is really a containerized version of Compute Engine).
We do get a peek at the environment Google is ultimately driving toward from Craig McLuckie, the co-creator of Kubernetes and a Google senior product manager. When Kubernetes 1.0 was launched in July, McLuckie painted a picture of a world where this underlying infrastructure of conventional VMs had ceased to exist.
“Dynamic scheduling” — a key component of Google’s definition of “cloud-native computing,” as McLuckie described it at the time — is “being able to actively and dynamically map a piece of code to a physical piece of infrastructure without operator intervention — moving away from a world where you think about physical infrastructure, in terms of machines or virtual machines, to a world where you think about logical computing resources, where I just have a sea of compute, and my containers get scheduled for me by a smart system.
“Because it turns out there are some things that computers do better than people,” McLuckie continued, “and one of those things is really, thinking about, in real time, where your application should be deployed, how many resources your application should have access to, whether your application is healthy or unhealthy, whether some level of remediation needs to happen.”
“And by moving away from a world where it’s an operator-driven paradigm, where you’re creating these static, dead things, to a world where your application is alive, and being actively managed, and dynamically, reactively, observed and watched by a very smart system, [that] changes the game.”
“It creates incredibly higher resource efficiency … but it also creates radically lower operations overhead. It creates these far more agile environments where your developers just focus on developing.”
Today, Google borrows an idea from Hollywood in explaining the relationship between the goal McLuckie laid out in July, and reality. “Inspired by Google’s experience with building and running container-based distributed systems,” the company’s product page reads, “Container Engine re-imagines some of Google’s most powerful internal systems, so that you can develop and manage containers the way Google’s engineers do.”
From the developer’s perspective, the VM-centric world of today’s IaaS and the container-centered world McLuckie described may be just one menu selection removed from one another. Selecting a cluster size based on conventional VMs is neither a difficult process nor a confusing one. In fact, you could argue that if Container Engine were to remove that selection in the future, leaving the deployment console to determine the composition of the infrastructure hosting the containers being deployed, developers might become confused as to what’s going on under the hood.
Google is the steward of the Cloud Native Computing Foundation, which is trying to shape the orchestration landscape in a manner that’s perhaps a bit less dependent on Docker. So it’s up to Google to decide whether to take Container Engine in the direction outlined by McLuckie — a sort of zero-click deployment — or keep the “-as-a-service” model, so that its key infrastructure commodities can be perceived as competitive against Amazon AWS and Microsoft Azure.
Docker is a sponsor of The New Stack.
Feature image: “The Container Ship ‘MSC Chicago’ Entering Savannah Harbor (GA) July 2012” by Ron Cogswell is licensed under CC BY 2.0.