Kubernetes

Google Officially Launches Kubernetes 1.0, Promises to ‘Give You Evolution’

21 Jul 2015 6:58pm, by

From developers’ perspectives, it was a milestone that had already been reached: Google had pushed the production-ready version 1.0 code for its Kubernetes orchestration system to GitHub the week before. But it was a milestone that deserved special observance, and during Tuesday’s keynote at O’Reilly’s OSCON conference in Portland, Oregon, Google paused to lay the foundation for its principal marketing message, as it sets out to take command of the most important evolutionary discussion in today’s data center.

Google Director of Cloud Management Greg DeMichillie


“If your developers are spending time thinking about individual machines, you’re operating at too low-level of an abstraction. You want to operate at the level of applications,” said Greg DeMichillie, director of product management for Google Cloud Platform, “and let the system take care of the scheduling of the applications. That means your developers can move faster, you can ship faster, you can iterate faster, your business grows faster. Speed improves everything.”

That Kubernetes was called a “working beta” for this long may seem astonishing for a product that has already found its way deep into the infrastructure of some of the world’s major businesses, including Samsung, eBay, and Red Hat. But it took being embedded that deeply, for that long, working at unprecedented scale, for Kubernetes to get as far along as it has.

Today’s announcement came with no surprises about new features, because Google was literally preaching to the choir: the OSCON audience included many who signaled their direct participation in the Kubernetes project. DeMichillie singled some of them out, including folks from Mesosphere and CoreOS, and gave substantial credit to Docker for moving containers into the public mindset.

“Even we were taken aback, I think, at the pace at which contributors joined and favorited and starred the GitHub project,” he said, “but [also] the rate at which we get pull requests and contributions. We’re running at about 200 pull requests per week, and we have over 400 contributors actually contributing real code into the project.”

We Can Declare This Thing Born Now

For Kubernetes to feel “finished,” DeMichillie remarked, Google’s engineers believed it needed to enable optimization for both stateless and stateful web application hosting. It required giving developers the ability to test applications realistically at large scale, and it needed to be operable at the pace of continuous integration/continuous deployment.

As Google Vice President for Infrastructure Eric Brewer pointed out, achieving this continuous pace while, at the same time, planning for some spare time to be innovative, is a problem that many organizations find themselves facing today … and Brewer did not count out his own.

Google Vice President for Infrastructure Eric Brewer


“It’s fundamental now that you have to have availability,” remarked Brewer. “And the problem with that is really that availability and innovation don’t work well together. If we wanted high availability and we didn’t make any changes, it would be easy. And the problem is, of course, we want to make a lot of changes as quickly as possible.

“So I kinda feel like, the actual role of Kubernetes is not so much in giving you containers,” he continued, “but in giving you evolution. Evolution, in the presence of availability, is the hard part.”

The mindset shift that Brewer discussed is also a substantive cultural shift, which in organizations is always the most difficult type of change to achieve. At the administrative and DevOps level, he repeated — but in a way, also perfected — the argument that administering servers at the level of the x86 hardware platform is not true administration, because no one can see what the servers are truly doing.

The service level is the granularity that administrators should aim for, he continued. Services should have their own names and namespaces, and should identify themselves at a level beyond the physical and virtual machine. He argued (once again) that services and microservices should be given containers that have their own truly exclusive IP addresses, not just in a network but across a very large domain, not discounting the possibility of the Internet as that domain (if IPv6 were employed).

In such a system, Brewer pointed out, common conflicts that arise between virtual machines — for example, contention for access to the same ports — would simply not happen, since exclusive IP addresses would guarantee exclusive ports would always be available. And denying such ports could be a function not of some bolt-on security software, but of the process with which the container is constructed. Don’t want a port to be open to a process? Shut it off for that process’ container and keep it off.
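The per-pod IP model Brewer describes is visible in a minimal Kubernetes manifest. A sketch of what that looks like in the v1 API that shipped with Kubernetes 1.0 (the names and image here are illustrative):

```yaml
# Illustrative pod manifest: every pod gets its own IP address,
# so two pods on the same node can both listen on port 80 without conflict.
apiVersion: v1
kind: Pod
metadata:
  name: web-frontend        # hypothetical name
  namespace: storefront     # services get their own names and namespaces
spec:
  containers:
  - name: nginx
    image: nginx:1.9
    ports:
    - containerPort: 80     # only ports declared and exposed here are reachable;
                            # omitting a port keeps it closed for this container
```

Because each pod is addressed individually, keeping a port closed is a property of how the container is declared, rather than a rule in some external firewall layered on afterward.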

“Get Your War Paint On”

Of course, he noted, these are not functions that are included in Kubernetes now. In order for features at that level to be worked out, open discussions between the system’s most dedicated hyperscale users must be encouraged. Brewer alluded here to the creation of the Cloud Native Computing Foundation, which Google will evidently steward, and which will be overseen by the Linux Foundation.

Only through a forum like this, Brewer believes, will participants be able to work out the details for what he portrayed as desperately needed features for the entire container ecosystem: for instance, parameterization.

“You need to have sharing among containers,” the Google VP said. “We share namespaces, we share file systems. If it’s a hierarchical structure of containers, in the Linux container sense, we can share that as well. But we definitely want some fine-grained, fast sharing … Longer-term, if you have ways to do this now (I think it’s a little hokey, frankly, using environment variables), you need ways to say, ‘This is a reusable building block, and therefore it needs to have some parameterization. I’d like to be able to deploy a container slightly differently, in different contexts.’

“We can do that now,” he continued. “I don’t think it’s where it needs to be. That is definitely something I’d like to see us evolve this coming year.”
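The environment-variable approach Brewer calls “a little hokey” looks roughly like this in a pod spec: the same image is redeployed with different settings by injecting values at deploy time (the image name and variables below are hypothetical):

```yaml
# Illustrative parameterization via environment variables:
# one reusable image, deployed slightly differently in different contexts.
apiVersion: v1
kind: Pod
metadata:
  name: queue-worker        # hypothetical name
spec:
  containers:
  - name: app
    image: example/app:1.0  # hypothetical image
    env:
    - name: QUEUE_NAME      # deploy-time parameter
      value: "orders"
    - name: BATCH_SIZE
      value: "50"
```

This works, but every parameter is an untyped string the application must parse and validate itself, which is part of why Brewer wants first-class parameterization for reusable building blocks.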

In a blog post Tuesday, Mesosphere Senior Research Analyst Derrick Harris voiced his group’s appreciation of Kubernetes’ accomplishments.

“Mesosphere is a big fan of Kubernetes,” Harris wrote. “We have supported the technology since Day One and, earlier this year, worked with Google on an enterprise-grade version of Kubernetes that runs on our data center operating system [DCOS]. All of our work on Kubernetes has been upstreamed into the Kubernetes open source repo. We also have Kubernetes project committers and a dedicated team working on the technology.”

Google’s keynote address Tuesday afternoon with Greg DeMichillie was preceded, perhaps intentionally, by a little conference mood music. Playing over the loudspeakers was a song by the indie pop band “The Mynabirds,” entitled, “Generals.” “We’re burning money in our homes, our books and bones are breaking down so fast,” intones lead singer Laura Berhenn. “And they keep putting all our cash into the next bloodbath, and I tell you I am sick of it. How long we paid our dues?

“You wanna fix it” the song continues prophetically, “or f___ it up? We’re gonna fix it, ‘cause it’s been f_____d.”

CoreOS, Docker and Red Hat are sponsors of The New Stack.

Feature image: “The Mynabirds – Generals.”
