SaltStack has come a long way since last year, when I interviewed the team at OSCON. I caught up with them this past week at VMworld and did a quick interview about how SaltStack is playing a major role in Google Kubernetes, the project announced two months ago that brings Google’s expertise in operating applications across thousands of machines to the open source community.
In the interview, SaltStack’s Matt Merservy briefly discusses how the management platform rapidly spins up compute resources in Google Kubernetes, tests apps in a secure environment, and scales them back down as necessary. It’s an important aspect of Kubernetes, but it should be noted that SaltStack is not used to schedule containers. The container scheduling is Google code, said SaltStack Vice President of Marketing Rhett Glauser. “However, outside of Kubernetes, SaltStack was one of the first to provide deployment and management of large-scale Docker and container environments.”
SaltStack is also being used by Microsoft Azure as the core orchestration management engine within Kubernetes, giving it a larger role in this new ecosystem of orchestration technologies. But how “big” a role SaltStack plays in Kubernetes depends on whom you talk to. There will be other ways to provision Kubernetes in the future. For example, the CoreOS instructions for running Kubernetes use a similar method called cloud-config, which CEO Alex Polvi described in an email as much more like “arguments to a running program, versus a script that sculpts a server into something else.”
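To give a flavor of Polvi’s distinction, a cloud-config file reads less like a provisioning script and more like declarative arguments handed to the machine at boot. A minimal sketch (the exact units a Kubernetes node needs are illustrative here, not taken from the CoreOS instructions):

```yaml
#cloud-config

coreos:
  units:
    # Declare that the Docker daemon should be running at boot;
    # cloud-config states desired units rather than scripting steps.
    - name: docker.service
      command: start
    # Illustrative unit for a Kubernetes node agent.
    - name: kubelet.service
      command: start
```

Nothing here “sculpts” an existing server; the file simply describes what the freshly booted machine should run.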
The CoreOS approach points to a larger story about the changing landscape for management platforms and the break from existing configuration management environments. SaltStack reflects this shift, as do other emerging services such as Ansible. Docker and platforms such as Kubernetes are catalysts for the shift, changing what is required to stitch cloud services together into what Wired’s Cade Metz calls one giant computer. That’s similar to the description we hear from people like Paul Maritz, Pivotal’s CEO, who led VMware for many years before Pivotal was spun out of EMC. Now Pivotal and VMware are working with Google Kubernetes and a host of others to make this giant computer a reality. It’s happening, but not necessarily in the ways we expect. Client/server systems needed configuration management, a need that gave credence to Puppet and Chef as well as newer services such as SaltStack.
But with the rise of distributed systems comes the need to rethink these methods and search for alternatives that can manage these big computers more efficiently. Google, for example, has Borg, but that is an internal, Google-scale system, and it became apparent that better cluster management was needed. So Google created Kubernetes as an open source project that moves from static to dynamic management: clusters are organized into what the project calls “pods,” groups of containers that are scheduled together and share networking.
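As a concrete illustration, a pod is just a small declaration of containers that live and die together. A minimal manifest in the current Kubernetes API format (early Kubernetes used different API versions; the names below are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: example-pod          # illustrative name
spec:
  containers:
    # These containers are co-scheduled onto one machine and share
    # the pod's network identity.
    - name: web
      image: nginx
    - name: log-sidecar
      image: busybox
```

The scheduler, not a configuration script, decides where this pod runs, which is exactly the static-to-dynamic shift described above.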
Now with this new way of managing “big computers” come differences in how the applications get managed. It’s no longer so easy to go through the process of configuring the code to make it work. It’s far easier to just think of it all as a data structure that can be removed and replaced with a new one. That surfaces all kinds of questions about security (think attestation), networking (think Open vSwitch, etc.), storage (think software-defined) and new ways of building loosely coupled systems (think microservices).
The question now is what will become of these new services as the shift to more ephemeral environments becomes popular. For example, the concept of immutable infrastructure will change how we think about configuration. From a few conversations, and from reading posts by smart people like Chad Fowler, it’s apparent that it simply takes too much time to manage the numerous issues that come with maintaining all the different parts of an instance. Software upgrades should not have to be a concern; things just get replaced. And if the data structure can just be replaced, as Fowler stated on InfoQ, then many of the configuration adjustments become unnecessary. This parallels the rise of functional programming languages, which provide the means for creating immutable programs with single-assignment variables, writes Fowler.
As Fowler puts it: “So why not take this approach (where possible) with infrastructure? If you absolutely know a system has been created via automation and has never changed since the moment of creation, most of the problems I describe above disappear. Need to upgrade? No problem. Build a new, upgraded system and throw the old one away. New app revision? Same thing. Build a server (or image) with a new revision and throw away the old ones.”
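That replace-don’t-mutate idea can be sketched in miniature. The toy Python model below (all names are hypothetical; this is not any real SaltStack or Kubernetes API) treats a “server” as immutable data: an upgrade builds replacements and discards the old instances rather than reconfiguring them in place, mirroring the single-assignment style Fowler points to.

```python
from dataclasses import dataclass, replace

# A "server" modeled as immutable data; frozen=True means any attempt
# to mutate a field raises FrozenInstanceError.
@dataclass(frozen=True)
class Server:
    image: str
    app_version: str

def upgrade(fleet, new_version):
    # Never patch a running server: build new instances with the new
    # version and let the old ones be thrown away.
    return [replace(s, app_version=new_version) for s in fleet]

fleet = [Server(image="base-2014.08", app_version="1.0")]
fleet = upgrade(fleet, "1.1")
print(fleet[0].app_version)  # → 1.1
```

The point of the sketch is that “configuration drift” cannot exist here: every server is exactly what its constructor said it was.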
The need for more automated, immutable infrastructure will change the way developers build and deploy apps. For now, SaltStack is gaining attention as a chosen way to boot and configure distributed systems. But the future may be something that is far more automated, more Borg-like and comparable to automatic, programmable robotic clusters.