Open Source Summit: Kubernetes as the New Linux

Even as the Linux Foundation celebrates the ongoing success of its open source operating system kernel, the ever-shifting technology landscape may shuffle operating systems aside to put another player at the center of the proverbial new infrastructure stack: the container orchestration engine. For it is the orchestration engine that lets the developer think about the application itself, and not worry about the underlying OS.
“Every market Linux has entered, it has completely dominated,” said Jim Zemlin, executive director of the Linux Foundation, kicking off the Open Source Summit taking place this week in Los Angeles. Supercomputers, embedded systems, mainframes, and cloud computing services are all dominated by Linux, as is the mobile market, where Linux-based Android holds 82 percent of the market.
Some 4,300 developers currently work on the Linux code base, adding 10,000 lines of code a day, modifying another 2,000, and removing 2,500. The code base changes eight times an hour.
This year, for the first time, Linux surpassed Unix in the on-premises enterprise server market and now sits just behind Windows, Zemlin asserted. And as of March, thanks largely to Android, Linux-based clients make up the majority of clients on the Internet, surpassing Windows for the first time.
“You know what that means?” Zemlin asked. That’s right, 2017 is the Year of the Linux Desktop, he said, much to the merriment of the audience.
Software stacks have gotten too complex to be mapped onto a common monolithic namespace @llunved #OSSsummit pic.twitter.com/3A1FyqXym9
— The New Stack (@thenewstack) September 11, 2017
But even as Linux enjoys another metaphorical victory lap, it finds itself increasingly in a new, perhaps diminished role in our emerging cloud-native era.
In one Open Source Summit session, Daniel Riek, a Red Hat senior director of systems design and engineering, explained that the role of the Linux distribution is changing radically, thanks to containers and cloud-native technologies.
Those who have been in the industry for a while have long assumed that, in the modern software stack, the operating system is a key part of the infrastructure. The cloud-native view of computing, however, puts the application at the center, recasting the OS as something apart from the infrastructure. Rather, the purpose of the OS is simply to provide a common runtime for the app.
Linux distributions are largely collections of third-party libraries and packages which, once installed on a PC or server, tended not to get updated. Binary packaging formats such as RPM, together with package managers such as up2date, yum, and apt, provided a way to standardize deployments across servers, though they also led to dependency hell: the version of a library or application a user needed was not the one available in the distro, or a single machine required two conflicting versions of the same library.
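To make that failure mode concrete, here is a toy Go sketch of the constraint behind dependency hell: a classic distro installs exactly one version of each library system-wide, so two applications pinning different versions cannot coexist. The package names and versions are hypothetical, and the resolver is deliberately naive; it is not how RPM, yum, or apt actually resolve dependencies.

    package main

    import "fmt"

    // A toy model of "dependency hell": two applications on the same
    // machine each pin a different version of the same shared library.
    // All names and versions here are hypothetical.
    type pkg struct {
        name     string
        requires map[string]string // library name -> exact version required
    }

    func main() {
        apps := []pkg{
            {name: "webapp", requires: map[string]string{"libssl": "1.0.2"}},
            {name: "mailer", requires: map[string]string{"libssl": "1.1.0"}},
        }

        // The classic distro model: one version of each library,
        // installed system-wide, so conflicting requirements cannot
        // both be satisfied.
        installed := map[string]string{}
        for _, app := range apps {
            for lib, ver := range app.requires {
                if prev, ok := installed[lib]; ok && prev != ver {
                    fmt.Printf("conflict: %s needs %s %s, but %s %s is already installed\n",
                        app.name, lib, ver, lib, prev)
                    continue
                }
                installed[lib] = ver
                fmt.Printf("installed %s %s for %s\n", lib, ver, app.name)
            }
        }
    }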
Virtual machines brought some uniformity to the proceedings, allowing organizations to deploy pre-built images, basically placing one service on each VM. VMs made it easy to share hardware across different services, and a single VM image could be reused across development, testing, and production. This reduced dependency hell somewhat, though it created a new problem of VM sprawl, in which VMs were left running long after they had fulfilled their original purpose.
“You move to centralized control to solve scalability problems that you’d have with so many machines,” Riek said. The problem with this management-at-scale approach is that the difficulty of patching packages has not gone away. In fact, it has gotten worse.
In the meantime, the sheer number of libraries and supporting programs started to overwhelm Linux distributions. A distribution can be composed of tens of thousands of packages, at least some of which are outdated as soon as they ship. Adding to the complexity, developers used their own preferred versions of dependencies, rather than the ones included in the OS distro.
“We see diminishing returns in Linux distros at that level of complexity,” Riek said. “There is no point in trying to repackage 800,000 upstream packages in RPMs. You can hire half of Europe and still wouldn’t catch up. The frozen binary distribution is not scalable at this level of complexity.”
Containers, in particular, helped developers think in terms of application-centric runtimes, offering maximum flexibility with minimal overhead. With containers, users share the kernel but get their own isolated namespaces. “It turns Linux back into a multi-instance, multi-versioning environment, because suddenly I have isolated namespaces where I can install whatever I want, and I can’t break anyone else’s,” he said.
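The isolation Riek describes comes from kernel namespaces, which a program can request directly. Below is a minimal Go sketch, assuming Linux and root privileges, that starts a shell with its own hostname (UTS), process (PID), and mount namespaces while still sharing the host kernel. It illustrates the mechanism only; container runtimes layer image management, cgroups, and networking on top of it.

    package main

    import (
        "os"
        "os/exec"
        "syscall"
    )

    func main() {
        // Run a shell in new UTS, PID, and mount namespaces. The kernel
        // is shared with the host; only the namespaces are new, so a
        // hostname change inside the shell does not affect the host.
        cmd := exec.Command("/bin/sh")
        cmd.Stdin, cmd.Stdout, cmd.Stderr = os.Stdin, os.Stdout, os.Stderr
        cmd.SysProcAttr = &syscall.SysProcAttr{
            Cloneflags: syscall.CLONE_NEWUTS | syscall.CLONE_NEWPID | syscall.CLONE_NEWNS,
        }
        if err := cmd.Run(); err != nil {
            panic(err)
        }
    }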
Today’s containerized applications typically consist of a group of containers, often controlled by a container orchestrator such as Kubernetes or Docker Swarm. A container will only hold a few services, ideally just one. Containers offer binary-level reproducible builds, with built-in automation and transport mechanisms. By default, every application is a distributed application.
“Kubernetes is a missing piece for a standardized orchestration model that turns my Linux machine that runs binaries into a full scale-out cluster environment that can run containerized applications consisting of multiple containers,” Riek said. “I define my application with a service definition. Kubernetes deploys the containers. High availability becomes just an artifact of that environment.”
This approach paves the way for developers to stop thinking about containers at all and to think instead about the applications themselves. They think of “deploying a database service,” and so forth. “The application doesn’t know where it’s running. Kubernetes abstracts that away,” he said.
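To illustrate that declarative model, here is a sketch using client-go, the Go client for the Kubernetes API. It assumes a reachable cluster, a kubeconfig in the default location, and a recent client-go release; the application details (the “demo” name, the nginx image, three replicas) are hypothetical. The code only declares the desired state; Kubernetes decides where the containers run and keeps three replicas alive, which is the high-availability “artifact” Riek mentions.

    package main

    import (
        "context"

        appsv1 "k8s.io/api/apps/v1"
        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func int32Ptr(i int32) *int32 { return &i }

    func main() {
        // Load the kubeconfig from its default location (~/.kube/config).
        config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        clientset, err := kubernetes.NewForConfig(config)
        if err != nil {
            panic(err)
        }

        labels := map[string]string{"app": "demo"} // hypothetical app label
        deployment := &appsv1.Deployment{
            ObjectMeta: metav1.ObjectMeta{Name: "demo"},
            Spec: appsv1.DeploymentSpec{
                Replicas: int32Ptr(3), // desired state: keep 3 copies running
                Selector: &metav1.LabelSelector{MatchLabels: labels},
                Template: corev1.PodTemplateSpec{
                    ObjectMeta: metav1.ObjectMeta{Labels: labels},
                    Spec: corev1.PodSpec{
                        Containers: []corev1.Container{
                            {Name: "web", Image: "nginx:1.13"},
                        },
                    },
                },
            },
        }

        // Submit the declaration; the scheduler picks the nodes.
        _, err = clientset.AppsV1().Deployments("default").
            Create(context.TODO(), deployment, metav1.CreateOptions{})
        if err != nil {
            panic(err)
        }
    }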
The Linux Foundation and Red Hat are sponsors of The New Stack.