CoreOS, Says Red Hat, Will Help Introduce OpenShift to Operators

There are now two concurrent definitions of the term “operators” in the context of Red Hat’s container infrastructure: Obviously, there are the people who specialize in the deployment and maintenance of software — the “Ops” half of the portmanteau “DevOps.” Then there is the emerging concept of an “operator” for Kubernetes: a component that automates the deployment of an application being managed by, or connected to, a Kubernetes cluster.
The latter is a type of automation for applications or other components, such as a Cassandra database, that the Kubernetes orchestrator would seek to manage — a kind of “instruction manual” that opens up those components to instruction from, for example, the kubectl command line tool. It’s a system that CoreOS released in late 2016, and which it was preparing to formalize as the Operator Framework until Red Hat’s acquisition of CoreOS last February.
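To make the pattern concrete, here is a minimal sketch in Go of the reconcile loop at the heart of an operator. It watches a hypothetical “EtcdCluster” custom resource (the group, version, and field names here are illustrative, not CoreOS’ actual Operator Framework code) and reacts whenever a user declares or changes one:

```go
// A minimal sketch of the operator pattern, watching a hypothetical
// "EtcdCluster" custom resource. This is not CoreOS' Operator Framework
// code; the group, version, and field names are illustrative.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
	"k8s.io/apimachinery/pkg/runtime/schema"
	"k8s.io/client-go/dynamic"
	"k8s.io/client-go/rest"
)

func main() {
	// Operators typically run inside the cluster they manage, so the
	// in-cluster config carries the service-account credentials.
	cfg, err := rest.InClusterConfig()
	if err != nil {
		panic(err)
	}
	client, err := dynamic.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// The custom resource teaches Kubernetes a new noun; this GVR is
	// hypothetical and stands in for whatever resource the operator defines.
	gvr := schema.GroupVersionResource{
		Group:    "example.com",
		Version:  "v1",
		Resource: "etcdclusters",
	}

	// Watch the custom resources and reconcile on every change: this loop
	// is the "instruction manual" that encodes operational knowledge.
	w, err := client.Resource(gvr).Namespace("default").Watch(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for event := range w.ResultChan() {
		cluster := event.Object.(*unstructured.Unstructured)
		size, _, _ := unstructured.NestedInt64(cluster.Object, "spec", "size")
		// A real operator would now create, scale, or repair the pods,
		// handle backups and upgrades, and so on; here we just report
		// the desired state the user declared.
		fmt.Printf("reconcile %s: desired size %d\n", cluster.GetName(), size)
	}
}
```

Everything the operator “knows” about running the application (how to scale it, back it up, upgrade it) lives in what that loop does in response to each event.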
The operator is one outgrowth of CoreOS’ automation work on Tectonic. Another was an under-the-hood, over-the-air (OTA) automation system capable of zero-click live updates to the Kubernetes core of its commercial Tectonic platform. That’s the capability that CoreOS CTO Brandon Philips boasted, back in 2015, set the entire CoreOS platform apart from OpenShift and the rest of its competition.
Over the Air
Now that the CoreOS development team has been welded into the Red Hat organizational chart, as Red Hat OpenShift product strategy director Brian Gracely told The New Stack, it’s Philips who officially wins the arm-wrestling match. The OTA updates from Tectonic will replace the Ansible-based system in future editions of OpenShift.
“First and foremost, the Tectonic console was built to be Kubernetes-aware,” admitted Gracely. “Whereas, when we adapted other tools in the past, they would understand certain Kubernetes concepts, but sometimes it wasn’t as granular as maybe you needed. ‘Is my cluster up and running?’ Yes. Second-, third-level health checks of what’s really going on in that cluster, how well are the nodes being managed, those types of things — you’re going to get more granularity out of what the Tectonic console provides you.
“The CoreOS and Tectonic team were much further along,” Gracely continued, “in using Prometheus monitoring technology than we were at Red Hat. Several of the original Prometheus team members had joined CoreOS. They had made that a first-class citizen for how they did monitoring of the platform. We were evolving our monitoring towards Prometheus within OpenShift, so the level of experience and the depth at which they were collecting stats, building graphs, presenting the information was just further along, more mature than we were in the OpenShift world.”
In a press conference earlier this week at Red Hat Summit in San Francisco, Red Hat’s vice president and general manager for OpenShift, Ashesh Badani, told reporters that OpenShift users should expect to see Tectonic’s OTA system appearing in their platform fairly soon.
“Customers… liked the fact that Tectonic was focused on these over-the-air upgrades, day-2 operations management, and some of the technologies around monitoring and mirroring,” said Badani. “We’re taking all of that, that Tectonic had, and converging that into OpenShift into a converged platform, over the next six months.”
The Operator Takes Over
The operator technology remains under active development, as former CoreOS CTO — now Red Hat engineer — Brandon Philips said. This will enable OpenShift, going forward, to access components that may be orchestrated by Kubernetes but not directly managed by it, and the difference is less than subtle once Philips explains what’s going on.
“Kubernetes is great, because you can deploy anything on top of it,” said Philips. “But the abstraction is like [with] a VM: Who knows what’s inside that container? We’re trying to click it up to another level of abstraction. You get the visibility — it’s not just like, ‘Oh, here’s a pod, and it happens to contain etcd.’ It’s, ‘Here’s an etcd resource like an API, and here are the pods related to that resource.’”
With Amazon’s Relational Database Service (RDS), as one example, a user asking the system to spin up a Postgres database will trigger the instantiation of a VM behind the scenes, noted Philips. The Postgres database’s functionality becomes exposed through RDS’ native API. Operators will work similarly through OpenShift, enabling the functionality from the native API of the outside component to be accessible through Kubernetes’ control functions, including the command line.
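In Go terms, the resource Philips describes might look something like the following sketch. The schema is hypothetical, not the real etcd-operator API: the user submits only the spec, and the operator provisions pods behind the scenes and reports back through the status, much as RDS provisions a VM behind its database API.

```go
// Hypothetical Go types for the custom resource sketched earlier. The
// field names are illustrative, not the actual etcd-operator schema.
package v1

import metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"

// EtcdClusterSpec is what a user declares (e.g. via kubectl apply):
// desired state only, with no mention of pods or VMs.
type EtcdClusterSpec struct {
	Size    int    `json:"size"`    // desired number of etcd members
	Version string `json:"version"` // etcd release to run
}

// EtcdClusterStatus is filled in by the operator as it reconciles.
type EtcdClusterStatus struct {
	ReadyMembers int    `json:"readyMembers"`
	Phase        string `json:"phase"` // e.g. Creating, Running, Upgrading
}

// EtcdCluster is the resource Philips describes: not "a pod that happens
// to contain etcd," but an etcd-level API object Kubernetes can manage.
type EtcdCluster struct {
	metav1.TypeMeta   `json:",inline"`
	metav1.ObjectMeta `json:"metadata,omitempty"`
	Spec   EtcdClusterSpec   `json:"spec"`
	Status EtcdClusterStatus `json:"status,omitempty"`
}
```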
“If you look at it, Ansible’s interaction with OpenShift is one of hundreds or thousands of things that Ansible does really well for automating,” remarked Red Hat’s Gracely. “In many cases, it’s automating the underlying network, it’s automating storage. And we’ve integrated through the Service Broker the ability to call any Ansible playbook. So by no means does Ansible go away in the overall story. There’s just places where Ansible wasn’t the most efficient technology to use in the pure sense of Kubernetes.”
With Kubernetes’ new deployment model, Gracely added, the ability to spin up new nodes for itself — which the orchestrator originally lacked — has been added natively. So some of the functions Ansible had been providing through OpenShift had already become redundant anyway. Thus Ansible will be stepping aside specifically for functions that either the updated Kubernetes now performs better on its own, or that the Operator Framework can provide more directly. Conceivably, a revised Ansible playbook could trigger operators at a higher level, although the means for doing so may not yet be determined.
Alter Ego
Some of Tectonic’s more operator-centric personality (the human kind of operator, as in “DevOps”) will reveal itself in forthcoming editions of the OpenShift console, with dashboard and console elements tailored to the role of the user — specifically, whether she’s a developer or an operator.
“At the most basic level, OpenShift had primarily focused the things you would see in the platform on applications-centric things,” said Red Hat’s Gracely. “The visibility was much more about the projects and applications you were working on, how you would deploy them, how you would connect services together, and so forth. And the things that were more operator-centric — managing the platform, managing clusters — were deferred to other tools, like CloudForms and some others. But they weren’t native, first-class parts of OpenShift.”
That seemed okay at first, he said, since operators (the human ones) tend to choose their own tools for themselves. Meanwhile, Tectonic was making inroads with Ops, and it appeared they were choosing Tectonic more and more for cluster management and for highly customized deployments.
“What we found — and essentially we found this after the acquisition,” Gracely remarked, “was, we would go talk to CoreOS and OpenShift customers, and there was always a sense of, ‘I wish we had what the other one had.’ So one of the very first things we did was say, okay, instead of forcing people to use a bunch of external tools, let’s take what CoreOS had in Tectonic — very cluster-level, operations-level capabilities — and embed those into OpenShift.”
Going forward, when you log in as a developer, OpenShift will look fairly familiar. But as a cluster administrator, you should expect to see the data being pumped out by Prometheus, and the latest events being generated for logs. “Our belief,” he said, “is that we’re going to bring together the best viewpoint from an operator’s perspective, and the best viewpoint from a developer’s perspective, all under one set of tools.”
Splitting the Atom
The container-based Linux emerging from the combination of CoreOS — the firm that arguably popularized the concept — and Red Hat’s Project Atomic will be called Red Hat CoreOS. This is where one of the brands that built the containerization industry will live on. But there was some concern, from the moment the acquisition was announced, about whose code would remain in the kernel. It’s not just a performance question, but one that touches automation and compliance as well.
“We’re introducing a new Linux called Red Hat CoreOS which is similar to CoreOS Container Linux,” stated Brandon Philips. “It has automated operations all the way down to the foundation. So when you’re running with OpenShift and Red Hat CoreOS, we’ll essentially manage the upgrades all the way to the OS for you, whether you’re on-prem or on-cloud.”
The key word in Philips’ response was “similar.” At its core, it wasn’t really feasible for Container Linux to be adopted entirely, stated Brian Gracely.
“The capabilities of the original CoreOS were built on a Gentoo kernel,” he told us, “which had come from the ChromeOS technology. The more important part of what they built was a very container-centric, very super-small footprint with the over-the-air, automated updates. That technology was really groundbreaking.”
Red Hat’s Atomic team effectively slimmed down their kernel’s own footprint, and adopted the CoreOS methodology of integrating added-on functionality through additional containers, Gracely added. But Atomic was built on the Fedora/RHEL kernel, which ISVs have already certified for use with their applications. Red Hat simply could not swap out one OS kernel for another, and have their certifications mean the same thing. It would be like signing a treaty, declaring the treaty invalid, and then asserting the treaty could be renegotiated with all the other parties. In other words, ridiculous.
Atomic enables organizations, said Gracely, to move their existing Linux applications to the containerized model without having to re-architect them or re-certify them. “So what we’re doing with Red Hat CoreOS,” he went on, “is taking the kernel from Atomic — which gives us immediate compatibility for everybody who’s ever been a Red Hat ISV — and embedding all of the CoreOS over-the-air management updates. We’re taking the operational model that’s built around CoreOS, and the compatibility of the kernel with Atomic — and that’s what you’ll see in Red Hat CoreOS going forward. Our ISV partners, and our customers who have certified against that, don’t have to worry about recertification.”
Although the key bounty in Red Hat’s acquisition of CoreOS may have been its talented engineers, as I’ve said before, easily the second item on Red Hat’s list was its automation. Attendees of Red Hat Summit were told to expect the first CoreOS-merged releases of OpenShift this July.
Red Hat is a sponsor of The New Stack.