Cloud Services / Kubernetes

Red Hat OpenShift Part 2: From Cartridges to Kubernetes

27 Aug 2015 10:44am

In part one of our profile about OpenShift, we looked at how the platform has adopted Docker. In part two, we explore the role Kubernetes plays and what that means for the OpenShift platform.

The new version 3 of OpenShift is Red Hat’s implementation of Docker. It is not a Docker alternative, nor is it a Docker “flavor”; it is Docker. Whatever value the new OpenShift adds to the system is, even in Red Hat’s own explanations and demos to developers, sprinkled on top.

As we saw in part one of this story, under OpenShift’s new system, Docker images (and later, presumably, OCI images, since Red Hat is a founding member of the Open Container Initiative) replace cartridges as OpenShift’s packaging mechanism. The Kubernetes master replaces the OpenShift broker, and Docker containers replace OpenShift gears. Containers that need to share the same file system are grouped together, as Kubernetes would have it, in pods.

CoreOS’ etcd takes the place of MongoDB as the provider of the key/value store for service discovery. And Red Hat Enterprise Linux 6 is replaced by RHEL 7, accompanied by Atomic, Red Hat’s minimal edition of RHEL built for hosting containers.
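The role etcd plays here can be sketched in miniature: a flat key/value namespace in which services register their endpoints under well-known keys, and clients look them up. The class and key layout below are purely illustrative stand-ins for etcd’s actual HTTP API:

```python
# Toy stand-in for etcd-style service discovery: components register
# endpoints under well-known keys; clients discover them by prefix.
# (Illustrative only; a real deployment talks to etcd's HTTP API.)

class KeyValueStore:
    """Minimal sketch of a hierarchical key/value namespace."""
    def __init__(self):
        self._data = {}

    def set(self, key, value):
        self._data[key] = value

    def get(self, key):
        return self._data.get(key)

    def ls(self, prefix):
        """List keys under a directory prefix, as `etcdctl ls` would."""
        return sorted(k for k in self._data
                      if k.startswith(prefix.rstrip('/') + '/'))

# Two replicas of a service register themselves; a client lists them.
store = KeyValueStore()
store.set('/services/frontend/10.1.2.3', '8080')
store.set('/services/frontend/10.1.2.4', '8080')
print(store.ls('/services/frontend'))
```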

Of course, all this means that Docker’s notion of container networking completely supersedes the system OpenShift had developed for routing nodes.

Replacing the Transmission

As Red Hat middleware specialist Veer Muchandi explained in a video produced last April, some features you used to find in an OpenShift cartridge will now be published as “containerized services”: for instance, JBoss xPaaS, representing Red Hat’s branded middleware.

“You also get a Docker Hub where your Docker images would reside [and] would be registered,” said Muchandi, “and a marketplace through which other vendors can provide you the Docker images to use. Cartridges are going to be replaced with those Docker images.”

At the top of the new stack (to coin a phrase), the “user-experience layer” will demonstrate OpenShift’s robustness to developers, he continued, by means of a responsive command-line interface and an alternate web console. These tools will connect with the Kubernetes master by means of a RESTful API. OpenShift v3 should offer what Muchandi described as “better services with a better developer experience.”

Joe Fernandes, Red Hat’s director of product management, emphatically declares for The New Stack that OpenShift v3 will run the same code as v2. And while the deployment and distribution mechanisms have been replaced, all that really changes are the steps developers and admins take to deploy code. Even those steps will be analogous between the two versions.

Version 3 includes a feature called “Source-to-Image,” which Fernandes says performs the same basic functions as cartridges in v2, or buildpacks in Heroku. “A developer just pushes code via Git, and then the platform takes that code and identifies what’s needed to run it — if it’s Java code, maybe it needs JBoss or Tomcat. Then it combines those things, builds a new image, and then runs it on the platform. Whereas the previous iteration of OpenShift did that with our open container model, and Heroku does it with their ‘Droplets’ and ‘Dynos’ and stuff, we now have that model for Docker.”
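The detection step Fernandes describes can be sketched roughly as follows. The marker files and builder-image names here are illustrative assumptions, not OpenShift’s actual detection rules:

```python
# Hedged sketch of the Source-to-Image idea: inspect a pushed source
# tree, infer the runtime it needs, and pick a builder image to combine
# with the code. Marker files and image names are assumptions.

BUILDERS = {
    'pom.xml':          'jboss-eap',   # Java / Maven project
    'requirements.txt': 'python',
    'package.json':     'nodejs',
    'Gemfile':          'ruby',
}

def pick_builder(source_files):
    """Return a builder image name for the first recognized marker file."""
    for marker, image in BUILDERS.items():
        if marker in source_files:
            return image
    raise ValueError('no recognized runtime in source tree')

# A Java repo pushed via Git would be paired with a JBoss builder image,
# combined into a new runnable image, and deployed on the platform.
print(pick_builder(['pom.xml', 'src/Main.java']))  # jboss-eap
```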

In a way, this model supersedes Docker’s built-in deployment model for its own containers, making it actually more like OpenShift v2 — but only if developers choose to use it that way. “The idea is to just let developers do what they do,” says Fernandes.

“If they push to a source code repository, and then build, we’ll just watch that repository and, when we see code, we’ll grab it, build it, and then deploy something.”
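That watch-and-deploy cycle reduces to a simple loop. The sketch below simulates it with hypothetical callbacks; OpenShift itself reacts to Git webhooks and build triggers rather than polling:

```python
# Sketch of the watch-build-deploy loop Fernandes describes: poll a
# repository for new commits, and when one appears, build and deploy.
# Function names and the polling approach are illustrative assumptions.

def watch_and_deploy(get_head, build, deploy, iterations):
    """Rebuild and redeploy whenever the repository head commit changes."""
    last_seen = None
    for _ in range(iterations):
        head = get_head()
        if head != last_seen:
            image = build(head)
            deploy(image)
            last_seen = head

# Simulated repository that receives one new commit mid-run:
commits = iter(['abc123', 'abc123', 'def456'])
deployed = []
watch_and_deploy(
    get_head=lambda: next(commits),
    build=lambda c: f'image-{c}',
    deploy=deployed.append,
    iterations=3,
)
print(deployed)  # ['image-abc123', 'image-def456']
```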

“I think it’s a huge deal for developers, because it allows them to move from essentially what I consider to be a toy, which is just playing with Docker on your laptop, into something that you can actually run at scale.”

Up Scale

What distinguished the old OpenShift from Cloud Foundry was its unique, simply defined mechanical approach to representing the components of a distributed application. What will distinguish OpenShift version 3 from its similarly updated competition, according to Red Hat, is how it differentiates itself atop an approach to distributed functionality that is now standardized, no longer unique, and certainly not as simple.

Throughout this shift to the new OpenShift, there is a message Red Hat needs to deliver, but would perhaps prefer not to deliver so clearly: Programs distributed on the new PaaS probably need to be designed differently from those for the old PaaS. Red Hat is reluctant to state outright that applications for OpenShift should change, probably for fear of appearing to forsake backward compatibility. Compatibility is not at issue here, but methodology certainly is.

“We have found that by decomposing the application into more components (such as container, services, routes, builds, secrets, storage, etc.) we are able to offer users more flexibility and the ability to create better applications,” wrote Red Hat product manager Mike Barrett in a post for the OpenShift blog last June.

What you can read into that: applications designed for OpenShift v3 will follow a different architectural metaphor from those designed for v2. A completely different mechanism is now responsible for managing container images, the replication of those containers across the system, the deployment of individual containers, and the lifecycle management of containers once they’ve started.

You see, Kubernetes manages, among other things, how containers network with each other. If the applications stored within gears or containers never communicated with one another, they’d never have to change. But the whole point of leveraging Kubernetes in the first place is to enable a completely distributed architecture that wasn’t really possible with the old system. There is a new and fully transparent workflow, which Kubernetes makes available to applications by means of environment variables. OpenShift v3 apps can see what’s going on behind the scenes and make adjustments and adaptations when necessary, adaptations that may not have been feasible in v2.
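To illustrate that environment-variable mechanism: Kubernetes exposes each service to a pod’s containers as variables named `<NAME>_SERVICE_HOST` and `<NAME>_SERVICE_PORT`. Below is a minimal sketch of an application reading them; the `redis-master` service name is a hypothetical example:

```python
import os

# Kubernetes injects <NAME>_SERVICE_HOST and <NAME>_SERVICE_PORT
# variables for each service; an app can read them to find its peers.

def service_endpoint(name, env=os.environ):
    """Look up a service's host:port from Kubernetes-style variables."""
    prefix = name.upper().replace('-', '_')
    host = env[f'{prefix}_SERVICE_HOST']
    port = env[f'{prefix}_SERVICE_PORT']
    return f'{host}:{port}'

# Simulating the environment Kubernetes would inject for a
# hypothetical 'redis-master' service:
fake_env = {'REDIS_MASTER_SERVICE_HOST': '10.0.0.11',
            'REDIS_MASTER_SERVICE_PORT': '6379'}
print(service_endpoint('redis-master', fake_env))  # 10.0.0.11:6379
```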

As I wrote in part one of our profile about OpenShift:

Put in a more metaphorical manner, the whole idea of being a passenger in a Ford Thunderbird changes radically from the ’57 model to the ’61 model, with the addition of a back seat. You don’t have to go back to driving school when you exchange your two-seater automobile for a four-seater, and you don’t have to learn how to develop all over again when you move from OpenShift v2 to v3. But once you take on extra passengers, you may find you’re driving places and making stops you wouldn’t have before.

But here’s the risk Red Hat takes in making this bold a shift to its PaaS architecture: Old platforms never die. (Just look at how many major retailers still use Windows 2000.) The architectural mindset shift between v2 and v3 of OpenShift is so great that it’s unlikely everyone will make it. By some analysts’ estimates, OpenShift has a slight market share lead over Cloud Foundry (which has made a similar shift to Docker and Kubernetes). If that’s truly the case, Red Hat is moving away from an architectural direction that had already attained significant momentum.

Organizations don’t just turn left when their vendors tell them, “Turn left.” We can expect some of the pre-existing OpenShift customer base to stick with version 2. Just how brilliant this new metaphor for distributed computing truly is may be measured in two years’ time, once we’ve taken the measure of how many developers are still busy packaging their gears in cartridges and pushing them to brokers.

CoreOS, Docker and Red Hat are sponsors of The New Stack.

Feature image: “The Falkirk wheel, Falkirk, Scotland, United Kingdom” by Giuseppe Milo is licensed under CC BY 2.0.
