Red Hat OpenShift Online Now Supports Docker, Kubernetes

Since Red Hat settled on its “gears” orchestration model for applications a few years back, the enterprise has, to some extent, gravitated toward the Kubernetes model championed by Google and facilitated by Docker.
Last year’s release of Red Hat’s OpenShift 3, the company’s Platform-as-a-Service software, addressed these preferences by adding support for Docker. Since that time, Red Hat has moved an important step further, integrating .NET Core and the JBoss Fuse enterprise service bus into the company’s OpenShift Enterprise 3.1 and OpenShift Dedicated 3.1 platforms.
So OpenShift Online — the all-public option that competes with the likes of Heroku and Salesforce — has had quite a bit of catching up to do.
Thursday, Red Hat takes a big — and necessary — step in that direction with the launch of a developer preview of OpenShift Online 3, bringing the public PaaS more in line with version 3.0 of OpenShift for managed data centers and private deployments.
“What we have done here is, you have the ability within one virtual machine to have multiple customers run their containers in isolation,” said Sathish Balakrishnan, who directs OpenShift Online for Red Hat, in an interview with The New Stack. This isolation takes place, he told us, on a per-project basis — meaning, each customer project is staged in a Kubernetes pod, using a unique network overlay that is isolated from other projects on a multi-tenant SDN.
This way, customers can run multiple projects simultaneously, though Red Hat will maintain the distinctions between them as though they were operating in separate data centers.
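For a concrete, if simplified, picture of what that isolation means at the Kubernetes layer, consider the sketch below. It is an illustration rather than Red Hat’s implementation: it assumes the official kubernetes Python client and a hypothetical project named “shopping-cart,” and it shows only the API-scoping side; the SDN-level network isolation happens beneath it.

```python
# Minimal sketch (assumption, not Red Hat's code): each OpenShift project behaves like
# its own Kubernetes namespace, so a tenant's credentials only see that project's pods.
from kubernetes import client, config

config.load_kube_config()          # credentials issued for a single project/tenant
core = client.CoreV1Api()

# List the pods in the tenant's own project; "shopping-cart" is a hypothetical name.
for pod in core.list_namespaced_pod(namespace="shopping-cart").items:
    print(pod.metadata.name, pod.status.phase)

# An equivalent call against another tenant's namespace would be refused by the
# platform's access controls, and the multi-tenant SDN keeps that project's pod
# network unreachable as well.
```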
Now You See It, Now You Don’t
While version 3 adopts Docker containerization on the back end, the truth is, Red Hat’s customers won’t be dealing directly with the Docker Engine or, for the most part, with the Kubernetes orchestrator. That’s because the platform handles source-to-image (S2I) creation of Dockerfiles, and of the container images built from them, in the background. Integration with the customer’s existing IDE (presumably, Eclipse) will make OpenShift Online appear to be a seamlessly connected component.
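The S2I flow itself stays server-side, but the rough shape of what it automates can be sketched in a few lines. The snippet below is only a loose approximation under stated assumptions: it uses the Docker SDK for Python, a made-up builder image, and a generated Dockerfile to stand in for the build the platform performs on the customer’s behalf.

```python
# Rough approximation of the build step OpenShift automates: combine application
# source with a language builder image to produce a runnable container image.
# The builder image, paths, and use of the Docker SDK are illustrative assumptions.
import docker

GENERATED_DOCKERFILE = """
# Hypothetical Python builder image; the platform supplies its own per language.
FROM example.registry/python-builder:latest
COPY . /opt/app-root/src
RUN pip install -r /opt/app-root/src/requirements.txt
CMD ["python", "/opt/app-root/src/app.py"]
"""

def build_from_source(src_dir: str, tag: str) -> None:
    """Write a generated Dockerfile next to the source, then build an image from it."""
    with open(f"{src_dir}/Dockerfile", "w") as dockerfile:
        dockerfile.write(GENERATED_DOCKERFILE)
    docker_client = docker.from_env()
    image, _build_logs = docker_client.images.build(path=src_dir, tag=tag)
    print("Built image:", image.id)

build_from_source("./my-app", "my-app:latest")
```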
That makes OpenShift Online 3 somewhat different, Balakrishnan said, from using Google Container Engine — which, he argues, gives customers the full Docker and Kubernetes experience in a PaaS. He believes that, by contrast, OpenShift can mask the implementation details of Docker from the particular class of customers who can benefit from a full, public PaaS, through features such as a declarative deployment model.
That said, each application in the development stage runs in an isolated environment with 2 GB of memory, prior to deployment to production, the product director told us. Within that limited space, he admitted, a customer won’t be able — or need to be able — to run clusters or designate projects to particular VMs that are hosting OpenShift. (Each instance of OpenShift Online is a virtual machine.) But a customer can designate the number of container instances to be run simultaneously.
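Designating the number of container instances to run amounts, underneath, to ordinary Kubernetes scaling. The sketch below is an assumption-laden illustration using the kubernetes Python client against a plain Deployment; OpenShift Online’s own objects and quota enforcement are not shown, and the names “my-app” and “shopping-cart” are hypothetical.

```python
# Illustrative sketch: scale a workload to three container instances and cap each
# container's memory so the project stays within its overall allotment.
from kubernetes import client, config

config.load_kube_config()
apps = client.AppsV1Api()

# Run three identical instances of the hypothetical app "my-app".
apps.patch_namespaced_deployment_scale(
    name="my-app",
    namespace="shopping-cart",
    body={"spec": {"replicas": 3}},
)

# Keep each container well under the project's 2 GB development-stage limit.
apps.patch_namespaced_deployment(
    name="my-app",
    namespace="shopping-cart",
    body={"spec": {"template": {"spec": {"containers": [{
        "name": "my-app",
        "resources": {"limits": {"memory": "512Mi"}},
    }]}}}},
)
```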
Unless, of Course, You Still Do
This implementation of OpenShift is being offered now as a developer preview, yet the details of that implementation — such as managing Docker — are hidden from the developer by design. So we asked Balakrishnan: what does Red Hat expect developer customers to provide that the company can use in bringing the final version of the platform to general availability?
“We want to see the use cases that developers are going to throw at it,” he responded. “Is it secure enough for us to put it out there as GA? People have brought a lot of things into security, for both Docker and Kubernetes, so we want to see how they work. The other thing is, how does [OpenShift] work under load? When we have 50 developers pounding containers within the same virtual machine, how does that work?”
And indeed, since some of the implementation details are supposed to be kept out of developers’ sight, Red Hat wants to know whether developers would be troubled by those details anyway, in the platform’s current state. “This is useful not just for OpenShift Online GA,” he added, “but also because we’re trying to introduce a new model, and we will have security learning that we can apply back to OpenShift Enterprise and OpenShift Dedicated. That’s the value proposition we have: It’s a multi-cloud, multi-consumption model.”
Did Convergence Just Happen?
It has already been nearly two and a half years since Red Hat’s Gordon Haff famously speculated about the possibility of the IaaS and PaaS service classes effectively merging — about the boundaries between OpenShift and OpenStack, for example, being blurred or even erased entirely. Does Red Hat’s implementation of Kubernetes as the orchestrator for OpenShift bring us all closer to that point of convergence that Haff didn’t exactly predict, but reasonably anticipated?
“I think that evolution is always happening,” Balakrishnan said. “PaaS is now mutating into containers-as-a-service. It is probably the best time to be in the infrastructure industry, because things are so dynamic, and more things have changed over the last four years than in the last forty years.
“There are still things that OpenStack provides value in — for example, if somebody wants to have a huge VM workload that they can’t containerize, or they’re running packaged software like Siebel and they want to manage it using the cloud, or they just want to provide VMs to people,” Balakrishnan continued. “What if they want a block storage solution; or what if they want to run it within their premises, and they want to manage container workloads and non-container workloads? Different things solve different problems; what we’re trying to do is make OpenShift Online make developers’ lives easier, and remove any barriers they may have to use and deploy a Docker-based container platform.”
Docker and Red Hat are sponsors of The New Stack.
Feature image: A restored Santa Fe “Warbonnet” passenger train engine at the Galveston Railroad Museum, by Scott Fulton.