Red Hat: ‘The Race to Own the Container Market is On’
Red Hat’s OpenShift platform-as-a-service (PaaS) software is now officially a containerization platform, Red Hat president for products and technologies Paul Cormier said during a press conference Tuesday at Red Hat Summit, held this week in San Francisco. And every company that produces a containerization platform is now, by extension, a Red Hat competitor, including Docker Inc., for its pioneering stake in the container industry, and Pivotal, for its leadership position in Cloud Foundry.
“Containers are a Linux operating system,” said Cormier, characterizing other companies in the container space as “custom Linux providers.”
Showing a slide containing the logos of Docker and CoreOS, along with HPE, Microsoft Azure, Amazon AWS, and Google Cloud, Cormier acknowledged that all of these companies are invested in helping enterprises build hybrid environments. “But in order for all of these to do it, they literally have to do a custom Linux distribution,” he said.
“And that’s one of the interesting things right now, is, if we’re not careful, we could have many of these custom Linux distributions flying around. We’re taking a very different tack. Where containers are Linux — they are Linux — you have to be a commercial Linux vendor in order to do a container distribution. We’re building on what we’ve built with RHEL for fourteen-ish years.”
At one point during Cormier’s statement, a slide popped up with this statement in one corner: “The race to own the container market is on.”
OpenShift Closes Ranks
Red Hat’s announcement Tuesday was not so much a technology or product release as it was a new marketing and distribution stance for OpenShift, the company’s PaaS platform — although the company did announce a few important, incremental additions to fill some platform gaps. At any rate, Red Hat now wants you to consider OpenShift a direct competitor to Docker in an emerging container platform space, one into which Cormier also tosses Cloud Foundry.
Last week at DockerCon in Seattle, Docker made its platform consolidation move, tying the Docker Swarm orchestrator more closely to Docker Engine 1.12. Red Hat’s response Tuesday fills a few platform gaps on OpenShift’s side as well. Most importantly, the company is tooling Gluster Storage — its multi-tenant, software-defined storage system — to run in containers and provide persistent storage for them.
This way, databases and data stores that maintain the state of applications (the most obvious example being Web sites) can be relied upon for consistency, even as microservice containers are phased out and applications scaled down. Last year, Docker began partnering with storage providers such as ClusterHQ (note the spelling, with a “C,” not a “G”) to enable persistent storage using Docker’s emerging plug-ins model.
Red Hat made it very clear today that Gluster Storage containers would require no such model. More to the point, Gluster would enable storage admins and developers, according to Red Hat’s press release, “to control both storage and application containers using a single control plane with Kubernetes in Red Hat OpenShift Container Platform.”
“We’re using Kubernetes Host Networking (Docker Host Networking) for our container-converged solution,” Red Hat chief big data architect Stephen Watt told The New Stack. “This means that the storage containers that are part of this solution get the same IP address of the host they are running on, and not an IP from the container overlay network (which in our case is the OpenShift SDN). We’ve found that this is the most performant networking configuration, and there is no impedance preventing the application containers on the SDN from communicating with the volumes provided from the storage containers. No Docker plug-in is required. This is a standard networking feature available in Docker and Kubernetes.”
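The configuration Watt describes can be sketched in two short Kubernetes manifests. This is an illustrative example, not Red Hat’s actual deployment: the pod names, images, and volume names are placeholders, and the `glusterfs` volume type shown is Kubernetes’ standard in-tree plug-in of that era.

```yaml
# Hypothetical storage pod: hostNetwork gives the Gluster container the
# node's own IP address, rather than an IP from the overlay SDN.
apiVersion: v1
kind: Pod
metadata:
  name: gluster-storage
spec:
  hostNetwork: true          # share the host's network namespace and IP
  containers:
  - name: glusterfs
    image: gluster/gluster-centos
---
# Hypothetical application pod on the SDN, mounting a Gluster volume
# through Kubernetes' in-tree glusterfs plug-in -- no Docker plug-in.
apiVersion: v1
kind: Pod
metadata:
  name: web-app
spec:
  containers:
  - name: app
    image: example/web-app   # placeholder image
    volumeMounts:
    - name: data
      mountPath: /var/lib/data
  volumes:
  - name: data
    glusterfs:
      endpoints: glusterfs-cluster   # Endpoints object listing Gluster node IPs
      path: myvol
```

Because the storage pod sits on the host network, application pods on the overlay can reach it at the node’s address, which is the “no impedance” property Watt cites.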
Watt’s comment makes it clear that OpenShift is not reinventing, or even updating, the proverbial wheel here — that it’s utilizing the same Kubernetes orchestration architecture that’s available with the Docker platform, only without the reliance upon Docker’s plug-ins model.
“Storage is completely ripe to be commoditized by open source,” Cormier said, “just as we commoditized the compute layer. But our goal with storage was to really concentrate on the open hybrid cloud use case. Bringing storage now, integrated via containers, into OpenShift, really allows our customers to now build storage as that critical service into their platforms, and manage it all the way from development in the lab, dev, and test [phases] all the way out into the production environment. It’ll give us unified orchestration using Kubernetes for application storage; greater control and ease of use for developers; and because of this convergence, will lower the overall TCO [total cost of ownership] with single-vendor support.”
That last volley was fired across the bow of a certain unnamed competitor, whose persistent storage option was forged by way of partnership.
The Security Salvo
In the security department, the OpenShift Container Platform formally rolls out container scanning capability, born of the partnership Red Hat struck with Black Duck late last year. Specifically, content scanning will become integrated with RHEL 7 Atomic Host, which was originally released last year. This addition makes OpenShift more evenly matched against Docker, whose Docker Security Scanning feature was introduced last October.
“We’re now giving the ability for partners to connect in, and scan the containers for security vulnerabilities,” Cormier said. “As you move to the enterprise, you need things like enterprise-grade security, you need tools, and you need partners. And this is now the area where we’ll bring partners in, to add more value to our container platform, as customers really start to bring this across their enterprise and use it in real production.”
During the Q&A session Tuesday, Red Hat executives were asked to distinguish their Black Duck-powered offering from Docker’s and CoreOS’ container scanning. Red Hat General Manager for RHEL and Containers Lars Herrmann jumped in with a response: “Container scanning comes down to two things. There’s the mechanics of executing a scan — that’s what we enable with an open framework, where lots of solutions can plug in. But then the actual insight is generated by comparing the data to a back-end data source. Black Duck has a very comprehensive universe of insights about all kinds of technologies. We are driving SCAP [Security Content Automation Protocol, pronounced “ess · cahp”] which drives a security perspective on technologies we manage.
“At the end of the day,” Herrmann continued, “we have to get to a very holistic set of integrations with lots of these different data sets, so the customer can drive the right conclusions, solve problems, and also make policy-based decisions. So to my knowledge, nobody else has such a comprehensive view for managing insights-driven container scanning, and tying it into the workflows, as Red Hat does.”
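Herrmann’s two-part distinction — the mechanics of a scan versus the back-end data source that supplies the insight — can be reduced to a small sketch. Everything below is illustrative: the package inventory, the feed format, and the naive version comparison stand in for a real scanner’s image inspection and a real vulnerability database.

```python
# Minimal sketch of the split Herrmann describes: the "mechanics" enumerate
# the packages inside an image; the "insight" comes from comparing that
# inventory to a back-end vulnerability feed. All data here is illustrative.

# Hypothetical inventory produced by scanning an image's package database
image_packages = {"glibc": "2.17-105", "openssl": "1.0.1e-42"}

# Hypothetical back-end feed: package -> (first fixed version, CVE id)
vulnerability_feed = {
    "glibc": ("2.17-106", "CVE-2015-7547"),  # the glibc flaw Cormier cites
}

def scan(packages, feed):
    """Flag any installed package older than the first fixed version."""
    findings = []
    for name, installed in packages.items():
        if name in feed:
            fixed, cve = feed[name]
            # Plain string comparison stands in for real RPM version ordering
            if installed < fixed:
                findings.append((name, installed, cve))
    return findings

print(scan(image_packages, vulnerability_feed))
# → [('glibc', '2.17-105', 'CVE-2015-7547')]
```

The framework/feed split also explains Herrmann’s point about an “open framework, where lots of solutions can plug in”: the `scan` mechanics stay fixed while different vendors supply different feeds.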
Cormier recalled last February’s discovery of a vulnerability in the standard C library glibc, calling it a “gaping security hole” in containers that included the library. “It was in virtually every container on the planet,” said Cormier. “That now had to be identified, pulled back, recompiled, and redeployed.”
Cormier’s implication was that any vulnerability in a commonly used library in the Linux user space could make nearly every container an open target. A scanning tool would be the container platform’s first line of defense in such a situation. But it may not be a long-term solution. Displaying what appeared to be a continuing sense of frustration over how containerization has disrupted the neat, tidy organization of the Linux kernel, Cormier explained to one questioner that containers effectively separated Linux along the tear line between the kernel and the user space. Now the kernel plays the role of container host, perched atop the hardware in a management role, he said; whereas the user space “now gets chunked up” and widely dispersed among various containers.
Thus any single security exploit could see its attack surface magnified to planetary scale, as the company president described it, echoing a Red Hat argument about untrusted container images that dates back a few years, and that crops up every now and again.
The Platform Play
Red Hat is clearly making an effort to peel the Docker community away from Docker, the company, as well as to peel the Cloud Foundry community away from Pivotal, and to separate the Kubernetes “project” from the Kubernetes “product.” Paul Cormier explained more than once Tuesday that, in the case of all three projects, the community should not be mistaken for the commercial provider.
“You have Cloud Foundry, the community, which is controlled by mostly one company,” Cormier told one questioner, “so it’s not exactly a community.
“You also have IBM’s version of Cloud Foundry; you have Pivotal’s version of Cloud Foundry; you have HP’s version of Cloud Foundry,” he continued. “So now you have IBM Linux, Pivotal Linux, and HP Linux all driving those Cloud Foundries underneath, which means you have IBM containers, Pivotal containers, Linux containers, and HP… want me to keep going?”
When Docker Inc. presented its container format to the OCI last year, it no doubt was aware it was replicating the keys to its kingdom for anybody who’d like to build another one. There is no longer just one ecosystem in the containerization space.
Cloud Foundry, Docker, CoreOS, HPE, IBM, and Red Hat are sponsors of The New Stack.
Title image of the demolition of the Gay Street Bridge in Phoenixville, Pennsylvania, by J Clear, licensed under Creative Commons.