Red Hat OpenShift Presses Outward to the Edge, Enhances Developer Experience
Honeycomb sponsored The New Stack’s coverage of KubeCon+CloudNativeCon North America 2020.
While Red Hat officially launched OpenShift 4.6 in late October, the company has introduced a number of new features around its Kubernetes offering just in time for this week's KubeCon + CloudNativeCon North America. These include updates around serverless, its Kubernetes-native Java stack Quarkus, long-term support, and the ability to run remote workloads without requiring Kubernetes.
In addition to all of this, explained Brian Gracely, a senior director of product strategy at Red Hat, the company has expanded where and on what hardware its users can run OpenShift.
“We’ve always said OpenShift is a platform that you can run anywhere, private cloud, public cloud, but even within those two buckets there’s a lot of sub buckets,” said Gracely, explaining that OpenShift also now includes support for IBM Power and Z platforms, as well as both AWS and Azure Government clouds.
Red Hat will also be introducing extended lifecycle support for OpenShift, stretching the support window from nine months to 18 months. The change eases the pace for companies that struggle to keep up with the Kubernetes release cycle, which delivers a new version every three months. "We've been hearing from more and more customers that say, 'We appreciate all the innovation, but keeping up with it is sometimes challenging,'" said Gracely.
In terms of serverless, users can now select from more than 100 event sources with the addition of Apache Camel-K, which brings new event sources such as AWS SQS, AWS Kinesis, Salesforce, Slack, Telegram and others.
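As a sketch of how such an event source is wired up, the Camel-K project exposes its connectors as Kamelets that can be bound to a Knative sink; the binding below assumes Camel-K and its Kamelet catalog are installed, and the queue name, credentials, and service name are placeholders:

```yaml
# Hypothetical example: feed an AWS SQS queue into a Knative service
# via a Camel-K KameletBinding (names and credentials are placeholders).
apiVersion: camel.apache.org/v1alpha1
kind: KameletBinding
metadata:
  name: sqs-to-service
spec:
  source:
    ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1alpha1
      name: aws-sqs-source
    properties:
      queueNameOrArn: my-queue        # placeholder queue
      accessKey: ${AWS_ACCESS_KEY}    # supply via a Secret in practice
      secretKey: ${AWS_SECRET_KEY}
      region: us-east-1
  sink:
    ref:
      kind: Service
      apiVersion: serving.knative.dev/v1
      name: event-display             # placeholder Knative service
```

The same binding pattern applies to the other sources mentioned, such as Salesforce or Slack, by swapping in the corresponding Kamelet.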
Red Hat has also made Quarkus available to all OpenShift customers, and Gracely said that the framework, which has been available for just over a year now, is really beginning to pay dividends.
“The problem with Java is it tends to be fairly heavyweight. There was always a little bit of a mismatch between things like Java EE, and to a certain extent Spring Boot: putting it in a container, you tended to have very heavy containers. Quarkus was really targeted at reducing the memory footprint, the speed at which it boots and so forth, and trying to get it so that Java applications look more like your other container applications,” said Gracely. “They’re seeing far less memory usage, startup times to be almost immediate. An equivalency, I would say, is like when people first started using server virtualization, and they were like, ‘Oh, now that I understand what this does, I just immediately save a bunch of money.’ That’s sort of the same thing we see with this.”
Meanwhile, Red Hat will continue to push its focus on operating Kubernetes on the edge, as well as looking past Kubernetes to potentially simpler alternatives. After the company launched the ability to run a three-node OpenShift at the edge, said Gracely, customers began asking if this could be pushed even smaller to one node, and he said they are looking at the possibility. Beyond this, however, OpenShift 4.6 includes the ability to launch remote worker nodes, which extend processing power to space-constrained environments and make it possible to scale remotely while maintaining centralized operations and management.
“Usually with Kubernetes, you would deploy the control plane, the master nodes, and the worker nodes in the same location. We started seeing a bunch of use cases where people are saying it’s either cost-prohibitive to deploy both of them or, ‘I have a limited footprint available out there, can I just put the workers out in a remote location?’ so we’ve allowed people to extend out those edges to be just worker nodes, just application nodes,” explained Gracely.
In essence, this allows OpenShift to treat remote nodes as part of the same cluster, which Gracely said required some tuning to ensure that Kubernetes wouldn't unschedule a remote resource simply because it responds more slowly than the locally attached resources Kubernetes normally expects.
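One way stock Kubernetes exposes this kind of tuning is through tolerations: by default, pods are evicted roughly five minutes after their node is marked unreachable, and that window can be lengthened for workloads on high-latency remote workers. This is an illustrative fragment, not Red Hat's specific implementation; the pod name and image are placeholders:

```yaml
# Illustrative pod spec fragment: lengthen the eviction window so a pod on a
# high-latency remote worker isn't rescheduled after brief connectivity blips.
apiVersion: v1
kind: Pod
metadata:
  name: edge-workload          # placeholder name
spec:
  containers:
  - name: app
    image: registry.example.com/edge-app:latest   # placeholder image
  tolerations:
  - key: node.kubernetes.io/unreachable
    operator: Exists
    effect: NoExecute
    tolerationSeconds: 600     # default is 300; raise for flaky WAN links
  - key: node.kubernetes.io/not-ready
    operator: Exists
    effect: NoExecute
    tolerationSeconds: 600
```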
Finally, Gracely said that the company has recently addressed the idea of running containerized workloads at the edge with either Red Hat OpenShift or a combination of Red Hat Enterprise Linux and Podman, depending on the application demands of the edge use case.
“How many containers are you going to run out there? Do you really need Kubernetes? The reason that conversation comes up is, you start asking questions, like ‘How often are you going to go touch that edge device?’ Kubernetes has this frequency of update that’s pretty quick, it’s faster than you’re probably used to. Is that really something that you want to deal with?” said Gracely. “What if just the native operating system, essentially RHEL, did the things you wanted, security was natively built in, and it was designed more for those single-node, or maybe even two-node, environments?”
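Where a full Kubernetes control plane is overkill, the RHEL-plus-Podman route Gracely describes can be as simple as a couple of commands; the image name, ports, and unit name below are placeholders:

```shell
# Run a containerized workload directly on RHEL with Podman, no Kubernetes.
podman run -d --name edge-app -p 8080:8080 registry.example.com/edge-app:latest

# Generate a systemd unit so the host supervises the container across reboots.
podman generate systemd --new --name edge-app > /etc/systemd/system/edge-app.service
systemctl enable --now edge-app.service
```

Letting systemd supervise the container gives the single-node edge device restart-on-boot and failure recovery without any orchestrator.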
Moving forward, said Gracely, OpenShift will continue its expansion, both into more clouds and into abstracting Kubernetes away to improve the developer experience.
“I think at this point, we’ve hit every single possible cloud that OpenShift can run on, although we could probably expand out into Alibaba and some others in the future, but what you’ll see in the future is going to be continuing to make it simpler and simpler to build applications on OpenShift. Less about Kubernetes features and more about hiding YAML, making it simpler to define microservices, making it easier to run different languages and so forth. That’s where the next big shift will be: doubling down on simplifying the developer experience for a broader set of applications,” said Gracely.
KubeCon+CloudNativeCon and Red Hat are sponsors of The New Stack.