Remember the 1990s, when Docker was just a brand of dorky khaki pants?
These days, with containerization and distributed computing terraforming the terrain on Planet IT, Docker refers to the leading brand of containerization software. In case you’ve been living under a rock: containerization provides a way to break complex applications into individual services. Each segregated service, or standalone section of software, is wrapped into its own “container” along with all its dependencies. Now it can run anywhere, in any environment; consistency is guaranteed. Updating a containerized application is vastly easier, and scalability is practically built in.
But — there’s always a ‘but’, and we don’t mean the kind in Dockers™ pants — when you run this stitched-back-together Frankenstein app, how do you ensure all your containers come up at once, and are able to find and talk to each other? It turns out this doesn’t always happen seamlessly, alas. So, Kubernetes to the rescue! Kubernetes, an open source container management system developed by Google, works as an “orchestrator” to coordinate related components distributed across a varied array of infrastructures.
Kubernetes, like any system, also has imperfections… and there’s also the fact that we are an industry of die-hard tinkerers who can never resist making things better, stronger, and faster. Despite the fact that these technologies are so recent that they still have that new-app smell, companies like Univa Corporation are coming up with ways to work a few kinks out of Kubernetes even as they create value-added containerization tools that give a serious power up to the enterprise sector’s utility belt.
Today’s conversation is with Rob Lalonde, Univa vice president and general manager of Navops, about Univa’s newly launched Navops Command product. Navops Command brings enterprise-grade policy management and scheduling to Kubernetes. This proven workload placement and policy management solution plugs into any Kubernetes distribution and provides the unique ability to maximize utilization of shared resources and manage microservice applications while responding to end-user demand.
The Navops suite consists of Launch, a free, pre-configured package that allows users to install Docker and Kubernetes with one click, and then rapidly provision and deploy their container-ready infrastructure. Navops Command is the heart of the suite, providing automated scheduling combined with intelligent policy controls in Kubernetes-based container clusters to improve velocity and efficiency. And Navops Control collects and analyzes data from clusters and schedulers to document and demonstrate how it’s all working.
How do you see your customers — or even potential customers — using containers? In production? Still in testing?
It varies by customer and project, but the container market and Kubernetes are absolutely booming. There are over 1,000 contributors now to Kubernetes’ open source ecosystem — it’s a diverse community that’s growing fast, adopted not only by an impressive list of vendors but also across numerous enterprises. Containerization is one of the fastest growing technologies ever, and Kubernetes is right there with that. So we are seeing rapid onboarding across the ecosystem.
Univa got its start in the supercomputing field: What can supercomputing offer to the distributed computing container/Kubernetes community?
We don’t think of it as supercomputing as much as high-performance computing or technical computing. Supercomputing refers to giant computers like Crays, while we work more with off-the-shelf commodity servers. Our core product is Grid Engine, which is all about distributing, managing and scheduling workloads across large grids of computers. Everyday off-the-shelf commodity servers, harnessed together, can be extremely powerful.
The original human genome sequencing, which was done manually, cost close to $1 billion and took 13 years. Now our customers can do it with Grid Engine in a matter of days. Oil and gas exploration for energy companies, molecular modeling for pharmaceutical companies, fraud detection for financial services companies: all these require Big Compute. Grid Engine enables users to distribute workloads across hundreds if not thousands of computers, leveraging existing infrastructure and in-house clusters running in big data centers. We have, for example, customers who run 5 million jobs per day.
As the container market was emerging, we realized it was going to have the same complexity for managing workloads, and that those containerized applications would require advanced scheduling and management policies governing how, when, and where to run workloads.
So Univa created Navops Command, which is all about advanced policy management and scheduling for Kubernetes. We sit on top of Kubernetes and provide an enterprise-grade scheduler, so the same capability we bring to big enterprise grids, we now bring to enterprise containers.
What shortcomings does Kubernetes have that Navops can address?
Kubernetes is a great solution. What it doesn’t provide, however, is a great way for corporations to share and balance computing resources across multiple projects and teams. Navops Command builds virtual multi-tenancy and advanced resource management into Kubernetes.
Virtual multi-tenancy? Sounds kind of like Airbnb for computing.
Cool analogy! Virtual multi-tenancy is resource sharing that allows applications to get what they need at the right time while still sharing the cluster with other groups and applications, just as Airbnb lets different tenants share living space at different times as the need arises. Navops Command allows you to share a Kubernetes cluster across projects, teams and applications, and it ensures those apps get the resources they need when they need them, including more resources when demand spikes.
How do I set up and use Navops?
Navops runs as a pod inside Kubernetes (a pod is a unit of work in Kubernetes) and replaces the default Kubernetes scheduler with a more robust, enterprise-grade scheduler. It’s simple to install and manage: the user gets a web UI, a command line, or even a REST API; from there they set up their projects, their users, and their applications, then configure the system to run workloads on behalf of those projects and users. We can do things like restrict memory or CPU, and we have something called Proportional Shares that lets you determine how much of your resources are made available to a certain user or group. So Group A gets 10 percent and Group B gets 20 percent, but if Group A needs more resources and Group B currently isn’t using its share, Group A can burst into it.
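To make the Proportional Shares idea concrete, here is a minimal sketch of share-based allocation with bursting. This is a hypothetical illustration, not Univa’s actual scheduler logic; the `allocate` function and the group names are invented for the example.

```python
# Hypothetical sketch of proportional-share allocation with bursting.
# Not Univa's actual algorithm; for illustration only.

def allocate(total_cpus, shares, demand):
    """Split total_cpus by share weights, then let groups with unmet
    demand 'burst' into capacity that other groups leave unused."""
    # First pass: each group gets at most its proportional share,
    # capped by what it actually asked for.
    alloc = {}
    for group, frac in shares.items():
        alloc[group] = min(total_cpus * frac, demand.get(group, 0))
    # Whatever is left over is spare capacity available for bursting.
    spare = total_cpus - sum(alloc.values())
    # Second pass: groups with unmet demand draw from the spare pool.
    for group in shares:
        unmet = demand.get(group, 0) - alloc[group]
        if unmet > 0 and spare > 0:
            extra = min(unmet, spare)
            alloc[group] += extra
            spare -= extra
    return alloc

# Group A holds 10% and Group B 20% of a 100-CPU cluster.
shares = {"A": 0.10, "B": 0.20}
# B is idle, so A can burst well past its nominal 10-CPU share.
print(allocate(100, shares, {"A": 30, "B": 0}))
```

In the usage example, Group A ends up with 30 CPUs even though its share is only 10, because Group B’s capacity is sitting idle; once B submits work again, B still gets its full 20 percent.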
The more diverse your workloads, the more different applications are going to require resources at different times, so your resource allocation balances out over time. It’s an organic process based on workloads varying between different teams and different projects at different times.
Once Navops is set up, using it is simple. Jobs get submitted to the system, and the system just runs. If your priorities change, it’s very simple to re-allocate resources between projects or users.
Can you talk about how Navops is being used, or being tested, by customers?
We just came out in general release last week, so customers are onboarding now. But prior to Navops Command we were seeing customers building multiple clusters, one for each application area. This means a lot of additional complexity, and users of those clusters did not have the ability to go beyond the size of their cluster. With Navops they’ve got more resources overall, and a much larger pool to draw from. The work gets done when it needs to get done. Say you’re a retailer whose front end is handling transactions: at night you’re busy with Far East customers while your U.S. side is quiet, then during the day you’re running U.S. transactions while Asia is asleep.
Things are going to run better, and overall you’re going to save money. Because you’re sharing, you don’t need to provision as much hardware in-house, and you’ll consume fewer resources in the cloud. And of course, whenever you can turn off cloud resources, you don’t get billed for them.
What components of Univa’s Grid Engine product capabilities does Navops utilize, and why?
Navops Command includes the Grid Engine scheduler, the same one our customers use on the largest clusters in the world. We’ve got this very rich, policy-based Grid Engine with 15 years of development and many millions of lines of code behind it, and now we are exposing that richness to the Kubernetes world via Navops Command.
So I understand that Navops Command can also do “mixed workloads”? What does that mean for the customer?
Yes. Virtual multi-tenancy and mixed workloads are two of the primary benefits to companies using Command. Navops Command provides mixed workload support so that companies can run containerized and non-container workloads on the same Kubernetes cluster. Non-container batch workloads can run under the Kubernetes umbrella alongside microservices applications, which takes the concept of sharing to another level. This blending of workloads allows companies to progress along the path of adopting microservices and Kubernetes without having to transition all of their applications at once.
How much does Navops cost?
We haven’t published our pricing at this time, but we sell on a per-node (per-computer) basis. The Navops Command suite is available for free trial download.
Univa sponsored this story.