ContainerX Will Offer a Container Management Platform for the Enterprise

Startup ContainerX is working to make its enterprise-focused container management platform generally available by early June.
The San Jose-based company came out of stealth at DockerCon Barcelona in November and has since released a new beta every month.
The company aims to differentiate itself in the ever-more-crowded container-management segment with two pieces of IP: Elastic Clusters and Container Pools, which, combined, allocate infrastructure to container clusters based on pre-defined priority levels and current resource-utilization patterns.
“We would be humbled if people thought of us as vSphere for containers,” said company CEO Kiran Kamity. “We want to enable any virtual machine administrator to be a container administrator with minimal training. So we’re building a ready-to-go, all inclusive infrastructure stack that’s designed for enterprise IT where dev and ops can self-service.”
He says that’s what vSphere and Hyper-V have been doing in the world of virtual machines, but adds, “a platform like that is missing from the world of containers.”
The company draws on the leadership experience with VMware, Microsoft and Citrix and has raised $2.7 million in seed funding from former VMware Chief Technology Officer Steve Herrod at General Catalyst Partners, Greylock and Caffeinated Capital.
https://youtu.be/N_4WYd_9MKs
CEO Kiran Kamity’s first company, RingCube, launched in 2006 and built Windows containers, an experience that prompted ContainerX’s focus on making Windows containers a first-class citizen. Citrix bought that company in 2011. Other founding members worked on the Distributed Resource Scheduler (DRS) feature of vSphere at VMware, which improves utilization of available resources by distributing workloads evenly across the underlying infrastructure.
The company is focusing its efforts on meeting two sets of demands in the enterprise market: those of developers and those of IT executives.
Developers might build departmental applications in a container, then go to IT looking for a place to run them, but IT isn’t yet equipped to provide that place, he says.
Meanwhile, the “visionary IT exec” realizes he can reduce maintenance costs by reducing the number of VMs, Kamity said.
“It takes roughly 16 man-hours per year per VM for patching, updating the OS, antivirus, etc. And this is for infrastructures that are reasonably automated. If they’re not automated, it probably takes longer. Multiply that by the number of VMs in your environment, that’s a lot of OPEX,” Kamity says.
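The arithmetic Kamity describes is straightforward to sketch. The 16-hour figure comes from the quote above; the hourly rate and fleet size below are illustrative assumptions, not ContainerX numbers:

```python
# Rough sketch of the VM-maintenance OPEX math quoted above.
# HOURLY_RATE_USD and the fleet size are assumptions for illustration only.
HOURS_PER_VM_PER_YEAR = 16   # patching, OS updates, antivirus, etc.
HOURLY_RATE_USD = 75         # assumed loaded cost of an ops engineer

def annual_vm_opex(vm_count: int) -> int:
    """Yearly maintenance cost, in dollars, for a fleet of VMs."""
    return vm_count * HOURS_PER_VM_PER_YEAR * HOURLY_RATE_USD

print(annual_vm_opex(500))  # 500 VMs -> 600000 dollars per year
```

At even a modest fleet size, the per-VM overhead compounds quickly, which is the cost argument for consolidating workloads into containers.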
With Elastic Clusters and Container Pools, you can take a cluster of compute and divide resources into logical pools: Finance gets 10 percent, CRM gets 25 percent, and so on.
You can over-commit these resources beyond 100 percent because, at any given point in time, data center resources are typically under-utilized, he says.
From the security and isolation perspective, these pools not only have CPU isolation limits but also their own virtual network and their own LDAP (Lightweight Directory Access Protocol) authentication, meaning developer authentication for a particular team or application happens at the pool level.
With pools, IT can create a multi-tenant environment that can be shared among multiple teams while making better use of resources, he says.
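A minimal sketch of how percentage-based pools with over-commit might be modeled, under the assumptions that shares are simple percentages and the over-commit ceiling is a fixed ratio (the class names and API here are hypothetical, not ContainerX's):

```python
# Hypothetical model of a cluster carved into percentage-based container pools.
# Shares may sum past 100% (over-commit) because real-world utilization
# rarely peaks in every pool at once.
class ElasticCluster:
    def __init__(self, total_cpus: int, overcommit_ratio: float = 1.5):
        self.total_cpus = total_cpus
        self.overcommit_ratio = overcommit_ratio
        self.pools = {}  # pool name -> percent share

    def add_pool(self, name: str, percent: float) -> None:
        committed = sum(self.pools.values()) + percent
        if committed > 100 * self.overcommit_ratio:
            raise ValueError("pool shares exceed the over-commit ceiling")
        self.pools[name] = percent

    def cpu_limit(self, name: str) -> float:
        """Upper CPU bound for a pool; containers in it cannot exceed this."""
        return self.total_cpus * self.pools[name] / 100

cluster = ElasticCluster(total_cpus=64)
cluster.add_pool("finance", 10)   # Finance gets 10 percent
cluster.add_pool("crm", 25)       # CRM gets 25 percent
print(cluster.cpu_limit("crm"))   # 16.0 CPUs
```

The design choice worth noting is that limits are derived from shares rather than fixed per-container, so IT tunes one number per team while the ceiling adjusts as the cluster grows or shrinks.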
From the same pane of glass, you can set up a cluster on bare metal, on on-premises VMs or on a public cloud.
ContainerX will use Docker Swarm for orchestration in version 1.0, but will add Mesos later. Kamity said he would like to add Kubernetes by version 3.0. Similarly, it uses the Docker runtime, but would add CoreOS’s rkt if the demand were there. Choice is a major tenet of its platform, though, with limited resources, the company has had to focus on Docker for the 1.0 release, Kamity says.
How it Works
There’s an agent container that sits on each host; the agent manages compute, storage and network, and provides the multi-tenancy that divides the cluster of compute into pools. ContainerX provides a virtual overlay network between pools rather than at the cluster level, and for storage it works with anything NFS-based. It has its own registry baked into the product and is open to working with other third-party registries.
All the pools are isolated from the network perspective and require pool-level authentication. If a rogue container starts consuming a lot of CPU, it cannot go beyond the limits you have set for it. The resources are auto-scaling, elastic and isolated, and you can add or remove machines from the cluster at any time.
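The agent's admission behavior described above can be sketched as two checks: pool-level authentication, then a CPU cap. The class, field names, and in-memory bookkeeping are assumptions for illustration; this is not ContainerX's actual agent:

```python
# Hypothetical per-host agent check: a container is admitted only if its
# user authenticates against the pool and its CPU request fits the pool cap.
from dataclasses import dataclass

@dataclass
class Pool:
    name: str
    cpu_limit: float                  # hard ceiling set by IT
    cpu_in_use: float = 0.0
    members: frozenset = frozenset()  # users authorized at the pool level

class HostAgent:
    def admit(self, pool: Pool, user: str, cpu_request: float) -> bool:
        if user not in pool.members:                      # pool-level auth
            return False
        if pool.cpu_in_use + cpu_request > pool.cpu_limit:
            return False                                  # rogue containers stay capped
        pool.cpu_in_use += cpu_request
        return True

agent = HostAgent()
finance = Pool("finance", cpu_limit=6.4, members=frozenset({"alice"}))
print(agent.admit(finance, "alice", 4.0))  # True: authenticated, within cap
print(agent.admit(finance, "alice", 4.0))  # False: would exceed the pool cap
print(agent.admit(finance, "bob", 1.0))    # False: not a pool member
```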
It’s all designed to be tightly integrated with VMware and Windows.
The big challenge for ContainerX will be differentiating itself and catching up to an array of competitors, Jay Lyman, research manager for Cloud Management and Containers at 451 Research, wrote in a recent report.
“The growth of containers, and particularly production use, may be a good sign that ContainerX’s focus on traditional and vSphere experience and expertise for container deployment will appeal to enterprises,” he wrote.
It addresses a critical need among enterprises using containers: consistency between development and production environments, he said. Container Pools consolidate development, testing, staging and production, eliminating the need for separate software stacks for each step, division or development team in the process.
But he highlights the strong competition, including Amazon’s EC2 Container Service, Google Container Engine based on Kubernetes and Joyent’s Triton Elastic Container Service, CoreOS, Mesosphere, Cloud 66, Kismatic and Shippable, not to mention PaaS players Apprenda, which recently announced integration with Kubernetes, Engine Yard, Pivotal with Docker support in its Diego orchestration software, and Red Hat, which has incorporated Kubernetes into its OpenShift PaaS and partnered with Google.
Apprenda, CoreOS, Docker, Joyent, Pivotal, Red Hat and VMware are sponsors of The New Stack.
Feature Image: “x” by roadconnoisseur, licensed under CC BY-SA 2.0.