Huawei Launches a Kubernetes-based Container Engine

Joining an increasing number of companies, Asian telecommunications giant Huawei Technologies has released its own container orchestration engine, the Cloud Container Engine (CCE).
Ying Xiong, Huawei’s chief architect of cloud computing, announced CCE version 1.0 at LinuxCon North America, being held this week in Toronto.
Like orchestration engines from CoreOS and Apprenda, CCE is based on Google's open-source Kubernetes platform.
.@Huawei Cloud Container Engine (CCE) announced! Another platform based on @kubernetesio. #LinuxCon #ContainerCon pic.twitter.com/H821nCTlox
— Lee Calcote (@lcalcote) August 22, 2016
During his talk, Xiong discussed the growing use of containers in China. In 2016, Huawei found that 14 percent of companies were using containers in production, and another 23 percent were using them for test and development. About 44 percent had plans to adopt container technologies within the next six months.
While 14 percent is still fairly low, the figure is growing rapidly, up 250 percent over the past year. “To me, that means the tools are maturing,” Xiong said.
Of those using containers in production, about 42 percent were using a formal orchestration tool, such as Kubernetes, compared to 51 percent who were using scripts and other home-built tools.
In the age of the cloud, this approach is not scalable, Xiong said. “We would like to see more people using container orchestration tools.”
Orchestration is the process of placing containers on servers, provisioning the storage and network they need, and keeping them running at a desired state, Xiong said.
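The "desired state" half of that definition is worth unpacking: an orchestrator continuously compares what the user asked for against what is actually running and issues corrective actions. A minimal sketch of that reconciliation idea, in Python with purely illustrative names (this is not the Kubernetes API), might look like:

```python
# Minimal sketch of a desired-state reconciliation loop, the core idea
# behind orchestrators such as Kubernetes. All names are illustrative,
# not taken from any real orchestrator's API.

def reconcile(desired_replicas, running):
    """Return the actions that move the running set toward the desired count."""
    actions = []
    if len(running) < desired_replicas:
        # Too few containers: start replacements.
        for i in range(desired_replicas - len(running)):
            actions.append(("start", f"container-{len(running) + i}"))
    elif len(running) > desired_replicas:
        # Too many containers: stop the surplus.
        for name in running[desired_replicas:]:
            actions.append(("stop", name))
    return actions

# The user asked for 3 replicas, but only 1 survived a node failure:
print(reconcile(3, ["container-0"]))
# → [('start', 'container-1'), ('start', 'container-2')]
```

Running such a loop on every state change (or on a timer) is what keeps applications at their desired state without manual intervention.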
How do you manage container applications? Over 50% responded "manually". Whoa. @Huawei #LinuxCon #ContainerCon pic.twitter.com/JWyaKRqfku
— Lee Calcote (@lcalcote) August 22, 2016
While today’s orchestrators offer these capabilities in some form, more are needed to bring them to industrial strength.
Schedulers should support both reservation-based placement and dynamic scheduling of containers against service-level agreements (SLAs). “You tell the scheduler what SLA you need for the application, and the scheduler should figure out how [many] resources you need.”
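To make the idea concrete, an SLA-driven scheduler would translate a user-facing target into a resource plan. The following Python sketch is a hypothetical illustration (not CCE's or Kubernetes' actual logic): it derives a replica count from a throughput SLA and an assumed per-replica capacity.

```python
import math

# Hypothetical sketch of SLA-driven sizing: the user states a throughput
# SLA, and the scheduler derives how many replicas to run. The function
# name and the headroom policy are assumptions for illustration only.

def replicas_for_sla(target_rps, per_replica_rps, headroom=0.2):
    """Replicas needed to serve target_rps, with a 20% safety headroom."""
    needed = target_rps * (1 + headroom) / per_replica_rps
    return max(1, math.ceil(needed))

# An SLA of 1,000 requests/sec, with each replica handling ~120 req/s:
print(replicas_for_sla(1000, 120))
# → 10
```

A real scheduler would fold in CPU, memory, and (as Xiong notes below) network bandwidth and latency, but the shape is the same: the SLA is the input, and the resource allocation is derived rather than hand-specified.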
Both networking and storage should be abstracted as well, making them as easy to provision for containers as compute power is today, Xiong said. “It would be nice if we had the ability to autoscale storage pool and auto-discover the storage services for your containers,” he said.
On the networking side, “We need a common way to convey network policies in real time and on-demand. We should have an ability to schedule your containers on network resources because there are some applications that care more about network bandwidth and latency than CPU and memory.”
Engineer Lee Calcote contributed to this article.