
CoreOS is Funding Kubernetes Development on AWS

3 Oct 2015 2:17pm

An Amazon EC2 customer can install the Kubernetes orchestrator on CoreOS to manage container workloads. That method first came to light at this time last year, just before the opening of Amazon’s re:Invent conference, and a tested and confirmed version of it now appears in Kubernetes’ official documentation.

Now, the week before Amazon’s 2015 edition opens in Las Vegas on Tuesday, CoreOS announced that its Tectonic container platform, which includes Kubernetes, will include an “official” integration with Amazon AWS. When asked by The New Stack what CoreOS means by “official,” company CEO Alex Polvi responded, “The two main differences are: 1) We, CoreOS, Inc., are funding the development of the project; 2) We are offering commercially ready support and testing for the entire stack via Tectonic.”

How could this unfold? It adds a new dimension to the conversation about Docker, the container ecosystem and the new platforms needed to build out microservices atop an underlying, programmable infrastructure. AWS has defined that programmable infrastructure for the market for the past several years, but Google has its own, as does Microsoft. The question becomes how interwoven these different infrastructures will have to be in order to deliver the portability that containers are supposed to offer.

In the meantime, there are uses for Mesos and for Kubernetes, and newer platforms, such as Nomad from HashiCorp, are each viable in their own way. But support remains a question mark. Can you get Kubernetes support from AWS? Not now, you can’t. Will that become a reality at AWS re:Invent? That’s a question we can’t answer, but the unfolding possibilities have outcomes that could make Kubernetes usable across a far wider array of infrastructures.

CoreOS is delivering its official integration as an AWS CloudFormation template, and released documentation on how it works Friday. In a blog post also published Friday, the company said its AWS-specific setup will enable Elastic Load Balancer integration, for directing traffic to selected microservices.
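The Elastic Load Balancer integration described here corresponds to Kubernetes’ built-in LoadBalancer service type: on AWS, declaring a Service of that type causes Kubernetes to provision an ELB that routes traffic to the matching pods. A minimal sketch (the service name, label and ports are illustrative, not taken from CoreOS’s documentation):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: frontend        # hypothetical microservice name
spec:
  type: LoadBalancer    # on AWS, Kubernetes provisions an ELB for this Service
  selector:
    app: frontend       # direct traffic to pods carrying this label
  ports:
    - port: 80          # port the ELB listens on
      targetPort: 8080  # port the containers listen on
```

Applying a manifest like this is all that is needed to put cloud load balancing in front of a microservice, which is the kind of AWS-specific convenience the Tectonic setup is meant to wire up out of the box.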

Amazon’s EC2 Container Service (ECS) is a platform built around the company’s own tools for scheduling and orchestration, although it does provide APIs for third-party orchestrators. Still, when Amazon launched ECS last April, the tech press largely perceived it as a competitive action directed against Google Container Engine, which is essentially Kubernetes offered “as-a-service.” (Microsoft Azure only released its own Container Service, based around Mesosphere DCOS, earlier this week.)

ECS was first announced by Amazon at re:Invent last year. When Amazon CTO Dr. Werner Vogels first described the full details of ECS in a company blog post last July, he cited a use case of an AWS customer that needed a Heroku-compatible PaaS capable of scaling out. When the customer’s engineering team examined the combination of CoreOS and Kubernetes, Vogels said, “the engineering team was small so they didn’t have the time to manage the cluster infrastructure and keep the cluster highly available.” The customer ended up choosing ECS, of course.

Vogels went on to contrast ECS with Kubernetes by touting how Amazon masks the complexity of the underlying system from its users. “Amazon ECS is fully managed and provides operational efficiency allowing engineering resources to just focus on developing and deploying applications; there are no clusters to manage or scale,” he wrote.

The CTO told another story last July, about the online education company Coursera, which he described as having come to Amazon with a monolithic application and asked for help moving it to microservices.

“Of course, when you have a monolithic application that has many different pieces of functionality in it,” he explained, “it’s very hard to estimate what kinds of resources you need to actually make each of those threads work — whether it’s CPU, whether it’s memory, all of those. So they decided to go to a microservices architecture. First of all, of course, they started using Docker, but ran their own scheduler.

“And it turns out, that is way too hard,” he argued. “So making use of the EC2 Container Service with the new scheduler allows them to purely focus on building the applications they want to build, not on how you actually have to spread them over clusters. And it works really well.”

That statement speaks to AWS’s stance on its own competitive scheduling service.

We asked CoreOS CEO Polvi what Kubernetes and CoreOS would provide to AWS users that ECS would not.

“The biggest difference is that Tectonic can run in many different environments: cloud, data center or your own laptop with a virtual machine,” he responded. “ECS is only available on AWS. Additionally, Tectonic is based on Kubernetes; ECS is based on their own specific AWS tools.”

While this new documentation is publicly available and apparently finalized, Tectonic itself remains in preview mode, Polvi added. “We are getting close to general availability,” he said. “Stay tuned.”

The New Stack will be covering Amazon’s re:Invent 2015 in Las Vegas throughout the week, with podcasts from Alex Williams and reports from The New Stack team.

CoreOS and Docker are sponsors of The New Stack.

Feature image: “[ C ] Marc Chagall – Scene de Cirque (1958)” by cea + is licensed under CC BY 2.0.
