
The Myth of Cloud-Native Portability

24 May 2017 1:00pm, by Bilgin Ibryam

Cloud-native momentum is growing, with plenty of new platforms and supporting tools. These platforms give developers ever more capabilities to quickly develop, deploy and manage large numbers of microservices in an automated fashion.

But that momentum comes with a cost, and you had better be prepared to pay it.

Recently I wrote about “The New Distributed Primitives for Developers” provided by cloud-native platforms such as Kubernetes and how these primitives blend with the programming primitives used for application development. For example, have a look below to see how many Kubernetes concepts a developer has to understand and use in order to run a single containerized application effectively:

A Kubernetes based Microservice


Bilgin Ibryam
Bilgin Ibryam is an Architect at Red Hat and an open source committer on the Apache Camel, OFBiz and Isis projects. He is a blogger, speaker, open-source enthusiast and the author of the books Camel Design Patterns and Instant Apache Camel Message Routing. In his day-to-day job, Bilgin enjoys mentoring, training and leading teams to be successful with application integration, distributed systems, microservices, DevOps, and cloud-native applications.

Keep in mind that this diagram doesn’t include any of the supporting Kubernetes objects that the Ops arm of a DevOps team has to manage. Nor does it include the additional application-supporting tools (log management, monitoring, tracing, service mesh, etc.) that are also required before day-2 operations.

Chances are, developers will have to write almost as much YAML as application code in the container. More importantly, the application itself will rely on the platform more than it ever did before. The cloud-native application expects the platform to perform health checks, deployment, placement, service discovery, running periodic tasks (cron jobs), scheduling atomic units of work (jobs), autoscaling, configuration management, and so on.
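As a rough illustration (a minimal sketch with hypothetical names and image), even a basic Deployment manifest already delegates health checking, replica placement, and configuration to the platform:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-service                        # hypothetical service name
spec:
  replicas: 3                             # the platform handles placement and scaling
  selector:
    matchLabels:
      app: my-service
  template:
    metadata:
      labels:
        app: my-service
    spec:
      containers:
      - name: my-service
        image: example.com/my-service:1.0 # hypothetical image
        livenessProbe:                    # the platform performs health checks
          httpGet:
            path: /health
            port: 8080
        envFrom:
        - configMapRef:
            name: my-service-config      # platform-managed configuration
```

And this fragment covers only a fraction of the concepts in the diagram above; services, ingress, jobs, autoscalers and the rest each come with their own YAML.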

As a result, your application has delegated all of these responsibilities to the platform and expects them to be handled reliably. And the fact is, your application and the teams involved are now dependent on the platform on many different levels: code, design, architecture, development practices, deployment and delivery pipelines, support procedures, recovery scenarios, you name it.

Bet on an Ecosystem, not a Platform

The above picture demonstrates how small your code is in the context of a Kubernetes microservice. But when we talk about a production-ready microservices-based system, that picture is far from complete. Any system of significant size will also require tools for centralized monitoring, metrics gathering, tracing, service mesh, integrated build and deployment tooling, pipelines, etc.

The platform is just the tip of the iceberg, and to be successful in the cloud-native world, you will need to become part of a fully integrated ecosystem of tools and companies. So the bet is never on a single platform, project, library, or company. It is on the whole ecosystem of projects that work together in sync, and the whole ecosystem of companies (vendors and customers) that collaborate and are committed to the cause for the next decade or so. I see both of these aspects as equally important:

  • Technology: considering that the transition to cloud-native is a multi-year journey and the benefits will come only over the long term, it is important to bet on a technology that has potential for the next five to ten years, rather than one that merely has a history from the last five to ten.
  • Culture: cloud-native is achieved through a combination of microservices, containers, continuous delivery and DevOps. And becoming cloud-native takes more than adding a few dependencies or libraries to your application (contrary to how it is sometimes promoted at conferences). You may have to change your team structure and rituals, work habits and coding practices, and get used to consuming a still very actively evolving technology space. That is easier if your company culture is somewhat close to the culture of the companies developing, or simply consuming, the cloud-native platform and its related tools. Little things, such as making a pull request rather than filing a bug report, or checking the upstream source code and open discussions for an upcoming feature rather than waiting for the next conference announcement, can make the difference in whether a team enjoys working with a platform or not. Cultural alignment and the human factor are as important as technological superiority.

The following does not represent the complete landscape, but I will try to group the main cloud-native ecosystems that come to mind:

Mesosphere and Apache Mesos

Being part of the Apache Software Foundation, Apache Mesos comes with its benefits (a mature community) and drawbacks (slow movement). Born around 2009, it is a mature framework, and it added support for containers (the Docker image format) and similar concepts, such as Pods/task groups, only recently.

Cloud Foundry and Spring Cloud

Again born around 2009, Cloud Foundry is one of the pioneers of the cloud-native world. And when Spring Cloud is used with Cloud Foundry, the platform blends with the application itself. Features such as service discovery, load balancing, configuration management, retries and timeouts are performed inside the services (the JVM in this case). That is the opposite of the approach taken by platforms such as Kubernetes, where all of these responsibilities are delegated to the platform or to supporting containers (such as Envoy, Linkerd or Traefik). I have compared Kubernetes and Spring Cloud (note that it is not Cloud Foundry) in the past here.

AWS ECS and Docker Swarm

While Docker, Inc. (the company) is still figuring out what it is going to develop and what it is going to sell, Amazon has created a pretty solid offering using Docker technologies as part of AWS Elastic Container Service. ECS with Blox (AWS’ open source container orchestration software) might not be anything huge in itself, but combined with all of the other AWS offerings, it is a very feature-rich, integrated platform.

Not to mention that Netflix, which has been an AWS supporter since the era of VMs, is transitioning into the container world and driving innovation at Amazon ECS.

CNCF and Kubernetes

Kubernetes is one of the newest platforms in this category, yet at the same time one of the most active and fastest-growing open source projects ever. That, combined with the family of integrated Cloud Native Computing Foundation projects and supporting companies, makes the whole ecosystem a pretty strong contender in this category.

Being a latecomer (2014), Kubernetes has the advantage of having grown on a container-centric architecture from the start. And the fact that it is based on Google’s decade-old Borg means that its principles (not the implementation) are mature and battle-tested at the highest possible scale.

Container Orchestrators in Sysdig’s 2017 Docker Usage Report

And as the results of a recent Sysdig report show, cloud-native users seem to appreciate all of that.

Which one to choose?

Maybe you are thinking that as long as you package your application in containers, you are portable across different cloud-native platforms with minimal effort. You are wrong. Whether you start with Mesos, Cloud Foundry, Kubernetes, Docker Swarm or ECS, you will have to make a significant investment to learn the platform and its supporting tools, understand the culture and ways of working, and interact with a still fast-changing ecosystem of technologies and companies.

The goal of this article is not to compare these ecosystems, but to show how different they are, and to demonstrate that it will require a significant amount of time and money to enter one or to move to another one if required.

Kubernetes as the Application Portability Layer

The cloud-native ecosystems are quite distinct in terms of technology, process and culture. But there is some consolidation going on even among them. Many concepts popularized by one platform are spreading to the others. For example, the concept of the deployment unit (the Pod in Kubernetes) is now present in Mesos, and it exists in Amazon ECS as the task group. The concepts of server-side load balancing (Services in Kubernetes) and of scheduling/placement with policies (the Kubernetes scheduler) are also present in Docker Swarm, AWS ECS, etc. But that is as far as it goes, and transitioning from one ecosystem to another will require a lot of effort.
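To make the server-side load balancing concept concrete, here is a minimal sketch of a Kubernetes Service (the names are hypothetical): the platform provides a stable endpoint and balances traffic across all Pods matching the selector, with no load-balancing logic in the application itself:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-service          # hypothetical name
spec:
  selector:
    app: my-service         # traffic is balanced across all Pods with this label
  ports:
  - port: 80                # stable, platform-provided endpoint
    targetPort: 8080        # container port inside each Pod
```

Other platforms express the same idea with different objects and semantics, which is exactly why the concept carries over between ecosystems but the artifacts do not.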

So how do you avoid lock-in with a single vendor? One approach is to stick with Kubernetes and accept it as the portability layer between cloud and service providers. One of the reasons Kubernetes is so popular is that it is not a single company’s toy, but is backed by multiple large tech companies such as Google, Red Hat (OpenShift), Docker, Mesosphere, IBM, Dell, Cisco and many others.

Another reason is that many cloud companies offer Kubernetes as a service. If you use Kubernetes, you can move your application among cloud providers such as Google Container Engine, Microsoft Azure and IBM Bluemix Container Service, or even run it on AWS through a third-party service provider, with minimal effort. This means the Kubernetes API, and not the container alone, is the portability layer for applications between cloud platforms. A container alone is a drop in the cloud-native ocean.

Cloud Foundry Foundation, the Cloud Native Computing Foundation and Red Hat are sponsors of The New Stack.

Feature image via Pixabay.

