
4 Predictions for Where Kubernetes is Headed in 2018

Dec 19th, 2017 9:00am by Murli Thirumale

Murli Thirumale is co-founder and CEO of Portworx. He previously served as co-founder and CEO of Ocarina Networks, Inc., and as vice president and general manager of the advanced solutions group at Citrix Systems, Inc. Thirumale holds an M.B.A. from Northwestern's Kellogg Graduate School of Management, where he was an F.C. Austin Distinguished Scholar.

The containers, DevOps and cloud corner of the internet has had a heck of a 2017! If I had to summarize 2017 for our community in a word, it would be Kubernetes. The rise of Kubernetes reached fever pitch in Austin earlier this month at KubeCon, a four-day love fest attended by more than 4,000 developers, DevOps engineers, architects, IT execs and industry gurus. It was exciting to hear about all the ways Kubernetes, as a platform for building and running cloud-native applications, is already changing real businesses with real customers.

There are two ways to look at this euphoria. One is best summed up by the adage, “Enjoy the party but dance close to the door.” This view says that the high times the Kubernetes community is experiencing now are just a blip before the whole thing crashes and burns. Let me start by saying I don’t think this is going to happen. Kubernetes provides too much value for folks to simply move on to the next big thing such as “serverless.”

Prediction 1: Kubernetes projects in the enterprise will ultimately succeed, but there will be many bumps in the road.

But sitting at the end of 2017 and looking out to 2018, I think that another expression from pop culture is more apt: “Hang on, it’s going to be a bumpy ride.”  My prediction for 2018 is that Kubernetes projects across the Fortune 500 will make a soft landing by the end of the year, but there will be some turbulence before they reach their final destination.  Here’s why:

Kubernetes Is Difficult

Let’s start with the obvious: Kubernetes is complicated.  Kubernetes is often described as elegant by enthusiasts. But its elegance doesn’t make it simple.  String theory is elegant, but understanding it with anything except the most imprecise analogies takes a lot of effort.  Kubernetes is the same.  Using Kubernetes to build and run an application is not a straightforward proposition.

Cultural Change Is Difficult

Couple this with the fact that the entire culture of enterprise IT is shifting from a command-and-control system with rigorously defined roles for Dev and Ops driven by the CIO to a democratic, messy, “DevOps” culture.  So not only are we trying to implement something hard, we’re doing it while our organizations undergo massive cultural change. That is never easy.

Business Requirements Are Difficult

On top of this, the business requirements driving application development are changing.

“Must be able to run on any major public cloud.”

“Must encrypt all customer data at rest and in flight.”

“Must be able to store and process 15TB of data per device per day.  Plan for 1 million devices.”

Meeting any single requirement is trivial. Meeting them all for complex, mission-critical applications with users worldwide is a different story. In Kubernetes’ defense, it is exactly these increasingly strict business requirements that are driving the need for a system like Kubernetes. If you don’t have to support one million concurrent users, you don’t need things like ingress controllers.
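To make the ingress-controller point a little more concrete, here is a minimal sketch of the kind of routing rule an ingress controller turns into real load-balanced traffic. The hostname, resource names and backing Service are purely illustrative, and the apiVersion reflects the Kubernetes releases current at the time of writing.

apiVersion: extensions/v1beta1        # Ingress API group in Kubernetes 1.8/1.9
kind: Ingress
metadata:
  name: web-ingress                   # illustrative name
spec:
  rules:
  - host: shop.example.com            # hypothetical public hostname
    http:
      paths:
      - path: /
        backend:
          serviceName: web-frontend   # hypothetical Service fronting the app pods
          servicePort: 80

An ingress controller watches for objects like this and configures the actual routing and load balancing, which is why it only earns its keep once traffic volumes justify the extra moving parts.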

Having worked with many Fortune 100 enterprises on Kubernetes projects, I’ve personally seen these dynamics play out. A team will be given an aggressive business goal and identify Kubernetes as the right platform to solve the problem. The team will typically put together a list of phases with measurable milestones, understanding that they need to walk before they can run. But even with these constrained milestones, they will encounter issues configuring and running Kubernetes dependencies such as etcd. Or they will run into networking issues. Or they will hit some minor compatibility issue with their cloud, their OS or the version of the container image they are running.

None of these problems is insurmountable, but they bring people off the euphoric high pretty quickly as project timelines slip. Teams find themselves in a long, hard slog through issue after issue popping up in their internal Jira, while executives start to second-guess the approach. Was Kubernetes the right choice? Are there simpler solutions?

Prediction 2: The complexity of building and running Kubernetes applications will be addressed by the rise of Kubernetes platforms

I said before that I don’t think the community will just move on from Kubernetes. So how do these problems get solved? I believe that Kubernetes platforms will rise to address them. The Cloud Native Computing Foundation (CNCF) has realized that implementing Kubernetes is a challenge and has thus created a certification model for platforms, the Kubernetes Certified Service Provider (KCSP) program. Currently, kubernetes.io lists 16 KCSPs. The largest Kubernetes platform, Red Hat OpenShift, is notably missing from this list, but I see this more as an indication that OpenShift needs less external help from the CNCF at this point, since it is already established as an authority on running large-scale Kubernetes applications.

Prediction 3: We will see nearly 50 Kubernetes Certified Service Providers by the end of 2018

2018 will probably see a three-fold rise in the number of Kubernetes Certified Service Providers, but the bulk of customers will go either with the distribution run by their cloud, such as Azure Container Service (AKS), Google Container Engine (GKE) or Amazon’s new service, or with a cloud-agnostic platform such as Red Hat’s OpenShift or Tectonic. Why a customer goes with a particular option is largely a function of complexity and familiarity with the service provider.

Prediction 4: 70 percent of customers will opt for the Kubernetes platform from their cloud provider, OpenShift or Tectonic

Smaller customers will probably opt for the fully packaged offering from their cloud provider, even though they will be locked in and will find it difficult to implement multi-cloud strategies. These platforms offer fewer options for custom configuration, but that is outweighed by their greater simplicity. Larger enterprises will more often opt for a cloud-agnostic platform, not only because such platforms allow for more customization but also because they are less likely to be locked into their cloud providers, something that is helpful when it is time to negotiate price. Likewise, having a platform that can run anywhere makes it simpler to run an application across multiple sites, which is increasingly a requirement for availability-conscious enterprises.

Since Portworx is a provider of Kubernetes storage, you might be wondering what we make of all this. Our view is that solving persistent storage will continue to be a requirement of almost all Kubernetes projects, which is why Portworx integrates with all the major Kubernetes platforms. It is often said that 99 percent of enterprise applications are stateful. We believe that a significant portion of these apps will run on Kubernetes and that enterprises will need high availability (HA), backup, encryption, shared volumes, dynamic provisioning, resizing and other operational features that they have come to expect.
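To give a feel for what “dynamic provisioning” means in practice, here is a minimal sketch pairing a StorageClass with a PersistentVolumeClaim. The class name, claim name, replication factor and filesystem parameters are illustrative values for the in-tree Portworx provisioner, not a recommended configuration.

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: px-replicated                       # illustrative class name
provisioner: kubernetes.io/portworx-volume  # in-tree Portworx provisioner
parameters:
  repl: "2"     # keep two replicas of each volume for high availability
  fs: "ext4"    # filesystem with which to format the volume
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: orders-db-data                      # illustrative claim name
spec:
  storageClassName: px-replicated
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi                         # volume is provisioned on demand at this size

When a stateful workload references a claim like this, the volume is created, replicated and attached on demand, and features such as backup, encryption and resizing are layered on top of that same provisioning path.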

What 2018 will truly bring, I can’t know. But one thing I do know is that finding out is going to be a lot of fun.

The Cloud Native Computing Foundation and Red Hat are sponsors of The New Stack.

Feature image by Brenda Godinez on Unsplash.
