Why Serverless vs. Kubernetes Isn’t a Real Debate
Kubernetes and serverless have earned their reputations as powerful platforms that give organizations substantial gains in agility, scalability and computing performance. It is easy to forget, however, that Kubernetes offers advantages serverless alternatives do not, and vice versa. The key to deploying either successfully is knowing when and how to decide which one offers the best fit.
The Whys of Kubernetes
Kubernetes itself was designed for cloud-scale computing: deployments at enormous scales like you'd see at Google, where it was first developed. It has since been adapted for use at smaller scale and is available on most of the large cloud providers, which accounts for its explosive growth over the past few years. The growth of Kubernetes has far outpaced all other forms of orchestration software, according to user surveys from the Cloud Native Computing Foundation (CNCF), which has taken over ownership of Kubernetes.
Since its debut, Kubernetes has gone mainstream. But just as there was pain in moving from mainframes to client-server, there are still significant pain points in adopting a fully container-based architecture, even one orchestrated by Kubernetes. Scaling is not instantaneous, since you have to wait for a container to come online, and there is still significant management work to take on. According to the CNCF, storage, security and networking remain top concerns for those deploying their architectures via Kubernetes.
Maybe It’s Serverless
Serverless architectures, which in many ways are simply a repackaging and re-imagining of microservice architectures, are competing with Kubernetes because they allow applications to scale without the complexity and configuration headaches of Kubernetes, or even of containers. But don't confuse the two as equals.
Also known as Functions as a Service (FaaS), serverless architectures (and yes, they still need servers to run on) are event-driven, whereas containerized applications are, in essence, still fairly traditional applications divided into many smaller parts or services. A containerized application is never entirely shut down: even if no one is accessing it, its containers still need to exist and run. You can scale them down to single instances, but they will still be there, and still be costing money.
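To make the contrast concrete, here is a minimal sketch of the event-driven FaaS shape. The handler signature loosely follows AWS Lambda's Python convention, but the event fields are hypothetical, chosen purely for illustration:

```python
# Minimal FaaS-style handler: the platform invokes this function only
# when an event arrives; no process runs between invocations, and no
# local state survives from one call to the next.
# (Signature loosely follows AWS Lambda's Python convention; the event
# fields here are hypothetical.)
def handler(event, context=None):
    # Everything the function needs arrives in the event itself.
    name = event.get("name", "world")
    return {"statusCode": 200, "body": f"Hello, {name}!"}
```

A containerized service, by contrast, would wrap this same logic in a long-lived web server process that keeps running, and keeps being billed, whether or not any requests arrive.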
A serverless application, if there are no requests for any of its functions, can drive costs to zero; its functions essentially cease to exist unless they are explicitly invoked. This can mean dramatically lower costs and much faster scaling: the more a serverless application is accessed, the larger it scales.
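The cost difference comes down to what you are billed for: hours of existence versus invocations. A back-of-the-envelope sketch, using purely illustrative prices that are assumptions rather than any provider's actual rates:

```python
# Back-of-the-envelope cost comparison: an always-on container vs a
# pay-per-invocation function. All prices here are illustrative
# assumptions, not any provider's actual rates.
HOURS_PER_MONTH = 730

def container_monthly_cost(price_per_hour):
    # A container instance is billed for every hour it exists,
    # even with zero traffic.
    return price_per_hour * HOURS_PER_MONTH

def faas_monthly_cost(invocations, price_per_million):
    # A function is billed per invocation; zero traffic costs zero.
    return invocations / 1_000_000 * price_per_million

# With no traffic, the function costs nothing while the container
# still accrues its full hourly rate all month.
assert faas_monthly_cost(0, price_per_million=0.20) == 0.0
```

The crossover point depends entirely on traffic: at sufficiently high, steady volume the per-invocation model can cost more than a flat-rate container, which is part of why neither model wins outright.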
The idea that serverless architectures will replace containerized applications does not seem rational. Not everything can be reduced to an ephemeral function. Some applications will always need to persist data and state while running, and this is not something serverless architectures are designed for. Interest in serverless is nevertheless growing rapidly.
According to MarketsandMarkets Research, for example, the FaaS market is estimated to skyrocket from $1.88 billion in 2016 to $7.72 billion by 2021.
However, this is not a zero-sum game, and the growth of serverless does not necessarily portend the death of Kubernetes and containers. In fact, it may even expand the usage of Kubernetes, at least among the major FaaS providers, as a way to scale their serverless offerings.
Serverless architectures are likely to expand as a way to drive down costs further by paying only for exactly the compute that is used, not for the overhead of running a container or a group of containers. As with everything, though, there is a tradeoff. Serverless code that is infrequently accessed, while it costs less to run, may suffer increased latency whenever a runtime (like Java) or the underlying container must be brought up to service the request. These additional latencies may or may not be acceptable.
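The cold-start effect can be sketched with a toy model: if the gap since the last request exceeds the platform's idle timeout, the instance has been reclaimed and the next request pays the initialization cost. The timeout and timings below are illustrative assumptions, not measurements of any real platform:

```python
# Toy simulation of cold starts for an infrequently invoked function.
# A request is "cold" if the previous instance has been idle longer
# than the platform's reclaim timeout; cold requests pay the runtime
# and container initialization cost on top of execution time.
# All timings here are illustrative assumptions.
def simulate_latencies(arrival_times_s, idle_timeout_s=300,
                       init_ms=800, exec_ms=20):
    latencies = []
    last = None
    for t in arrival_times_s:
        cold = last is None or (t - last) > idle_timeout_s
        latencies.append((init_ms if cold else 0) + exec_ms)
        last = t
    return latencies

# Requests at 0s, 10s, then after a 20-minute gap: the first and third
# are cold, the second is warm.
print(simulate_latencies([0, 10, 1210]))  # [820, 20, 820]
```

Steady traffic keeps instances warm, which is why cold starts bite hardest on exactly the low-volume workloads where serverless is cheapest.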
From a developer perspective, however, FaaS can offer a boost in productivity and developer happiness: developers can push code to production in smaller pieces, more rapidly, and without configuration and management overhead.
Application development and deployment strategies, like everything in computing, are constantly evolving. Often the movement from one architecture to another signals the end of the first implementation, but this is not always the case. There’s also not a one-size-fits-all solution, at least at this point, to solving all the problems with delivering applications cheaply and at scale. As with any deployment model, there are trade-offs between cost, performance, and manageability that need to be taken into consideration.
Kubernetes — and containerization in general — has its rightful place and the rapid adoption and growth of the Kubernetes market is proof that it is filling a need in the marketplace. I don’t see the need for containerization, and with it the need for container orchestration, going away anytime soon. But it is not always the right solution.
Likewise, serverless FaaS is obviously filling a need in the market and is exhibiting significant growth overall. Growth does not necessarily imply fitness for purpose, of course, but markets have a tendency to self-correct to compensate for that.
Again, the Kubernetes vs. serverless debate is not a zero-sum game. The growth of serverless does not signal the death of Kubernetes. Each has a significant role to play in the development and deployment of modern applications. Application deployment has been on a steady trajectory towards smaller, more manageable, more cost-effective and developer-friendly architectures for the past 20 years, and there's no reason to suspect that trend will not continue. While it's possible that serverless is the logical conclusion of the abstraction of applications to their most basic components, not all applications can be delivered in such a way. Equally true is that some applications, either for reasons of persistence or scalability, will require containers, which will require orchestration and management.
There is no reason that these two technologies can’t continue to show significant growth without directly competing with one another.