TriggerMesh sponsored this post.
In the last 20 years, few technologies have seen a more meteoric rise than Kubernetes. The container orchestration platform was initially released in 2014, and a 2019 survey by StackRox found that more than 86% of organizations had adopted it, a 51% increase over the previous six months. Part of the appeal is that users can run Kubernetes in the cloud as well as self-manage it on-premises. The StackRox survey shows that more organizations self-manage Kubernetes (44%) than use any other deployment model. Among managed Kubernetes offerings, Amazon EKS leads with 27%, followed by Azure AKS (16%), Google GKE (12%) and IBM Red Hat OpenShift (12%).
Related to the explosion of Kubernetes is the rise of serverless computing, which abstracts away the operating system, storage, networking and other systems so that developers only have to focus on writing the application code. According to Forrester Research, nearly 50% of companies are either using or planning to adopt serverless architectures in the next year.
This pace of standardization is rare in an industry full of vendors constantly striving for product leadership. The more common scenario is what happened when virtualization emerged as a game-changing infrastructure pattern in the early 2000s: a diversity of competing technologies from VMware, Microsoft, Red Hat and Citrix. In contrast, today we are seeing standardization around containers and a de facto container orchestration technology in Kubernetes. This means more energy is being devoted to enhancing the technologies and less to differentiating them, which is good for DevOps practitioners.
Digital Transformation or Digital Augmentation
The growth of Kubernetes is really part of a bigger strategy around modernization often referred to as digital transformation. The goals for undertaking this type of initiative include improved operational efficiency, faster time to market and the ability to meet customer expectations.
There are two types of IT projects: greenfield and brownfield. Greenfield projects, where an application is built from scratch, get most of the attention these days. Think of greenfield as equivalent to building a new house with new materials and no legacy considerations.
We believe there could be even broader opportunities for serverless with brownfield projects: in-progress projects or production apps built using legacy technologies. This is more analogous to remodeling a house, where the bones are there but you update the facade to modernize the looks and amenities of your "legacy house."
To extend microservices and Kubernetes to serverless, greenfield projects may be suited for “pure-play serverless” — that is, lift and replace some or all of the application with a serverless architecture. Brownfield projects are often suited for augmentation through the addition of serverless functions, providing a gateway into the cloud native world.
With brownfield projects, DevOps teams can modernize APIs, complete extract, transform and load (ETL) tasks through workers hosted on serverless functions and develop applications on serverless infrastructure that can also be used to manage legacy infrastructure. These digital augmentations can extend the useful life of an existing IT investment.
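To make the ETL idea concrete, here is a minimal sketch of the kind of worker that could run as a serverless function: it extracts rows from a CSV payload, transforms them into typed records, and returns them for loading downstream. The event shape and field names (id, amount) are illustrative assumptions, not a real API.

```python
import csv
import io
import json

def etl_handler(event):
    """Extract rows from a CSV payload, transform, and return records."""
    reader = csv.DictReader(io.StringIO(event["csv"]))
    records = [
        # Transform step: parse types and normalize money to integer cents.
        {"id": int(row["id"]), "amount_cents": round(float(row["amount"]) * 100)}
        for row in reader
    ]
    # The "load" step here is simply returning the records; a real worker
    # would write them to a warehouse or queue.
    return {"count": len(records), "records": records}

# Invoking locally with a sample payload:
sample = {"csv": "id,amount\n1,9.99\n2,0.50\n"}
print(json.dumps(etl_handler(sample)))
```

Because each invocation handles one self-contained payload, such workers scale out naturally and cost nothing while idle.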
Overcoming Cloud Native Obstacles
The obstacles to serverless are no greater than those of any other new technology; it's simply a matter of knowledge and experience. Many developers are used to writing code and letting someone else run it, but with serverless, developers often participate more directly in deploying their applications.
However, many organizations simply don't have the in-house skill set to deliver cloud native applications. Sheen Brisals, Amazon Web Services (AWS) solutions architect for The LEGO Group, which has had great success with serverless computing, shared the following keys to serverless success in an interview with New Relic:
Developing the organizational mind-shift to see serverless as an ecosystem of managed services rather than just a bunch of Lambda functions.
It is often challenging (or even impossible) to show upper management a convincing like-for-like cost comparison between an on-premises or hosted environment and a serverless estate. Without this direct comparison, both the migration to serverless and management buy-in are often delayed.
In essence, the real obstacles to serverless and cloud native adoption are a matter of knowledge and experience.
Understanding Cloud Native Design Patterns
Traditionally, applications have been deployed with persistent data and services. Even microservices, though stateless and ephemeral by definition, tend to run in "always-on" virtual machines or containers. In the serverless world, functions are even more ephemeral: they are triggered by events, execute individually or in parallel, finish their task, then scale down to zero. They are stateless, like a puff of smoke that rises and dissolves in the cloud.
Event-driven serverless architectures rely on events that indicate changes in a given system. Event-driven design enables loose coupling of services, which supports service abstraction and isolation, deployment flexibility and independent scaling. This is especially relevant to function platforms: because functions are smaller in scope, a loosely coupled architecture lets functions and microservices operate independently. Versioning can be done by bringing up a new service, and if that service introduces an unwanted change, developers can easily switch back to the earlier version. This differs from monolithic architectures, which must be recompiled to incorporate changes in software libraries. The loose coupling that this style of versioning offers is key to enabling many smaller components to work effectively together while leveraging operational independence, autoscaling and on-demand cost models.
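The pattern above can be sketched in a few lines. The handler signature below follows the common AWS Lambda convention, but the event shape and the order-total logic are illustrative assumptions; the point is that everything the function needs arrives in the triggering event and nothing is retained afterward.

```python
import json

def handler(event, context=None):
    """Triggered by an event; does its work, returns, and scales to zero."""
    body = json.loads(event["body"])
    # The function is stateless: its entire input is the event, and its
    # entire output is the returned value (or a new event it emits).
    total = sum(item["price"] * item["qty"] for item in body["items"])
    return {"statusCode": 200, "body": json.dumps({"order_total": total})}

# Invoking locally with a sample event, as a test harness might:
sample = {"body": json.dumps({"items": [{"price": 10, "qty": 2}]})}
print(handler(sample))
```

Because the handler depends only on its event, a new version can be deployed alongside the old one and traffic shifted back instantly if it misbehaves, which is the versioning flexibility described above.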
Serverless functions require a different type of monitoring than the availability monitoring most IT administrators are familiar with. That's because serverless functions are invoked on an event and shut down, or scale to zero, upon completion. Serverless system state and health are therefore inferred from properties of the system over time, typically through log data that lets administrators see the results of inputs and outputs over time. There are a growing number of observability solutions, but they are typically not integrated with your existing monitoring suite (Datadog is a notable exception).
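One common way to get that per-invocation log data is to wrap the handler so every run emits a structured log line. This is a generic sketch, not any vendor's agent; the field names (duration_ms, input_keys) are illustrative assumptions.

```python
import json
import logging
import time

logging.basicConfig(level=logging.INFO, format="%(message)s")
log = logging.getLogger("fn")

def instrumented(fn):
    """Decorator: emit one structured log line per invocation, so health
    can be inferred from logs even after the function has scaled to zero."""
    def wrapper(event):
        start = time.monotonic()
        result = fn(event)
        log.info(json.dumps({
            "fn": fn.__name__,
            "duration_ms": round((time.monotonic() - start) * 1000, 2),
            "input_keys": sorted(event.keys()),
            "ok": True,
        }))
        return result
    return wrapper

@instrumented
def resize_image(event):
    # Stand-in for real work; returns a summary of what it did.
    return {"resized": event["name"]}

resize_image({"name": "cat.png"})
```

A log aggregator can then chart invocation counts, durations and error rates over time, which stands in for the "is the server up?" check that no longer applies.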
Serverless can reduce the security burden you shoulder. Because serverless providers handle infrastructure, network and host security, that layer no longer has to be managed in-house with the same diligence; it is delegated to what we hope are more knowledgeable experts at the cloud providers. However, organizations still have to remain concerned about managing security in serverless environments. New attack vectors have emerged that target serverless applications and hosting infrastructure, and familiar attacks have been reimagined for serverless environments.
Stateless vs. Stateful Functions
Perhaps the biggest difference with serverless development is that functions are stateless, whereas most applications are stateful. Stateful applications remember preceding events or user interactions. Since serverless functions are ephemeral and scale to zero when not in use, state from one invocation of a function is not available to another invocation of the same function. To build a serverless application that requires state, the state data must be stored in an external database or cache. While not the norm, there are a few technologies, such as Microsoft Azure Durable Functions, that allow you to write stateful functions in a serverless environment.
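The externalized-state pattern looks like this in miniature. Here a module-level dict stands in for the external database or cache (DynamoDB, Redis, etc.); in production each invocation would read and write that real store, since nothing in the function itself survives between runs. The visit-counting scenario is an illustrative assumption.

```python
# Stand-in for an external store such as DynamoDB or Redis. In a real
# deployment the function would call the store over the network; it
# cannot rely on its own memory, which vanishes at scale-to-zero.
STORE = {}

def count_visit(event):
    """Stateless handler: the running count lives in the external store,
    so any instance, including a fresh cold start, sees the same value."""
    user = event["user"]
    STORE[user] = STORE.get(user, 0) + 1
    return {"user": user, "visits": STORE[user]}

print(count_visit({"user": "alice"}))  # {'user': 'alice', 'visits': 1}
print(count_visit({"user": "alice"}))  # {'user': 'alice', 'visits': 2}
```

Durable Functions and similar offerings automate this bookkeeping, checkpointing state for you so the code reads as if it were stateful.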
Serverless, and cloud native architectures in general, are gaining massive adoption, but the learning curve for many teams is steep. Some organizations may simply wait for certain applications to reach end-of-life and then rewrite them. For example, COBOL, long declared a dead language, is still alive and well in many applications. As this survey from Micro Focus shows, the code base is growing, and companies using it prefer to modernize systems rather than replace them. This is one of many examples of where cloud native front ends could help extend the life of applications that may have to serve for decades rather than a few years.
Luckily, it’s relatively easy to dip a toe into serverless by starting slowly with maintenance tasks and the automation of relatively independent functions deployed on AWS Lambda or the function-as-a-service (FaaS) provider of your choice. However, to fully benefit from cloud native, companies will need a thoughtful, strategic plan that incorporates new capabilities like serverless in the way best suited to applications at different places in their lifecycle. Whether you have a new greenfield project or need to augment and enhance a brownfield project, you can see benefits from serverless computing.
VMware, Red Hat and AWS are sponsors of The New Stack.
Feature image via Pixabay.