How, and When, to Deploy Serverless

26 Sep 2018 9:59am

We’ll say it: serverless is not going to work for all IT infrastructures. And as serverless becomes more popular, many organizations are adopting the platform for the wrong reasons.

Among the best reasons to “go serverless”: the platform does away with many of the cumbersome tasks of server management, since a third party maintains the actual physical servers. The typical pay-as-you-go model, often consisting of ephemeral containers on a functions as a service (FaaS) platform, represents an additional benefit, eliminating idle server capacity while the flexible pricing model adds cost savings.

Unfortunately, it is tempting to think serverless can solve many deployment and resource management issues that it really cannot.

“Serverless platforms are still relatively new for most organizations and many developers start implementing solutions because of the scalability and ease of deployment. They later discover that it is incredibly difficult to ensure low-latency response times, both because of the cold start time as well as the cost of starting the runtime for each request,” Zach Ozer, vice president of engineering at Clubhouse, said. “Optimizations, such as a warmup script and reduced package sizes can help, and CDNs are invaluable when caching is an option. However, developers should carefully consider their performance requirements before going serverless, as a standard HTTP server may be a better fit for their needs.”
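The warmup optimization Ozer mentions can be sketched in a few lines. This is a minimal illustration, assuming an AWS-Lambda-style handler pinged periodically by a scheduled "warmup" event; the event shape and field names here are hypothetical, not a real provider API.

```python
import json

# Expensive setup (SDK clients, config, model loading) runs once per
# container, at import time, so warm invocations skip it entirely.
EXPENSIVE_RESOURCE = {"initialized": True}

def handler(event, context=None):
    # Short-circuit scheduled warmup pings before doing any real work.
    # This keeps the container warm without paying for the full code path.
    if event.get("source") == "warmup":
        return {"statusCode": 204, "body": ""}

    # Real request path: the container (and EXPENSIVE_RESOURCE) is
    # already warm, so latency is just the handler body itself.
    name = event.get("name", "world")
    return {"statusCode": 200,
            "body": json.dumps({"greeting": f"Hello, {name}"})}
```

The same idea applies to reduced package sizes: the less there is to load at import time, the cheaper each cold start becomes.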

Serverless, of course, can offer great advantages, as mentioned above, especially if you are building or deploying relatively unsophisticated code or apps. For a developer or a small team working on a pilot project, nothing could be easier than building, say, a web service application. The app is then deployed and stored with very little extra code on a serverless platform, with many frameworks available to do the job.
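For a sense of how little extra code that deployment step can involve, here is an illustrative configuration in the style of the popular Serverless Framework, wiring a single handler to an HTTP endpoint; the service and function names are placeholders.

```yaml
# serverless.yml (illustrative): one function, one HTTP route.
service: hello-service

provider:
  name: aws
  runtime: python3.9

functions:
  hello:
    # "handler.hello" means the hello() function in handler.py.
    handler: handler.hello
    events:
      - http:
          path: hello
          method: get
```

With a file like this, a single deploy command provisions the function and its endpoint; the developer never touches a server.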

With serverless frameworks and the other functionality FaaS offers, many tasks otherwise associated with scaling applications and deploying software to servers become very simple, Eric Schrock, chief technology officer for Delphix, said. “But to a large degree, serverless has solved that by just pushing complexity into the connective tissue of the cloud, or the other things that connect those functions together,” Schrock said. “Who, for example, is responsible for managing data schema transitions in this distributed architecture? It’s like the ‘Wizard of Oz’ with the man behind the curtain, and when you look behind the curtain, you see it is the glue that you need to put together to make it all work.”

Leading tech firms have successfully built serverless, as well as Kubernetes and microservices platforms to massive scales — and have been very vociferous about their successes. But for most organizations, building serverless to massive scale does not necessarily represent a business case to follow.

“The Netflixes, Googles and Amazons get a disproportionate amount of media attention. But the reality is their approaches and architectures aren’t actually right for most companies: for building an app inside your company, you don’t necessarily need continuous CI/CD serverless deployments with rolling upgrades,” Schrock said. “But serverless totally makes sense either as glue within a cloud ecosystem or when you are deploying something at a scale where the benefits are worth the cost.”

Understand What You Need

Serverless still has a long way to go before it becomes a mainstream platform, if it ever does. In the meantime, organizations’ DevOps teams generally need a deeper, industry-wide understanding of what exactly serverless is and how it can, or cannot, help them.

“At this point, I think a lot of people are still making assumptions, as opposed to gauging practical business requirements, about how serverless can help them,” said Jim Scott, vice president of enterprise architecture for MapR. “There are still not enough people doing it or asking for it who have a solid enough understanding to say ‘yeah, that’s exactly what should be in place.’”

If serverless is right for an organization, its DevOps team should know exactly what it is looking for, whether adopting serverless for the first time or not. “They should know if it’s done right, they won’t have to worry about wrapping the code into a container, for example,” Scott said. “Software is automatically deployed into a container, as is the storage, and then it is integrated as part of a larger infrastructure.”

The Case for a Standard Cloud API

When serverless is right for an organization, it is prudent to avoid being locked into a single FaaS vendor and/or API, especially for multicloud deployments.

“When it comes to serverless, I think people want to have a standard API to use so they can move around freely, such as being able to run the same software across multiple clouds and on-premises,” Scott said. “Otherwise, if they want to run the same business logic in two places in serverless, they would have to rewrite it.”
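Until such a standard API exists, one common way to limit the rewrite cost Scott describes is to keep business logic provider-neutral and isolate each vendor's event format in a thin adapter. The sketch below assumes simplified stand-ins for the AWS Lambda and Google Cloud Functions event shapes; the function names and payload fields are illustrative.

```python
def lookup_discount(customer_tier: str) -> float:
    """Pure business logic: no cloud SDKs, so it is trivially portable."""
    return {"gold": 0.20, "silver": 0.10}.get(customer_tier, 0.0)

def aws_handler(event, context=None):
    # Adapter for an AWS-Lambda-style API Gateway event (simplified shape).
    tier = event.get("queryStringParameters", {}).get("tier", "")
    return {"statusCode": 200, "body": str(lookup_discount(tier))}

def gcp_handler(request):
    # Adapter for a Cloud-Functions-style request object (simplified shape).
    tier = request.args.get("tier", "")
    return str(lookup_discount(tier))
```

Moving to a second cloud then means writing one small adapter, not rewriting `lookup_discount` itself.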

Adopt Now, Pay Later

Developers are often the first to benefit from serverless, but the agility they gain may come at a cost elsewhere in the organization, Armon Dadgar, founder and co-chief technology officer of HashiCorp, said. “Typically, this means developers are directly submitting containers or functions without following established processes,” Dadgar said. “While faster initially, this can create a mess operationally.”

Organizations adopting serverless shouldn’t forget all the standard best practices, including having a proper CI/CD pipeline, versioning the code, using infrastructure as code to define how applications are deployed and using proper secrets management, Dadgar said. “Otherwise, developers will move quickly and create a mess that needs to be cleaned up by operations and security teams,” Dadgar said.

Given that it is still early days for the ecosystem, organizations should look at how serverless platforms can holistically fit into their infrastructure offerings, Niraj Tolia, co-founder and CEO of Kasten, said.

“It will usually make more sense to deploy platforms that are a part of a spectrum,” Tolia said. “Further, organizations need to do a careful workload analysis to prevent surprises down the line in terms of cost and complexity.”

Maintaining observability and monitoring capabilities is, of course, critical.

“People deploying serverless technology often restrict their thinking to the small piece of code behind a serverless function, leaving the operational realities of maintaining and understanding that code as a literal afterthought. In a production environment, and especially during a fire, this can lead to great difficulties with root-cause analysis and debugging,” Ben Sigelman, co-founder and CEO of LightStep, said. “By building observability into serverless software, it can start off on the right foot: as a visible, maintainable piece of the larger distributed application.”
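One lightweight way to build in the observability Sigelman describes is to wrap every function handler so that each invocation emits a structured, trace-friendly record. This is a minimal sketch using only the standard library; the field names and the `lookup` handler are hypothetical examples, not any vendor's API.

```python
import functools
import json
import logging
import time
import uuid

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("fn")

def observed(fn):
    """Wrap a handler with structured logs and timing, so every
    invocation is visible rather than an operational afterthought."""
    @functools.wraps(fn)
    def wrapper(event, *args, **kwargs):
        invocation_id = str(uuid.uuid4())
        start = time.monotonic()
        try:
            result = fn(event, *args, **kwargs)
            outcome = "ok"
            return result
        except Exception:
            outcome = "error"
            raise
        finally:
            # One JSON log line per invocation, success or failure.
            log.info(json.dumps({
                "invocation_id": invocation_id,
                "function": fn.__name__,
                "outcome": outcome,
                "duration_ms": round((time.monotonic() - start) * 1000, 2),
            }))
    return wrapper

@observed
def lookup(event):
    return {"user": event.get("user", "unknown")}
```

In practice the log line would carry a trace ID propagated from the caller, so a request can be followed across the distributed application.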

The Kubernetes Equation

Serverless can offer potentially great leaps in agility and cost savings, in spite of the above-mentioned caveats. But serverless is neither dependent on nor mutually exclusive with Kubernetes or microservices platforms. Kubernetes is used to orchestrate multiple containers in order to deploy large, scalable and complex applications, David Simmons, the IoT developer evangelist at InfluxData, said, while serverless is appropriate for deploying small, specific functions for an application.

“So, let’s say you have a large, complex, application that handles many hundreds of thousands of interactions simultaneously,” Simmons said. “You could divide parts of that application up into individual services and spread them across a bunch of containers, all orchestrated by Kubernetes so that they can scale up and down as needed.”

A single database lookup, on the other hand, such as looking someone’s name up, could be implemented as a serverless function, Simmons said. It only runs when someone directly calls it, and there is no long-lived container underneath. “If no one calls that function, it never appears. Now, that function can look someone up in a database that is run in a container, managed by Kubernetes,” Simmons said. “They are not mutually exclusive.”
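Simmons' lookup example can be sketched as a single standalone handler. In production, the database behind it might well be a service running in a Kubernetes-managed container; here, an in-memory SQLite table stands in for that service so the sketch is self-contained and runnable, and all names are illustrative.

```python
import sqlite3

def _connect():
    # Stand-in for connecting to a database service that, in production,
    # could run in a container managed by Kubernetes.
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (name TEXT, email TEXT)")
    conn.execute("INSERT INTO users VALUES ('ada', 'ada@example.com')")
    return conn

def lookup_handler(event, context=None):
    """Runs only when invoked; there is no long-lived server process."""
    conn = _connect()
    row = conn.execute(
        "SELECT email FROM users WHERE name = ?",
        (event.get("name", ""),),
    ).fetchone()
    conn.close()
    return {"statusCode": 200 if row else 404,
            "body": row[0] if row else "not found"}
```

The function and the database scale independently: Kubernetes manages the database containers, while the FaaS platform spins the function up and down per request.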

InfluxData is a sponsor of The New Stack.

Feature image via Pixabay.

This post is part of a larger story we're telling about serverless technologies.
