Serverless promises to take much of the heavy lifting out of IT maintenance and operations: a third party handles more of the server-administration work than an organization would otherwise shoulder for container orchestration and microservices platforms. Serverless platforms also save resources, in part, by letting an organization pay for servers on a pay-as-you-use model.
There are also right and wrong ways to migrate to serverless resources. Here are a few things to keep in mind before making the leap.
Don’t Sweat the Semantics
DevOps team members who have heard about serverless and want to explore what a migration might offer their organization may at first be put off by how much confusion surrounds what serverless really means. Despite what the name implies, for example, “serverless” applications actually run on servers.
The use of the word “serverless” dates back to at least 2012, when Ken Fromm used the term in a ReadWrite article entitled “Why The Future Of Software And Apps Is Serverless.” A more precise and still-applicable definition appeared in a seminal 2016 blog post by Mike Roberts, partner and co-founder of Symphonia, on Martin Fowler’s site. Roberts writes that serverless architectures are “application designs that incorporate third-party ‘backend as a service’ (BaaS) services, and/or that include custom code run in managed, ephemeral containers on a ‘functions as a service’ (FaaS) platform.”
“By using these ideas, and related ones like single-page applications, such architectures remove much of the need for a traditional always-on server component,” Roberts writes. “Serverless architectures may benefit from significantly reduced operational cost, complexity, and engineering lead time, at a cost of increased reliance on vendor dependencies and comparatively immature supporting services.”
The above definition describes serverless offerings a number of vendors provide, including Lambda by Amazon Web Services (AWS), which is very widely used and offers both BaaS and FaaS services. Other serverless offerings include Cloud Functions (Google), IronWorker (Iron.io), Manta Functions (Joyent), OpenWhisk (IBM), PubNub BLOCKS (PubNub) and Serverless Docker (Docker).
Know What You’re Not Getting Into
Much of the beauty of serverless lies in letting organizations save resources by outsourcing server and associated infrastructure-management tasks to a third party, as mentioned above. A development team, for example, can focus on creating apps while the serverless provider handles the server and data maintenance work offsite.
“Serverless’ ease of use can improve developer efficiency initially, since serverless computing abstracts developers further away from dealing with infrastructure so they can focus almost exclusively on building code for single functions,” Albert Qian, product marketing manager for Cloud Academy, said.
Serverless providers also let organizations scale up or back as their needs dictate, so fretting over redundant server capacity is no longer a concern. “Serverless architectures let cloud providers handle the burden of managing servers and scaling to meet demand. Think of serverless as an abstraction upon platform as a service (PaaS) where all you have to do is upload your code, so when you need to scale a function up or down, you just scale that single function,” Qian said. “You thus don’t need to scale an entire system, a container or an application. Another real benefit is that serverless has built-in fault tolerance and high availability by design.”
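The single-function unit of deployment Qian describes can be illustrated with a minimal, Lambda-style handler. This is a hypothetical sketch; the event shape and handler signature simply follow the common FaaS pattern, and the platform, not the developer, decides how many instances of it run concurrently:

```python
import json

def handler(event, context=None):
    """A minimal FaaS-style function: the platform invokes it once per
    request and scales the number of concurrent instances automatically,
    so only this one function is deployed and scaled, not a whole app."""
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }
```

Because the function is stateless and self-contained, scaling it up or down never touches any other part of the system.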
Consequently, money can be saved by relying on third parties for many operations and data-management tasks. “One of the biggest advantages of serverless is that it offers a lower total cost of ownership. With FaaS as your compute layer, you pay only to invoke functions — and only when they run,” Qian said. “Cost savings can be significant, especially for lower-utilization workloads, since you only pay for the compute resources needed to handle requests instead of paying for idle servers waiting for requests to come in.”
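Qian’s pay-per-invocation point is easy to see with a back-of-the-envelope estimate. The rates below are illustrative assumptions for the sketch, not any vendor’s current pricing:

```python
# Illustrative per-invocation pricing model (assumed rates, not a quote):
PRICE_PER_MILLION_REQUESTS = 0.20  # dollars per 1M invocations
PRICE_PER_GB_SECOND = 0.0000167    # dollars per GB-second of compute

def monthly_cost(invocations, avg_duration_s, memory_gb):
    """Estimate monthly FaaS compute cost: you pay only for requests
    actually served, never for idle servers waiting for traffic."""
    request_cost = invocations / 1_000_000 * PRICE_PER_MILLION_REQUESTS
    compute_cost = invocations * avg_duration_s * memory_gb * PRICE_PER_GB_SECOND
    return request_cost + compute_cost

# A low-utilization workload: 100,000 requests/month at 200 ms and 512 MB
print(round(monthly_cost(100_000, 0.2, 0.5), 2))
```

For a workload like this, the bill stays well under a dollar a month, whereas an always-on server would cost the same whether or not requests arrive.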
However, by handing over server management to a third party, your organization also relinquishes considerable control — a key consideration when mulling if and when to migrate to serverless.
“Serverless is catching on so quickly because of the resource management problems it solves. For smaller companies not wanting the responsibility of maintaining a server, it’s perfect,” Jeremy Scott, a senior developer for Raygun, said. “With less maintenance responsibility involved, you get the job done, and it’s cheaper than using your own [server infrastructure]. But herein lies its limitations: when you hand over responsibility, you also hand over control and access, which makes monitoring and analysis of your service quite hard.”
In this way, analytics capabilities for microservices and containers running on serverless platforms can become an issue. “You must understand your software, or risk making business decisions that aren’t backed by data,” Scott said. “So, if you’re using serverless, then in some cases you’ll need to add the monitoring to the client, such as an app or browser, or you can try to add exception handling to your serverless functions.”
While challenging, applying analytics software to serverless deployments is still viable. For exception handling in serverless functions, for example, if the function can be wrapped in a try-catch that then sends an HTTPS API request, an error-reporting client, such as the one Raygun offers, can be used.
However, with an API-based service, monitoring control is almost certainly lost, while the “next-best option is to observe how that API behaves,” Scott said.
When deciding what can be ported to a serverless infrastructure, less-powerful computing performance is one drawback associated with serverless deployments. “There are hard limits on serverless functions. Anything requiring over 3GB of memory will have to be re-architected or it won’t run,” Cloud Academy’s Qian said. “It’s important to make sure that teams understand limitations and that they have the knowledge, experience and sufficient training to determine best-fit technology for any given workload. The ‘cool’ factor of serverless, or any new technology, can sometimes shroud critical thinking when it comes to architecture, and the result can be a spaghetti-like mix of functions rather than a coherent application.”
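Qian’s point about hard limits suggests a simple pre-migration screen. The sketch below is a hypothetical triage helper, using the 3GB figure he cites as the ceiling:

```python
# The hard memory ceiling Qian cites for serverless functions, in GB.
FAAS_MEMORY_CEILING_GB = 3.0

def triage(workloads):
    """Split candidate workloads into those that fit FaaS limits as-is
    and those that must be re-architected or kept on another platform.

    `workloads` is a list of (name, peak_memory_gb) pairs."""
    fits, rearchitect = [], []
    for name, peak_memory_gb in workloads:
        if peak_memory_gb <= FAAS_MEMORY_CEILING_GB:
            fits.append(name)
        else:
            rearchitect.append(name)
    return fits, rearchitect

print(triage([("thumbnailer", 0.5), ("etl-batch", 8.0)]))
```

A screen like this keeps the best-fit decision explicit, rather than letting the “cool” factor decide which workloads get ported.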
Like monitoring and analytics, securing your data on a serverless infrastructure can be challenging, given the obvious implications associated with relinquishing server management and control to a third party.
Serverless runtime environments, for example, are not standardized and are “tinted by the cloud in which they’re running, which makes it more difficult to secure them generically,” Rani Osnat, vice president of marketing for Aqua Security, said.
“For serverless, I’d say there are several key security issues,” Osnat said.
Certain security checks are especially critical when making the migration. Potentially vulnerable code used in serverless functions, for example, “needs to be addressed before it is used,” while enforcement controls that block the deployment of FaaS functions violating policy must also be in place, Osnat said.
While challenging, as mentioned above, organizations need to make sure the serverless platforms they adopt “monitor data behavior in runtime in order to detect anomalies,” Osnat said.
Multitenancy challenges also can emerge when migrating to serverless, Cloud Academy’s Qian said. “Even if customer workloads are isolated in virtual machines or containers, there can be security issues, especially if an application works with sensitive data,” Qian said. “Potential bugs in one customer’s code can affect the performance of another application, impacting customer service and application quality.”
Feature image via Pixabay.
The New Stack is a wholly owned subsidiary of Insight Partners, an investor in the following companies mentioned in this article: Docker.