
Serverless Challenges We Need to Overcome

18 Mar 2019 11:00am, by Aviad Mor

Aviad Mor is the co-founder and CTO of Lumigo, the serverless intelligence platform. For over a decade, Aviad oversaw the development of core products at Check Point from inception to wide adoption.

Serverless is growing — not just the technology or the adoption, but also what we mean by the term “serverless” itself. Instead of just Functions-as-a-Service (most famously AWS Lambda), we can understand serverless today as applying to any fully scalable, event-driven infrastructure where you don’t manage the server. A typical serverless application includes multiple services and functions: databases (DynamoDB), file storage (S3), messaging services (SQS) and API triggers (API Gateway), often wired together by serverless functions into an end-to-end application.
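As a minimal sketch (not from the article), one such “nanoservice” might be a Python Lambda handler triggered by SQS. The event shape below follows the standard SQS-to-Lambda record format; the order payload and its fields are hypothetical:

```python
import json

def handler(event, context):
    """Hypothetical nanoservice: consume SQS messages carrying JSON orders.

    SQS delivers each message in a record's "body" field; Lambda passes
    the batch in under "Records". The order schema here is illustrative.
    """
    orders = [json.loads(record["body"]) for record in event.get("Records", [])]
    total = sum(order["amount"] for order in orders)
    return {"processed": len(orders), "total": total}
```

In a real deployment this function would be one small node in the larger event-driven graph, with DynamoDB or S3 on the other side of it.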

This is a step beyond the containerized microservices that have become familiar in recent years. Serverless apps are effectively built from multiple “nanoservices,” each performing a single, specialized role within a specific microservice.

Fundamentally, as Simon Wardley argues, serverless turns computing into a commodity like electricity. When electricity first became widely available, new companies adopted the cheaper, lighter, easier-to-use electric machines. Older businesses, though, already had their kerosene-powered behemoth production lines and were much slower to switch.

Serverless faces a similar adoption pattern, though at a much faster rate. New software companies are becoming the first to take advantage of the flexibility, scalability and cost savings. Older, more established enterprises are starting to experiment with serverless as an extension to their existing monolith- or microservices-based applications, but wide use of the technology may take more time.

Serverless is not an all-or-nothing methodology. Teams don’t need to replace their entire technology stack; they can start with a few small components connected to their legacy systems and, over time, transfer more and more workloads where it makes sense.

We’re noticing a familiar adoption pattern for serverless: a software engineer at a company starts playing around with serverless functions for fun, perhaps connecting them to Slack or Alexa to add some specific functionality. From there, it’s a short step to using Lambdas to automate basic IT operations or monitoring tasks, such as checking resource usage once an hour or taking database snapshots. The next step is usually using Lambdas in a business-oriented application, such as data processing in an ETL pipeline.
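A hedged sketch of the snapshot-automation step mentioned above, assuming a Lambda invoked on a CloudWatch Events schedule. The instance name and naming scheme are hypothetical, and boto3 is imported inside the handler so the naming logic can run without AWS credentials:

```python
import datetime

DB_INSTANCE = "orders-db"  # hypothetical RDS instance identifier

def snapshot_id(instance: str, when: datetime.date) -> str:
    # Deterministic snapshot name, e.g. "orders-db-2019-03-18"
    return f"{instance}-{when.isoformat()}"

def handler(event, context):
    """Scheduled-maintenance Lambda: take today's snapshot of the instance."""
    import boto3  # imported lazily so the naming logic is testable offline
    rds = boto3.client("rds")
    sid = snapshot_id(DB_INSTANCE, datetime.date.today())
    rds.create_db_snapshot(
        DBSnapshotIdentifier=sid,
        DBInstanceIdentifier=DB_INSTANCE,
    )
    return {"snapshot": sid}
```

Wiring a schedule to this handler is a few lines of configuration, which is exactly why these ops chores tend to be a team’s first production use of Lambda.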

Some early innovators, like Capital One, iRobot, Netflix and Airbnb, are already using serverless extensively. Other companies will get there incrementally, adding more serverless components as they go.

Amazon, the biggest cloud provider, announced a raft of new serverless products and features at re:Invent in November, and the other cloud providers are all making their own advances in widening the serverless product space.

There are still some limitations that need more work. The “cold start” issue for functions and services is a concern for many serverless adopters (perhaps a misplaced concern in some cases, but that’s another story). Function concurrency limits can be another problem when you start to scale.

Logging is another concern. A single serverless application uses multiple components, services, programming languages, regions and sometimes even several cloud providers. Individual services can create logs, but following the application flow is a challenge because there’s no simple way to track a request from one service or function to another. If there’s a problem, it can be hard to pinpoint the root cause: did the function time out because of a bug in the function itself, or in one of the other distributed components? Was the data malformed somewhere upstream in the request flow? The right tools are needed to provide the visibility and observability that make troubleshooting and optimization quick and simple.
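One basic approach to the tracking problem is to propagate a correlation ID with every request: mint one at the edge, then pass it through each function and message so that all the log lines for one request can be joined up later. A sketch, with the field name `correlation_id` purely illustrative:

```python
import uuid

def ensure_correlation_id(event: dict) -> dict:
    """Reuse the caller's correlation ID if present, otherwise mint one.

    The "correlation_id" key is a hypothetical convention; tracing tools
    use various header or attribute names for the same idea.
    """
    cid = event.get("correlation_id") or str(uuid.uuid4())
    event["correlation_id"] = cid
    return event

def handler(event, context):
    event = ensure_correlation_id(event)
    # Every log line and downstream message carries the same ID:
    print(f"[{event['correlation_id']}] processing request")
    return {"correlation_id": event["correlation_id"]}
```

With the same ID stamped on every hop, a timeout in one function can be matched to the upstream message that caused it.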

More broadly, two big challenges are holding back the adoption of this new and emerging technology: a lack of knowledge among engineering teams, and a lack of supporting tools in the technology ecosystem to help developers with everyday tasks. The first challenge can be addressed by cloud vendors sharing knowledge and advice through developer advocates, and by the developer community sharing its knowledge and experience through publications, blogs and community events.

The second challenge is being addressed by the many startups building the tools needed to support the new technology. Companies like Serverless Inc., PureSec, Lumigo, Dashbird and others are filling this gap.

We are in a transition period, where software organizations are testing the serverless waters. The best way to adopt a new technology is a phased approach, starting with a small-scale project and growing from there. Managing this transition is likely to be a growth industry for software architects and specialist consultancy firms who can help companies manage the change.

Cloud computing is growing dramatically. Engineering methodologies should evolve to support cloud native applications, with the emphasis on increasing development velocity. Serverless methodology does exactly that, moving non-critical tasks to the cloud provider and focusing team efforts on the business logic to move faster. This is the promise of serverless, and we’re seeing it begin to be fulfilled.

Feature image via Pixabay.
