
What AWS Lambda’s Performance Stats Reveal

Apr 11th, 2019 8:44am by Ran Ribenzaft
Feature image via Pixabay.
If you’re interested in going deeper into AWS Lambda and observability, I’m hosting a webinar next week together with Danilo Poccia, AWS Serverless Solution Architect.

Ran Ribenzaft
Constantly chasing new technologies (such as serverless), Ran loves sharing open-source tools to make everyone’s life easier. In his current role, he is the co-founder and CTO at Epsagon, which offers monitoring for serverless applications.

We wanted to share what we’ve learned about AWS Lambda and its ecosystem, based on the more than 100,000 instances Epsagon monitors, more than four years after Lambda launched.

The key metrics we cover include:

  1. Which runtimes are the most common;
  2. How much memory is usually configured vs. used;
  3. How many functions experienced a timeout;
  4. The number of functions per account, and the increase in adoption.

Configurations

The first thing to do when setting up a new function is to configure it. There are plenty of configuration options, but we have gathered the most interesting ones below.

Memory Configuration

It is worth understanding that memory also determines (almost linearly) a function’s share of CPU and I/O. You can read more about it in this post, but in the following chart we can see that most people start with the default configuration (128MB). The next most common option is 1024MB, which is a very large number, but it is also the default for the Serverless Framework.
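If you want to change the allocation, a minimal sketch with boto3 looks like the following; the function name and the 1024MB value are placeholders for illustration, not figures from our dataset:

```python
import boto3

lambda_client = boto3.client("lambda")

# Raising the memory size also raises the function's CPU and I/O share,
# so it can speed up compute-bound code as well.
lambda_client.update_function_configuration(
    FunctionName="my-function",  # hypothetical function name
    MemorySize=1024,             # in MB; 128 is the Lambda default
)
```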

Duration Configuration

Duration (timeout) configuration is probably my favorite configuration option, because most developers either go with the default or pick an almost random number. We can clearly see that the default configuration wins, with almost a fifth of all functions. Next comes 30 seconds, then five minutes (which was the maximum before the limit was raised to 15 minutes), 15 seconds and one minute.
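Setting an explicit timeout is the same kind of one-line change; in this sketch the function name is hypothetical and the 30-second value is just an example:

```python
import boto3

# The timeout is specified in seconds; the service default is 3 seconds
# and the current ceiling is 900 seconds (15 minutes).
boto3.client("lambda").update_function_configuration(
    FunctionName="my-function",  # hypothetical function name
    Timeout=30,
)
```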

Runtimes

It is well known that Node and Python are the leading languages for Lambda, but it’s interesting to dig even deeper and get the exact numbers for each version used. Node 8.10 is the clear winner, with 51.7 percent of functions using it. Next, Python 2.7 and Node 6.10 share roughly the same amount, with Python 3.6 just behind them. The top four cover almost 90 percent of all runtimes.
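You can reproduce this breakdown for your own account by paging through the Lambda API and tallying the Runtime field; this is a rough sketch, not the methodology behind the numbers above:

```python
from collections import Counter

import boto3

lambda_client = boto3.client("lambda")
runtimes = Counter()

# ListFunctions is paginated, so walk every page and count each runtime.
for page in lambda_client.get_paginator("list_functions").paginate():
    for fn in page["Functions"]:
        runtimes[fn.get("Runtime", "unknown")] += 1

for runtime, count in runtimes.most_common():
    print(f"{runtime}: {count}")
```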

Code Size

Looking at code size, it’s interesting to see how complex functions are getting. Since the distribution is fairly even, we can conclude that Lambda functions are used for almost any purpose, from the most basic code to heavy code bundled with libraries. This chart does not take Layers into consideration.
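The same ListFunctions pages also report each deployment package’s CodeSize in bytes (Layers excluded, as in our chart). The bucket boundaries below are arbitrary, chosen only for illustration:

```python
import boto3

buckets = {"<1 MB": 0, "1-10 MB": 0, "10-50 MB": 0, ">50 MB": 0}

for page in boto3.client("lambda").get_paginator("list_functions").paginate():
    for fn in page["Functions"]:
        mb = fn["CodeSize"] / (1024 * 1024)  # CodeSize is reported in bytes
        if mb < 1:
            buckets["<1 MB"] += 1
        elif mb < 10:
            buckets["1-10 MB"] += 1
        elif mb < 50:
            buckets["10-50 MB"] += 1
        else:
            buckets[">50 MB"] += 1

print(buckets)
```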

VPC

For most developers, VPC is a burden. It requires manual configuration, and it hurts cold-start performance (although at the last re:Invent we were promised that cold starts will get better). Still, it’s fascinating that almost a third of the functions run inside a VPC.
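Counting how many of your own functions sit inside a VPC follows the same pattern; here a function is treated as “in a VPC” when its VpcConfig lists at least one subnet, which is a simplifying assumption:

```python
import boto3

total = in_vpc = 0

for page in boto3.client("lambda").get_paginator("list_functions").paginate():
    for fn in page["Functions"]:
        total += 1
        # A non-empty SubnetIds list means the function is attached to a VPC.
        if fn.get("VpcConfig", {}).get("SubnetIds"):
            in_vpc += 1

print(f"{in_vpc} of {total} functions are configured inside a VPC")
```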

Performance

Now that we’ve reviewed some configurations, it’s time to look at live performance data. With tens of billions of invocations analyzed every month, we wanted to explore how functions actually perform. The data in this section is based on last month’s invocations.

Memory Usage

One big debate in Lambda is around how much memory you should configure for a function. We already know that memory affects the overall performance of the function, but here we were curious about what percentage of the configured memory is actually used, and how much memory that amounts to.

It is notable that, on average, functions use only about a quarter of the memory configured for them, and that is a reliable metric in case there is any doubt. We can also see from the used-memory figures (average and p50) that the code usually just does not require much memory, given that the minimum memory configuration is 128MB.
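One way to measure this yourself is to parse the REPORT line Lambda writes to CloudWatch Logs at the end of every invocation; the sample line below is invented for illustration:

```python
import re

# Example REPORT line in the format Lambda emits (values here are made up).
report = (
    "REPORT RequestId: 3f1a...  Duration: 102.53 ms  Billed Duration: 200 ms  "
    "Memory Size: 1024 MB  Max Memory Used: 87 MB"
)

match = re.search(r"Memory Size: (\d+) MB\s+Max Memory Used: (\d+) MB", report)
if match:
    configured, used = map(int, match.groups())
    print(f"used {used} of {configured} MB ({used / configured:.0%})")
```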

Timeouts

Making sure our functions run within the limits we set for them is not a simple task. Some might think a function never hits a timeout because, except for a single log line, a timeout does not appear anywhere. So monitoring timeouts is hard, but what are the facts?

 

As we can see, more than 10 percent of functions experienced at least one timeout, while timeouts as a share of total invocations remain very low.
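That single log line is still searchable. Here is a rough sketch for spotting timeouts in one function’s log group; the function name is hypothetical, and the log group follows Lambda’s /aws/lambda/&lt;function-name&gt; convention:

```python
import boto3

logs = boto3.client("logs")
timeouts = 0

# Lambda logs "Task timed out after N seconds" when a timeout occurs,
# so filter the function's log group for that phrase.
for page in logs.get_paginator("filter_log_events").paginate(
    logGroupName="/aws/lambda/my-function",  # hypothetical function name
    filterPattern='"Task timed out"',
):
    timeouts += len(page.get("events", []))

print(f"timeouts found: {timeouts}")
```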

Ecosystem

A Lambda function obviously can’t live on its own — you need to set up a trigger, and you probably have more than one function. Let’s explore some of these numbers as well.

Number of Functions in an Account

Many say that it’s really easy to get started with a single function. That being said, it’s interesting to see how far companies go with the number of functions. It is pretty surprising to see big numbers in the 11-100 buckets, and we are obviously starting to see more and more companies cross the 1,000-function mark.

Growth in the Number of Lambda Functions

It is also interesting to understand the growth rate of accounts once they begin to put effort into serverless development. Our numbers show the average month-over-month growth rate per Epsagon account is:

Triggers

Triggers invoke our function in response to a specific event. With many triggers coming from other AWS services, let’s see which are the most common. It was no surprise that API Gateway is the most common one, but it’s great to see that it’s not the only one.
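Inside a handler, the trigger can usually be inferred from the shape of the incoming event. The checks below are heuristics based on well-known event fields, not an official API:

```python
def handler(event, context):
    """Guess which AWS service invoked this function from the event shape."""
    record = (event.get("Records") or [{}])[0]

    if "httpMethod" in event:
        source = "API Gateway"
    elif record.get("eventSource") == "aws:s3":
        source = "S3"
    elif record.get("eventSource") == "aws:sqs":
        source = "SQS"
    elif "detail-type" in event:
        source = "CloudWatch Events"
    else:
        source = "unknown"

    print(f"invoked by: {source}")
    return source
```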

Conclusion

It’s exciting to see how the serverless community and ecosystem continue to grow. We hope you enjoyed discovering and learning about the configurations, performance, and usage patterns of AWS Lambda users.
