
Finding Serverless’ Hidden Costs

Jan 31st, 2019 2:31pm by Nitzan Shapira
Feature image via Pixabay.

Nitzan Shapira
Nitzan is the CEO and a co-founder of Epsagon. He is a software engineer with over 13 years of experience in programming, machine learning, cybersecurity and reverse engineering. He also enjoys playing the piano, is a travel enthusiast and an experienced chess player, and is addicted to sports.

Serverless, the concept of running software without managing infrastructure, is often coupled with pay-per-use.

Services such as AWS Lambda are the canonical example: for every 100 ms your code runs, you pay a fixed amount. A Lambda function with 512 MB of memory, for instance, costs $0.000000834 per 100 ms. The best part is that you don't pay when your code isn't running, and since server utilization in large organizations is often below 20 percent, the potential financial savings are substantial. Companies are already seeing significant savings by going serverless.
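To make the pricing model concrete, here is a minimal sketch of the billing math in Python. The per-GB-second rate and the round-up to 100 ms blocks reflect Lambda's 2019-era pricing and are assumptions to verify against the current AWS price list.

```python
# Sketch of Lambda's pay-per-use compute charge (2019-era pricing;
# the rate below is an assumption -- check the current AWS price list).
PRICE_PER_GB_SECOND = 0.00001667  # USD, compute only (excludes request charges)

def lambda_compute_cost(memory_mb: float, duration_ms: float, invocations: int) -> float:
    """Estimate the compute cost of a function, billed in 100 ms blocks."""
    billed_ms = -(-duration_ms // 100) * 100                # round up to 100 ms
    gb_seconds = (memory_mb / 1024) * (billed_ms / 1000)    # memory-time product
    return gb_seconds * PRICE_PER_GB_SECOND * invocations

# A 512 MB function billed for a single 100 ms block:
cost = lambda_compute_cost(512, 100, 1)
print(cost)  # ~8.335e-07, i.e. roughly the $0.000000834 figure above
```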

So, what's the catch? Well, there is no catch; the model makes a ton of sense. However, every new technology comes with risks, and these must be identified and mitigated. The talk "The Real Cost of Pay-Per-Use in Serverless" by Ran Ribenzaft at AWS Community Day Tel Aviv covers several critical aspects of serverless costs. Let's cover a few of them here.

Don’t Try to Win the Configuration Battle

When you deploy a serverless function, you usually first have to select its resources. In Lambda, you configure only memory, and CPU power scales with it, so the price goes up as the memory goes up. Therefore, you should pick a low amount of memory to save money. Right? Wrong! Too little memory for your functions has several implications:

  • With lower memory, you also get less CPU, which means longer running times and can lead to a timeout. A timeout is comparable to a desktop computer being switched off mid-task, which is highly undesirable behavior;
  • Longer running time also costs more money, so decreasing the memory can have the reverse effect.

This AWS Lambda performance benchmark reveals a surprising result: performance improves in step with the price only up to a point, beyond which additional CPU no longer helps.

So picking the exact memory and CPU for your functions isn’t practical, at least not on the first attempt. It’s also dynamic, as the functions change and evolve and may require more or fewer resources.

What’s the solution, then?

Monitoring Duration and Memory Usage

As always in complex systems, the most difficult problems occur when you don't know what you don't know. Observability is critical, not just to detect errors and performance bottlenecks, but also to know the memory consumption and running time of your functions. One of the most useful approaches is to be alerted not only after problems happen, but also to predict when they are about to happen, based on a static rule or on a spike compared to normal behavior.
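As a rough illustration, an alerting rule along these lines can combine a static threshold with a spike check against recent behavior. The function name and all thresholds below are illustrative assumptions, not any particular vendor's actual logic.

```python
# Minimal sketch of predictive alerting on function duration:
# a static rule (close to the timeout) plus a dynamic spike rule.
from statistics import mean, stdev

def should_alert(history_ms, latest_ms, timeout_ms,
                 static_ratio=0.8, spike_sigma=3):
    # Static rule: the latest run is dangerously close to the timeout.
    if latest_ms >= static_ratio * timeout_ms:
        return True
    # Dynamic rule: the latest sample is a spike versus normal behavior.
    if len(history_ms) >= 2:
        baseline, spread = mean(history_ms), stdev(history_ms)
        if latest_ms > baseline + spike_sigma * max(spread, 1):
            return True
    return False

history = [110, 120, 105, 115, 108]                  # recent durations, ms
print(should_alert(history, 118, timeout_ms=3000))   # False: normal range
print(should_alert(history, 900, timeout_ms=3000))   # True: duration spike
```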

API Calls are Expensive

One of the main observations is that performance issues translate directly into a higher bill. Every time your code runs slower, you are also paying more. The total serverless cost is composed of two separate parts:

  • The time your code is doing your business logic;
  • The time your code is waiting for API calls.

As demonstrated in "The Importance and Impact of APIs in Serverless", a simple call to a popular service such as Auth0 can end up consuming more than 80 percent of the total running time of your Lambda function. This does not mean that you should not use third-party APIs in serverless; in fact, quite the contrary. They are key to creating fully serverless, scalable applications. However, you should be aware that the choices you make and the way you configure these tools may have a greater effect in serverless than in traditional applications.
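One way to see this split in your own functions is to time the external calls separately from the business logic. The sketch below simulates the API call with a sleep; in a real handler you would wrap the actual Auth0 or HTTP/SDK calls, and the helper names here are illustrative assumptions.

```python
# Sketch: measure how much of a handler's duration is API wait time.
import time

def timed(fn, *args):
    """Run fn and return (result, elapsed seconds)."""
    start = time.perf_counter()
    result = fn(*args)
    return result, time.perf_counter() - start

def call_auth_service():        # stand-in for e.g. an Auth0 request
    time.sleep(0.08)            # simulated network latency
    return {"authorized": True}

def business_logic(payload):
    return sum(payload) * 2     # trivial compute work

_, api_s = timed(call_auth_service)
_, logic_s = timed(business_logic, [1, 2, 3])
total = api_s + logic_s
print(f"API wait: {api_s / total:.0%} of total duration")
```

Even with this toy workload, the simulated API call dwarfs the compute time, mirroring the 80-percent observation above.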

Make Your Serverless Bill Predictable

Even though the cloud bill has traditionally been complicated to understand, it is usually predictable: you buy a thousand virtual machines, and you know how much you are going to pay. With Lambda, and with other recently introduced services such as Amazon Aurora Serverless, the bill is suddenly dynamic, and as such it becomes unpredictable.

Cost Forecasting

One of the useful ways to stay on top of your serverless bill is forecasting: based on what you have spent so far in the current period, you can estimate the total cost at the end of the month. Forecasting is a simple and useful technique for eliminating surprises at the end of the month. A simple formula for cost forecasting is:

End of Month Cost = Current Cost * (Days in Month / Current Day of Month)

For example, if there are 30 days in the month, today is day 15, and the total cost so far is $200, then the estimated cost at the end of the month is $400. This forecasting applies both to specific functions and to the total serverless cost. In Epsagon, the function view contains a column dedicated to the estimated cost of each function.
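The formula above can be wrapped in a small helper; `forecast_month_cost` is a hypothetical name, and the implementation simply restates the formula.

```python
# Linear end-of-month cost forecast: scale spend-so-far by the
# fraction of the month already elapsed.
import calendar
from datetime import date

def forecast_month_cost(current_cost: float, today: date) -> float:
    """Estimate end-of-month cost from spend so far this month."""
    days_in_month = calendar.monthrange(today.year, today.month)[1]
    return current_cost * days_in_month / today.day

# Day 15 of a 30-day month with $200 spent so far:
print(forecast_month_cost(200.0, date(2019, 4, 15)))  # 400.0
```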

Cost Monitoring

Just as companies have long used performance monitoring tools to make sure their applications run as expected, with serverless it makes sense to monitor cost as well. As a case study, at our serverless monitoring company, Epsagon, we shared a story of a significant scaling problem with a Lambda function that was running at very high concurrency and ended up costing over $12K a month. Luckily, we identified the problem within a few hours using Epsagon, which we of course use to monitor our own systems as well. Other examples include a $50,000 bug that a large company found and reported thanks to Epsagon's monitoring features.

The Lambda Cost Calculator open source tool can help you understand how much you are expected to pay for your functions.

Last Words

Pay-per-use is an excellent concept for serverless applications, and it brings significant financial benefits. It is essential, however, to be aware that performance issues directly affect the monthly bill.

To make the most of pay-per-use and avoid the risk of a high, unpredictable bill, techniques such as cost forecasting and cost monitoring are of great help. APIs in particular, which are common in serverless, should be used with care and monitored as well, since they can quickly become the main bottleneck of both performance and cost.
