Serverlessconf New York: Monitoring Serverless Performance to Manage Cost

Oct 10th, 2017 1:03pm

Monitoring serverless applications at scale to ensure performance, reliability and global accessibility is now also helping production adopters understand and tweak their architecture cost decisions. While developer velocity may well be the major benefit of serverless, it is the “don’t pay for idle” mantra that is often the business driver for introducing serverless. Proving value by ensuring costs are minimized and performance optimized becomes the new DevOps endgame.

At Serverlessconf in New York City Wednesday, several speakers reiterated that serverless implementations are “easy to get up and running,” but it is what happens next that ends up stumping adopters: performance monitoring, cost management, and external, often downstream factors that may not have been top of mind initially all become potential roadblocks.

Serverless Experimental Projects

Clay Smith, a developer advocate at New Relic, described a project running Headless Chrome on AWS Lambda. He suggested the three main performance questions that serverless production adopters must constantly ask are:

  • Is it fast?
  • Is it fast concurrently?
  • What’s the potential worst case — that is, what are the outliers — and why?

He gave an example of using the new AWS X-Ray tracing product to separate guessing from reality. Cold starts (the time it takes to invoke and start a new Lambda function, which can create latencies that slow down overall application performance) are often considered the main culprit.

But using X-Ray, Smith analyzed several days' worth of data, looking at a function that ran every four minutes within his application architecture. To do that, Smith had to create a Lambda function that analyzed the X-Ray data.

Cold starts represented only 1.49 percent of the time, so they were not a key driver of performance latency. Instead, other factors, such as language runtime and function size, need to be considered. What Smith found was that tweaking memory allocation was the most powerful performance lever.
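Smith did not share the code for his analysis function, but the idea can be approximated with the AWS SDK: pull X-Ray trace summaries for the function, fetch the full traces in batches, and count how many contain the “Initialization” subsegment that X-Ray records for Lambda cold starts. The function name, time window and credential handling in this sketch are placeholder assumptions, not details from his talk.

```python
# Rough sketch: estimate the share of invocations that hit a cold start
# by inspecting X-Ray traces. Assumes boto3 credentials are configured and
# that the function is identifiable with the filter service("my-function").
import datetime
import json
import boto3

xray = boto3.client("xray")

def recent_trace_ids(hours=6, service="my-function"):  # placeholder name/window
    end = datetime.datetime.utcnow()
    start = end - datetime.timedelta(hours=hours)
    ids, token = [], None
    while True:
        kwargs = dict(StartTime=start, EndTime=end,
                      FilterExpression=f'service("{service}")')
        if token:
            kwargs["NextToken"] = token
        page = xray.get_trace_summaries(**kwargs)
        ids += [s["Id"] for s in page["TraceSummaries"]]
        token = page.get("NextToken")
        if not token:
            return ids

def has_cold_start(trace):
    # Cold starts show up as an "Initialization" subsegment on the
    # Lambda function segment of the trace.
    for seg in trace["Segments"]:
        doc = json.loads(seg["Document"])
        for sub in doc.get("subsegments", []):
            if sub.get("name") == "Initialization":
                return True
    return False

def cold_start_ratio(trace_ids):
    cold = 0
    for i in range(0, len(trace_ids), 5):  # BatchGetTraces accepts up to 5 IDs
        batch = xray.batch_get_traces(TraceIds=trace_ids[i:i + 5])
        cold += sum(has_cold_start(t) for t in batch["Traces"])
    return cold / len(trace_ids) if trace_ids else 0.0

ids = recent_trace_ids()
print(f"cold starts: {cold_start_ratio(ids):.2%} of {len(ids)} invocations")
```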

This is where cost management comes into production deployment decisions. Counterintuitively, Smith suggested that while increasing a function's memory allocation (and, with it, CPU) means paying more per request, the function's execution duration drops: “there is a performance sweet spot where increased memory got cheaper because functions run faster,” Smith shared.
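A back-of-the-envelope calculation shows why that sweet spot exists. Lambda bills duration (rounded up to 100 milliseconds at the time) multiplied by allocated memory; the durations below are illustrative assumptions rather than figures from Smith's talk, so treat this as a sketch of the arithmetic, not his measurements.

```python
import math

PRICE_PER_GB_SECOND = 0.00001667  # on-demand Lambda duration price

def cost_per_million(memory_mb, duration_ms, granularity_ms=100):
    # Billed duration is rounded up to the billing granularity.
    billed_ms = math.ceil(duration_ms / granularity_ms) * granularity_ms
    gb_seconds = (memory_mb / 1024) * (billed_ms / 1000)
    return gb_seconds * PRICE_PER_GB_SECOND * 1_000_000

# Hypothetical measurements: more memory brings more CPU, so runs finish faster.
for mem, dur in [(128, 950), (256, 450), (1024, 100)]:
    print(f"{mem:>5} MB, {dur:>4} ms -> ${cost_per_million(mem, dur):.2f} per million invocations")
```

With these (made-up) durations, the 1,024 MB configuration is both the fastest and, at roughly $1.67 per million invocations versus $2.08 for the 128 MB setting, the cheapest.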

Serverless Applications in the Enterprise

For Quest Software, which runs on an Azure Functions stack but integrates with AWS Lambda, understanding monitoring data and using it to identify cost implications was also a key part of the production implementation.

Quest on Demand is Quest’s Office 365 management solution, a SaaS offering that allows enterprises to manage their Office 365 users and set policies to determine which features individual users can use, while also ensuring backups can be rolled back if any user hits the wrong button. The product has been in general availability for 3 months, and to date has processed over one million customer objects.

The architecture stores static content in S3, served through a content delivery network, and users are authenticated via an identity broker, which then issues a JWT (JSON Web Token) that defines the user’s permissions and access rights. That then feeds into the core services of Quest on Demand, Quest’s Microsoft technologies management package (which runs on Azure Functions). All business logic components are dedicated functions, grouped by business purpose. From there, specific services, including Vault, can be introduced, as well as cross-platform integrations into AWS Lambda, for example.
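Quest did not detail its token format, but the general pattern of a business-logic function trusting only what the broker-issued JWT asserts might look roughly like the sketch below; the claim name, audience value and key handling are assumptions for illustration only.

```python
# Minimal sketch of a permission check against a broker-issued JWT.
# The "permissions" claim and audience are hypothetical, not Quest's schema.
import jwt  # PyJWT

def allowed(token: str, broker_public_key: str, required_permission: str) -> bool:
    try:
        claims = jwt.decode(
            token,
            broker_public_key,
            algorithms=["RS256"],
            audience="quest-on-demand",  # placeholder audience value
        )
    except jwt.InvalidTokenError:
        # Expired, malformed or wrongly signed tokens are simply rejected.
        return False
    return required_permission in claims.get("permissions", [])
```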

Curtis Johnstone, Quest distinguished engineer and Microsoft MVP, argued that for implementing serverless applications at scale in production environments, preventative strategies become crucial. Security needs to be a first-class development concern. “It will cost two to three times more if you need to go back and add security,” Johnstone warned. Similarly, performance and scalability testing (not just functional testing) need to be done throughout development, preferably in an environment that will mimic production.

The Limitations of Debugging in Serverless

But Erica Windisch, chief technology officer and co-founder of IOpipe, a performance management company for Lambda, says that development testing isn’t always as straightforward as in traditional application development. “You can debug your code before you run on AWS Lambda. But if you run in the cloud, a lot of those debugging tools are unavailable,” said Windisch. “Application instrumentation is difficult in Lambda. In particular local dev+test approaches are still maturing under Lambda.”
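What that instrumentation looks like varies by tool, but the underlying pattern (a generic sketch, not IOpipe's actual API) is a wrapper around the handler that records cold starts and timings and emits them somewhere a monitoring pipeline can collect them, such as CloudWatch Logs.

```python
# Generic handler-level instrumentation sketch for AWS Lambda (Python runtime).
import functools
import json
import time

_cold = True  # module-level flag: True only on the container's first invocation

def instrumented(handler):
    @functools.wraps(handler)
    def wrapper(event, context):
        global _cold
        cold, _cold = _cold, False
        start = time.time()
        try:
            return handler(event, context)
        finally:
            # Emit one structured log line per invocation.
            print(json.dumps({
                "function": context.function_name,
                "cold_start": cold,
                "duration_ms": round((time.time() - start) * 1000, 2),
                "remaining_ms": context.get_remaining_time_in_millis(),
            }))
    return wrapper

@instrumented
def handler(event, context):
    return {"statusCode": 200, "body": "ok"}
```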

Smith, Johnstone and Windisch all push for application instrumentation to be in place when deploying serverless applications in production. For Quest on Demand, Johnstone’s team uses Azure Application Insights and exports that data into ELK so that they can add Lambda data and get a single view into their entire application performance across cloud platforms.
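The export pipeline itself isn't described in detail; as a hedged sketch, getting Lambda measurements into the same Elasticsearch index as the exported Application Insights data could be as simple as posting one document per data point. The endpoint, index and field names below are placeholders.

```python
# Sketch: push one cross-platform metric document into an Elasticsearch index
# so Azure Functions and Lambda measurements land in the same place.
import datetime
import requests

ES_URL = "http://localhost:9200/serverless-metrics/_doc"  # placeholder endpoint

def ship_metric(platform, function_name, duration_ms, cold_start=False):
    doc = {
        "@timestamp": datetime.datetime.utcnow().isoformat() + "Z",
        "platform": platform,          # e.g. "azure-functions" or "aws-lambda"
        "function": function_name,
        "duration_ms": duration_ms,
        "cold_start": cold_start,
    }
    requests.post(ES_URL, json=doc, timeout=5).raise_for_status()

ship_metric("aws-lambda", "sync-office365-users", 182.4)  # hypothetical values
```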

Performance Monitoring and Cost Decisions

But monitoring isn’t just about ensuring performance. Instead, analytics are helping Quest better manage costs associated with their serverless architecture. “Consuming a serverless offering in one geographical zone can create a big performance hit, so you need to keep an eye on the platform’s product roadmap to make sure that the offerings you are using are available in the geozone served by your application,” Johnstone said.

Johnstone said monitoring analytics help identify the most appropriate cost model for serverless. Being able to choose a serverless plan that matches the resource needs and cost profile of the workload becomes essential, and monitoring data provides that insight. And cost decisions can then impact performance in a vicious cycle: the risk of choosing the wrong plan is that resources get exhausted faster and performance drops.

Enterprises and startups alike are looking for ways to reduce the cost overhead of their cloud choices, and serverless suggests itself as an appealing solution. In this business context, developers are driven to solutions that will allow enough monitoring to ensure that the cost advantage is realized, even as architectures grow more complex and add more functionality. In serverless, performance monitoring and budget decisions enter a symbiotic relationship.

Microsoft is a sponsor of The New Stack.
