Over the past several months, a range of new serverless monitoring tools has entered general availability, a strong indicator of the maturing serverless ecosystem.
Honeycomb, IOpipe, and Dashbird, for example, all offer new observability into the black box of serverless architectures. Much of the discussion at the Serverlessconf New York conference this week revolved around instrumentation, security and other aspects of managing serverless deployments.
Now Stackery, a serverless infrastructure monitoring tool, has announced version 1 (general availability) of its product, which had previously only been available to beta users.
Created by former New Relic staffers Nate Taggart and Chase Douglas, who were responsible for the New Relic Browser product, Stackery offers an operations console aimed at providing enterprise-level monitoring insight into serverless architectures, along with deployment automation for new production environments.
“AWS Lambda is great, but it leaves a lot to be desired in terms of production worthiness for serverless,” said Taggart, now the Stackery CEO.
“At the production enterprise level, you need some sort of build automation and monitoring beyond the typical Application Performance Management (APM) monitoring. Things like error tracking on the underlying infrastructure and logging, for example,” said Taggart. He explained that this lack of observability creates problems because Lambda functions are short-lived. “You have to get the metrics out of it before the Lambda dies off,” warned Taggart.
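Taggart’s point about getting metrics out before a function “dies off” can be sketched in a few lines. Below is a hypothetical Python handler, not Stackery’s code; the event shape and metric fields are assumptions. The key idea is that anything left in memory or delegated to a background thread after the handler returns may never be sent, because the execution environment can be frozen or discarded at that point.

```python
import json
import time


def handler(event, context=None):
    """Hypothetical Lambda handler that flushes its metrics synchronously.

    After the handler returns, the execution environment may be frozen or
    destroyed, so metrics buffered in memory could be lost. Printing a
    structured log line before returning ensures the data reaches
    CloudWatch Logs, where a monitoring tool can pick it up.
    """
    start = time.time()
    processed = len(event.get("records", []))

    # Flush before returning: past this point the container may be frozen.
    print(json.dumps({
        "metric": "invocation",
        "duration_ms": round((time.time() - start) * 1000, 2),
        "records": processed,
    }))
    return {"processed": processed}
```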
Stackery’s Monitoring Focus
Taggart says Stackery aims to solve three pain points: architecture design, deployment automation and infrastructure monitoring.
The serverless paradigm encourages an infrastructure-as-code approach. Mapping the flow of data and business logic from multiple event sources, including databases, client-side input, data pipelines, and external APIs, can quickly become complex. Stackery comes with an architecture mapping tool to help developers clarify and visualize event sources. The visualization is backed by AWS CloudFormation templates, so the resulting infrastructure can be stored in GitHub repos and updated using git.
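The infrastructure-as-code workflow described above can be illustrated with a minimal sketch. The resource names, runtime, and role ARN below are placeholders invented for illustration, not output from Stackery; the point is that once infrastructure is expressed as plain data, the template can be committed to a repo and diffed, reviewed, and updated like any other code.

```python
import json

# A minimal, hypothetical CloudFormation template describing an S3 bucket
# and the Lambda function that processes its events. All names and the
# role ARN are placeholders for illustration only.
template = {
    "AWSTemplateFormatVersion": "2010-09-09",
    "Resources": {
        "UploadBucket": {"Type": "AWS::S3::Bucket"},
        "ProcessUploads": {
            "Type": "AWS::Lambda::Function",
            "Properties": {
                "Handler": "index.handler",
                "Runtime": "python3.6",
                "Code": {"S3Bucket": "deploy-artifacts", "S3Key": "app.zip"},
                "Role": "arn:aws:iam::123456789012:role/lambda-exec",
            },
        },
    },
}

if __name__ == "__main__":
    # Serialize the template so it can be checked into a git repo.
    print(json.dumps(template, indent=2))
```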
Taggart said the goal is to help developers automatically deploy the architecture as diagrammed. Stackery handles the application build process, installs dependencies and stores them in S3, and wraps code in error handling. Taggart says this enables users to build self-healing applications. For example, if a Lambda times out, a log is written and subsequent errors are watched for. If the timeout turns out to be happening regularly, that can trigger another application to manage the timeout lag.
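A rough sketch of the kind of error-handling wrapping Taggart describes, written as a Python decorator. Stackery’s actual wrapper is not shown in this article, so the decorator name and the structured log format here are invented; the idea is that every failure leaves a machine-readable record that a separate watcher process could match on, for example to notice a recurring timeout and trigger remediation.

```python
import functools
import json
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("wrapper")


def with_error_handling(handler):
    """Wrap a Lambda handler so every failure leaves a structured log line.

    A separate process can watch for repeated records of the same error
    (e.g. a recurring TimeoutError) and react, which is the basis of the
    "self-healing" behavior described above.
    """
    @functools.wraps(handler)
    def wrapped(event, context=None):
        try:
            return handler(event, context)
        except Exception as exc:
            log.error(json.dumps({
                "function": handler.__name__,
                "error": type(exc).__name__,
                "message": str(exc),
            }))
            raise  # re-raise so the platform still reports the failure

    return wrapped


@with_error_handling
def handler(event, context=None):
    if event.get("fail"):
        raise TimeoutError("upstream call exceeded its deadline")
    return {"ok": True}
```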
With Stackery, logs are stored in AWS CloudWatch, and users can jump to any resource (a data store or a Lambda, for example) and see the specific logs for that application. Taggart says that in the traditional deployment model, developers ship an application that runs on a single server connected to an external database. In serverless, complexity increases overall: an application may get a request that goes into an event stream touching a dozen or more external resources. “We monitor the number of invocations, how many are spinning up asynchronously, are they cold booting… insight into all the underlying architecture,” said Taggart.
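The cold-boot detection Taggart mentions relies on the fact that a Lambda container’s module scope executes only once, at the cold start; warm invocations reuse the container and skip it. A minimal, hypothetical Python sketch (in a real deployment these fields would be emitted to a metrics pipeline rather than returned):

```python
# Module scope runs once per container, at the "cold boot"; warm
# invocations reuse the container, so a flag plus a counter is enough
# to report cold starts and per-container invocation counts.
_COLD = True
_INVOCATIONS = 0


def handler(event, context=None):
    global _COLD, _INVOCATIONS
    cold_start, _COLD = _COLD, False
    _INVOCATIONS += 1
    # Returning the fields keeps the sketch self-contained; a monitoring
    # tool would instead ship them off before the invocation ends.
    return {"cold_start": cold_start, "invocation": _INVOCATIONS}
```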
Dividing up the Serverless Adoption Market
Understanding the adoption of newer technologies can be challenging. A recent TNS analysis by Lawrence Hecht showed survey respondents using serverless technologies on par with containers, but questions about actual uptake and use of the tech remain. “We expect that many people equate serverless infrastructure with the ability to provide a service, any service, when it is triggered by a real-time event. That is an important capability but, at least in our mind, it is not the radical change in computing that serverless cheerleaders have in mind,” Hecht cautioned.
At the most recent Serverlessconf, held in New York City, we could see the rapid ascendancy of serverless in action. The previous New York Serverlessconf, held a year ago in a crowded Brooklyn space, was all about defining the term serverless. At this week’s event, we saw the conversation shift away from basic explanations of what serverless is and toward instrumentation and tooling. It’s a positive sign for the maturity and acceptance of the technology, as Red Hat’s Ryan Scott Brown pointed out to us during the event.
Amazon Web Services’ Randall Hunt demonstrated how AWS X-Ray, a distributed tracing service, could be useful in tracking actions across different AWS services, including Lambda. Cloudreach’s Linda Nichols argued that the natural evolution for serverless is toward frameworks that can be used with multiple serverless providers. And PureSec CTO Avi Shulman demonstrated how, despite the diminished footprint of serverless deployments, the technology still presents plenty of surface area that an attacker could use to disrupt a system. -- Joab Jackson
Containers and serverless are not mutually exclusive, but for some, choosing between the two technology options is definitely part of the adoption decision. Asanka Nissanka of Sri Lanka-based startup ShoutOUT wrote in June on the Serverless blog about the challenges of using containers for its messaging platform. One particular concern for ShoutOUT was unpredictable traffic spikes and idle times driven by customer ad campaigns, seasonal commerce, and other factors. “We could have scaled our ECS environment by adding more container instances and multiple service containers, which we did try… Our primary hurdle is we’re running a SaaS business, making cost a critical factor. This solution was not appealing,” wrote Nissanka.
Taggart believes that, more or less, those who will make the switch to containers have already done so. “Containers and serverless are not incompatible. If you need more than 512MB, you are processing big batches, or you need to run compute for longer than five minutes, then serverless is not the best suited,” explained Taggart. “Companies you think of as leading edge, they are on containers already. But there is an easier entry point for serverless. It is not necessarily that serverless is the better tech, but that serverless fits those others really well; it doesn’t need a bunch of reliability engineers to manage clusters.”
Taggart is now seeing a clearer delineation emerge between container-based and serverless-based adoption and industry architecture choices, though he concedes there is some overlap, mostly determined by the size of the task, as he spelled out above.
Taggart identified the industry sectors where serverless is fast becoming the first option: “The four industries that are moving fast and aggressively in serverless are:
- retail, especially e-commerce
- logistics, transport and rideshare
- finance, banking and some trading, and
Taggart finds the divide between those adopting serverless and those choosing container architectures increasingly apparent. He believes the main divide will be determined by whether an enterprise has a deep bench on its ops teams.
“What we have seen in the market is that the people who are doing serverless, they all built their own in-house tooling in order to be successful, in order to meet their operational requirements,” said Taggart, pointing to Capital One and Nordstrom. “They saw a lot of value in serverless. The headline is cost and better utilization rates of paid-for infrastructure, but there are other compelling reasons: performance improvements, time-to-market, developer efficacy and efficiency, and the event-driven nature of the architecture.”
Taggart says that, apart from these tech adoption leaders, the bulk of the market is not motivated to invest “thousands in engineering to build the tooling.” That, he argues, is why Stackery and other startups like IOpipe will be needed for the ongoing maturity of serverless and production-level adoption in the enterprise.
“In the enterprise, there is a delta between how much devs want to use it and how much they can use it.” Taggart questioned whether devs could get approval to introduce serverless in the enterprise if there are limited collaboration and monitoring tools to go with it.
The emergence of monitoring tools in the serverless ecosystem makes more widespread enterprise adoption all the more enticing.
TNS managing editor Joab Jackson contributed to this article.
Feature image: Nate Taggart (Left) and Chase Douglas, Serverlessconf 2017.