CircleCI sponsored this post.
There’s plenty of positive buzz around serverless architectures and how they can help development teams cut costs and offload management tasks. There are also misconceptions about what serverless can do, and when and how to use it. To dig into the practical uses for serverless architectures, Rob Zuber, CTO of CircleCI, spoke to Nate Taggart, CEO of Stackery, which offers a serverless operations console that helps developer teams build serverless applications. Below are some of the highlights from the conversation — to learn more, watch the video.
Rob: Can you give us a quick overview of what it means to be serverless?
Nate: I think the best place to start is to acknowledge that “serverless” is the world’s dumbest name. There are servers — your code is going to run on servers.
Amazon has a great way to phrase this: You outsource the undifferentiated heavy lifting of managing servers. The idea here is that you focus on your application, while you ship your code to the cloud provider with a serverless solution. They handle the scaling, availability and orchestration, while you focus on application development and delivery.
Rob: Where should people get started with serverless computing? What are the easiest applications to model in a serverless world?
Nate: We see most companies start with low-visibility, low-criticality workloads to test the waters. It’s almost always a background task — something like a cron job script that runs once a night or once a week on an AWS EC2 instance. This is a great entry point for your first serverless application.
What will happen with the serverless application is that your function will start on demand. You’ll pay for the few seconds or minutes that it’s running — then it’ll shut off, and you won’t be paying for the rest of the time. The cost savings are what people point to, but the real advantage here is not having to manage the infrastructure that these little services are running on.
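The migration path Nate describes can be sketched as an AWS Lambda function. This is a minimal illustration, not a real workload: the trigger (for instance, an EventBridge schedule like `cron(0 3 * * ? *)`) would be configured outside the code, and the record data, function names, and 30-day cutoff below are all invented for the example.

```python
# Hedged sketch: a nightly cleanup task moved from an EC2 cron job to a
# Lambda function. In a real deployment the schedule is configured on the
# trigger, not in code, and the records would come from a data store.
import datetime


def purge_stale_records(records, cutoff):
    """Return only the records updated at or after the cutoff timestamp."""
    return [r for r in records if r["updated_at"] >= cutoff]


def handler(event, context):
    # Sample data is inlined so the sketch is self-contained.
    now = datetime.datetime(2024, 1, 31)
    records = [
        {"id": 1, "updated_at": datetime.datetime(2024, 1, 30)},
        {"id": 2, "updated_at": datetime.datetime(2023, 6, 1)},
    ]
    cutoff = now - datetime.timedelta(days=30)
    kept = purge_stale_records(records, cutoff)
    return {"kept": len(kept), "purged": len(records) - len(kept)}
```

The function runs for a few seconds when the schedule fires, returns its summary, and incurs no cost the rest of the day — the pattern Nate points to as the typical first serverless workload.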
Once people start with these tasks, they look at higher-visibility and more mission-critical applications for serverless. Typically the path forward is using serverless applications to back an API or as part of a microservices pattern. Instead of deciding to rebuild a whole application so that it can run on serverless, you can take little components of the application and migrate them to serverless.
Rob: I think it’s hard for people to conceptualize serverless for something like APIs in the same way they can see the value for a cron job. They don’t think of an API as a function, since it’s always on and always listening. What’s the mental connection that people need to make if they’re going to think about serverless for APIs?
Nate: I think we’re talking about functions in a few different ways here. You have your code, your building blocks and your components. You’ve written a code function, and now we’re going to want to run that on demand. This doesn’t seem terribly confusing. But what gets a little tricky is that these ephemeral compute instances can encapsulate more code than a single literal code block.
What you can do is build an application and then run that application on demand. We can decide that when an API gets hit, it’ll trigger a code block to run, and that’s function-as-a-service. We can also get more sophisticated. We can build an entire application, ship it into whatever compute function we want to use, and then decide that when the API endpoint gets hit, we’ll route the request through the application — just as we would with a long-lived server application. Then we’ll select the right code block to run within the application.
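The two styles Nate contrasts can be sketched side by side. This is an illustrative assumption, not a real API: the event shape loosely mirrors an API Gateway proxy event, and the endpoints and handler names are invented.

```python
# Style 1: a single-purpose function, wired directly to one endpoint
# (e.g. GET /users). The platform invokes it only for that route.
def get_user(event):
    return {"status": 200, "body": "user profile"}


def list_orders(event):
    return {"status": 200, "body": "order list"}


def single_purpose_handler(event, context):
    return get_user(event)


# Style 2: one function fronts the whole API and dispatches internally
# by method and path, much like a long-lived server application would.
ROUTES = {
    ("GET", "/users"): get_user,
    ("GET", "/orders"): list_orders,
}


def app_handler(event, context):
    route = ROUTES.get((event["httpMethod"], event["path"]))
    if route is None:
        return {"status": 404, "body": "not found"}
    return route(event)
```

Either way the compute instance is ephemeral; the difference is whether the routing decision lives in the platform’s endpoint configuration or inside the shipped application.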
Rob: What are the misconceptions about serverless? What ideas about serverless don’t quite line up with what it offers?
Nate: Serverless is certainly high up in the hype cycle right now and there’s a lot of buzz. As people start to embrace it, we hear some common misconceptions. One of the biggest is that serverless means “no ops” and you don’t have to manage anything on the infrastructure side. That’s just patently untrue. There are certainly some ops responsibilities that you can outsource. You no longer, for example, have to manage orchestration for the application, and things like availability and load balancing happen under the covers as part of the managed service.
When we talk about serverless, we’re really talking about compute — and that’s it. But no real application is being built with only compute. You’re going to have dependencies, third-party services, internal services, data stores and networking needs. These are all pieces that you still have to manage.
Rob: Who shouldn’t use serverless? What are the bad use cases?
Nate: One of the obvious cases is a long-running and very predictable workload — say, a three-hour batch job every day. If that’s the case, and you’re good at spinning up a server, running it and then shutting it down, that’s frankly probably a better solution.
If you need a lot of resources, like high memory and lots of disk space, serverless is probably not a great approach. But if you can’t predict the volume, and you’re doing lots of small transactional workloads, then serverless is really good.