Why Serverless Is the Only Way to Build APIs in 2018

18 Sep 2018 10:00am, by Toby Fee

Stackery sponsored this post.

There are three primary ways to stand up an API when it isn't logical or practical to build it into your own web app: run a service on a virtual machine (e.g., an AWS EC2 instance), stand up a container for your service, or build it in a serverless environment.

Here is why adopting serverless makes the most sense when creating APIs. 

Don’t Use Containers to Build Your API

Toby Fee
Toby is a community developer at Stackery. Her roles and experience combine working as a software engineer, writer and technology instructor, building interesting projects with emerging tools and sharing her findings with the world. Prior to joining Stackery, Toby was an engineer at NWEA, Vacasa and New Relic.

Containers are the most puzzling fad of recent years. In certain cases, the ability to say “we can build new machines that are perfect replicas of the machine you built before” is kind of a superpower, and it unlocks some key processes — but public APIs rarely start with a need to spin up dozens of replicas, and this advantage doesn’t outweigh a number of difficulties.

Compared to virtual machines (VMs), containers start faster and require fewer resources to run in multiples, but neither advantage matters much for an API service. Containers generally don't start fast enough to be launched on demand when an API request arrives, so they must run continuously anyway. That leaves their lower overhead versus a traditional VM, and here we come to a basic fact of development: executives don't complain that they can't buy more RAM; they complain that they're short of engineers. No one would write a line of JavaScript if RAM or CPU cycles were at a premium. Most technologies that see widespread adoption do so primarily because they save developers time.

One example of how containers save RAM at the cost of development time is the lack of solid management tools. It’s a single anecdote, but I never had trouble with the hypervisor interface for Amazon EC2 or Azure VMs. On the other hand, I’ve never become (or even met) a self-taught expert on managing Docker containers.

When confronted with the basic difficulties most web developers face with containers, the usual answer is that "with a little training you can manage this or that quite easily," and that points to a fundamental problem with containers: years after their introduction, web developers still can't get things off the ground on their own. When leaders talk about what resources are in short supply, it's a "deficit of human hours," not technical limitations, that comes up. A solution that requires more engineer time seems doomed to cause more trouble than it's worth.

Don’t Use Virtual Machines for Your API

While my argument against containers was lengthy, my argument against using VMs boils down to one word: security. Indeed, the nightmare scenario with VMs is a service like a public API. Imagine something like the following:

  • Your team is asked to put up a public API to help build a potential partnership with a parallel service;
  • After development, months or years go by with only moderate community interest in the endpoints, and all the developers in the company move on to other stuff;
  • In that time, new vulnerabilities emerge in the OS of your virtual machine, but since the public API isn't anyone's full-time job, those updates either don't happen or, if the hypervisor service forces them, get rolled back when no one is available to figure out why they broke the service;
  • After another year or two, you get an email from a hacker explaining that they have a full clone of your production database, obtained via a security hole that was patched long ago but never applied to your API's VM.

The problems here are obvious, but the solutions aren't as clear: a heavily managed VM starts to look a lot like serverless, and migrating the service to a more modern machine image can require serious developer time. Worse, it's hard to know when that migration has to happen, so you can end up with some very old VMs indeed in your stack.

Why Serverless Wins

Serverless is "leapfrogging" the container trend: many new developers are learning serverless after only the barest crash course in managing virtual machines in a highly abstracted environment like Heroku.

Serverless offers an environment where updates and security holes are officially "not your problem," letting you adopt an "if it ain't broke, don't fix it" attitude toward services that have worked reliably for a while.

Finally, using a single function (in AWS, a Lambda) to handle each route sharply reduces the danger of leaking data through your API, since each function can be granted only the permissions its one route needs. Serverless may not offer superior resource usage, pricing or ease of replication, but none of these is a deal breaker, especially when building a public API. At Stackery, we have specifically made it our mission to address many of these problems, making it easier for developers to get serverless applications up and running quickly.
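The function-per-route pattern can be sketched as a minimal AWS Lambda handler behind API Gateway. This is an illustrative sketch, not code from the article; the function name, route, and the in-memory lookup are assumptions made for the example:

```python
import json


def get_user_handler(event, context):
    """Backs exactly one route (e.g. GET /users/{id}) via API Gateway.

    Because this function serves a single route, its IAM role can be
    scoped to only the data that route needs, which is what limits
    what a compromise of this one endpoint can leak.
    """
    # API Gateway's proxy integration puts URL path variables here.
    path_params = event.get("pathParameters") or {}
    user_id = path_params.get("id")
    if user_id is None:
        return {"statusCode": 400,
                "body": json.dumps({"error": "missing id"})}

    # Hypothetical lookup; a real handler would query a datastore
    # that this function's narrowly scoped role is allowed to read.
    user = {"id": user_id, "name": "example"}
    return {"statusCode": 200, "body": json.dumps(user)}
```

Because the handler is just a function taking an event dict, it can be exercised locally with a sample event before being deployed, with no VM or container to manage.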

For internal services, mission-critical projects, and distributed systems, arguments can be made for almost any extant technology. In the case of building APIs, it’s very hard to find a winning argument for any solution other than serverless.

Feature image via Pixabay.
