
Serverless Development Is Broken — Here’s the Fix

May 16th, 2019 2:49pm by Toby Fee
Feature image via Pixabay.

It’s very hard to write code for serverless applications. Here’s why.

Serverless is, in part, an operations innovation. And for operations teams, it can make workflows a lot easier. But currently, serverless provides a poor experience for the people who have to write the code to make it work. This article explores what’s wrong in detail, and suggests ways we can fix it.

A Horror Story Called ‘Serverless Development’

Let’s start with a simple scenario: You’re making your third serverless stack. On your first stack, you pasted code into the Lambda dashboard. For your second, you got a bit more professional and started zipping up your source code before sending it to AWS. Now you’re using the process most professional teams do: deploying your Lambda along with other needed backing resources through AWS CloudFormation.

Now, a simple template controls all your resources, and you can add a bunch of new resources — great news! But here the trouble starts. You wanted to start and stop Fargate tasks with your Lambda, but CloudWatch is showing an error message.

That must signify the name of the database you were querying, right? You thought it only needed the ID, but you set the name in your Lambda code, update your codebase zip, and hit deploy on CloudFormation. You wait 10 minutes for this code to deploy.

Okay, exact same error, that didn’t work. You do some searching and figure out you needed to add a container name. You update your code. Deploy again… 10 minutes of waiting go by.

Okay, now it’s been 20 minutes on this error, and you’ve just about got it handled. But then adding .wirhName() gets you a new error message. You curse loudly, realize you mistyped .withName(), and hit deploy. This time you try sending requests to the Lambda before CloudFormation says it’s ready. Guess what: it wasn’t ready. Ten more minutes of waiting go by.

All right, we’re back in business… now the error is MISSING (cluster ID). After some more Googling, it turns out you were using the name formatting incorrectly. You fix it, wait 10 minutes, and the Lambda works.
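
For reference, here is a minimal sketch of what the finally-working handler might look like using the AWS SDK for JavaScript (v2). Every resource name below is a placeholder, and the exact parameters depend on your task definition; the gist is that runTask wants the cluster name or ARN rather than a bare ID, plus a containerOverrides entry whose name matches a container in the task definition.

```javascript
// Hypothetical sketch: a Lambda that starts a Fargate task.
// All resource names are placeholders, not values from the article.
const AWS = require('aws-sdk');
const ecs = new AWS.ECS();

exports.handler = async (event) => {
  const result = await ecs.runTask({
    cluster: 'my-fargate-cluster',          // cluster name or ARN, not a bare ID
    taskDefinition: 'my-task-definition:1',
    launchType: 'FARGATE',
    networkConfiguration: {
      awsvpcConfiguration: {
        subnets: ['subnet-0123456789abcdef0'],
        assignPublicIp: 'ENABLED',
      },
    },
    overrides: {
      containerOverrides: [
        {
          name: 'my-container',             // must match the container in the task definition
          environment: [{ name: 'JOB_ID', value: String(event.jobId) }],
        },
      ],
    },
  }).promise();

  return { startedTasks: result.tasks.map((t) => t.taskArn) };
};
```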

How many bugs can you fix in a day, how many features can you release, if solving a single request formatting error takes you 40 minutes?

Wait, Is Serverless Bad? I Thought It Was… Good?

Serverless is good. But in the specific area of “hacking away at application code until it works,” serverless is weak right now. That little cycle of “write, run, oh h*ck that’s not right, rewrite, run again!” — the inner loop of development — isn’t great.

But we love serverless, and we love serverless because everything after your code is written is much, much better. Need a new database table for your new feature? How long does that take in your current org? With serverless, you can have a whole new database up and running for your dev and test environments in minutes. Horizontal scaling is less a “hard problem” and more a “drag a slider.” Moving our application from dev to test, to prod, and maybe back to dev as problems show up, is significantly easier with serverless. The outer loop of architecture and deployment is vastly improved.

Fast Response Times, Slow Deploys

It takes time to get your code up to a serverless function so that you can actually exercise it, and that will kill your development process. Sure, development is a lot more than the number of commits you can push in a day, but the example described above is a massive difference in pace from working on your laptop.

The first time I found myself waiting a full eight minutes to deploy code which was, in its entirety, console.log(payload), I realized we had a serious problem.
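
Spelled out as a Node.js handler (the wrapper is implied by Lambda; the article quotes only the single line), that deploy was roughly this and nothing more:

```javascript
// The entire function: log the incoming payload, do nothing else.
exports.handler = async (payload) => {
  console.log(payload);
};
```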

Why is anyone using serverless if this is so much more difficult? Again, serverless is an incredibly useful ops tool. It might be harder to write your application code, but once it’s written, you don’t need to worry about horizontal scaling, internal message queueing or any of the thousand other things that your platform handles for you.

It makes sense that operations and your own team’s management are excited about serverless! But if the process of writing code remains this difficult, half your dev team could quit in frustration within the first year.

The Map Is Really Not the Territory

The alternative, and what most developers are doing day-to-day now, is working on their serverless function code locally. And if you’re using AWS Lambdas, there’s even a tool for that. Problem solved, right?

Well, serverless is more than Lambdas. Sure, you can run your Lambda code locally with a local API gateway to access it. But where is its database? Where are the long-running task containers? And what about the resources necessary for the Lambda to actually do anything?

Here’s where most of us turn to a mock version of these services; but how do we write code for dozens of AWS resources that we’ve made crude mocks of, and have any hope that our code will actually work in the cloud? In my example above, the error we were hitting was specific to the Fargate API. How could any mock simulate that? Sure, we could write an automatic ‘OK’ response, but would our mock really include error handling?
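
To make that concrete, here is a sketch of what a crude, hand-rolled mock of the ECS client tends to look like in a Node.js test. It returns a success response no matter what it is given, so the container-name and cluster-format errors from the story above can never surface locally. The mock and helper names are illustrative, not from the article.

```javascript
// A crude hand-rolled mock of the ECS client: it always "succeeds," so the
// real Fargate API's validation errors never appear in local tests.
const fakeEcs = {
  runTask: (params) => ({
    promise: async () => ({
      tasks: [{ taskArn: 'arn:aws:ecs:us-east-1:123456789012:task/fake' }],
      failures: [],
    }),
  }),
};

// Code under test: start a task and return its ARN.
async function startTask(ecs, params) {
  const result = await ecs.runTask(params).promise();
  return result.tasks[0].taskArn;
}

// Passes against the mock even though the real API would reject these params.
startTask(fakeEcs, { cluster: 'not-even-a-valid-cluster' })
  .then((arn) => console.log('mock says OK:', arn));
```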

Worse, wasn’t serverless supposed to eliminate arbitrary heavy lifting? Weren’t we supposed to be able to focus on business logic? How is writing a fake DynamoDB to respond to local requests anything but arbitrary and heavy?

There Is a Solution

We need a cloud/local hybrid development environment. One that iterates our code at the speed of our local laptop, but can still reach out and interact with the whole AWS cloud.

This shouldn’t be difficult to imagine: we need something that runs Lambda code locally, then reaches out via a web request for the other parts of our AWS stack. Of course, it needs to tunnel correctly so that it can reach components within an AWS Virtual Private Cloud (VPC), but even so it seems like a solvable problem.
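
As a rough illustration of the idea (not a description of Stackery's internals), the hybrid approach boils down to invoking the handler in-process on your laptop while every AWS SDK call inside it goes to the real cloud using your local credentials. The file path and test event below are made up for the sketch:

```javascript
// local-invoke.js: run the Lambda handler on your laptop, but let the AWS
// SDK calls inside it hit the real cloud with your local credentials.
process.env.AWS_REGION = process.env.AWS_REGION || 'us-east-1';

const { handler } = require('./src/handler'); // your actual Lambda code

const testEvent = { jobId: 42 }; // whatever shape your function expects

handler(testEvent)
  .then((result) => console.log('handler returned:', result))
  .catch((err) => console.error('handler failed:', err));
```

Pair something like this with a file watcher that re-runs it on save and you get the wait-zero-seconds loop described in the list below.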

Well, guess what? At Stackery, we have a first release of a tool to do just that. Stackery is free for early developers and offers cloud-local development as part of its deployment and stack management toolkit.

Let’s look back at the same example above:

  • Use the Stackery CLI to start your Lambda locally and send it a request.
  • Your Lambda, running locally, can send requests straight to AWS and ping the Fargate container, but it gets an error.
  • Add .wirhName() and go back to the console. Good news: the Stackery toolkit already restarted your Lambda, so you wait zero seconds to try your request again.
  • Aww, snapple! Now a new error! This time it’s returned from your local Lambda container. Oh right, typo: change it to .withName() and head back to the console, where the Stackery toolkit restarted your Lambda as soon as you updated the code. Wait zero seconds to retry your code… you get the idea.

With the ability to iterate Lambda code as fast as you can write it, Stackery’s cloud-local tools can give you the developer experience that serverless is sorely lacking. A world where writing code for serverless is as effortless as running that code on serverless architecture already is? That reality seems closer than ever.

Over on the Stackery blog, Sam Goldstein dives into building an ideal serverless workflow, complete with CI/CD.
