
12-Factor App #2/#3: Serverless Dependencies and Configuration

Mar 29th, 2019 10:01am by Toby Fee

Stackery sponsored this post.

This series of articles explores what we need to do to follow the design requirements of a 12-Factor App for ease of development and maintainability. Check back each week for future installments.

Toby Fee
Toby is a community developer at Stackery. Her roles and experience combine working as a software engineer, writer and technology instructor, building interesting projects with emerging tools and sharing her findings with the world. Prior to joining Stackery, Toby was an engineer at NWEA, Vacasa and New Relic.

I had coffee with a friend and colleague this week. She’s in the middle of a complete re-engineering of a rather old software-as-a-service (SaaS) offering (old in the SaaS sense, it was new eight years ago). I asked her right at the outset if she wasn’t moving some of this stuff to serverless microservices. Her answer was eye-opening.

This is what she said: “Right now, I’m worried that serverless is frustrating to develop on and hard to maintain in production.”

This sounded almost backward to me. Serverless is about removing headaches, offloading concerns to a trusted vendor, having more time to hack on features, right? Right?

But here we come to the basic idea I talked about when writing about key decisions for your serverless app: no two serverless apps are identical, and the design decisions you make greatly affect how hard or easy you make your developers’ lives.

Serverless should be a choice that makes the dev experience easier, not more difficult, and following these guidelines can help.

The indispensable Chris Munns wrote last year about which of the 12-factor methodologies apply to serverless in “Applying the Twelve-Factor App Methodology to Serverless Applications,” which was the seed for this entire series of articles.

I initially planned to write an article about each of the six or seven factors that still apply, but some of the write-ups, such as the one for Factor 2, would be quite short. So I’ll be lumping a few together. Here we arrive at two that are closely related: dependencies and config.

Factor 2: Explicitly Declare and Isolate Dependencies

We still see a few web apps in the wild that need an install of a C++ ML library for image recognition, or that shell out to a few quick curl commands in some long-running task. These apps require software to be installed on the machine running them, and if you try to run them on a clean, minimal OS, they will fail (a server without curl? It *has* happened).

But since the advent of Rails, it’s become much more standard to have a single point where all of an app’s dependencies are listed. With containers and then with serverless, this is an absolute requirement. No clever Ops wizard can sneak onto your container and install stuff not listed in the dependencies. Serverless functions like Lambdas just return errors if you’re trying to use packages not listed in their dependencies.

About the only way to fail this requirement in a serverless app is to shell out to console commands mid-execution. I’m really only familiar with this behavior as the result of malicious code execution (see Jeremy Daly’s write-up on this). But to put it simply: don’t run console commands from within your Lambdas; you’ll give your team headaches and possibly fracture reality.
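To make Factor 2 concrete, here is a minimal sketch of a Python Lambda that leans only on declared dependencies. The requests package and the example URL below are hypothetical stand-ins, not anything from a particular project:

```python
# requirements.txt (packaged and deployed with the function):
#   requests==2.31.0

import json

import requests  # declared above and bundled at deploy time -- never "already on the box"


def handler(event, context):
    # Call the upstream service with a library we packaged ourselves,
    # instead of shelling out to a curl binary we hope is installed.
    resp = requests.get("https://api.example.com/health", timeout=5)
    return {
        "statusCode": 200,
        "body": json.dumps({"upstream_status": resp.status_code}),
    }
```

Everything the function imports is either in the runtime’s standard library or listed in its dependency manifest, so the function carries its own requirements wherever it is deployed.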

Factor 3: Store Config in the Environment

Before we go any further: what is config? Config is what changes as you move from development to staging to production environments. Any code or settings you need to change to move your app from one server to another is config.

This rule is less about the exact way your config is stored than about how not to store it: don’t keep config in your codebase, either in the middle of your code or in separate “config files” somewhere on the server. Sam Goldstein has an excellent review of the mistakes people make when trying to store secrets for serverless apps.

The ideal 12-factor app stores config in environment variables. These are easy to change between deploys without changing any code, and unlike config files, there is little chance of them being checked into the code repo accidentally.
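As a rough sketch (in Python, with hypothetical variable names like ORDERS_TABLE), reading config from the environment looks something like this; the code never changes between deploys, only the values the deploy pipeline injects do:

```python
import os

# All environment-specific values arrive through environment variables
# set at deploy time -- nothing here is hard-coded per environment.
DB_TABLE = os.environ["ORDERS_TABLE"]            # e.g. "orders-staging" vs. "orders-prod"
UPSTREAM_URL = os.environ["PAYMENTS_API_URL"]    # points at the right stage's API
LOG_LEVEL = os.environ.get("LOG_LEVEL", "INFO")  # optional, with a sensible default


def handler(event, context):
    # Business logic only ever reads the values above, so pointing this
    # function at staging versus production is a deploy-time concern,
    # not a code change.
    return {"table": DB_TABLE, "log_level": LOG_LEVEL}
```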

Using this UI is fine if you’re in the “my first Lambda” stage, but after that, no es bueno.

For AWS Lambdas, there is a UI to let you add environment variables that will be available to your code when it runs, but using this UI breaks this design principle for two reasons:

  • Your Lambda is no longer portable. To run it in staging (and access the staging DB), you’ll have to edit these variables by hand;
  • In most practical ways, the fact that this config even exists is hidden: there’s no change tracking on these variables, they aren’t declared as a dependency anywhere, and nothing flags that they need to be updated when, say, a DB table name changes.

AWS does offer native tools to handle this in a more disciplined way, including the relatively expensive AWS Secrets Manager.
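As a sketch of what that looks like in practice (using boto3 and a hypothetical secret name), a function can pull its secrets from Secrets Manager at runtime, so only the secret’s name ever travels through the environment:

```python
import json
import os

import boto3  # the AWS SDK for Python, itself a declared dependency (Factor 2)

# One client per container, reused across invocations.
_secrets = boto3.client("secretsmanager")


def get_db_credentials():
    # "app/db-credentials" is a hypothetical secret name; the real name is
    # supplied through the environment so it can differ per stage.
    secret_name = os.environ.get("DB_SECRET_NAME", "app/db-credentials")
    response = _secrets.get_secret_value(SecretId=secret_name)
    return json.loads(response["SecretString"])
```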

Stackery also has a clean environment config tool that lets you store environment config separately from your stack’s codebase and reuse environments across multiple configurations.

What’s next?

We get to some real meat in my next article in the tour of 12-factor app design with “Backing Services.” So much of the serverless conversation focuses on functions, when the environment is much richer than that. Look for the third article next month!

Feature image via Pixabay.
