
The Developer’s New Role in 300 Serverless Environments

This article explores some of the possible environment management patterns that can address serverless sprawl, and how you can manage things like keys and secrets across your team without giving yourself massive environment-management headaches.
Feb 13th, 2019 6:00am by Toby Fee

Stackery sponsored this post.

Toby Fee
Toby is a community developer at Stackery. Her roles and experience combine working as a software engineer, writer and technology instructor, building interesting projects with emerging tools and sharing her findings with the world. Prior to joining Stackery, Toby was an engineer at NWEA, Vacasa and New Relic.

Essential to any real-world development process is the concept of an “environment”: the collection of configuration, keys, settings and files that allows the same body of code to run in multiple places, from the developer’s laptop (the “local” environment) all the way to dozens of machines running the code in parallel for end users (the “production” environment).

Serverless is a new landscape for environment management, and the root cause is the death of the local environment. While teams at AWS work diligently to let you replicate lambdas and other components of the serverless stack on your laptop, the reality is that the old process of “write it on your laptop, push to Master, and let Ops worry about how well it deploys” just doesn’t make any sense in serverless. By stitching together multiple managed service building blocks, developers are, by necessity, getting more involved with how their code actually deploys.

I’m going to explore some of the possible environment management patterns that can address this, and how you can manage things like keys and secrets across your team without giving yourself massive environment-management headaches. If you’re a lone developer writing your services by moonlight and sharing tools with no one, chances are this article isn’t for you. You can safely store all your environment’s quirks in your own memory. However, if you’re a small team struggling to manage serverless environments or a large organization looking at hiring a whole team just to do permissions management, read this guide first!

1) There’s No ‘Environments’ Button in AWS

This article repeatedly refers to multiple environments, even though that concept isn’t really a “thing” within AWS. Notably, this has nothing to do with AWS Regions or Availability Zones, which are sometimes referred to as environments.

So, how do we implement these environments?

The Serverless Application Model (SAM) and AWS CloudFormation are key components of this process. With a SAM template, you can easily stand up two identical serverless resource stacks with different environment variables. I’ll revisit these environment variables and secrets soon.
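As a rough sketch of what that can look like in practice (the stack names, parameter name and template file name here are all hypothetical), the same packaged SAM template can be deployed as two separate stacks, differing only in their parameter values, using boto3:

import boto3

cloudformation = boto3.client("cloudformation")

# The same packaged SAM template is reused for every environment.
with open("packaged-template.yaml") as template_file:
    template_body = template_file.read()

# Stand up one stack per environment, differing only in parameter values.
for env_name in ("staging", "production"):
    cloudformation.create_stack(
        StackName=f"my-service-{env_name}",
        TemplateBody=template_body,
        Parameters=[
            # A template parameter assumed to feed each function's environment variables.
            {"ParameterKey": "EnvironmentName", "ParameterValue": env_name},
        ],
        # SAM templates create IAM roles and rely on the Serverless transform.
        Capabilities=["CAPABILITY_IAM", "CAPABILITY_AUTO_EXPAND"],
        Tags=[{"Key": "environment", "Value": env_name}],
    )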

If you’re interested in tooling that makes deploying and managing AWS serverless resources a lot easier, Stackery can do all of that, and it does have an environments button to make it easy to move your stacks around.

2) Three Environments? More Like 300.

Whatever your setup, the number of environments you provision with a serverless architecture is going to increase. At the very least you’ll need a test environment of some kind, something to demo new services, production (can’t completely overlook profitability) and staging. Finally, you’ll need to configure an environment for developers to actually use for development.

Another critical reason to manage multiple environments is cost management. At Stackery, we regularly end up having the conversation “why did X environment cost $200 this month?” Since we have many individual environments, it’s easy to see who created the stacks that are generating costs. If we only had two or three different environments, this could end up presenting quite the puzzle.
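One way to answer that question programmatically, assuming your stacks carry an “environment” cost-allocation tag (the tag key is an assumption, and the tag has to be activated for cost allocation in billing), is to ask Cost Explorer to group spend by that tag:

import boto3

ce = boto3.client("ce")

# Group one month's spend by a hypothetical "environment" cost-allocation tag.
response = ce.get_cost_and_usage(
    TimePeriod={"Start": "2019-01-01", "End": "2019-02-01"},
    Granularity="MONTHLY",
    Metrics=["UnblendedCost"],
    GroupBy=[{"Type": "TAG", "Key": "environment"}],
)

for group in response["ResultsByTime"][0]["Groups"]:
    tag_value = group["Keys"][0]          # e.g. "environment$alice-dev"
    amount = float(group["Metrics"]["UnblendedCost"]["Amount"])
    print(f"{tag_value}: ${amount:.2f}")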

3 a) An Environment Per Developer

At Stackery, we see huge benefits in creating an environment for each new developer. This makes the most sense if we remember that what we’re losing is the local environment. It might not be a great idea for Alice the developer to delete all those lambdas, but we want her to be in control of her environment so she can find out!

This means moving away from what many teams try early on: just a few AWS accounts with shared credentials for each. That shift is a good thing in its own right. Shared credentials inevitably leave one person in charge of management, meaning a “bus factor” of one. I recently noticed, for example, that my password manager still held authentication details from a team I had left three years ago! Each developer needs an environment of their own to get their code to a point where it “works” and to compose the other services it needs.

Self-service controlled environments and tools are the best practice for high-performance teams. Stackery makes it extremely easy to empower each developer to manage their space.
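One minimal way to implement that kind of self-service space, assuming a packaged SAM template with an EnvironmentName parameter like the one sketched in point 1 (the names are again hypothetical), is to derive a per-developer stack name from the local username:

import getpass
import subprocess

# Derive a per-developer environment name from the local username, e.g. "alice".
developer = getpass.getuser()
stack_name = f"my-service-dev-{developer}"

# Deploy the shared template into that developer's own stack.
subprocess.run(
    [
        "sam", "deploy",
        "--template-file", "packaged-template.yaml",
        "--stack-name", stack_name,
        "--capabilities", "CAPABILITY_IAM",
        "--parameter-overrides", f"EnvironmentName=dev-{developer}",
    ],
    check=True,
)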

3 b) An Environment Per Feature

Another powerful idea is to create a new environment as you pursue a major feature. This offers some benefits for collaboration: changes made to the other services in your environment are instantly visible to the developers you’re working with most closely. Of course, with this pattern, if you’re not in close communication you can end up stepping on each other’s toes.

4) Use AWS’ Built-In Versioning

Both API Gateway and AWS Lambda have features that specifically address versioning. API Gateway has “stages” to let you identify which release phase your API is in, and Lambda has “aliases” that point at published function versions.
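A quick sketch of both features with boto3 (the function name, alias name and API id are placeholders):

import boto3

lambda_client = boto3.client("lambda")
apigateway = boto3.client("apigateway")

# Publish an immutable version of the function, then point a named alias at it.
version = lambda_client.publish_version(FunctionName="my-function")["Version"]
lambda_client.create_alias(
    FunctionName="my-function",
    Name="staging",                  # callers can invoke "my-function:staging"
    FunctionVersion=version,
)

# Deploy the current API Gateway configuration to a named stage.
apigateway.create_deployment(
    restApiId="abc123",              # placeholder REST API id
    stageName="staging",
)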

There are two major concerns with using either of these tools:

  • These features version a single part of your stack at a time and aren’t terribly cohesive. A re-used SAM template (see point 1) is more holistic;
  • Once published, Lambda function versions are immutable, which can cause trouble with any hard-coded database strings or parameters. Evan Johnson wrote a great piece on secrets in AWS a while back, and more recently Sam Goldstein at Stackery covered the topic.

5) Not Everything Is Serverless

While it’s possible to build a web app completely with serverless tools, in reality not everything will be a serverless component. Lambdas can request information from outside URLs, but that’s a limited channel of communication, and without an API gateway they can’t take requests in.

When it comes to grabbing data from your existing database or from file stores outside of S3, you’ll need to feed your serverless application the configuration and secrets it needs to connect with outside services. This is a place where Stackery can be enormously helpful, since it modularizes your environment information and makes it easy to deploy your functions and serverless resources (your “stack”) onto multiple environments and AWS regions.
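One common pattern, sketched here with hypothetical parameter paths and environment variable names, is to keep the connection string in SSM Parameter Store and let an environment variable tell each function which environment’s parameter to read:

import os
import boto3

ssm = boto3.client("ssm")

# An environment variable (set per stack) tells the function which environment
# it is in; the secret itself lives in Parameter Store, not in code or templates.
ENVIRONMENT = os.environ.get("ENVIRONMENT_NAME", "dev")

def get_database_url():
    response = ssm.get_parameter(
        Name=f"/my-service/{ENVIRONMENT}/database-url",
        WithDecryption=True,         # decrypt a SecureString parameter
    )
    return response["Parameter"]["Value"]

def handler(event, context):
    database_url = get_database_url()
    # ... connect to the existing database that lives outside the serverless stack ...
    return {"statusCode": 200}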

Conclusions and Further Reading

Strategies for managing environments in AWS are explicated really well by Sergio Garcez in this Hackernoon article.

At Stackery, we understand the challenges of serverless in production because we’re working to build solutions for teams to adopt serverless at full scale and high velocity. Serverless offers huge velocity advantages and ease of use, but the challenges are real.

The lack of a local copy of your environment seems like a major drawback of serverless. But I’ve seen many a highly competent team roll out broken code because “it worked on my machine,” which isn’t a good indicator that code will work in production. Test environments, with their simplified structure and mocked-up requirements, can obscure some errors, create others, and tack time onto every deployment.

The question, it seems, is not “how do you develop without a local copy of your app?” but “did we ever have a local copy at all?” And the headaches of managing multiple environments are a fair price to pay for environments that all resemble each other much more closely.

Feature image via Pixabay.
