Stackery sponsored this post.
It’s 6 p.m. and the site is down. A late-afternoon deploy has broken a service, and the operations team is feverishly trying to figure out what broke. One of the SREs messages the manager of that service’s team:
SRE: Is it possible that the update required a specific version of redis?
MANAGER: our service doesn’t use redis.
SRE: well the errors we’re seeing look like it’s expecting “redis” and failing on that.
MANAGER: We don’t use redis at all — it must have been another team that broke queueing.
SRE: No other teams have released updates today.
MANAGER: Maybe “redis” was configured incorrectly… good luck!
In point of fact, the update introduced a new “redis” requirement, and installing “redis” on the affected virtual machines (VMs) solved the issue. The manager was unaware of this change.
This terrifying scenario isn’t limited to VMs; it affects serverless environments as well. In fact, if you’re not following the principles mentioned in my last article, changes to your serverless functions’ “config” can break your services in a way that is extremely hard to troubleshoot. This brings us to principle four of the 12-factor app:
IV. Backing Services: Treat Backing Services as Attached Resources
This is, of all the principles, my favorite. It’s neither a clear prohibition against doing something nor an absolute requirement. Instead, it’s more a guideline around flexible and robust linking that ensures your app can withstand big changes and small crises without interruption.
To quote directly from the 12-Factor App: “A backing service is any service the app consumes over the network as part of its normal operation. Examples include datastores (such as MySQL or CouchDB), messaging/queueing systems (such as RabbitMQ or Beanstalkd), SMTP services for outbound email (such as Postfix), and caching systems (such as Memcached).”
I’ve been saying for some time that we must consider a serverless app as more than functions. A serverless app on AWS is more than just Lambdas. But what is the nature of these connections? Should a database be tightly connected to the code in our functions? That is, should the same function code, drawing from a test database instead of a dev database, be considered a whole new app? It seems like the answer is “no,” but we do want to consider our app as a whole, without completely disregarding the backing services.
“Attached resources” implies a flexible connection between application code and resources, where:
- resources are provisioned alongside the application code;
- resources can be swapped out without rewriting any application code.
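The second point above can be illustrated with a minimal sketch (the variable names `DATABASE_URL` and `QUEUE_URL` are hypothetical, not a fixed convention): the function discovers its backing services at call time from the environment, so swapping a dev resource for a production one is a config change, not a code change.

```python
import os

def describe_backing_services():
    """Return the backing resources this function is currently attached to."""
    # Hypothetical variable names and defaults; a real app would use its own.
    return {
        "database": os.environ.get("DATABASE_URL", "sqlite:///dev.db"),
        "queue": os.environ.get("QUEUE_URL", "http://localhost:9324/queue/dev"),
    }
```

Because nothing here is hard-coded, redeploying the same code against staging or production only requires setting different environment variables.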
The most obvious scenario where you need to do the latter is moving from dev environments to staging and production, but another frequent case would be a particular resource failing and needing to be replaced. While it’s unlikely that you’ll have this problem within AWS, you may need to include services outside the AWS cloud, or need to swap something out in pursuit of lower costs.
Another possible scenario is a canary pattern, where you’re trying a different architecture with a small percentage of your traffic. AWS has some built-in tools for this (thanks to Yan Cui for pointing this out in his awesome course on production-ready serverless). In this case, we’d want to be able to send some traffic to a different Lambda with the same backing services.
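One of those built-in tools is weighted alias routing between Lambda versions. A hedged sketch, using boto3’s `update_alias` (the function and alias names here are made up), might look like:

```python
def canary_routing_config(canary_version, weight):
    """Build a RoutingConfig sending `weight` fraction of traffic to a canary
    version, with the remainder going to the alias's primary version."""
    if not 0.0 <= weight < 1.0:
        raise ValueError("weight must be in [0, 1)")
    return {"AdditionalVersionWeights": {canary_version: weight}}

if __name__ == "__main__":
    import boto3  # actually applying this requires AWS credentials

    client = boto3.client("lambda")
    client.update_alias(
        FunctionName="my-service",   # hypothetical function name
        Name="live",                 # hypothetical alias name
        RoutingConfig=canary_routing_config("7", 0.05),  # 5% to version 7
    )
```

Because both versions sit behind the same alias and read the same environment config, the canary exercises the same backing services as the primary version.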
How Do We Follow This Principle in Serverless?
This is my favorite principle to write about because, by default, this is something serverless does pretty poorly. By hiding config within the AWS console, and relying on console config to connect Lambdas to resources, AWS Lambdas often need major reworking to switch out a backing service.
The most important thing to do is to use some tool to see our entire app (sometimes the term “stack” is used here) as a single entity. Backing services like DynamoDB, queueing, and API gateways need to be considered together in a single config. Until recently, a number of third-party tools tried to bolt this feature onto AWS, among them apex.run, claudia.js, and Serverless Framework. Now AWS has released its own open source standard, the Serverless Application Model (SAM), which works directly with CloudFormation to handle your whole app as a single configuration and streamline deployment.
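As a rough sketch of what “the whole app in a single config” looks like, a SAM template (resource and handler names here are invented for illustration) can declare a function, its backing table, and the wiring between them in one place:

```yaml
AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31
Resources:
  OrdersFunction:
    Type: AWS::Serverless::Function
    Properties:
      Handler: app.handler
      Runtime: python3.9
      Environment:
        Variables:
          TABLE_NAME: !Ref OrdersTable   # backing service injected as config
      Policies:
        - DynamoDBCrudPolicy:
            TableName: !Ref OrdersTable
  OrdersTable:
    Type: AWS::Serverless::SimpleTable
```

Note that the function never names the table directly; it receives the table name through an environment variable, which is exactly the “attached resource” relationship this principle asks for.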
This is an area where Stackery can make life easier, by showing you your whole app within a graphical canvas.
This helps you see the app “as a whole,” but how do we make sure we can change out resources without changing our app’s code?
Making Backing Resources Swappable
For this I would point to the careful engineering of environment config I spoke about in my last article. But beyond having environment variables you can easily update, this design principle also affects how you write your serverless code: ideally, your app code should look to environment variables whenever it refers to anything outside itself, even something like a service URL that you don’t think should ever change. For example, in its early stages your app probably sends new signups or user errors to an alerting system and a Slack integration. To follow this principle closely, all of those service URLs should be environment variables.
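Following the Slack example, a minimal sketch (the variable name `SLACK_WEBHOOK_URL` is an assumption, not a fixed convention) reads the notification target from the environment at call time:

```python
import json
import os
import urllib.request

def notify(message):
    """Post a message to whatever alerting endpoint the environment points at.

    The webhook URL is deliberately not hard-coded: swapping Slack for
    another service (or a test double) is an environment change only.
    """
    url = os.environ["SLACK_WEBHOOK_URL"]  # hypothetical variable name
    body = json.dumps({"text": message}).encode("utf-8")
    request = urllib.request.Request(
        url, data=body, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(request) as response:
        return response.status
```

If the variable is missing, the function fails loudly at the call site rather than silently posting to a stale, hard-coded URL.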
The result should be that when you need to find and change a backing service, the necessary points to update are highly visible as environment variables, and much easier to change.
Adding Tools to AWS to Get Help
This is also a great side benefit of implementing Twistlock for serverless security: if config for backing services shows up in your source code, you can be notified of it or even block the code’s execution. This improves security and makes your app more flexible.
My own team at Stackery has added tools that manage environment config and make environments portable. You can use the same environment with multiple stacks, saving effort and ensuring consistency.
Feature image via Pixabay.