Octopus Deploy sponsored this post.
It is a simple fact that your deployment environments are not exact replicas of each other. The most common differences between environments are the credentials used to access services like databases, as sharing the same passwords between your test and production environments would be considered bad practice. Hostnames used to expose services also commonly differ between environments.
This means your applications require the flexibility to adapt to the specifics of each environment they progress through in a deployment life cycle, allowing details like database connection strings or external service URLs to be configured as needed.
The generally accepted solution for creating environment-agnostic deployments is to expose all environment-specific settings via externally configurable options, typically through environment variables (env vars). This is the solution detailed by the 12-factor app methodology.
As a consequence, the vast majority of advice you’ll find on the internet today strongly advocates for environment-agnostic deployment artifacts (whether Docker images or more traditional packages like ZIP files) that are configured at runtime using env vars. However, this advice is not universal, and in this post, we’ll look at why environment-specific Docker images are a valid solution to a number of common deployment concerns.
Understanding a Traditional Deployment Process
Octopus started life as a deployment tool for more traditional platforms like IIS (Internet Information Services) and has always embraced the notion of progressing environment-agnostic deployment artifacts. To support environment-specific configurations, values in configuration files are substituted by Octopus during the deployment process to each environment, with the modified package then deployed to its destination.
This variable substitution feature is easily one of the most used features in Octopus, and the concept has stood the test of time.
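As a rough illustration (not Octopus’s actual implementation), the substitution step can be sketched in a couple of lines of shell. The `#{Variable}` placeholder follows Octopus’s variable syntax; the file names and the connection string are hypothetical examples:

```shell
# Hypothetical sketch of the substitution step: the environment-agnostic
# package ships a template config file, and the deployment tool replaces
# #{Variable} placeholders with per-environment values before the
# modified package is delivered to the target.
DB_CONNECTION='Server=db.production.internal;Database=app'   # example value

sed "s|#{DatabaseConnection}|${DB_CONNECTION}|g" \
    appsettings.template.json > appsettings.json
```

The same environment-agnostic input produces a different output file per environment, which is the essence of the workflow described above.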
What is interesting about this approach is that it effectively means an environment-specific package is delivered to the destination. As you can see in the diagram below, a unique package consisting of the original deployment artifact and the environment-specific settings is deployed in each environment:
The important things to take away from this workflow are:
- This process is well established and battle-tested.
- A single environment-agnostic artifact is the input to the deployment process.
- Environment-specific artifacts are created, even if they are an opaque side effect of the overall deployment process.
With this traditional deployment process in mind, let’s look at what environment-specific Docker images might look like when generated as part of a multi-environment workflow.
Environment-Specific Docker Image Workflow
The process of deploying environment-specific Docker images looks like this:
The only difference between the traditional deployment workflow and one that incorporates environment-specific Docker images is that the environment-specific Docker images must be stored in an environment-specific Docker repository rather than being opaque files copied directly to the target during deployment. Otherwise, the two processes are effectively the same.
If the traditional deployment process is well established and battle-tested, there is no reason to believe that swapping a ZIP file for a Docker image introduces any significant pitfalls. This is an important comparison to make, as many of the arguments against environment-specific Docker images tend to invoke vague claims of fragility or undesirability, when in reality there are undeniable parallels between traditional deployment processes and environment-specific images.
Now that we can see how environment-specific images mirror a well-established and battle-tested process, the next question to ask is when they are a better option than externalizing all environment-specific configuration as env vars.
Environment-Specific Images May Not Be the Best Solution
It is important to acknowledge up front that environment-specific images are not the best solution in most circumstances. All modern frameworks have excellent support for externalizing configuration values through env vars, and every platform that hosts Docker containers supports setting env vars. If your code can read env vars, then that should be the default option, as env vars are a robust solution to the problem of environment-specific settings.
That said, you should be careful not to use env vars as a golden hammer to solve all problems, and there are many scenarios where environment-specific images make sense.
Migrating Legacy Applications
An obvious use case for environment-specific images is migrating legacy applications that rely on manipulating configuration files during deployment.
You don’t have to look far back into the history of Java or .NET applications to find XML configuration files that have no concept of reading env vars. These applications are well supported by the traditional deployment process that edits config files directly, and the most direct migration path for these applications to Docker is to create environment-specific images with a new layer overwriting the original config files with environment-specific versions.
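As a minimal sketch of what that new layer looks like, consider the hypothetical Dockerfile below; the base image name and config file paths are assumptions, not a prescribed layout:

```dockerfile
# Hypothetical: layer an environment-specific config file over the
# original in a generic base image. Image and file names are
# illustrative only.
FROM myapp:1.0.0
COPY Web.production.config /app/Web.config
```

The legacy application is unchanged and still reads its familiar XML config file; only the file's contents differ per environment.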
Bundling Scripts with Executable Images
Docker is fast becoming the universal package manager for CLI-based tooling. Every major tool has a supported Docker image, and Docker registries provide standardized processes for downloading images. DevOps is all about reliability and automation, and the ability to run “docker pull” against any image is a much-appreciated evolution from having to locate download URLs for tools, downloading the package, extracting it and digging around in the resulting file structure for the appropriate file to execute.
More advanced tools hosted in Docker images that execute scripts or read complex configuration files work by having the end user mount a volume or file into the container. Examples include newman, the CLI tooling for Postman, which provides instructions for running local collection files. Cypress also provides a blog post demonstrating how to mount a directory containing the test files.
Mounting files and directories is trivial when running Docker images locally, but it becomes challenging, if not impossible, when running these same images on hosted platforms like Kubernetes, ECS, App Runner, Azure Container Instances, etc. If these platforms support file mounting at all, you will often have to provide a cloud file storage solution to reliably mount files in your chosen container orchestration platform. It is not unreasonable to question whether setting up an NFS file server is worth the cost and trouble for the sake of mounting a few kilobytes worth of files in a container.
Environment-specific images are a perfect solution to this issue. For example, you could create an environment-specific image based on the official Cypress Docker image and then layer in your end-to-end test scripts. The resulting image can be run by any platform capable of running Docker because environment-specific images are just plain old Docker images.
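As a sketch, such an image might look like the hypothetical Dockerfile below; the directory layout and version tag are assumptions based on a typical Cypress project:

```dockerfile
# Hypothetical: bake end-to-end tests into an image built on the
# cypress/included image, which bundles Cypress and its browsers.
FROM cypress/included:13.6.0
WORKDIR /e2e
COPY cypress.config.js .
COPY cypress/ cypress/
# The base image's default command runs the test suite, so no
# volume mounts are needed at runtime.
```

Because the tests are part of the image, any platform that can run a container can run the test suite, with no file-mounting support required.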
Environment-specific configuration may extend beyond simple key/value pairs. If we extend the concept of an environment to encompass the idea of tenants, then it is easy to imagine scenarios where “environment-specific configuration” extends to things like CSS files for tenant-specific websites. CSS is an example of a declarative syntax that was never designed to be templated from external values like env vars. While there is a way to do so, a far more realistic approach is to layer tenant-specific style sheets over a generic base image.
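A tenant-specific image of this kind can be as small as a single `COPY` instruction; the image and file names below are hypothetical:

```dockerfile
# Hypothetical: layer a tenant-specific style sheet over a generic
# web frontend image. Names are illustrative only.
FROM mycompany/storefront:2.3.1
COPY themes/acme/site.css /usr/share/nginx/html/css/site.css
```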
The 12-factor methodology requires a strict separation of config from code. A better way of describing this is a separation of config and customization from code. This kind of composability fits very naturally with environment-specific images, where environment- or tenant-specific customization is “layered in” during deployment.
Attempting to Customize Files in the Entrypoint Is a Bad Idea
A Google search for “docker envsubst entrypoint” reveals a wealth of advice on how to inject env vars into files as a Docker container is launched. On the surface, this appears reasonable, but it won’t scale well as the complexity of the configuration files increases.
At Octopus, we have seen how the desire to edit config files evolves. You start with a few simple substitutions. Over time, more and more files need to be edited, and naive substitutions force you to deal with escaping spaces and quotes depending on the target configuration file formats. Next comes the desire to inject values into structured files like JSON or YAML, which allows you to ship untemplated default config files that then have properties altered in a declarative manner.
Trying to bake this functionality into the ENTRYPOINT of your Docker image runs the risk of introducing bespoke and complex scripts dealing with character escaping or calling tools like jq or Augeas to modify files in a structured way.
On the other hand, environment-specific images have the luxury of performing this file manipulation outside of the Docker image using tools designed for exactly that purpose and then layering the resulting files into a new image. The deployment-time concerns of creating environment-specific configuration files are left to deployment tooling, rather than being embedded in custom scripts executed by Docker when the container is run.
The process of creating environment-specific images is conceptually identical to the process used in a traditional deployment, which has proven itself to be a reliable solution when deploying to multiple environments. While using env vars to customize a container for an environment is an excellent, and arguably preferable, solution, there are many scenarios where environment-specific images make perfect sense.
The New Stack is a wholly owned subsidiary of Insight Partners, an investor in the following companies mentioned in this article: Docker, Octopus Deploy, Postman.
Feature image via Pixabay