Containers and Compliance: Building Secure, Automated Systems on Amazon Web Services

Is it possible to use containers and maintain PCI, HIPAA, HITRUST, FedRAMP, or other compliance requirements? This is a question we hear a lot, especially from companies in healthcare, finance, and other highly regulated industries that want the flexibility and scalability of containers but already have complex processes in place to maintain compliance.
The good news is: yes, you can build a compliance-friendly AWS environment using containers. Nothing about Docker containers is inherently incompatible with PCI or HIPAA. And you can do it with orchestration tools like Kubernetes, Amazon Elastic Container Service (ECS), Amazon Elastic Kubernetes Service (EKS), and Docker Swarm.
But as with most things in compliance, it’s how you configure those services that counts. And the hard truth is that many of your current security tools and processes will have to change.
The Challenge with Containers and Security
Most companies have an existing suite of tools to tackle compliance. Your team is familiar with these tools and you’d prefer to keep using them.
The most important impact of Docker containers on infrastructure security is that most of your existing security tools (monitoring, intrusion detection, and so on) are not natively aware of sub-virtual-machine components, i.e., containers. Most “traditional” monitoring tools on the market are just beginning to gain visibility into transient instances in public clouds, and they lag even further behind in monitoring sub-VM entities like containers.
This means you must apply creative alternatives to meet your internal security standards. The good news is that these challenges are by no means insurmountable for companies that are eager to containerize.
Here are a few of the most common tooling stumbling blocks, and how to overcome them.
Monitoring and IDS
In most cases, you can satisfy this requirement by installing your monitoring and intrusion detection systems (IDS) on the virtual instances that host your containers. This will mean that logs are organized by instance, not by container, task, or cluster. If IDS is required for compliance, this is currently the best way to satisfy that requirement.
Consider installing monitoring and security tools on the host, not the container.
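To make sure every host is actually covered, it can help to enumerate the instances backing your cluster programmatically. Below is a minimal Python sketch using boto3, assuming ECS and a hypothetical cluster name of “production”; the point is that agent and log checks target the EC2 hosts, not the containers scheduled on them.

```python
# Minimal sketch (boto3 credentials assumed; "production" is a
# hypothetical cluster name): list the EC2 instances backing an ECS
# cluster so you can verify each host runs your monitoring/IDS agent.
import boto3

ecs = boto3.client("ecs")
ec2 = boto3.resource("ec2")

cluster = "production"
arns = ecs.list_container_instances(cluster=cluster)["containerInstanceArns"]

if arns:
    detail = ecs.describe_container_instances(
        cluster=cluster, containerInstances=arns
    )
    for ci in detail["containerInstances"]:
        instance = ec2.Instance(ci["ec2InstanceId"])
        # Agent and log checks run against the host instance,
        # not the individual containers scheduled on it.
        print(ci["ec2InstanceId"], instance.private_ip_address)
```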
Incident Forensics and Response
Every security team has developed a runbook or incident response plan that outlines what actions to take in the case of an incident or attack. Integrating Docker into this response process requires a significant adjustment to existing procedures and involves educating and coordinating governance, risk management, and compliance (GRC) teams, security teams, and development teams.
Traditionally, if your IDS picks up a scan with the fingerprint of a known attack, the first step is usually to look at how traffic is flowing through the environment. Containers complicate this: inter-container traffic is harder to trace with host-centric tools, and because containers are ephemeral, once a suspect container is stopped and removed, its in-memory state and writable filesystem layer are gone. This can make it more difficult to pinpoint the source of an alert and determine what data may have been accessed.
Before you implement Docker on a broad scale, talk to your GRC team about the implications of containerization for incident response and work to develop new runbooks. Incident forensics will have to mature in the age of containers. Long story short: it’s more complicated, but it can be done.
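If you need to preserve evidence from a running container before it is torn down, the container runtime does give you a few handles. The sketch below uses the Docker SDK for Python (the docker package) with a hypothetical container ID; it pauses the suspect container, commits its writable layer as an image, and exports its filesystem for offline analysis. Treat it as an illustration of the workflow, not a complete forensics procedure.

```python
# Rough sketch (Docker SDK for Python assumed: pip install docker;
# "abc123" is a hypothetical container ID): preserve evidence from a
# suspect container before it is stopped and removed.
import docker

client = docker.from_env()
suspect = client.containers.get("abc123")

# Freeze the container's processes so its state stops changing.
suspect.pause()

# Capture the writable layer as an image for later inspection.
evidence = suspect.commit(repository="forensics/suspect", tag="incident-001")

# Export the container filesystem as a tarball for offline analysis.
with open("suspect-fs.tar", "wb") as tarball:
    for chunk in suspect.export():
        tarball.write(chunk)

print("Preserved image:", evidence.id)
```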
HTTPS and SSL
Both Docker Swarm and Amazon Elastic Container Service (ECS) use HTTPS by default to protect all API communication. In Kubernetes, however, securing API traffic is an item you may have to configure yourself (although many commercial Kubernetes distributions handle TLS/SSL for you).
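If you manage your own cluster, one quick sanity check is to confirm that the API endpoint actually presents a certificate your clients can verify. The snippet below is a rough sketch with a hypothetical API server address and CA bundle path; it does not configure TLS for you, it only verifies that the handshake succeeds.

```python
# Quick check (hypothetical API server address and CA bundle path):
# confirm the Kubernetes API endpoint completes a verifiable TLS handshake.
import socket
import ssl

API_HOST = "k8s-api.example.internal"   # hypothetical API server address
API_PORT = 6443
CA_BUNDLE = "cluster-ca.crt"            # hypothetical path to the cluster CA

context = ssl.create_default_context(cafile=CA_BUNDLE)

with socket.create_connection((API_HOST, API_PORT), timeout=5) as sock:
    with context.wrap_socket(sock, server_hostname=API_HOST) as tls:
        print("TLS version:", tls.version())
        print("Peer certificate subject:", tls.getpeercert().get("subject"))
```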
Patching
In a traditional virtualized or AWS environment, security patches are installed independently of application code. The patching process can be partially automated with configuration management tools, so if you are running VMs in AWS or elsewhere, you can update the Puppet manifest or Chef recipe and “force” that configuration to all your instances from a central hub.
A Docker image has two components: the base image and the application image built on top of it. To patch a containerized system, you update the base image and then rebuild the application image. So in the case of a vulnerability like Heartbleed, if you want to ensure that the patched version of OpenSSL is on every container, you would update the base image and recreate the containers in line with your typical deployment procedures. A sophisticated deployment automation process (which is likely already in place if you are containerized) makes this fairly simple.
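As a rough illustration of that flow, the sketch below uses the Docker SDK for Python with a hypothetical build context and registry: it force-pulls the latest base image, rebuilds the application image on top of it, and pushes the result so your deployment pipeline can roll it out.

```python
# Hedged sketch (Docker SDK for Python assumed; the build context and
# registry names are hypothetical): rebuild the application image on a
# freshly pulled base image and push it for redeployment.
import docker

client = docker.from_env()

image, build_logs = client.images.build(
    path="./app",                            # hypothetical build context with a Dockerfile
    tag="registry.example.com/app:patched",  # hypothetical registry and tag
    pull=True,                               # force-pull the base image so base-layer patches are picked up
    rm=True,                                 # remove intermediate containers after the build
)

for line in build_logs:
    if "stream" in line:
        print(line["stream"], end="")

# Pushing and redeploying replaces the vulnerable base layers
# everywhere the image runs.
client.images.push("registry.example.com/app", tag="patched")
```

The detail that matters here is pull=True, which keeps the build from reusing a stale, cached copy of the base image.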
One of the most promising features of Docker is the degree to which application dependencies are coupled with the application itself, offering the potential to patch the system when the application is updated, i.e., frequently and potentially less painfully.
Get Ready for Audit Time
The use of containers is not yet well understood by the broader infosec and auditor community, which is a potential audit and financial risk. Chances are that you will have to explain Docker to your QSA, and it helps to have a few external partners that can help you build a well-tested, auditable Docker-based system, like Logicworks.
That said, even risk-averse companies are already experimenting with Docker, and that knowledge is trickling down to auditors. Soon it will be the norm.