As containers become the commonplace method for delivering and deploying applications, we’ve seen more of our customers taking a “lift-and-shift” approach to migrating their existing applications. This way, they can immediately begin reaping some of the benefits of containers, such as greatly simplified testing and deployment, without having to reconstruct the entire application as microservices.
Few technologies make their way successfully into the mainstream without demonstrating both new capabilities and backwards compatibility. So, it should come as no surprise that customers are containerizing the J2EE and .NET apps they’ve invested tens of thousands of hours creating and maintaining, with minimal changes.
This doesn’t mean that they’ll never be rearchitected as microservices. But there are immediate benefits to be realized from just moving them over as-is, in terms of test automation, deployment, density, efficiency, and as you’ll see, security.
Maybe this pattern looks familiar: When the public cloud was first established, organizations often migrated entire VMs from their own data centers to AWS without change. Prior to that, when virtualization was new, organizations would convert physical servers into VMs without change. In both cases, these organizations’ legacy environments became easier for them to manage. This freed up time for them to build up experience and momentum, while at the same time preserving their existing investments. Seeing this trend repeat itself with containers is a good indicator of the growing momentum of the container ecosystem.
At Twistlock, one of our customers provides environmental science and engineering consulting to some of the world’s largest civil waterworks projects. They have the typical data collection, modeling, and other core line-of-business applications, but also a critical app that models storm surge.
This application is used in projects involving the U.S. Army Corps of Engineers and other government agencies. It’s built on Red Hat, Oracle, and Java platforms, and features a lot of proprietary business logic. Some twelve years ago, this app ran on bare-metal IBM xSeries servers in a data center. Then it was moved to a VMware ESX VM, and later to an Azure VM. It was the very fact that the app could stay the same that made these transitions feasible.
But maintaining consistent dev and test environments for this frequently relocated 14-year-old storm surge app was a constant struggle. Its components had become a tangled mess of old and new dependencies. Individual components were installed separately and at different times, with specific configurations that inevitably generated conflicts. Implementing disaster recovery required operations teams to manually follow a step-by-step runbook.
Moving to containers delivered immediate benefits. First, consistency across dev, test, and production environments resulted in greater productivity. Deployment boiled down to a simple docker run command, and disaster recovery became a matter of simply repeating the initial deployment process.
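In concrete terms, deployment might look something like the sketch below. The registry hostname, image name, tag, and port mapping are all illustrative assumptions, not the customer’s actual values:

```shell
# Pull the app image from the organization's registry and start it.
# Image name, tag, and ports are hypothetical placeholders.
docker pull registry.example.com/storm-surge:2.3.1
docker run -d \
  --name storm-surge \
  --restart unless-stopped \
  -p 8080:8080 \
  registry.example.com/storm-surge:2.3.1

# Disaster recovery is the same two commands on a fresh host --
# no runbook of manual installation and configuration steps.
```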
Then there were the security benefits.
This app requires strict adherence to specific security configuration and compliance controls, which had to be checked and verified manually. Traditional security tools like intrusion detection and prevention systems (IDS/IPS) and host firewalls required significant manual design and debugging. So whenever one of the components was upgraded (or, as was sometimes the case, downgraded), the security systems frequently broke down. Rather than risk another breakdown, the organization opted to loosen its security controls.
Once the application was Docker-ized, Twistlock helped this organization simplify and reconstruct its security in three key areas:
Before Twistlock, the dev team had limited ability to check for vulnerabilities in some libraries, but they never had full visibility throughout all layers of the stack, top to bottom. Vulnerability scans were only conducted after the app was built, so any problems that cropped up had to be manually remediated. The ops team had to rely on human checks and processes to prevent vulnerable builds from being deployed, and had no visibility into their applications’ vulnerability posture until circumstances prompted them to perform point-in-time scans.
Now, the dev team uses Twistlock’s plugin for Jenkins as part of their build process, and can see all the vulnerabilities across the entire app on a single pane of glass. And now, the ops team has implemented a threshold which prevents new versions of components from being deployed if they contain vulnerabilities with CVSS scores of Medium or higher. Both dev and ops teams use Twistlock’s Vulnerability Explorer for a real-time view and stack ranking of vulnerabilities across dev, test, and prod environments, so they can identify urgent problems early.
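Conceptually, that deployment threshold behaves like the following sketch. This is not Twistlock’s actual plugin API; the report structure and function names are hypothetical, and in practice the check runs inside the Jenkins build itself:

```python
# Illustrative CI gate: fail the build when a scan report contains any
# vulnerability at or above the configured severity threshold.
# The report format below is a hypothetical stand-in for a real scanner's output.

SEVERITY_RANK = {"low": 1, "medium": 2, "high": 3, "critical": 4}

def gate(report, threshold="medium"):
    """Return the findings that violate the threshold (empty list = pass)."""
    limit = SEVERITY_RANK[threshold]
    return [v for v in report["vulnerabilities"]
            if SEVERITY_RANK[v["severity"]] >= limit]

report = {
    "image": "storm-surge:2.3.1",  # hypothetical image name
    "vulnerabilities": [
        {"cve": "CVE-2021-0001", "severity": "low"},
        {"cve": "CVE-2021-0002", "severity": "high"},
    ],
}

blocking = gate(report)
if blocking:
    # In a real pipeline, a non-zero exit here fails the build stage,
    # so the vulnerable image never reaches deployment.
    print(f"Blocked: {len(blocking)} finding(s) at or above threshold")
```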
Because compliance had typically been treated as an operational concern, problems inevitably surfaced only once applications were deployed in production, leaving the dev team to mitigate them after the fact. Meanwhile, in verifying new builds, the ops team relied upon manually created documentation, so there was little version control at the component level. And the internal audit team ran manual tools to check compliance on a point-in-time basis, with no ongoing assurance.
After Docker, the dev team uses Twistlock’s Jenkins plugin to check the compliance posture of every image on every build, enabling them to find and correct problems early on. The ops team can centrally define compliance rules in Twistlock that monitor and enforce all the settings relevant to their specific needs, and automatically block the deployment of images that are non-compliant, or acquired from anyplace other than the official, trusted repo. Ops and internal audit teams use Twistlock’s Compliance Explorer feature to continuously monitor the state of all hosts, containers, and images across the environment, to ensure they’re adhering to the best practices specified by the U.S. National Institute of Standards and Technology’s Application Container Security Guide (NIST SP 800-190).
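The trusted-repo rule above can be sketched as a simple admission check. The registry hostname and function name here are hypothetical illustrations, not Twistlock configuration:

```python
# Illustrative admission check: allow deployment only for images that
# explicitly name the organization's trusted registry.
# The registry hostname is a hypothetical placeholder.

TRUSTED_REGISTRY = "registry.corp.example.com"

def is_trusted(image_ref: str) -> bool:
    """True only when the image reference names the official registry,
    e.g. 'registry.corp.example.com/storm-surge:2.3.1'."""
    registry, _, _ = image_ref.partition("/")
    return registry == TRUSTED_REGISTRY

print(is_trusted("registry.corp.example.com/storm-surge:2.3.1"))  # True
print(is_trusted("docker.io/library/openjdk:8"))                  # False
```

A bare reference with no registry prefix (e.g. `storm-surge:2.3.1`) is rejected too, since it would implicitly resolve to a public default registry.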
Before containerization, the organization simply could not supply the manual effort required to tune intrusion detection and prevention system rules, or to update a comprehensive network defense policy, for every build of the app. Instead, it relied primarily on border firewalls and some limited, host-based anti-malware capabilities to protect the app at runtime.
Now, Twistlock automatically creates a runtime model of every version of the app, every time it’s deployed. This model describes what the app should do across four dimensions: process, network, file system, and system calls. It’s correlated with the image digest, so every build has its own automatically-created, specially-tuned model.
Whenever a developer makes even a small change to one of the binaries, the automatically created model for the resulting image includes the MD5 hash of the new binary, so execution can be blocked for any container whose binaries fail the hash check. The model automatically learns all the ports the app listens on, the specific versions of each binary bound to each of those sockets, and the characteristics of outbound network behaviors, such as egress IPs and DNS namespaces. This adds more, and significantly stronger, layers of defense-in-depth at runtime, all while removing manual effort.
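The hash-correlation idea can be sketched as follows. The model structure, paths, and function names are hypothetical illustrations of the technique, not Twistlock’s internal format:

```python
import hashlib

# Illustrative sketch of a per-image binary allow-list: the runtime model
# records an MD5 hash for every binary in the image, and a binary may
# execute only if its current hash matches the model for that image.

def md5_of(data: bytes) -> str:
    return hashlib.md5(data).hexdigest()

def build_model(binaries: dict) -> dict:
    """Record the expected hash of every binary in the image."""
    return {path: md5_of(content) for path, content in binaries.items()}

def may_execute(model: dict, path: str, content: bytes) -> bool:
    """Allow execution only for known binaries with matching hashes."""
    return model.get(path) == md5_of(content)

# A developer rebuilds the image with a changed binary; a fresh model is
# generated for the new image, so the new hash is expected and the old
# binary no longer passes the check against the new model.
old_image = {"/app/surge-model": b"v1 machine code"}
new_image = {"/app/surge-model": b"v2 machine code"}

new_model = build_model(new_image)
print(may_execute(new_model, "/app/surge-model", new_image["/app/surge-model"]))  # True
print(may_execute(new_model, "/app/surge-model", old_image["/app/surge-model"]))  # False
```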
Not only can this customer now run multiple instances of the storm surge app without conflict in the same VMs — lowering their Azure expenses — they’ve also improved security in tangible, practical ways, with greater visibility into the vulnerability posture of the app throughout its lifecycle.
Migrating traditional apps to containers may not be the end goal in itself. But it’s a great first step that delivers real benefits, builds operational knowledge, and accelerates the greater transition to a fully containerized infrastructure.
Twistlock is a sponsor of The New Stack.
Feature image: A high spring storm surge near Port William, at Dumfries and Galloway, UK, by David Baird, released under Creative Commons license.