When I saw the emergence of containers about five years ago, one thing was clear: If — and that was a big if back then — if containers became a thing, they would change how organizations would operate their applications in production.
Fast forward to today, and this observation is not earth-shaking. Early adopters of containers have discovered this in spades (and sometimes the hard way). What’s more interesting is how radically containers and microservice architectures have changed operations. I’ll go so far as to say that containers are enabling the convergence of security and monitoring capabilities, and as a result, accelerating the move to DevSecOps.
But let’s step back.
Organizations are moving to containers as a way to streamline development and increase the pace of innovation. In a 2017 Forrester study, 66 percent of organizations surveyed reported accelerated developer efficiency, and 75 percent reported a moderate to significant increase in application deployment speed.
With that said, containers are relatively new to the enterprise. At the June 2018 DockerCon San Francisco event, Docker noted that 50 percent of attendees surveyed had started with containers in the last year, indicating that the majority of IT professionals and DevOps practitioners are still learning. By the time these newcomers are productive with containers, I suspect their pre-production plans will include significantly rethinking monitoring and security processes.
Let’s discuss why.
Containers are easier to create and quick to spin up because they are typically smaller and lighter weight than virtual machines.
However, the ease with which they are created, the ability to launch containers quickly through continuous integration/continuous delivery (CI/CD) pipelines, and the use of orchestration tools to scale and move them at will mean containers tend to be killed off and reborn very frequently. In fact, one study last year found that 95 percent of containers live less than a week and 11 percent of containers stay alive for less than 10 seconds. That's great for developers — push code more frequently, innovate faster, and stay ahead of the competition. All good, right?
Not so fast. As you can imagine, this complicates the job of tracking the little buggers. And while containers, if used correctly, can improve your security posture, the sheer number of them, their distribution, and their black-box nature force you to rethink your risk and compliance profile. It's unwise to instrument containers as if they were machines or virtual machines: you can't put an agent in each one because of the overhead, and using code injection techniques is akin to injecting a virus into each container. Both methods run counter to the software design principles of containers.
In the old days, you could take a network-centric approach and watch everything that goes into and out of a machine or virtual machine to derive the “truth” of what’s going on. But given the dynamic nature of containers and the ability of containers to move across clouds, old network-centric approaches don’t work quite as well as they used to, neither for security nor for monitoring.
But what could replace the network as the source of truth? Much like containers themselves are created from features embedded within the operating system, it turns out that the best way to monitor and secure containers is to also leverage some primitives within the underlying system.
The operating system kernel can be that source of truth: the kernel never lies about what's running on the system or what those applications are doing. That lets you see inside every container running on the host — all the application, network, file, and system-level activity.
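To make this concrete, here is a minimal, illustrative Python sketch of one kernel-derived fact such tooling relies on: container runtimes place each container's processes in a dedicated cgroup, so the host can attribute any PID to a container by parsing `/proc/<pid>/cgroup`. The 64-hex-character container ID in the cgroup path is a common runtime convention (e.g. Docker), not a guarantee, and the function names here are my own:

```python
import re
from typing import Dict, Optional

# Container runtimes typically embed the 64-hex-character container ID
# in the cgroup path, e.g. "12:pids:/docker/<64 hex chars>".
_CONTAINER_ID = re.compile(r"([0-9a-f]{64})")


def container_id_from_cgroup(cgroup_text: str) -> Optional[str]:
    """Return the container ID embedded in /proc/<pid>/cgroup text, if any."""
    match = _CONTAINER_ID.search(cgroup_text)
    return match.group(1) if match else None


def map_pids_to_containers(cgroups: Dict[int, str]) -> Dict[int, str]:
    """Map host PIDs to container IDs, skipping non-containerized processes."""
    mapping = {}
    for pid, text in cgroups.items():
        cid = container_id_from_cgroup(text)
        if cid:
            mapping[pid] = cid
    return mapping
```

On a live host you would feed this the contents of each `/proc/<pid>/cgroup` file; real tools go much further, tapping system calls at the kernel boundary rather than polling `/proc`.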
With this type of instrumentation, beyond the usual mix of monitoring details you might want to track, you can also watch for anomalous security behavior and intrusions. The nature of containers simply leads to the natural integration of monitoring and security functions.
You don’t have to do it that way, of course, but it becomes so easy that most container shops will eventually get there.
Organizations that are going all in with microservice architectures, on the other hand, have no choice but to make this leap.
Microservice architectures and supporting platforms are even more complex because they isolate functions to increase separation of components and to speed development. More freedom is given to developers and service teams to launch functionality sooner and more frequently. And, the more isolated the functions, the more permutations can be created by stitching together microservice components.
The end result, however, is a vast increase in the number of moving parts, and a monumental increase in the attack surface of these applications.
It is an attacker’s dream come true, and a nightmare for security professionals. That’s why microservice shops will find it imperative to integrate monitoring and security tasks into their platform if they haven’t done so already.
It is also why the DevSecOps movement is gaining steam so rapidly. The good news is, container environments provide opportunities to build in automated security scans at multiple points in the development cycle, which should mean the containers, in the end, will be much more robust, in a security sense, than even VMs.
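One common pattern for building security into the cycle is a gate that fails the build when an image scan reports serious vulnerabilities. The sketch below is hypothetical — the JSON report shape and severity labels are assumptions, not any particular scanner's format — but it shows the idea:

```python
import json

# Assumed report shape: {"findings": [{"id": "...", "severity": "..."}, ...]}
# Severity labels that should block a build (an assumption, tune to taste).
BLOCKING = {"HIGH", "CRITICAL"}


def blocking_findings(report: dict) -> list:
    """Filter a scan report down to the findings severe enough to block."""
    return [f for f in report.get("findings", [])
            if f.get("severity") in BLOCKING]


def gate(report_json: str) -> int:
    """Return a CI exit code: 0 to pass the build, 1 to block it."""
    findings = blocking_findings(json.loads(report_json))
    for f in findings:
        print("BLOCKED: {} ({})".format(f["id"], f["severity"]))
    return 1 if findings else 0
```

Wired into a pipeline stage (`exit(gate(report))`), this turns the scan from a report someone might read into a hard stop, which is where much of the robustness gain comes from.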
A powerful set of open source security building blocks is appearing to help solve these container security problems. Tools like Anchore address image scanning and known vulnerabilities; Falco addresses run-time security violations and activity auditing; and Inspect addresses forensics and incident response. Container security products are out there, and some, such as Sysdig, offer unified security, monitoring, forensics, and troubleshooting, providing a single point of control for rapidly evolving container environments.
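To give a flavor of what run-time policy looks like in practice, here is a rule in the style of Falco's default ruleset, which flags an interactive shell opened inside a container — a classic sign of an intrusion or of someone bypassing the CI/CD pipeline. The exact macro names (`spawned_process`, `shell_procs`, `container`) are drawn from Falco's shipped rules as I recall them, so treat this as a sketch rather than copy-paste configuration:

```yaml
- rule: Terminal shell in container
  desc: A shell was spawned inside a container with an attached terminal.
  condition: >
    spawned_process and container
    and shell_procs and proc.tty != 0
  output: >
    Shell spawned in a container with an attached terminal
    (user=%user.name container_id=%container.id image=%container.image.repository)
  priority: NOTICE
```

Because rules like this are evaluated against kernel-level events, they work without putting an agent inside each container — exactly the instrumentation model described above.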
This simplifies deployment and eases management of containers, and enables organizations to move faster and improve the quality of the services they deliver from a risk, security and compliance perspective.
According to Gartner:
“At the application layer, there is no need to have two separate tools (one for security, one for operations) performing detailed monitoring of the service. At a minimum, the data will be shared across teams, but ideally application performance monitoring and security monitoring will merge into application monitoring and performance supporting a single DevSecOps team.”
The introduction of containers has upended many conventions and is requiring IT organizations to rethink everything. And while it seems likely that all organizations embracing containers will ultimately integrate monitoring and security functions in these environments, shops that employ microservice architectures have no choice but to head down that path today.
Disclosure: The author has invested in Sysdig.
Feature image via Pixabay.