In what could prove to be the most ambitious move to address container security to date, Red Hat announced Tuesday a partnership with security firm Black Duck.
The goal is to provide a view of a container’s contents so deep and revealing that a future policy-driven system could approve or deny a container’s deployment, especially in production, based on what the source code of the executables reveals about what the container could do.
“Black Duck has a number of very interesting tools, mostly under the Black Duck Hub brand, that allow developers to actually check in, or analyze, source code or binaries,” said Red Hat General Manager for Integrated Solutions Lars Herrmann. The tools could offer insight into where the source code came from, and what security and licensing implications running such software would entail.
One Less Wall
Open source projects are typically composed of open source code legally acquired from elsewhere. In the old world of open source systems development, it was relatively easy to package components together in a tarball or other archive whose contents an inventory system could inspect. Asset management software has since emerged to help data centers keep track of their multitude of open source licenses, and many organizations now require it to meet compliance and regulatory guidelines.
With more organizations, especially in the financial services industry, relying upon independent risk management to ensure the integrity of their data centers, proponents of containerization have yet to produce a technology for analyzing and vouching for the contents of containers that is as automated, or as credible, as more mature service delivery systems such as VMware vRealize Orchestrator.
This could change if Red Hat, with the help of Black Duck, is able to bring its vision of container integrity management to fruition.
You may be familiar with security software that scans for known malicious code by way of signatures. Black Duck Hub scans for known open source code in a similar fashion, with the aim of identifying code that may need replacing with newer, more secure versions.
“Black Duck Hub is able to track down versioning information on the source code, and also run comparisons against the app stream repositories,” said Red Hat’s Herrmann. Everything in the container is examined — the runtime components, all of the application code, or any additional frameworks or libraries that the developer chose to use. The resulting analysis may offer insights “as to what might be problematic about a container image,” Herrmann said.
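To make the idea concrete, here is a minimal sketch of the kind of matching such a scan performs: compare each component found in a container image against an advisory feed of known-fixed versions. This is not Black Duck's actual engine or data; the component names, advisory entries, and naive version comparison are all invented for illustration.

```python
# Hypothetical sketch of inventory-vs-advisory matching, in the spirit
# of the scan Herrmann describes. All data here is invented.

from dataclasses import dataclass

@dataclass(frozen=True)
class Component:
    name: str
    version: str

# Invented advisory feed: component name -> earliest fixed version.
ADVISORIES = {
    "openssl": "1.0.2g",
    "glibc": "2.23",
}

def parse_version(v: str) -> tuple:
    """Naive zero-padded comparison key; real scanners use richer schemes."""
    return tuple(part.zfill(8) for part in v.replace("-", ".").split("."))

def assess(components):
    """Return (component, fixed_version) pairs for components that predate a known fix."""
    flagged = []
    for c in components:
        fixed = ADVISORIES.get(c.name)
        if fixed and parse_version(c.version) < parse_version(fixed):
            flagged.append((c, fixed))
    return flagged

# An invented inventory extracted from a container image.
inventory = [Component("openssl", "1.0.1e"), Component("bash", "4.3")]
for comp, fixed in assess(inventory):
    print(f"{comp.name} {comp.version} is older than fixed version {fixed}")
```

In a real pipeline, the inventory would come from the image scan itself and the advisory data from continuously updated upstream sources, which is where the versioning comparison Herrmann mentions earns its keep.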
Herrmann painted a picture of a future, probably co-branded, service that can render a general risk analysis of the integrity of code within containers. What’s more, Herrmann perceives the possibility of a kind of policy-driven pipeline that can be integrated into existing orchestration systems such as Kubernetes, enabling DevOps professionals and admins to craft rules and policies that restrict the execution of high-risk code in production environments.
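Since the system Herrmann describes is still a vision, the gate itself can only be sketched hypothetically. The fragment below imagines an admission check of the kind a policy-driven pipeline might apply before deployment: deny any image whose scan-derived risk score exceeds a per-environment threshold. The images, scores, and thresholds are all invented.

```python
# Hypothetical admission gate in the spirit of the policy-driven pipeline
# Herrmann envisions. Scores, thresholds, and image names are invented.

RISK_SCORES = {  # image -> risk score recorded by a prior scan
    "registry.example.com/app:1.4": 2.1,
    "registry.example.com/app:1.5": 8.7,
}

# Stricter limits for environments closer to production.
THRESHOLDS = {"production": 4.0, "staging": 7.0, "dev": 10.0}

def admit(image: str, environment: str):
    """Return (allowed, reason) for a requested deployment."""
    score = RISK_SCORES.get(image)
    if score is None:
        return False, f"{image}: no scan result on record, denying"
    limit = THRESHOLDS[environment]
    if score > limit:
        return False, f"{image}: risk {score} exceeds {environment} limit {limit}"
    return True, f"{image}: risk {score} within {environment} limit {limit}"

allowed, reason = admit("registry.example.com/app:1.5", "production")
print(allowed, reason)
```

In a Kubernetes integration of the sort mentioned above, logic like this would sit behind an admission webhook so the decision happens automatically at deployment time rather than by manual review.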
Such a system (admittedly, at this stage, theoretical) could eliminate the primary reservation that organizations hold with respect to deploying container environments in production — a reservation that, surveys continue to show, keeps most containerized environments in production today constrained within limited virtual machines.
Herrmann told The New Stack that this future system, which is the goal of this new partnership, would facilitate new and expanded enterprise change management and inventory systems.
“Our vision is an open architecture where you have a variety of APIs available that aim at different levels of the stack,” he said. “So you could go directly to Black Duck to get an inventory view of which container images are already in your registry, and their risk assessment at any point in time.”
Through a Red Hat API, perhaps through OpenShift, the inventory data returned through Black Duck could then be correlated into an index of what Herrmann calls “current exposure.” A dashboard could conceivably produce a differential list, assessing the relative exposure levels of code currently running in production, against code in development or testing.
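A differential list of that kind is easy to picture in miniature. The sketch below compares per-service risk scores in production against the versions waiting in development, and reports where a promotion would lower exposure. The services, image tags, and scores are invented for illustration; this is not an actual OpenShift or Black Duck API.

```python
# Hypothetical "current exposure" differential, per Herrmann's description:
# compare risk of what is running in production against what is staged in
# development. All names and scores are invented.

production = {"app": ("app:1.4", 8.7), "cache": ("cache:2.0", 1.2)}
development = {"app": ("app:1.5", 2.1), "cache": ("cache:2.0", 1.2)}

def differential(prod, dev):
    """List services where the dev image carries lower risk than production."""
    report = []
    for service, (prod_image, prod_risk) in prod.items():
        dev_image, dev_risk = dev.get(service, (None, None))
        if dev_risk is not None and dev_risk < prod_risk:
            report.append((service, prod_image, prod_risk, dev_image, dev_risk))
    return report

for service, old, old_risk, new, new_risk in differential(production, development):
    print(f"{service}: promoting {new} would cut risk {old_risk} -> {new_risk}")
```

In the architecture Herrmann outlines, the two dictionaries would instead be populated by API calls — inventory and risk data from Black Duck, correlated through a Red Hat API — with the dashboard rendering the resulting report.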
Policy-based decision making could then determine whether the benefits of new code within containers waiting to be deployed outweigh the risk factors by a respectable margin. Such controls may be absolutely necessary to the approval of containerization within enterprises that have already grown accustomed to, and to a great degree dependent upon, the reassurances and risk indexes provided by VM monitoring systems.
“We see a lot of innovation, and a lot of creative thinking, happening in the areas around container management and container security right now,” Herrmann told The New Stack. “So we want to enable this to address a key barrier to adoption of containerization. Every customer I talk to is asking me, ‘How can I make this secure? How can I know what’s going on? What are the tools that are available to me?’ So we can give them a lot of tools, but also all the APIs so that other tools can be built around it. And we can create a vibrant, open ecosystem around containers for the enterprise.”
Red Hat and VMware are sponsors of The New Stack.