It’s too bad the name has already been taken. More than once, engineers at Docker and other containerization firms have borrowed the “Container Store” image and metaphor to explain how they envision container registries eventually working — much like Apple’s App Store or Google Play Store, or an actual brick-and-mortar retail outlet. They see implementers being able to mix-and-match software components through a more visual, Web-driven menu.
If that’s ever going to happen, the purveyors of these retail-like container hubs, including independent software vendors (ISVs), will need safety guarantees, not only for themselves but also for their customers, Red Hat believes. Much of the value proposition for a private app store for enterprises has been the ability to guarantee security; in practice, one big barrier to adoption has been that it’s up to the enterprises themselves to make those guarantees.
Red Hat sees the container shop business as a potential revenue center, including for ISVs that are already heavily invested in Red Hat and OpenStack. But if it’s up to the ISVs to guarantee security for their customers, they may hold back. That’s the major reason behind Red Hat’s partnership with security technology provider Black Duck, announced last October, according to Red Hat General Manager for Integrated Solutions Lars Herrmann.
On Tuesday, a key element of that partnership paid off, with Black Duck’s addition of deep container scanning to its Black Duck Hub product.
Risk Assessment Forthcoming
“As operations and DevOps teams identify specific container images that are to be used to support applications,” wrote Black Duck product management vice president (and Adobe veteran) Dave Gruber on Tuesday, “they now have an automated means to identify and verify open source component versions, and detect the presence of any known security vulnerabilities. It also means that development teams can get warnings early in the build process when an out-of-date or vulnerable version is in use.”
Black Duck’s new technology officially premiered Tuesday in version 2.4.0 of Black Duck Hub. It doesn’t actually scan containers for vulnerabilities directly. Nor does it simply check the manifests of containers for their file contents. Rather, it examines the files themselves, including code snippets, looking for recognized patterns. It then compares what it finds against its own metadata stores, which were originally compiled for a product called Protex. That product produces a “Nutrition Facts”-like label, rendering an analysis of whether code prepared for redistribution may be encumbered by any proprietary licensing restrictions.
One of Protex’s more notable customers is Intel [PDF], which utilizes the product to certify that software distributed outside of Intel is within the company’s legal rights and privileges.
As Randy Kilmon, Black Duck’s vice president of engineering, told The New Stack, the Protex engine evaluates files in a package, produces checksums, and compares them against a database for matches. Protex also couples this with what it calls a snippet matching engine. “This streams through bytes in a file and breaks them up into small pieces,” said Kilmon, “to see whether or not those smaller pieces, or aggregates of those smaller pieces, match. That’s how you would find a piece of copy/pasted code in another file.”
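Black Duck has not published the engine’s internals, but the two techniques Kilmon describes (whole-file checksums plus snippet hashing of small pieces) can be sketched in a few lines of Python. The known-component database, hash algorithm, and chunk size below are all invented for illustration; they are not Black Duck’s actual values:

```python
import hashlib
from pathlib import Path

# Stand-in for a known-component metadata store: checksum -> component.
# (The entry below is the MD5 of the bytes b"hello", purely for demo.)
KNOWN_CHECKSUMS = {
    "5d41402abc4b2a76b9719d911017c592": "example-lib-1.0/hello.c",
}

CHUNK_SIZE = 64  # bytes per snippet piece; illustrative only


def file_checksum(path: Path) -> str:
    """Whole-file match: one checksum per file, compared against a database."""
    return hashlib.md5(path.read_bytes()).hexdigest()


def snippet_hashes(data: bytes, size: int = CHUNK_SIZE):
    """Snippet match: stream through the bytes in small pieces, hashing each,
    so copy/pasted fragments can be recognized inside other files."""
    for i in range(0, len(data), size):
        yield hashlib.md5(data[i:i + size]).hexdigest()


def identify(path: Path):
    """Try an exact file match first; fall back to snippet pieces."""
    digest = file_checksum(path)
    if digest in KNOWN_CHECKSUMS:
        return ("exact", KNOWN_CHECKSUMS[digest])
    pieces = set(snippet_hashes(path.read_bytes()))
    # A real engine would score overlaps of these pieces against its store;
    # here we only report how many distinct pieces the file yields.
    return ("snippets", len(pieces))
```

A production engine would of course use a far larger store and score partial overlaps of the snippet pieces, rather than merely counting them.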
Black Duck Hub, he told us, will couple these approaches with a more logical approach that analyzes source code in its native context.
“It’s not just one file at a time anymore; it’s collections and groups of files, and looking for those magical places in the file system where things get installed when you run an RPM, or when you do a Yum install. We call this a constellation signature, if you will.” This way, modules whose components or functions get scattered all over a file space (for example, OpenSSL) can still be positively identified.
“Sure, there’s other ways to get at that information,” noted Kilmon. “You could query the package manager and ask it what’s installed. But this is a much truer way of understanding that because people can go around the package manager. People can put files in place themselves with make — maliciously, even. So having a very hard match to the file, we feel, is a much stronger way of understanding what’s there.”
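The “constellation signature” idea can be sketched the same way, under the assumption that a known package is represented by a group of expected file paths and content hashes at the locations an RPM or Yum install would populate. All names and data here are hypothetical:

```python
import hashlib
from pathlib import Path


def constellation(root: Path, rel_paths) -> dict:
    """Hash each expected file under root; files that are absent map to None.
    This is the 'hard match to the file' Kilmon describes, rather than
    trusting whatever the package manager reports as installed."""
    sig = {}
    for rel in rel_paths:
        p = root / rel
        sig[rel] = hashlib.sha256(p.read_bytes()).hexdigest() if p.is_file() else None
    return sig


def match_score(root: Path, known_sig: dict) -> float:
    """Fraction of a known constellation found intact under root, so a
    component whose files are scattered across the file system (OpenSSL,
    for example) can still be positively identified as a group."""
    found = constellation(root, known_sig)
    hits = sum(1 for rel, h in known_sig.items() if h is not None and found[rel] == h)
    return hits / len(known_sig)
```

A score of 1.0 would mean every file in the constellation is present with matching content; a partial score could flag a tampered or incomplete install that a package-manager query would miss.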
Unlike Protex, Black Duck Hub is designed to operate as a kind of service — not in the cloud sense with all the hyphens, but as a library that renders a result in a RESTful fashion. This way, said Kilmon, Hub can be integrated into existing processes — including CI/CD, and especially Jenkins — without the introduction of yet another UI, with the extra roadblock that would imply. Conceivably, an automation policy could halt the pipeline for a particular container if it failed to meet minimum compliance standards, or if the vulnerability rating for its included open source packages was at or above a specified level.
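A gate of that kind is straightforward to sketch. The report fields, threshold, and policy below are assumptions for illustration only, not Black Duck Hub’s documented API; in a real pipeline, the report dict would come from an HTTP call to the Hub service inside a Jenkins build step:

```python
# Hypothetical CI/CD policy gate on a container scan report.
MAX_SEVERITY = 7.0  # halt the build at or above this rating (illustrative)


def evaluate(report: dict, max_severity: float = MAX_SEVERITY) -> bool:
    """Return True if the container may proceed; False halts the pipeline,
    for a compliance failure or a too-severe open source vulnerability."""
    if not report.get("license_compliant", True):
        return False
    worst = max((v["severity"] for v in report.get("vulnerabilities", [])),
                default=0.0)
    return worst < max_severity
```

A failing result would typically mark the build unstable or abort it outright, which is exactly the kind of policy-based decision Red Hat describes building into its automation process.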
In an interview with The New Stack last October, Red Hat’s Lars Herrmann described a future, on-premise scheme involving OpenShift. Already, OpenShift uses a container registry of its own, where source images are automatically built and deployed. Red Hat’s engineers, said Herrmann, are building a control point into the automation process, giving Black Duck Hub an opportunity to integrate a policy-based decision into the build process.
Conceivably, Black Duck Hub could be leveraged to stop a container from being built in the first place, for example, if its build file contained non-compliant components. For now, Kilmon referred to this as an “aspirational” feature.
Ducks in a Row
For now, containers enrolled in private registries such as Docker Content Trust do pose a problem, ironically enough: Black Duck Hub can only scan a container while it’s unencrypted. Kilmon suggested that Hub can be inserted into a container’s build or management process at convenient times when it’s in the clear. Of course, that doesn’t resolve the issue of private registry customers evaluating containers for pulling and deployment.
It’s a complex issue, whose intricacies have yet to be fully considered, let alone encountered. A vulnerability is not necessarily a bug or a defect, but the inability of a software component to remain resilient when sharing an environment with other components. As such, a vulnerability may be discovered years after the software’s release not because it was lying dormant, but because it literally did not exist until then. So a declaration that a container has “passed” a vulnerability scan may only be valid for a short period of time.
That said, avoiding periodic vulnerability scans for containers simply because they’re encrypted and the process is too difficult may introduce serious inefficiencies into the container ecosystem as well.
It’s a quandary that Black Duck and its new partner, Red Hat, will inevitably face — sooner rather than later. Red Hat’s Herrmann foresees the likelihood that a policy-based mechanism can make continual use of the “control point” it’s building into OpenShift. This way, for example, the addition of a new container image into the mix can trigger a re-scan of existing images.
In a best-case scenario, he said, the issuance of a vulnerability report in the proper format could trigger such a re-scan, leading to an identification and potential isolation of container images that include packages newly discovered to be at risk.
“In the long run, we want to get to a world in which, at any point in time, we can make a risk assessment about containers that are available to be deployed, or that are deployed inside the enterprise,” said Herrmann.
As Randy Kilmon confirmed, Red Hat will be working with his company toward building a distribution mechanism for Red Hat software, to be utilized by ISVs. Black Duck’s role in the process would be to ensure that containers distributed through this new scheme are both properly licensed, and reasonably free from vulnerability.
“The idea is, when an ISV submits a container for certification,” explained Herrmann, “it can leverage a Black Duck-provided assessment and analysis for that container that gives some additional insights, that it might want, [in order] to be confident about releasing this to the wild, or to customers, or to the Red Hat certification process. That’s really at the core of the partnership: jointly creating a value proposition and the tools that customers and ISVs need to achieve secure container content.”
Red Hat is a sponsor of The New Stack.