As at many tech conferences these days, containers were the hot topic at JFrog's SwampUP conference in Napa, California this week. But unlike the typical discussion of their benefits and pitfalls, many of the talks here focused on the care, feeding, and management of containers at scale.
Matthew Moore, a software engineer at Google, gave a detailed explanation of how containers are built at Google. The talk centered on Google's internal build system, Bazel, which is also available as an open source project. Internally, said Moore, Google's teams have been building and using containers for a long time, which means there have already been several attempts to smooth the rough edges of dealing with containers.
Specifically, Google uses Bazel to build containers on top of streamlined, distroless base components. Those components, however, took some time to get right. Initially, said Moore, Google tried to find the smallest possible Linux distribution to use with its containers. That proved a steep challenge: the team found that shrinking Debian or Ubuntu could only go so far before the underlying compatibility of the system was sacrificed.
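Moore's talk did not walk through build files, but the open source rules_docker rule set for Bazel gives a flavor of the approach. The sketch below is illustrative, not taken from the talk; the target names and the `@distroless_base` repository label are assumptions.

```python
# BUILD file sketch (Bazel/Starlark) using the open source rules_docker rules.
# Target names and the base-image label are illustrative assumptions.
load("@io_bazel_rules_docker//container:container.bzl", "container_image")

container_image(
    name = "app_image",
    # Layer the compiled application onto a minimal, distroless base.
    base = "@distroless_base//image",
    files = [":app_binary"],
    entrypoint = ["/app_binary"],
)
```

Because Bazel knows the full dependency graph, a rule like this can rebuild the image reproducibly whenever the application target changes, without a hand-maintained Dockerfile.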
Such was the case with Alpine Linux, a very small distribution the team considered for a time. Unfortunately, Alpine trades away compatibility in exchange for its tiny footprint, and that undermined the efforts inside Google to make smaller containers, Moore said.
The team had to reconsider its efforts. Moore compared the situation to Henry Ford's supposed remark that if he had asked people what they wanted, they would have said faster horses, not better cars. The Google team realized its ultimate goal was not a smaller Linux distribution, but simply a way to run its applications.
To that end, Google has built out distroless, language-focused base components for containers, covering C/C++, Java, Python, and Node.js. These distroless base images are available to external users as well, and come in various flavors to meet the needs of each of these languages and others derived from them, such as Scala, D, and Go. The distroless Java components, for example, include a flavor specifically designed to layer Jetty into the image.
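A common way to consume such base images is a multi-stage Dockerfile: compile in a full-featured build image, then copy only the artifact onto the distroless base. The sketch below assumes a Maven project producing `app.jar`; the image tags and file names are illustrative, not from the talk.

```dockerfile
# Stage 1: build with a full JDK and Maven toolchain.
FROM maven:3-jdk-8 AS build
COPY . /src
WORKDIR /src
RUN mvn -q package

# Stage 2: ship only the JAR on a distroless Java base,
# which carries a runtime but no shell or package manager.
FROM gcr.io/distroless/java
COPY --from=build /src/target/app.jar /app.jar
ENTRYPOINT ["java", "-jar", "/app.jar"]
```

The final image contains the application and its runtime, and little else, which is the point of the distroless approach.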
Jason Dobies, a developer on Red Hat's OpenShift team, said that building out a container-based infrastructure is about more than simply generating containerized applications and throwing them into Kubernetes. "OpenShift offers the additions on top of Kubernetes," said Dobies, pointing to its capabilities for rolling updates, management, and control.
He also advocated for JFrog's Artifactory, a repository that can handle both container images and the original applications. Artifactory can hold "anything from other Docker images, to Java JARs, to Python archives," Dobies said. "They can also hold onto our images as we build them, and manage those images, and make them available as we deploy them."
An unsung hero ran through both presentations, however: Linux package management systems. Google's approach builds on Debian's packages, while OpenShift, naturally, uses Red Hat's Yum. Either way, the package manager is what makes it possible to automatically build container images on the fly.
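In practice, that dependence shows up as package-manager invocations inside the image build itself. The Dockerfile fragment below is a generic sketch (the `curl` package is just an example), with the Yum equivalent shown in comments:

```dockerfile
# Debian-family base: install via apt, then clear the package
# index so it doesn't bloat the image layer.
FROM debian:stretch
RUN apt-get update && \
    apt-get install -y --no-install-recommends curl && \
    rm -rf /var/lib/apt/lists/*

# On a Red Hat-family base, the equivalent step would use Yum:
#   FROM centos:7
#   RUN yum install -y curl && yum clean all
```

Either way, the distribution's package metadata is doing the heavy lifting of resolving and fetching dependencies during the automated build.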
Shlomi Ben Haim, JFrog's CEO, said the company was founded to solve the day-to-day problems developers face in the software development lifecycle. Over the past eight years, JFrog's product line has grown beyond the Artifactory repository to include the binary distribution service Bintray, the repository inspection tool Xray, and the overarching Mission Control platform for managing the entire suite.
Ben Haim said JFrog's primary Bintray repository currently receives 2 billion requests per month. He added that the company has seen 1,000 percent revenue growth over the last four years, coupled with similar growth in headcount and global footprint.
Ben Haim attributed part of JFrog's success to jumping to the front of technology shifts, such as the rise of containers. "When we released the Artifactory as a Docker registry in 2014, we saw something completely different. Most of our users that used Artifactory started to use Docker on top of it, and some of them also used RPM, Docker, and a build tool like Maven, Gradle, or whatever. The interesting part was we realized that there is no single user in the world that can use just Docker; that can use just containers. It's just a way to pack and ship code. You have to pack other packages into this."
Ben Haim also predicted that the DevOps growth most enterprises are currently experiencing will have to come back to Earth at some point. "The biggest challenge for DevOps in next two years is that managers will start to ask questions," he said. "Now, when you ask for budget, they give you budget because of the press. They will start to ask questions. Then there will be a very important era of business intelligence and analytics on DevOps."
At SwampUP, JFrog announced a new on-premises version of Bintray to help enterprises manage the distribution of their binaries out to end users, devices, and servers. One big growth area for the product has been the Internet of Things, said Ben Haim: Bintray allows a single repository to be used to manage updates to edge devices, for example.
The move responds to a trend Ben Haim said he has seen growing over the past two years: demand for hybrid cloud solutions.
Red Hat is a sponsor of The New Stack.
Feature image via Pixabay.