At KubeCon+CloudNativeCon in Copenhagen in May, many talks focused on the work required to build continuous integration and continuous deployment (CI/CD) pipelines using containers. One of the major issues still remaining in the container world is that last bit of the CI/CD pipeline: building, storing, and securing containers built for internal software projects.
Steve Speicher, principal product manager on the Red Hat OpenShift team, spent a good deal of time at KubeCon looking into the solutions and remaining pain points that exist around dynamically building and managing containers within a more traditional agile development environment. “A lot of people want to leverage Kubernetes for the build in the pipeline itself, and so that’s one of the things we’re talking about here and learning more about: how people are interested in leveraging the platform to do more CI/CD,” said Speicher.
“You have people who have built CI/CD farms and infrastructure and then their deployment platform,” added Ben Parees, principal engineer at Red Hat. “With Kubernetes and OpenShift you have the opportunity to put that all in one, so your cluster is at once your build platform, your test platform, and your deployment platform. It’s easy to scale up multiple instances, to test them, run your builds there,” said Parees.
“Kubernetes is a platform to run images, but you are on your own to produce those images,” said Parees, highlighting a remaining gap in the workflow for many enterprises looking to adopt Kubernetes. “Now we’re starting to see people getting interested in the problem of ‘How do I build those images securely, reproducibly, and again, can I use the platform?’ Docker build requires a lot of permissions that you may not want to give everyone to produce your images. It’s fine to do it on your workstation, maybe not OK on a shared cluster. So now we’re starting to see some additional tools coming out, things like the S2I tool, the Kaniko tool, and a few other tools around building images that give more flexibility and do it in a secure way on the platform,” said Parees.
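As a rough illustration of the daemonless approach Parees describes, a Kaniko build can run as an ordinary unprivileged pod on the cluster itself. The repository URL, registry address, and pod name below are placeholders for illustration, not details from the interview:

```yaml
# Hypothetical pod that builds an image with Kaniko rather than the
# Docker daemon, so no privileged access to the node is required.
apiVersion: v1
kind: Pod
metadata:
  name: kaniko-build
spec:
  restartPolicy: Never
  containers:
  - name: kaniko
    image: gcr.io/kaniko-project/executor:latest
    args:
    - --context=git://github.com/example/app.git    # placeholder repo
    - --dockerfile=Dockerfile
    - --destination=registry.example.com/app:latest # placeholder registry
    # Registry credentials would typically be supplied by mounting a
    # Kubernetes secret at /kaniko/.docker/config.json.
```

Because the executor runs entirely in user space inside the container, the build can be scheduled like any other workload on a shared cluster.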
Reproducibility is a big deal for building container images, said Speicher. He said one of the values of a system like the one Red Hat has developed is around patching routines. “You need to roll that out in a secure way,” said Speicher. “And you need to make sure when you do that, it’s a repeatable process. Through that, we have a centralized registry where you can have the content, and then you can have an image change trigger so it can notify the build whenever these images change. So when that happens, it’s going to happen at a pretty large scale, and it’s going to happen to a large number of applications, so you want to make sure that system can automatically build those things and then reproduce the final deployable image in a way that’s consistent.”
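The image-change trigger Speicher describes maps to OpenShift’s BuildConfig API: when a tracked builder image is updated in the registry, every application built from it is rebuilt automatically. A minimal sketch, with placeholder application and repository names, might look like:

```yaml
# Hypothetical BuildConfig: rebuilds the app whenever the builder
# image it was built from changes in the internal registry.
apiVersion: build.openshift.io/v1
kind: BuildConfig
metadata:
  name: myapp          # placeholder application name
spec:
  source:
    git:
      uri: https://github.com/example/myapp.git  # placeholder repo
  strategy:
    sourceStrategy:    # an S2I (source-to-image) build
      from:
        kind: ImageStreamTag
        name: nodejs:latest   # builder image tracked in the registry
  output:
    to:
      kind: ImageStreamTag
      name: myapp:latest
  triggers:
  - type: ImageChange
    imageChange: {}    # fire a rebuild when the builder image updates
```

When a patched builder image lands in the registry, the trigger fires the same repeatable build for each dependent application, which is how the rebuild can happen consistently at the scale Speicher mentions.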
In this Edition:
2:01: What are the frictions in CI/CD that are still hampering developers?
3:37: How are container images now being built for Kubernetes?
7:16: Where is the developer experience now when developers are starting to go out and build containers themselves, and what is the guidance as it progresses into a Kubernetes distribution?
10:10: What does S2I produce?
15:14: What is the infrastructure Red Hat has built to allow for these more declarative environments?
19:20: Addressing the speed aspect in building single-image-layer applications with S2I.
Red Hat sponsored this post.
Feature image via Pixabay.
The New Stack is a wholly owned subsidiary of Insight Partners, an investor in the following companies mentioned in this article: Docker.