
Container Images the Easy Way with Cloud Native Buildpacks

27 Aug 2021 7:00am, by Daniel Mikusa
Daniel is a developer of the Java buildpacks at VMware, a maintainer with the Paketo project and a longtime proponent of buildpacks.

When I was in elementary school, my math teacher played a trick on me. She explained the concept of long division and proceeded to make my classmates and me perform long division on the chalkboard, on homework, on tests, and we always had to show our work. I don’t know how much time we spent on long division, but I do know how much time I’ve spent doing long division since I got a calculator, and that is none.

However, learning a process isn’t a waste of time. Learning things the hard way can bring a greater understanding of your task and a greater appreciation for your tools. At the same time, an accountant isn’t going to perform long division to balance a company’s books; they’ll use a computer to ensure the math is correct.

This has been my experience recently with Dockerfiles. I’m glad I learned about them; I’m glad I have some experience with them; and I’m glad I understand how they work. At the end of the day, though, it’s not where I want to spend my time. This might sound like something you’d see on a bumper sticker, but I’d rather be coding. Despite this, I do absolutely need to get my software packaged into a container image so it’s easy to distribute and easy to use. I just want to do this with minimal effort.

I could tell you about the easy way right now, but, like my elementary school math teacher, I’m going to tell you about the hard way first.

The Hard Way

If you’re an application developer writing Dockerfiles, you are doing it the hard way, whether you realize it or not.

What’s so hard about Dockerfiles? Nuance. Here’s an example to illustrate this point:
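The original article's Dockerfile isn't reproduced here, but a naive one for a Java application, along these lines, illustrates the point (image and file names are made up):

```dockerfile
# A minimal (and naive) Dockerfile for a Java app -- illustrative only.
FROM openjdk:11

# Copy a pre-built application jar into the image.
COPY target/my-cool-app.jar /app.jar

# Runs as root by default, with a fixed command and no argument flexibility.
ENTRYPOINT ["java", "-jar", "/app.jar"]
```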

Seems simple, right? It is, and on some level, it works to get the job done. There are some things wrong with this approach, though:

  1. Security: Your security team will have a problem with images generated using this Dockerfile because they’ll run as root.
  2. Flexibility: Your operations team might want to use the image, but to use it, they’ll need to pass in different arguments to the entry-point command. You can’t do that, at least not without rebuilding your image or totally overriding the command.
  3. Speed: You probably want faster build times. Who doesn’t? That means you need smarter layering, caching and possibly even multistage builds.
  4. Additional software: What if you need Tomcat to run your application?
  5. The unknown: Are you sure that you’re following all the best practices?

Sure, you can address all of these points with Dockerfiles, but that’s where the complexity is introduced. That’s where you’re forced to invest time in developing and maintaining your Dockerfiles. That’s where things get hard.

For most of my career, I have been a Linux user. Anyone who has run a Linux desktop or server should be familiar with running package updates. Every so often, you go onto your rig, check to see what is new and apply the updates. When that’s done, you restart, and while your machine is reloading with its shiny new kernel, you reflect on how you’ve protected your computer from the scourges of the internet for another week.

If you have a couple of servers, it’s easy to keep them all up to date. I’d say it’s even fun, but that might just be me. If you have 100 or 1,000 servers, though, that’s not fun. Updating them manually is doing it the hard way.

The same can be said for Dockerfiles. If you are working on a project, managing its Dockerfiles can be done without too much fuss. Maybe you even enjoy it. If you’re working with more than a handful of projects though, that quickly becomes a burden. You’ve got multiple Dockerfiles, across multiple repositories, some of which might have unique, project-specific requirements and all of which need to be maintained.

This is also doing it the hard way.

The Easy Way

For younger me, it was the calculator that freed me from long division. Today, it’s buildpacks and the tool that drives them, the pack CLI, that free my projects from Dockerfiles.

What is the easy way? One single command: pack build my-cool-app-image.

Want to package your Java, .NET Core, Python, Ruby, PHP, Node.js, Go or Rust app? It’s all the same command. Wait. That can’t be. How does this work? What sort of sorcery is afoot? Well, I’m glad you asked.

Buildpacks can be explained very simply. Insert some source code and receive a container image. What happens in the middle is determined dynamically by the buildpacks and is custom-tailored to your application. Buildpacks understand your code, what’s required to build it, what’s required to run the software and how to compose that all together into an image. They wrap up all the nuance and work you’d put into Dockerfiles and encapsulate it in code so you don’t have to think about it. You have the image you need, and you can get back to developing your software.
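In practice, the whole workflow looks something like this (a sketch, assuming the pack CLI and Docker are installed; the builder, image and port are illustrative choices, not requirements):

```shell
# Pick a default builder once (here, Paketo's base builder, as an example).
pack config default-builder paketobuildpacks/builder:base

# Build a container image from the source code in the current directory.
pack build my-cool-app-image

# Run the resulting image locally with Docker.
docker run --rm -p 8080:8080 my-cool-app-image
```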

Still skeptical? Here’s exactly how buildpacks can help you generate quality images.

  1. No more Dockerfiles: Go ahead, rm Dockerfile from your project. It’s fun.
  2. Secure trusted base images: When the buildpacks execute, they do so in an image derived from a base build image. When your app executes, it does so in an image derived from a base runtime image. You can pick your base images, and there are secure base images available from respected and trusted organizations like the Cloud Foundry Foundation, Heroku and Google. In addition, because there are only two base images, this significantly reduces the number of images your security team will need to audit.
  3. Bill of materials: As buildpacks run, they generate a bill of materials for what has been installed into the image. This can help in a number of ways, but most notably it will help you answer questions from auditors and the security team.
  4. Reproducible image builds: Given the same input, you’ll get the same output image. This is handy if you need to go back and rebuild an image or validate one you’ve already shipped: because builds are reproducible, the image digests should match.
  5. Caching to enable super-fast builds: Buildpacks are intelligent and cache aggressively so that rebuilding your application is always super fast. This requires no additional work or thought; it comes out of the box.
  6. Best practices are included by default: No more worrying about layer order, tricks to reduce layer size, tricks to cache fetched resources or what to copy through with multistage builds. Buildpacks will consistently apply best practices to all of your images.
  7. Integration with other popular tools: While you can use the pack CLI, there are also integrations for Spring Boot, CircleCI, GitLab, Tekton, VMware Tanzu Build Service on Kubernetes and more.
  8. Lightning-fast image upgrades: Base image upgrades can be applied without a rebuild through a process called rebasing. This allows you to upgrade an entire fleet of images in a fraction of the time it would take to rebuild them all.
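A couple of these points can be seen directly from the command line (a sketch; the image name is made up, and exact flags vary between pack versions):

```shell
# Inspect a buildpacks-built image, including its bill of materials.
pack inspect-image my-cool-app-image --bom

# Apply an updated run (base) image without rebuilding: rebasing.
pack rebase my-cool-app-image
```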

Learn to Fly With Buildpacks

If you’re still reading, you’re probably wondering where to go next. Don’t worry, I’ve got you covered. The following sections contain some suggested next steps for your adventure.

Getting Started

If you’re brand new and just getting started creating your first container images, this is the path I would recommend.

  1. The Buildpacks Getting Started Guide should be your first stop. It’ll walk you through installing pack and building your first app. It also covers basic customization, like setting environment variables, selecting buildpacks, mounting volumes, customizing launch processes and using project.toml, which is a convenient way to persist settings for your app. If you’d rather not install anything just yet, you can check out the Katacoda tutorial instead.
  2. While the Buildpacks project provides a specification for buildpacks, it does not provide a comprehensive set of buildpacks. The Paketo project does exactly this, providing a comprehensive set of buildpacks for all your favorite languages. As a next step, I would suggest taking a look at the Paketo Getting Started Guide. This has more samples you can try and provides information specific to your language of choice.
  3. At this point, you probably want to run your images, so take some time to get familiar with Docker, excellent for running the images locally, or Kubernetes, which is where most app images run in production. This step isn’t related to buildpacks, but it’s important to understand these tools because buildpacks only build images. You need something else to run them.
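For reference, a project.toml can persist per-app settings such as build-time environment variables and file exclusions (a minimal sketch following the project descriptor format; the id, paths and values here are made up):

```toml
[project]
id = "com.example/my-cool-app"

[build]
# Leave test sources out of the build context.
exclude = ["tests/"]

# Environment variables passed to the buildpacks at build time.
[[build.env]]
name = "BP_JVM_VERSION"
value = "11"
```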

Migrating to Buildpacks

If you’re a veteran with Dockerfiles, you’re probably more interested in how you can migrate from them to buildpacks. While there’s not enough space for a comprehensive guide in this article, I can give you a few tips to help speed your migrations.

  1. Install the pack CLI. It’s in most package managers, or you can get it from GitHub.
  2. The primary pack command you’ll be interested in is pack build, so take a look at its usage/docs.
  3. Again, the Buildpacks project provides the specification for buildpacks and does not provide a comprehensive set of buildpacks. There are different implementations available. You can run pack builder suggest to get an up-to-date list of these implementations, which the buildpacks spec calls builders. I would recommend the builder from Paketo, as it has great language support and a welcoming open source community, but there are quality builders from Google and Heroku as well. Don’t overthink this decision. Pick one that supports the languages you require, has a stack (build and runtime base images) that works for you, and works well with your deployment platform of choice.
  4. Start simply and run pack build against your application and see what happens. In many cases, this will just work. If you need to configure things further, this guide walks through setting environment variables, mounting volumes and persisting settings with project.toml. If you’re using the Paketo implementation, you can take a look at the language-specific reference documentation, for example, Java, .NET, Ruby, Go, Node.js, PHP, which explains additional configuration options that can be passed into the buildpacks, such as selecting specific versions of a language runtime or passing additional arguments to build tools. Also, the Paketo Buildpacks configuration documentation page explains more advanced topics like Procfiles, service bindings, custom labels, working offline/behind a proxy and adding custom CA certificates.
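Putting those tips together, a migration build often ends up looking like this (a sketch; the builder and image names are illustrative, and the BP_* variables are Paketo Java buildpack options documented in Paketo's reference docs):

```shell
# Build with an explicit Paketo builder, pinning the JVM version
# and passing extra arguments to the Maven build.
pack build my-cool-app-image \
  --builder paketobuildpacks/builder:base \
  --env BP_JVM_VERSION=11 \
  --env BP_MAVEN_BUILD_ARGUMENTS="-Dmaven.test.skip=true package"
```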

Join the Community

If you have questions or encounter issues you can’t resolve, there are welcoming communities ready to help. Here are a few channels through which you can reach out to get assistance.

  1. The easiest path to get help is to just post a question on StackOverflow. Use the tag buildpack to direct your question to the community. If you have a question about the Paketo project or buildpacks, you can also use the paketo tag.
  2. If you’re into Slack, you can post questions to either the Buildpacks Slack channel or the Paketo Slack channel. Don’t worry too much about where you post (just don’t double post). There is overlap in these groups, and the friendly community will point you in the right direction.
  3. You can also post questions or report bugs you find on the project’s GitHub organizations, either Buildpacks or Paketo. There are a lot of different projects under the organizations, so again, don’t worry too much about where you post; community members can help make sure reports get to the right teams.
  4. If you want to get involved and contribute back to the community, Buildpacks and Paketo are both open source and welcome all contributions.

Everything mentioned above is free (as in beer and as in freedom). It’s all part of the open source communities around buildpacks. However, if you’re looking to adopt buildpacks in your professional endeavors and would like an expanded set of buildpacks, offline/air gap-compatible buildpacks, build automation on top of Kubernetes and commercial support, those features are available through VMware Tanzu Build Service.

The New Stack is a wholly owned subsidiary of Insight Partners. TNS owner Insight Partners is an investor in the following companies: Docker.

Photo by Scott Webb from Pexels.
