Containers / Kubernetes

Deletion and Garbage Collection of Kubernetes Objects

6 Dec 2017 9:00am, by

This contributed article is part of a series, from members of the Cloud Native Computing Foundation (CNCF), about CNCF’s Kubecon/CloudNativeCon, taking place this week in Austin, Dec. 6 – 8. 

Maarten Hoogendoorn
Maarten is an engineer at Container Solutions, where he helps clients with containerization, build systems, orchestrators and CI/CD pipelines. Maarten enjoys programming in Rust, and building/deploying software declaratively with Nix. He also organizes the Amsterdam Nix and Rust meetups.

With the Kubernetes container orchestration engine, concepts and objects build on top of each other. An example we described previously is how deployments build on top of replica sets to ensure availability, and replica sets build on top of Pods to get scheduling for free.

What exactly happens when we delete a deployment? We would not only expect the deployment itself to be deleted, but also the replica sets and pods that are managed by the deployment.

This problem is solved by garbage collection (GC). Before GC was introduced in Kubernetes 1.8, cascading deletion was handled by the client and/or hardcoded in the controllers for a specific resource. Obviously, the client could fail halfway through the deletion of the deployment and its components, leaving the system in a limbo state that had to be cleaned up manually afterward. Not ideal for a system that aims to work reliably without human operators.

So, back to garbage collection. You’ve probably heard of it already in the context of programming languages.

The classic algorithm for garbage collection, mark-and-sweep, assumes that:

  1. Each allocation/object knows which child objects it “owns.”
  2. When the program is paused, we can inspect the “root set” (e.g. which variables are in scope).

The collection process then works by:

    1. Pausing program execution,
    2. Marking everything reachable from the root set as “alive,”
    3. Iterating through all allocations,
      1. Freeing those that are not alive,
      2. Resetting the survivors’ marks back to “dead,” to prepare for the next GC round.

The animation below shows how this works.


Ownership in Kubernetes

Kubernetes also has a garbage collection system, but it works the other way around! In classical GC each object knows which other objects it owns (left in the figure below), but in Kubernetes, the owned object contains an OwnerReference to its owner.

Let’s see what these references look like in practice.

Create a deployment via kubectl run, as shown below. This will cause the deployment controller to create a ReplicaSet, with one replica (which means it will only start one pod).
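The original command is not reproduced here; a minimal sketch, using the `my-nginx` name that appears later in this post (at the time of writing, `kubectl run` created a Deployment by default):

```shell
# Create a Deployment named "my-nginx" with a single replica.
kubectl run my-nginx --image=nginx --replicas=1
```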

Now let’s inspect the ownerReferences of the ReplicaSet. (If you want to know how Deployments, ReplicaSets and Pods relate to each other, check out our previous post.)
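The snippet itself did not survive here; a command along these lines does the job, assuming the `run=my-nginx` label that `kubectl run` attaches to the objects it creates:

```shell
# Fetch the ReplicaSet created for the deployment and keep only
# its name and ownerReferences, using jq to filter the JSON.
kubectl get replicasets -l run=my-nginx -o json \
  | jq '.items[0].metadata | {name, ownerReferences}'
```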

NOTE: We used the very handy jq utility here to get just the output we want. We get back both the metadata.name and the metadata.ownerReferences of the ReplicaSet object.

And yes, we can see that the replica set object has metadata.ownerReferences set, and that the owner is a deployment with the name ‘my-nginx’.

And now for the pod associated with the deployment:
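A sketch of the equivalent query for the pod, again assuming the `run=my-nginx` label set by `kubectl run`:

```shell
# Apply the same jq filter to the pod started by the ReplicaSet.
kubectl get pods -l run=my-nginx -o json \
  | jq '.items[0].metadata | {name, ownerReferences}'
```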

We see indeed that the owner is the replica set named “my-nginx-85584476c8.”

Deleting Objects: Three Variants

There are three different ways to delete a Kubernetes object, by setting the propagationPolicy on the deletion request to one of the following options:

  • Foreground: The object itself cannot be deleted before all the objects that it owns are deleted.
  • Background: The object itself is deleted, after which the GC deletes the objects that it owned.
  • Orphan: The object itself is deleted. The owned objects are “orphaned” by removing the reference to their owner.

Let’s see how we can invoke them! Unfortunately, kubectl does not currently support setting the propagation policy. We need to access the Kubernetes API server directly to set it.

An easy solution to get access to the API server is via the kubectl proxy command, which will handle the authentication of all your requests.

Start the kubectl proxy, and keep it running whilst you’re performing the curl requests:
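Something like the following, which serves the API on localhost (8001 is the default port):

```shell
# Proxy the Kubernetes API to localhost:8001, handling authentication for us.
kubectl proxy --port=8001
```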

Foreground policy

To delete an object with the foreground propagation policy, run the following curl command:
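The command itself is missing here; a sketch, assuming kubectl proxy is listening on its default port 8001, the deployment lives in the default namespace, and the Deployment API group/version on your cluster is apps/v1beta1 (adjust to match your cluster):

```shell
# Delete the deployment with foreground cascading deletion:
# owned objects are deleted first, then the deployment itself.
curl -X DELETE localhost:8001/apis/apps/v1beta1/namespaces/default/deployments/my-nginx \
  -H "Content-Type: application/json" \
  -d '{"kind": "DeleteOptions", "apiVersion": "v1", "propagationPolicy": "Foreground"}'
```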

It will respond with something similar to the following output (I removed some irrelevant output):
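A trimmed sketch of the relevant part of such a response; the name and timestamp are illustrative, the field names are the real API fields:

```json
{
  "metadata": {
    "name": "my-nginx",
    "deletionTimestamp": "2017-12-06T09:00:00Z",
    "finalizers": [
      "foregroundDeletion"
    ]
  }
}
```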

As you can see, there is now a deletionTimestamp, which marks the object as read-only for users. A list of finalizers has also been added. The only operations Kubernetes can still apply to the object are removing finalizers and updating its status. The foregroundDeletion finalizer is handled by the garbage collection system, which deletes the replica sets first, before removing the deployment. Once all finalizers have been removed, the object itself is removed from Kubernetes.

Background Policy

A background deletion is a lot simpler.
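Sketched with the same assumptions as before (proxy on port 8001, default namespace, apps/v1beta1 API group), only the policy value changes:

```shell
# Delete the deployment with background cascading deletion:
# the deployment is removed immediately, its objects are collected afterwards.
curl -X DELETE localhost:8001/apis/apps/v1beta1/namespaces/default/deployments/my-nginx \
  -H "Content-Type: application/json" \
  -d '{"kind": "DeleteOptions", "apiVersion": "v1", "propagationPolicy": "Background"}'
```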

It just deletes the deployment itself, after which the GC system has to figure out that the owner of the replica set is deleted. The replica set is then garbage collected.

Orphan Policy

The last option to delete an object is to use orphan propagation. This will remove the ownerReferences from the replica set, and delete the deployment.
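Again under the same assumptions (proxy on port 8001, default namespace, apps/v1beta1 API group):

```shell
# Delete the deployment while orphaning the objects it owns:
# their ownerReferences are removed instead of cascading the delete.
curl -X DELETE localhost:8001/apis/apps/v1beta1/namespaces/default/deployments/my-nginx \
  -H "Content-Type: application/json" \
  -d '{"kind": "DeleteOptions", "apiVersion": "v1", "propagationPolicy": "Orphan"}'
```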

We now check all deployments, replica sets and pods. In the output we only see replica sets and pods, no deployments:
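A single kubectl call covers all three resource types:

```shell
# List all three resource types at once; after an orphan delete,
# only the replica sets and pods remain.
kubectl get deployments,replicasets,pods
```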


And indeed, the ownerReferences have been removed from the replica set…
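Re-running the earlier jq query (assuming the `run=my-nginx` label is still on the orphaned ReplicaSet) confirms this:

```shell
# The ownerReferences field of the orphaned ReplicaSet is now absent.
kubectl get replicasets -l run=my-nginx -o json \
  | jq '.items[0].metadata.ownerReferences'
```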

Want to learn more? Check out the Kubernetes reference manual section on Garbage Collection.

The Cloud Native Computing Foundation is a sponsor of The New Stack.

Feature image via Pixabay.

