A Primer on Kubernetes Access Control

With Kubernetes gaining ground, many developers and administrators are familiar with the concepts of deploying, scaling and managing containerized applications. One area of Kubernetes that is critical to production deployments is security. It’s important to understand how the platform manages authentication and authorization of users and applications.
This series will take a practical look at authentication and authorization of users external to Kubernetes and pods that are internal to the platform. I will also explain how to use roles and role bindings to allow or restrict access to resources.
To follow the steps explained in the walkthrough, you need the latest version of Minikube and kubectl running on your machine.
API Server — The Gateway to Kubernetes
Kubernetes is all about objects and an API that provides access to those objects. Nodes, labels, pods, deployments, services, secrets, configmaps, ingress, and many more resources are treated as objects. These objects are exposed via a simple REST API through which basic CRUD operations are performed.
One of the core building blocks of Kubernetes is the API Server which acts as the gateway to the platform. Internal components such as kubelet, scheduler, and controller access the API via the API Server for orchestration and coordination. The distributed key/value database, etcd, is accessible only through the API Server.
Kubectl, the Swiss Army knife for managing Kubernetes, is just a nifty tool that talks to the API Server. Anything and everything sent from kubectl ultimately hits the API Server. Multiple other tools and plugins directly or indirectly use the same API.
Before an object can be accessed or manipulated within the Kubernetes cluster, the request needs to be authenticated by the API Server. The REST endpoint uses TLS based on X.509 certificates to secure and encrypt the traffic. Kubectl looks up the file ~/.kube/config to retrieve the CA certificate and client certificate before encoding and sending the request.
```yaml
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /Users/janakiramm/.minikube/ca.crt
    server: https://192.168.99.100:8443
  name: minikube
contexts:
- context:
    cluster: minikube
    user: minikube
  name: minikube
current-context: minikube
kind: Config
preferences: {}
users:
- name: minikube
  user:
    client-certificate: /Users/janakiramm/.minikube/client.crt
    client-key: /Users/janakiramm/.minikube/client.key
```
The file ca.crt represents the CA used by the cluster, while the files client.crt and client.key map to the user minikube. Kubectl uses these certificates and keys from the current context to encode the request.
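The resolution logic is straightforward: the current-context entry points at a context, which in turn names a cluster and a user. Below is a minimal sketch of that lookup in Python; the kubeconfig from above is modeled as a plain dict to avoid a YAML-parser dependency, and the paths are the illustrative ones from the example.

```python
# Sketch: how a client resolves the current context in a kubeconfig.
# The config dict mirrors the YAML example above (illustrative paths).
config = {
    "current-context": "minikube",
    "clusters": [
        {"name": "minikube",
         "cluster": {"certificate-authority": "/Users/janakiramm/.minikube/ca.crt",
                     "server": "https://192.168.99.100:8443"}},
    ],
    "contexts": [
        {"name": "minikube", "context": {"cluster": "minikube", "user": "minikube"}},
    ],
    "users": [
        {"name": "minikube",
         "user": {"client-certificate": "/Users/janakiramm/.minikube/client.crt",
                  "client-key": "/Users/janakiramm/.minikube/client.key"}},
    ],
}

def resolve_context(cfg):
    """Return (server, ca, client_cert, client_key) for the current context."""
    ctx_name = cfg["current-context"]
    ctx = next(c["context"] for c in cfg["contexts"] if c["name"] == ctx_name)
    cluster = next(c["cluster"] for c in cfg["clusters"] if c["name"] == ctx["cluster"])
    user = next(u["user"] for u in cfg["users"] if u["name"] == ctx["user"])
    return (cluster["server"], cluster["certificate-authority"],
            user["client-certificate"], user["client-key"])

server, ca, cert, key = resolve_context(config)
print(server)  # https://192.168.99.100:8443
```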
Can we access the API Server through curl? Absolutely!
Even though the common practice is to use the tunnel created by running kubectl proxy, we can hit the endpoint by using the certificates available on our machine. Apart from the CA certificate, we also need a bearer token, stored base64-encoded in a secret, to embed in the request header.
The commands below show how to retrieve the token and invoke the API with curl.
```sh
kubectl config view -o jsonpath='{"Cluster name\tServer\n"}{range .clusters[*]}{.name}{"\t"}{.cluster.server}{"\n"}{end}'
```
```
Cluster name    Server
minikube        https://192.168.99.100:8443
```
```sh
export CLUSTER_NAME="minikube"
```
```sh
APISERVER=$(kubectl config view -o jsonpath="{.clusters[?(@.name==\"$CLUSTER_NAME\")].cluster.server}")
```
The next important thing is to grab the token associated with the default service account. Don’t worry about this entity for now; we will get a better understanding of it in the later sections of this series.
```sh
# Note: -D is the macOS flag; on Linux, use base64 -d instead.
TOKEN=$(kubectl get secrets -o jsonpath="{.items[?(@.metadata.annotations['kubernetes\.io/service-account\.name']=='default')].data.token}" | base64 -D)
```
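To make the jsonpath-plus-base64 pipeline above less magical, here is the same logic sketched in Python: filter for the secret annotated with the default service account name, then base64-decode its token field. The secret dict is a trimmed, hypothetical stand-in for the output of `kubectl get secrets -o json`.

```python
# Sketch: decode a service-account token from a secret-like object.
# The secret below is an invented stand-in, not real cluster output.
import base64

secret = {
    "metadata": {"annotations": {"kubernetes.io/service-account.name": "default"}},
    "data": {"token": base64.b64encode(b"example-bearer-token").decode()},
}

token = None
if secret["metadata"]["annotations"]["kubernetes.io/service-account.name"] == "default":
    token = base64.b64decode(secret["data"]["token"]).decode()

print(token)  # example-bearer-token
```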
Now, we have all the ingredients to cook the right curl request.
```sh
curl -X GET \
  --cacert ~/.minikube/ca.crt \
  --header "Authorization: Bearer $TOKEN" \
  $APISERVER/version
```
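The same call can be made from any HTTP client, not just curl. The sketch below builds the equivalent authenticated request with Python's standard library; the request is only constructed, not sent, since sending it needs a reachable cluster, and the APISERVER and TOKEN values are placeholders standing in for the shell variables above.

```python
# Sketch: build the same authenticated request curl sends above.
# APISERVER and TOKEN are placeholder values, not a live cluster.
import urllib.request

APISERVER = "https://192.168.99.100:8443"
TOKEN = "example-bearer-token"

req = urllib.request.Request(
    APISERVER + "/version",
    headers={"Authorization": "Bearer " + TOKEN},
    method="GET",
)

# To actually send it, load the cluster CA into an SSL context first:
#   ctx = ssl.create_default_context(cafile="/path/to/.minikube/ca.crt")
#   urllib.request.urlopen(req, context=ctx)

print(req.get_header("Authorization"))  # Bearer example-bearer-token
```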
The Three Layers of Kubernetes Access Control
As explained above, both users and pods are authenticated by the API server before they can access and manipulate the objects.
When a valid request hits the API Server, it goes through three stages before it is either allowed or denied.

1. Authentication
After the request gets past TLS, it passes through the authentication phase where the request payload is inspected by one or more authenticator modules.
Authentication modules are configured by the administrator during the cluster creation process. A cluster may have multiple authentication modules configured, in which case each one is tried in a sequence until one of them succeeds.
Some of the mainstream authentication modules include client certificates, passwords, plain tokens, bootstrap tokens, and JWT tokens (used for service accounts). Client certificates are the default and the most common option. For a detailed list of authentication modules, refer to the Kubernetes documentation.
It’s important to understand that Kubernetes doesn’t have a typical user database or profiles to authenticate users. Instead, it extracts arbitrary strings from X.509 certificates and tokens and passes them through the authentication modules. External authentication mechanisms provided by OpenID, GitHub, or even LDAP can be integrated with Kubernetes through one of the authentication modules.
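The module-chaining behavior described above can be modeled in a few lines: each authenticator either produces a username or passes, and the server tries them in order until one succeeds. This is a toy sketch with invented module names and credential shapes, not Kubernetes code.

```python
# Toy model of the authenticator chain: modules are tried in order
# until one returns a username. Module logic is invented for illustration.
def cert_authenticator(request):
    # Pretend the common name of a verified client certificate is the user.
    return request.get("client_cert_cn")

def token_authenticator(request):
    known_tokens = {"example-bearer-token": "system:serviceaccount:default:default"}
    return known_tokens.get(request.get("bearer_token"))

def authenticate(request, modules=(cert_authenticator, token_authenticator)):
    for module in modules:
        user = module(request)
        if user is not None:
            return user            # first successful module wins
    raise PermissionError("401 Unauthorized")  # no module recognized the request

print(authenticate({"client_cert_cn": "minikube"}))          # minikube
print(authenticate({"bearer_token": "example-bearer-token"}))
# system:serviceaccount:default:default
```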
2. Authorization
Once an API request is authenticated, the next step is to determine whether the operation is allowed or not. This is done in the second stage of the access control pipeline.
For authorizing a request, Kubernetes looks at three aspects – the username of the requester, the requested action, and the object affected by the action. The username is extracted from the token embedded in the header, the action is one of the HTTP verbs like GET, POST, PUT, DELETE mapped to CRUD operations, and the object is one of the valid Kubernetes objects such as a pod or a service.
Kubernetes determines authorization based on an existing policy. By default, Kubernetes follows a closed-to-open (deny-by-default) philosophy, which means an explicit allow policy is required to even access a resource.
Like authentication, authorization is configured based on one or more modules, such as ABAC mode, RBAC mode, and Webhook mode. When an administrator creates a cluster, they configure the authorization modules integrated with the API Server. If more than one authorization module is in use, Kubernetes checks each module, and if any module authorizes the request, then the request can proceed. If all of the modules deny the request, then the request is denied (HTTP status code 403). The Kubernetes documentation has a list of supported authorization modules.
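Putting the last three paragraphs together, the authorization pipeline sees a (user, verb, resource) triple and checks it against each configured module; any allow lets the request through, otherwise it is denied with a 403. The sketch below is a toy model of that flow, with invented policy contents.

```python
# Toy model of the authorization pipeline: any module that allows the
# (user, verb, resource) triple lets the request through; otherwise 403.
# Policy contents below are invented for illustration.
ALLOW, DENY = "allow", "deny"

def rbac_module(user, verb, resource):
    policy = {("minikube", "get", "pods"), ("minikube", "list", "pods")}
    return ALLOW if (user, verb, resource) in policy else DENY

def webhook_module(user, verb, resource):
    return ALLOW if user == "system:admin" else DENY

def authorize(user, verb, resource, modules=(rbac_module, webhook_module)):
    if any(m(user, verb, resource) == ALLOW for m in modules):
        return True   # at least one module authorized the request
    return False      # all modules denied -> HTTP 403

print(authorize("minikube", "get", "pods"))     # True
print(authorize("minikube", "delete", "pods"))  # False
```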
When you use kubectl with the default configuration, all requests go through because you are considered the cluster administrator. But when we add new users, they have restricted access by default.
3. Admission Control
In the final stage, the request passes through admission control. Like the authentication and authorization steps, admission control is all about pluggable modules.
Unlike the previous two stages, the final stage may even modify the target objects. Admission control modules act on objects being created, deleted, updated or connected (proxy), but not reads. For example, an admission control module may be used to modify the request for the creation of a persistent volume claim (PVC) to use a specific storage class. Another policy that a module can enforce is the pulling of images each time a pod is created. For a detailed explanation of the admission control module, refer to Kubernetes documentation.
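The PVC example above can be sketched as a small mutating step: if an incoming PersistentVolumeClaim does not name a storage class, the controller injects a default one before the object is persisted. The object shape and class name below are illustrative, not taken from a real admission webhook.

```python
# Toy mutating admission step: inject a default storage class into a
# PersistentVolumeClaim that doesn't specify one. Names are illustrative.
DEFAULT_STORAGE_CLASS = "standard"

def admit_pvc(pvc):
    spec = pvc.setdefault("spec", {})
    if not spec.get("storageClassName"):
        spec["storageClassName"] = DEFAULT_STORAGE_CLASS  # mutate the request
    return pvc

pvc = {"kind": "PersistentVolumeClaim",
       "spec": {"resources": {"requests": {"storage": "1Gi"}}}}
print(admit_pvc(pvc)["spec"]["storageClassName"])  # standard
```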
During this process, if any admission controller module rejects the request, it is immediately denied. Once a request passes all admission controllers, it is validated using the validation routines for the corresponding API object, and then written to the object store.
In the next part of the series, we will take a closer look at creating users and configuring authentication for them. Stay tuned.