
Single Sign-On for Kubernetes: The Command Line Experience

23 Mar 2018, by Joel Speed

In my last post, I discussed the different user authentication methods in Kubernetes. I explained how my team at Pusher hoped to create a seamless Single Sign-On (SSO) experience for our engineers, and how this journey started with an investigation into OpenID Connect (OIDC) and finding solutions to its shortcomings.

One of these problems is that Kubernetes has no login process. Ordinarily, the client software would initiate this login flow, but kubectl does not have this built in. Kubernetes leaves it up to you to design the login experience.

In this post, I will explain the journey we took to get engineers logged in from the terminal and the challenges we faced along the way.

Our Identity Provider

Joel Speed, Cloud Infrastructure Engineer, Pusher
Joel Speed is a DevOps engineer who has been working with Kubernetes for the last year. He has worked in software development for over three years and is currently helping Pusher build their internal Kubernetes platform. Recently he has been focusing on projects to improve autoscaling, resilience, authentication, and authorization within Kubernetes, as well as building a ChatOps bot, Marvin, for Pusher’s engineering team. While studying, he was heavily involved in the Warwick Student Cinema, containerizing its infrastructure as well as regularly projecting films.

The first step to SSO was to set up Dex as our Identity Provider. Dex is configured to authenticate users with their Google GSuite accounts. It acts as a proxy to the authentication flow.

We host Dex on a collection of AWS EC2 instances behind an Elastic Load Balancer, exposing a single Dex endpoint that authenticates all of Pusher’s Kubernetes clusters. While you can run Dex inside Kubernetes and have each cluster authenticate separately, we chose to centralize it. This does mean, however, that a compromised token could grant access to all clusters. We decided this was a minor trade-off, given that we can revoke tokens through Dex.

Connecting our Kubernetes clusters to Dex was just a case of adding a few parameters to our Kubernetes API server configuration:

# The URL where Dex was available
--oidc-issuer-url=https://auth.example.com/dex
# The client ID we configured in Dex. Kubernetes will compare this to the `aud` field
# in any bearer token from Dex before accepting it.
--oidc-client-id=kubernetes
# Since Dex is configured with TLS, add the CA cert to initiate trust
--oidc-ca-file=/etc/kubernetes/ssl/dex-ca.pem
# The claim field to identify users. For us this means users are granted the username
# of their Pusher email address
--oidc-username-claim=email

When presented with an id-token generated by our Dex cluster, Kubernetes can now verify the token and use it to authenticate the user.
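To make that concrete, here is a small Python sketch (my own illustration, not code from Kubernetes or Dex) of the claims the API server cares about: it compares the token’s `aud` claim against `--oidc-client-id` and takes the username from the claim named by `--oidc-username-claim`:

```python
import base64
import json

def decode_claims(id_token: str) -> dict:
    """Decode a JWT payload without verifying the signature.
    (The API server, of course, does verify it against Dex's keys.)"""
    payload_b64 = id_token.split(".")[1]
    payload_b64 += "=" * (-len(payload_b64) % 4)  # restore stripped base64 padding
    return json.loads(base64.urlsafe_b64decode(payload_b64))

# Build an illustrative (unsigned) token to show the relevant claims.
claims = {
    "iss": "https://auth.example.com/dex",
    "aud": "kubernetes",               # must match --oidc-client-id
    "email": "joel.speed@pusher.com",  # --oidc-username-claim=email
}
payload = base64.urlsafe_b64encode(json.dumps(claims).encode()).rstrip(b"=")
fake_token = b".".join([b"e30", payload, b"sig"]).decode()

decoded = decode_claims(fake_token)
assert decoded["aud"] == "kubernetes"
assert decoded["email"] == "joel.speed@pusher.com"
```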

The current release of Dex does not support token refreshing with its OIDC connector. Because of this, Dex never checks back with Google to see whether the user is still permitted to log in. We’ve submitted a Pull Request to fix this, and are currently running a custom build.

Connecting kubectl, the Hard Way

When starting out with Dex, I used their example app to generate my first ID tokens.

staticClients:
- id: kubernetes
  redirectURIs:
  - 'http://127.0.0.1:5555/callback' # Allowed redirect URI
  name: 'Kubernetes API'
  secret: <SOME_SUPER_SECRET_STRING> # Pre-shared client-application secret

By adding a static client to Dex with a callback to 127.0.0.1, I could run the example application on my laptop and use it to generate my first tokens. Note that since Dex never talks to the application directly, it is acceptable to host a client on a loopback address.

Dex (like other OIDC providers) uses a whitelist of redirectURIs to verify the identity of the software requesting a user’s token. When the client provides its redirectURI in the initial request, Dex can issue the id-token to one of its known clients (in this case, the one with ID kubernetes) and will expect the matching pre-shared secret to be presented by the client software during the token-exchange phase of the authentication flow. This establishes trust and prevents man-in-the-middle attacks.
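To make the flow concrete, here is a sketch (not the example app’s actual code) of the authorization request that kicks everything off; the parameter names are standard OIDC, and the values match the static client configured above:

```python
import secrets
from urllib.parse import urlencode

# Values taken from the Dex staticClient config above. `state` is a random
# nonce that guards against CSRF; Dex echoes it back on the callback.
params = {
    "client_id": "kubernetes",
    "redirect_uri": "http://127.0.0.1:5555/callback",
    "response_type": "code",
    "scope": "openid profile email offline_access",  # offline_access => refresh token
    "state": secrets.token_urlsafe(16),
}
auth_url = "https://auth.example.com/dex/auth?" + urlencode(params)
# The user's browser is sent to auth_url; Dex checks redirect_uri against
# its whitelist before ever showing a login screen.
```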

./example-app \
  -client-id=kubernetes \
  -client-secret=<SOME_SUPER_SECRET_STRING> \
  -issuer=https://auth.example.com/dex \
  -issuer-root-ca=ca.pem

The above command starts a web server listening on 127.0.0.1:5555 (you may note this forms part of the redirectURI configured in Dex). By visiting this address, I could start the login flow and generate an ID token and a refresh token.

With this information, I added the following to my kubeconfig file:

- name: joel.speed@pusher.com
  user:
    auth-provider:
      config:
        client-id: kubernetes
        client-secret: <SOME_SUPER_SECRET_STRING> # Pre-shared client auth
        id-token: <TOKEN_RETRIEVED_FROM_THE_EXAMPLE_APP>
        idp-issuer-url: https://auth.example.com/dex
        refresh-token: <REFRESH_TOKEN_RETRIEVED_FROM_THE_EXAMPLE_APP>
      name: oidc

kubectl can use this user configuration to talk to the Kubernetes clusters and, when the id-token expires, can use the refresh-token to obtain a new id-token.
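Under the hood, that refresh is a standard OAuth 2.0 refresh_token grant against Dex’s token endpoint. A minimal sketch of the request body kubectl sends (the endpoint path and placeholder values here are illustrative, taken from the configuration above):

```python
from urllib.parse import urlencode

# Illustrative values; the real ones come from the kubeconfig entry above.
token_endpoint = "https://auth.example.com/dex/token"
refresh_body = urlencode({
    "grant_type": "refresh_token",
    "refresh_token": "<REFRESH_TOKEN>",
    "client_id": "kubernetes",
    "client_secret": "<SOME_SUPER_SECRET_STRING>",
})
# POSTing refresh_body to token_endpoint returns JSON containing a fresh
# id_token (and typically a rotated refresh_token), which kubectl writes
# back into the kubeconfig.
```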

While this “first experience” worked, I didn’t want to roll it out to the rest of our engineers; I wanted to create a user-friendly experience. Having to retrieve secrets, run an arbitrary tool, and then copy information from the browser into a kubeconfig didn’t feel very user-friendly to me.

Connecting kubectl, the User-friendly Way

To improve the experience, I looked to the gcloud auth login flow for inspiration. If you haven’t used it, you run the command from your terminal and it opens your browser, taking you to Google’s login screen. Once logged in, it instructs you to head back to the terminal, where you are told that you have been logged in and your environment has been configured. Starting from the Dex example application, I built a tool (known as k8s-auth) to mirror this experience.

Pusher’s engineers sign themselves into Vault as part of their onboarding, and k8s-auth takes advantage of this. We store k8s-auth’s configuration in Vault and use the engineer’s Vault token to load it into the program at runtime. Therefore, if we ever need to change the pre-shared client-secret, for example, we need only update it in Vault.

Rather than displaying the created tokens in the web browser, k8s-auth uses code from the Kubernetes client library to configure the user’s kubeconfig for them. Since our clusters follow a naming scheme, I also added functionality to configure a new cluster and corresponding context as part of the same application.
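I can’t show k8s-auth itself, but the structure it writes is straightforward. Here is a minimal sketch (the function name and defaults are mine, not k8s-auth’s API) of building the same kubeconfig user entry that the manual steps produced:

```python
def make_oidc_user(email, id_token, refresh_token,
                   issuer="https://auth.example.com/dex",
                   client_id="kubernetes",
                   client_secret="<SOME_SUPER_SECRET_STRING>"):
    """Return a kubeconfig `users` entry equivalent to the YAML shown earlier."""
    return {
        "name": email,
        "user": {
            "auth-provider": {
                "name": "oidc",
                "config": {
                    "client-id": client_id,
                    "client-secret": client_secret,
                    "id-token": id_token,
                    "idp-issuer-url": issuer,
                    "refresh-token": refresh_token,
                },
            },
        },
    }

entry = make_oidc_user("joel.speed@pusher.com", "<ID_TOKEN>", "<REFRESH_TOKEN>")
# A real tool would merge this entry into ~/.kube/config, which k8s-auth
# does via the Kubernetes client library's kubeconfig handling.
```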

When a new engineer joins the organization, to get kubectl set up and connected to our clusters, they follow these instructions:

  • Sign in to Vault following our onboarding instructions
  • Install k8s-auth and kubectl
  • Run k8s-auth cluster1 cluster2 <whichever cluster names they wish to connect to>
  • Run kubectl config set-context to choose the cluster

If we ever revoke their tokens, all they need to do is run k8s-auth again to generate a new id-token and refresh-token.

Conclusion

We set out to build a user-friendly SSO experience that our engineers could use to complement kubectl. We found we liked the gcloud auth login flow and have managed to replicate that experience almost identically.

By extending the original brief and adding cluster configuration to the same tool, we’ve now given our engineers an easy way to set up kubectl for existing and future clusters.

While I can’t open source our specific version of k8s-auth, I have created an example which is an abstracted version of it. You can use the example as is to perform the OIDC login flow or you could use it as a base to create a more specific login tool for your clusters.

kubectl is not the only way our engineers access the API, however. The Kubernetes Dashboard doesn’t provide a way to perform the OIDC login flow either. In my next post, I will explain the Dashboard SSO experience we have designed and, again, how you might replicate it yourself.

