
Single Sign-On for Kubernetes: Dashboard Experience

30 Mar 2018 10:16am, by

Joel Speed, Cloud Infrastructure Engineer, Pusher
Joel Speed is a DevOps engineer who has been working with Kubernetes for the last year. He has been working in software development for over three years and is currently helping Pusher build their internal Kubernetes platform. Recently he has been focusing on projects to improve autoscaling, resilience, authentication, and authorization within Kubernetes, as well as building a ChatOps bot, Marvin, for Pusher’s engineering team. While studying, he was heavily involved in the Warwick Student Cinema, containerizing their infrastructure as well as regularly projecting films.

Over my last two posts (part 1 and part 2), I have investigated user authentication in Kubernetes and how to create a single sign-on experience within the Kubernetes ecosystem. So far I have explained how OpenID Connect (OIDC) works, how to get started with OIDC, and how to perform a login from the command line.

The final piece of this puzzle is the Kubernetes dashboard, often used by our engineers alongside kubectl. To complete our move to SSO, we wanted to ensure that, when using the Dashboard, our engineers logged in to the same account they used for kubectl.

Since Kubernetes version 1.7.0, the dashboard has had a login page. It allows users to upload a kubeconfig file or enter a bearer token. If you have already logged in on the command line, this allows you to copy the OIDC id-token from your kubeconfig file into the bearer token field and log in. There are, however, a couple of problems with this:

  • The login page has a skip button — If you aren’t using any authorization (RBAC) then this would permit anyone to access the dashboard with effective admin rights.
  • Copying and pasting a token from a file isn’t user-friendly.

Alternatively, the dashboard supports the use of authorization headers to supply bearer tokens (Authorization: Bearer <OIDC-id-token>). This allows the OIDC id-token to be generated in advance and the header to be injected before the dashboard is loaded. If we could ensure that every request to the Dashboard contained this header, then we could skip the dashboard’s login screen and avoid the aforementioned problems.
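To make the idea concrete, the header construction can be sketched in Python. This is purely illustrative and not part of the actual setup; the kubeconfig contents and user name below are hypothetical, and a real kubeconfig would be read from disk with a YAML parser:

```python
# Sketch: pull the OIDC id-token out of a kubeconfig-style structure and
# build the "Authorization: Bearer <OIDC-id-token>" header the dashboard
# accepts. All values here are made up for illustration.

def bearer_header(kubeconfig: dict, user: str) -> dict:
    """Build the Authorization header from a user's auth-provider config."""
    for entry in kubeconfig["users"]:
        if entry["name"] == user:
            token = entry["user"]["auth-provider"]["config"]["id-token"]
            return {"Authorization": f"Bearer {token}"}
    raise KeyError(f"user {user!r} not found in kubeconfig")

kubeconfig = {
    "users": [
        {
            "name": "joel",
            "user": {"auth-provider": {"config": {"id-token": "eyJhbGc..."}}},
        }
    ]
}

print(bearer_header(kubeconfig, "joel"))
# {'Authorization': 'Bearer eyJhbGc...'}
```

This is exactly the header the login page asks you to paste by hand; the rest of the article is about getting it attached to every request automatically instead.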

Authentication Proxy

At Pusher, we had already been using the Bitly OAuth2 Proxy to protect some of our internal sites. It supports OIDC and is therefore compatible with Dex. Initially, it looked as though I could use it to generate the authorization headers for the dashboard. Unfortunately, though, it wasn’t quite ready for this use case:

  • Accessing the ID Tokens: While it could connect to Dex and authenticate users, the proxy did not expose the id-token needed for the authorization header. With this PR, the OAuth2 Proxy can expose an authorization header compatible with the Kubernetes dashboard when running in both proxy mode and in its Nginx Auth Request mode.
  • Running the proxy centrally: We wanted to design our system to be as scalable as possible. If we were to run a copy of the OAuth2 Proxy on each of our Kubernetes clusters, then our Dex configuration would need updating every time we added a new cluster, since a new callback URI would be required for each one. With this PR, the OAuth2 Proxy can accept a redirect request to subdomains of a whitelisted domain. By whitelisting the domain that our Kubernetes clusters belong to, we can host a central OAuth2 Proxy that doesn’t need any reconfiguration when we add new clusters.
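The subdomain-whitelisting behaviour described in the second point can be sketched as a simple suffix check. The proxy itself is written in Go and its actual implementation differs; this hypothetical Python function only illustrates the matching rule (a whitelist entry starting with "." matches the domain itself and any subdomain of it):

```python
# Sketch of the whitelisted-domain check: a redirect host is accepted if it
# equals a whitelisted domain, or is a subdomain of an entry that starts
# with ".". Illustrative only; not the proxy's real code.

def redirect_allowed(host: str, whitelist: list) -> bool:
    for domain in whitelist:
        if domain.startswith("."):
            # ".kube.example.com" matches any subdomain, and the bare domain.
            if host.endswith(domain) or host == domain.lstrip("."):
                return True
        elif host == domain:
            return True
    return False

whitelist = [".kube.example.com"]
print(redirect_allowed("cluster-a.kube.example.com", whitelist))  # True
print(redirect_allowed("evil.example.com", whitelist))            # False
```

Because any cluster under the whitelisted domain passes this check, adding a new cluster needs no change to the proxy or Dex configuration.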

With these additions to the OAuth2 Proxy, we added it to our existing Dex cluster and configured it as a client of Dex. I’ve included a snippet of the proxy configuration relevant to the PRs above as an example:

# Subdomains of kube.example.com are allowed for redirection
--whitelist-domain=.kube.example.com
# Cookie needs to cover all whitelisted domains
--cookie-domain=.kube.example.com
# Set an Authorization header in the auth response
--set-authorization-header=true

Injecting the Headers

With the OAuth2 Proxy configured on our authentication cluster, it is now time to connect the dashboard to it. To do this, we take advantage of Nginx’s Auth Request module within our Ingress Controller.

By adding the following snippet to the Ingress object for the dashboard, we can use Nginx to check with the OAuth2 Proxy (in turn checking with Dex and Google) to determine whether the user is logged in or not, before it allows access to the dashboard.

# For an OAuth2 Proxy hosted at https://auth.example.com/oauth2

# Configure Nginx Auth Request Module
ingress.kubernetes.io/auth-url: "https://auth.example.com/oauth2/auth"
ingress.kubernetes.io/auth-signin: "https://auth.example.com/oauth2/start?rd=https://$host$request_uri$is_args$args"

# Proxy Authentication header to Dashboard
ingress.kubernetes.io/configuration-snippet: |
  # Add the authorization header for the kubernetes-dashboard
  auth_request_set $token $upstream_http_authorization;
  proxy_set_header Authorization $token;

With this configuration, on a request to the dashboard, the following happens:

  1. Nginx sends a request to the auth-url, the auth endpoint of the OAuth2 Proxy.
  2. The OAuth2 Proxy returns a 202 if the user is logged in and a 401 if the user isn’t logged in.
    • If Nginx receives a 202, it allows the request to the dashboard and proxies the authorization header in the auth response to the Dashboard.
    • If Nginx receives a 401, it redirects the user to the auth-signin endpoint, which then starts the login flow.
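The decision Nginx makes for each request can be sketched as follows. This is a simplified model of the auth_request flow described above, not real Nginx code; the sign-in URL matches the example annotations and `handle_request` is a hypothetical stand-in for Nginx's internal logic:

```python
# Sketch of the per-request decision made by Nginx's auth_request module,
# given the status returned by the OAuth2 Proxy's auth endpoint
# (202 = logged in, anything else = not logged in).

SIGNIN_URL = "https://auth.example.com/oauth2/start"  # example value

def handle_request(auth_status: int, auth_headers: dict) -> dict:
    if auth_status == 202:
        # Allow the request through, propagating the Authorization header
        # from the auth response to the dashboard.
        return {
            "action": "proxy_to_dashboard",
            "Authorization": auth_headers.get("Authorization"),
        }
    # Not authenticated: redirect the user to start the login flow.
    return {"action": "redirect", "location": SIGNIN_URL}

print(handle_request(202, {"Authorization": "Bearer eyJhbGc..."}))
print(handle_request(401, {}))
```

The key detail is `auth_request_set`/`proxy_set_header` in the Ingress snippet: the token travels from the OAuth2 Proxy's auth response into the request that reaches the dashboard, so the dashboard never shows its login screen.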

When a user first visits the dashboard, they are transparently redirected via Dex to Google to log in. Once they log in with Google, they are then redirected back to where they were. At this point they will be faced with the Dashboard, skipping the login screen since they are now authenticated using an authorization header.

Conclusion

With the above system, we can now ensure that every request to the Kubernetes Dashboard is authenticated. Our engineers tend to be signed in to Google already, and they often don’t even notice the dashboard login flow; their browser just redirects them straight through and back to the dashboard.

In combination with the command line experience discussed in my last post, we have migrated Pusher’s Kubernetes authentication to a single sign-on system. Each engineer logs into the clusters individually and, importantly, we don’t have any extra user accounts to manage.

While the initial single sign-on setup took some time, we are very pleased with the outcome and the user-friendly experience our engineers now have.
