
Single Sign-On for Kubernetes: An Introduction

Mar 9th, 2018 9:06am by Joel Speed, Cloud Infrastructure Engineer, Pusher
Joel Speed is a DevOps engineer who has been working with Kubernetes for the last year. He has worked in software development for over three years and is currently helping Pusher build their internal Kubernetes platform. Recently he has been focusing on projects to improve autoscaling, resilience, authentication, and authorization within Kubernetes, as well as building a ChatOps bot, Marvin, for Pusher’s engineering team. While studying, he was heavily involved in the Warwick Student Cinema, containerizing its infrastructure as well as regularly projecting films.

I am not the most organized person in the world. For me, having to remember numerous logins for services I access both personally and for work becomes a nightmare.

Whenever I sign up to a new website, I immediately look for a “Sign in with …” link to see if it can grab my details from somewhere like Google or Facebook. I find the Single Sign-On experience a pleasure to use: it saves me from creating more accounts and reduces the number of times I have to sign in each day.

As a Cloud Engineer at Pusher, I work with Kubernetes every day. For a while, authenticating with our Kubernetes clusters was far from the ideal Single Sign-On experience. We had been using a single shared certificate for authentication since the beginning of our Kubernetes journey, but we wanted each engineer to have their own credentials. To this end, I set out to make our Kubernetes login experience as simple and easy to use as the “Sign in with…” experience I’d become familiar with from other services.

One of the great things about Kubernetes is that it completely separates authentication and authorization. Authentication (authn) is the act of identifying who the user is, while authorization (authz) is the act of working out whether they’re allowed to perform some action. This can be thought of in terms of a passport and a visa: at border control, they check my passport (authn) to see that I am who I claim to be, and then they check my visa (authz), which says I am allowed to travel to their country.

In this post, I’m going to talk about authentication within Kubernetes and, in particular, its approach to Single Sign-On. I won’t go into too much detail about our specific setup here; however, I will follow this up with a more technical article explaining our current authentication flow and how you might configure it yourself.

What Authentication Methods Does Kubernetes Support?

For engineers interacting with the API, Kubernetes has three main authentication methods. There’s a page listing more authentication methods, but the three below are the most common for user authentication.

Static Passwords

This is otherwise known as Basic Auth. It doesn’t scale well, since adding a new user requires updating a file on each API server node and then restarting each API server. We ruled this method out pretty quickly.

X.509 Client Certificates

With this approach, each developer has their own certificate that they present to the API server when establishing a connection. The API server then validates the certificate and uses the information within the certificate to identify the user for this session.

There are a few problems with certificate authentication:

  • Certificates have an expiry time that is set when they are issued and will authenticate the user until that time. Kubernetes performs no revocation checks (no CRL or OCSP), so a compromised certificate remains valid until it expires.
  • Certificates have to be signed by some common certificate authority (CA), and Kubernetes needs a copy of this CA certificate to validate client certificates. If you allow people access to the CA key to sign their own certificates, they can grant themselves any group credential or identity they want. This would allow privilege escalation, so you have to issue certificates centrally.
  • Issuing certificates is not easy, and as such certificates are often issued with long lifetimes.
  • Providing certificates for authentication in a browser, e.g. for the Kubernetes dashboard, is hard.

While this solution would give us individually identifiable users, it didn’t seem to be very user-friendly and so I decided to try to find an easier solution.

OpenID Connect (OIDC)

OIDC is Kubernetes’ answer to Single Sign-On. But, and this is a big but, there are very few providers out there who currently support OIDC. While this option initially looked bleak, it did look as though we may be able to get away without creating a new login for every engineer, and as such, this is the path my team chose to investigate further.

What Is OpenID Connect?

At Pusher, we use Google’s G Suite to host our emails, so every engineer already has a login they could use if we could build a Single Sign-On backed by Google. Since Google supports OIDC as part of their platform, we decided to investigate what OIDC is and how it works.

OpenID Connect is based on OAuth 2.0, but designed with more of an authentication focus in mind. The explicit purpose of OIDC is to generate what is known as an id-token.

This id-token takes the form of a JSON Web Token, or JWT (pronounced “jot”).

A JWT is a string of three dot-separated parts. The first two are Base64-encoded JSON: the first provides metadata for the token, and the second provides the identity information, known as the payload. The third part is the signature, which is used to verify that the token was issued by a trusted party.

The decoded payload is a small JSON document of claims.

The payload contains information to identify the user who initiated the OIDC login flow. It will normally contain their name and their email but may also include extra information such as their group membership.
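As an illustrative sketch (all claim values below are made up, and the signature is a placeholder rather than a real RS256 signature), the following Python builds a toy token and decodes its payload:

```python
import base64
import json

def decode_jwt_payload(token: str) -> dict:
    """Decode the payload (second part) of a JWT without verifying it."""
    header_b64, payload_b64, signature_b64 = token.split(".")
    # Base64url strings drop their '=' padding; restore it before decoding.
    padded = payload_b64 + "=" * (-len(payload_b64) % 4)
    return json.loads(base64.urlsafe_b64decode(padded))

def b64url(data: bytes) -> str:
    """Base64url-encode bytes without padding, as JWTs do."""
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

# Toy claims so the example is self-contained; a real token is issued
# and signed by the identity provider.
claims = {
    "iss": "https://accounts.google.com",
    "sub": "110248495921238986420",
    "email": "jane@example.com",
    "name": "Jane Doe",
    "groups": ["engineering"],
    "exp": 1521550000,
}

toy_token = ".".join([
    b64url(json.dumps({"alg": "RS256", "typ": "JWT"}).encode()),
    b64url(json.dumps(claims).encode()),
    b64url(b"signature-bytes"),  # placeholder, not a real signature
])

print(decode_jwt_payload(toy_token)["email"])  # prints jane@example.com
```

Note that decoding alone proves nothing: anyone can base64-decode a token. It is the signature check, covered below, that lets Kubernetes trust these claims.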

The normal process of generating these tokens is much the same as it is in OAuth 2.0:

  1. The user hits the sign-in button on the website,
  2. The website redirects them to the Identity Provider,
  3. The browser loads the Identity Provider’s login screen,
  4. The user logs in with their username and password,
  5. The Identity Provider redirects them back to the website with an authorization code in the query string,
  6. The browser loads the website with the authorization code in the query string,
  7. The website server exchanges the code for the ID token.
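Step 2 of the flow above can be sketched in Python. The endpoint, client ID and redirect URI are placeholders, not working credentials; the function simply shows the query parameters the website attaches when it redirects the browser:

```python
from urllib.parse import urlencode
import secrets

def build_authorization_url(authorize_endpoint: str, client_id: str,
                            redirect_uri: str) -> str:
    """Build the URL the website redirects the browser to (step 2)."""
    params = {
        "response_type": "code",          # ask for an authorization code (step 5)
        "client_id": client_id,
        "redirect_uri": redirect_uri,     # where the provider sends the code back
        "scope": "openid email profile",  # "openid" marks this as an OIDC request
        "state": secrets.token_urlsafe(16),  # CSRF token, echoed back in step 6
    }
    return authorize_endpoint + "?" + urlencode(params)

# Placeholder values for illustration only.
url = build_authorization_url(
    "https://accounts.google.com/o/oauth2/v2/auth",
    "my-client-id",
    "https://example.com/callback",
)
print(url)
```

The exchange in step 7 is a server-to-server POST of the code (plus client secret) to the provider’s token endpoint, which responds with the id-token.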

Once the server has this token, it can either use it to authenticate the user itself or hand it back to the user so that they can present it to other services that trust the identity provider.

Kubernetes itself does not provide any sort of login website for OIDC authentication. It only consumes tokens once you have retrieved them by some other means. This may leave you wondering where the trust relationship is formed between Kubernetes and the Identity Provider.

As mentioned earlier, the third part of the token is a signature. Every ID token generated by an OIDC provider is signed with a cryptographic key (usually using RS256; the provider generates and rotates its keys periodically). Given the URL of the OIDC provider, Kubernetes can retrieve the public half of this key and verify that the token was indeed signed by the provider. At that point, Kubernetes accepts the token and trusts its claims about who the user is.
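Concretely, this trust relationship is configured with flags on the API server, along these lines (the values here are placeholders; check the Kubernetes authentication documentation for your version):

```shell
kube-apiserver \
  --oidc-issuer-url=https://accounts.google.com \  # used to discover the signing keys
  --oidc-client-id=my-client-id \                  # tokens must be issued for this client
  --oidc-username-claim=email \                    # which claim becomes the username
  --oidc-groups-claim=groups                       # which claim supplies group membership
```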

Limitations of OIDC

While OIDC is a step closer to a “good” login experience, it is not without its limitations.

Once generated, an id-token cannot be revoked. Much like certificates used for auth, the id-token has an expiry time and will authenticate the user until that time comes. For this reason tokens are often issued for just one hour, though some providers support requests for refresh tokens. Refresh tokens can be used (often indefinitely) to obtain a new id-token and continue using the service.

Another problem is the lack of support: the Kubernetes documentation lists just three providers, and if you aren’t using one of Salesforce, Azure AD or Google, there is no built-in SSO experience.

Introducing Dex

While investigating OIDC, I came across an Open Source product from CoreOS that helps tackle some of these issues.

Dex acts as a middleman in the authentication chain. It becomes the Identity Provider and issuer of ID tokens for Kubernetes but does not itself have any sense of identity. Instead, it allows you to configure an upstream Identity Provider to supply the users’ identity.

As well as any OIDC provider, Dex supports sourcing user information from GitHub, GitLab, SAML, LDAP and Microsoft. Its provider plugins greatly increase the potential for integrating with your existing user management system.
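As a sketch of what such a plugin looks like (the hostnames, IDs and secrets below are placeholders), a GitHub connector in Dex’s YAML configuration takes roughly this shape:

```yaml
# Illustrative fragment of a Dex configuration; values are placeholders.
connectors:
- type: github
  id: github
  name: GitHub
  config:
    clientID: $GITHUB_CLIENT_ID
    clientSecret: $GITHUB_CLIENT_SECRET
    redirectURI: https://dex.example.com/callback
```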

Another advantage that Dex brings is the ability to control the issuance of id-tokens, for example by specifying their lifetime. It also makes it possible to force your organization to re-authenticate: with Dex, you can easily revoke all tokens, although there is no way to revoke a single token.

Dex also handles refresh tokens for users. When a user logs in to Dex, they may be granted an id-token and a refresh token. Programs such as kubectl can use these refresh tokens to re-authenticate the user when the id-token expires. Since these tokens are issued by Dex, you can stop a particular user from refreshing by revoking their refresh token. This is really useful in the case of a lost laptop or phone.
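At the time of writing, kubectl ships an OIDC auth-provider that performs exactly this refresh for you. A hedged sketch of the relevant kubeconfig user entry (the issuer URL, client details and tokens below are placeholders):

```yaml
# Illustrative kubeconfig fragment; values are placeholders.
users:
- name: jane
  user:
    auth-provider:
      name: oidc
      config:
        idp-issuer-url: https://dex.example.com
        client-id: kubernetes
        client-secret: <client-secret>
        id-token: <current-id-token>
        refresh-token: <refresh-token>
```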

Furthermore, by having a central authentication system such as Dex, you need only configure the upstream provider once. We have a setup whereby our upstream Identity Provider is aware only of Dex. Dex in turn has multiple clients, authenticating users to internal websites and, in particular, to the Kubernetes APIs on our clusters.

An advantage of this setup is that if any user wants to add a new service to the SSO system, they only need to open a PR to our Dex configuration. This setup also provides users with a one-button “revoke access” in the upstream identity provider to revoke their access from all of our internal services. Again, this comes in very useful in the event of a security breach or lost laptop.

By using Dex as an intermediary identity provider at Pusher, we now have fine-grained control over the issuance and revocation of our users’ identity tokens. Importantly, though, users don’t have yet another identity to manage.


When reviewing our options, my team and I decided that we would indeed use OIDC for our Kubernetes authentication. We liked the idea that we could use our G Suite accounts and thought this would be easier for our engineers than issuing certificates on their arrival. We also liked the control that Dex could give us: not only would it allow us to set very short token lifetimes, it would give us control over engineers’ sessions and, if we needed to, would allow us to log a user out of the cluster.

OIDC brings us a step closer to providing our engineers with a user-friendly login experience, and it also allows us to start restricting their access using RBAC.

In my next post, I will explain our particular SSO setup at Pusher in more detail. I will go into how users generate their own ID tokens and what their experience of authentication is, both on the command line and in the web browser. I will then explain how you might replicate this experience within your own organization.
