
Public Key Infrastructure Needs to Evolve to Support Cloud Native Computing

1 May 2020 10:55am, by Ben Hirschberg

Cloud this, cloud that: it has become a cliché. And still, for some aspects of cloud security, the kinks are being worked out. Public key infrastructure (PKI) is one of them.

One of the questions at hand: how does PKI fit into microservices-based architectures and the broader cloud picture?

Without public key cryptography and PKI, we wouldn’t be able to go to our favorite web store and buy our next-generation gadgets. However, PKI was established in the mid-1970s and had mostly finished its evolution by the end of the 1990s. The question is: how relevant is it today?

The Basics of PKI

Ben Hirschberg
Ben Hirschberg is vice president of R&D and co-founder of Cyber Armor, which empowers DevOps to seamlessly deploy zero-trust workload and data protection across environments, eliminating the friction between development and security.

The general idea of PKI is that every computing entity has its own cryptographic identity, made up of two elements:

First, a private key — assumed to be available only to that given computing entity.

Second, a public certificate, which contains a public key as well as other information about the entity: name, expiration date, etc. The certificate is digitally signed by a trusted certificate authority; thus, we can prove that the name and information are indeed connected to the right entity. This is much like a driver’s license validated by the state.

This private key/public key pair is at the core of the PKI authentication protocol. When an entity needs to be authenticated, two things happen:

  1. The verifier can validate the public certificate by checking its signature against the authorized certificate authority.
  2. The verifier gets proof that the entity actually holds the private key. The protocol ensures this can be done without ever sending the private key over the network.
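The proof-of-possession step can be sketched with a toy RSA signature. The parameters below are tiny textbook values chosen only for illustration; real PKI uses 2048-bit (or larger) keys and padded signature schemes, and the challenge string is hypothetical.

```python
import hashlib

# Toy RSA parameters for illustration only; never use numbers this small.
p, q = 61, 53
n = p * q              # public modulus (3233), part of the certificate
e = 17                 # public exponent, part of the certificate
d = 2753               # private exponent: the entity's secret

def sign(message: bytes) -> int:
    """Prove possession of the private key by signing a challenge."""
    digest = int.from_bytes(hashlib.sha256(message).digest(), "big") % n
    return pow(digest, d, n)           # uses d, which never leaves the entity

def verify(message: bytes, signature: int) -> bool:
    """The verifier checks the signature with only the public key (e, n)."""
    digest = int.from_bytes(hashlib.sha256(message).digest(), "big") % n
    return pow(signature, e, n) == digest

# The verifier sends a fresh nonce; the entity signs it; the verifier checks.
nonce = b"random-challenge-from-verifier"
assert verify(nonce, sign(nonce))      # identity proven, no private key sent
```

The key point is in the last line: the verifier is convinced the entity holds `d` even though only the signature, a public value, crossed the network.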

In the last 20 years, not much has changed in PKI’s cryptographic paradigm. Algorithms such as RSA, along with the standard hash functions, have proven track records and still provide a solid infrastructure, even though we now use bigger key sizes and more complex hash functions.

Recent architectural changes, such as the move to microservices, pose new challenges to the traditional PKI paradigm.

PKI Challenges

The main challenge of PKI itself is the management and protection of keys and certificates. When preparing an entity for PKI, the following steps are required:

  • Generating a private-public key pair
  • Creating a certificate signing request (CSR)
  • Requesting that the signing authority sign the certificate
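The three steps above can be modeled in a short sketch. This is a toy: real provisioning uses tools such as `openssl genpkey` and `openssl req` with asymmetric CA signatures, while here an HMAC merely stands in for the CA’s signing key, and the service name is hypothetical.

```python
import hashlib
import hmac
import json
import secrets

# Step 1: generate a key pair (stand-in values, not real asymmetric keys).
private_key = secrets.token_hex(32)
public_key = hashlib.sha256(private_key.encode()).hexdigest()

# Step 2: create a certificate signing request: identity plus public key.
csr = {"common_name": "service-a.example.internal", "public_key": public_key}

# Step 3: the certificate authority signs the request into a certificate.
CA_SECRET = secrets.token_bytes(32)   # the CA's own signing key
payload = json.dumps(csr, sort_keys=True).encode()
certificate = {**csr,
               "signature": hmac.new(CA_SECRET, payload, "sha256").hexdigest()}

# Anyone who trusts the CA can now check the certificate's signature.
expected = hmac.new(CA_SECRET, payload, "sha256").hexdigest()
assert hmac.compare_digest(certificate["signature"], expected)
```

Note that in real PKI the verification step needs only the CA’s public key, which is exactly what lets strangers establish trust.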

After completing these steps, a given machine can start using PKI identification. This is the prerequisite for the most secure protocols on computer networks, given that SSL/TLS identity trust is based on PKI (X.509).

In microservices, where we can have hundreds of entities to authenticate, distributing these key pairs securely and efficiently can become a challenge.

On top of that, for a computing entity to identify itself using PKI, it needs not only to hold a private key but also to protect it. This private key must be kept secret; if it is stolen, the attacker can impersonate the entity.

PKI has a feature called certificate revocation, which in theory lets other PKI users be notified about a compromised identity. In practice, however, revocation is poorly implemented and maintained, and it is impossible to manage across hundreds of thousands of microservices worldwide.

The Cloudification of Infrastructure

Let’s look at today’s infrastructure. With the advent of the cloud, the server as a “hardware” component has started to disappear. Now, servers are simply the bare metal running cloud apps, database nodes, or microservices.

Container technologies and their management systems, combined with simplified access to computing infrastructure, have brought in a new age. Instead of monolithic servers, we create smaller and self-contained services that can be scaled out. One physical machine can run tens of services, while one service cluster can be spread out across tens or even hundreds of machines. Where we run and the physical infrastructure are now commodities, just like the electricity powering them.

Using Old Security in the New Environment

We still want to connect our services and nodes with TLS. Although some TLS problems do arise from time to time, like downgrade attacks, generally speaking, it is considered a secure channel.

Now, consider the protection for service “A,” which publishes a REST API. The TLS requirements haven’t changed: service “A” needs a private key, or at least access to private-key operations; a certificate signed by a certificate authority; and a server name that binds the certificate to a network identifier.
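In code, those requirements surface the moment the service builds its TLS context. The sketch below uses Python’s standard `ssl` module; the file paths are hypothetical, and where those files come from inside a container is precisely the distribution problem discussed next.

```python
import ssl

def make_server_context(certfile: str, keyfile: str) -> ssl.SSLContext:
    """Minimal TLS server context for service "A" (sketch).

    The private key must be readable here, however it was delivered:
    baked into the image, mounted, or injected via the environment.
    """
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    ctx.load_cert_chain(certfile=certfile, keyfile=keyfile)  # key needed here
    return ctx
```

Calling `make_server_context("/etc/tls/server.crt", "/etc/tls/server.key")` fails unless both files are present on the node, which is the crux of the key-distribution challenge.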

If you use the private key directly, the easiest way is to add the private key and the certificate to the container image of service “A.” That isn’t the wisest idea: you’ve just added the root secret of the PKI identity to the image, making the whole image as sensitive as the private key itself.

Instead of protecting a ~1KB key, you now need to keep a complete container image secret, which can easily weigh more than 100MB. It also means the given container image is bound to one and only one cryptographic identity; that might be acceptable for in-house distribution, but not for multideployment images, which are the current practice.

The Weak External Mount Approach

An alternative solution is mounting the private key into the container from an external volume. The deploying entity then needs to create the private key and certificate and add them to every container host where the image is to run. The problem is that this breaks the self-containment of images; it also means the private key sits on the same machine where other, potentially unrelated, Docker images are running.

To add a new worker node to your Kubernetes cluster, you need to install all the private keys and certificates ahead of time; otherwise Kubernetes won’t be able to spawn new container instances. This is untenable for containers-as-a-service use cases.

In this case, another entity holds your server’s cryptographic identity; the entire defense is based on file system permission flags, which means the protection is very thin.
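How thin that protection is can be shown in a few lines: the only barrier between a mounted key file and other local processes is the file’s mode bits. The helper name below is ours, and the temporary file stands in for a mounted key.

```python
import os
import stat
import tempfile

def key_file_is_private(path: str) -> bool:
    """Return True only if group and other have no access at all."""
    mode = os.stat(path).st_mode
    return (mode & (stat.S_IRWXG | stat.S_IRWXO)) == 0

# Demo with a stand-in "mounted" key file.
fd, key_path = tempfile.mkstemp()
os.close(fd)

os.chmod(key_path, 0o600)   # owner-only: as good as this model gets
assert key_file_is_private(key_path)

os.chmod(key_path, 0o644)   # world-readable: the key is effectively public
assert not key_file_is_private(key_path)
```

Even in the best case (`0o600`), any process running as the same user, or as root, on that host can read the key.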

An Environment Variable Isn’t Great Either

Private keys and certificates can be injected into an environment variable. Although this has certain advantages from the deployment point of view (it’s self-contained, and the image stays generic), it has its own security problems.

Environment variables are defined in scripts, YAML files and other orchestration tooling, and issues can also stem from the underlying container technology implementation itself. First, an environment variable is shared between linked containers. Second, environment variables can be viewed from the host machine by other processes, as well as by all processes within the same container, which makes the protection very thin.
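A quick sketch shows why environment variables are a weak hiding place: every child process inherits them by default. The variable name and key value below are hypothetical.

```python
import os
import subprocess
import sys

# The "secret" injected at deployment time (placeholder value).
os.environ["TLS_PRIVATE_KEY"] = "-----BEGIN PRIVATE KEY----- ..."

# A completely separate child process can read the secret straight back out.
leaked = subprocess.run(
    [sys.executable, "-c", "import os; print(os.environ['TLS_PRIVATE_KEY'])"],
    capture_output=True, text=True,
).stdout.strip()

assert leaked == os.environ["TLS_PRIVATE_KEY"]
# On Linux, the same value is also visible in /proc/<pid>/environ.
```

Anything that can spawn a process, or read the process table, inherits or observes the key.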

The Expensive Hardware Solution to Software Security

Key management systems (KMS) “as is” do not add much to key protection. They do deliver a way to store keys outside the container images, providing them at the application level to the running container through an API. That request, though, requires some kind of token: a credential or an API key. So to retrieve a secret, you need to keep another secret within your container, a chicken-and-egg problem that perpetuates the security risk.
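The chicken-and-egg problem can be made concrete with a toy in-memory “KMS.” All names, tokens and key values here are hypothetical; a real KMS sits behind a network API, but the trust bootstrap looks the same.

```python
# Toy KMS: a secret store that itself demands a secret.
KMS_STORE = {"service-a/tls-key": "-----BEGIN PRIVATE KEY----- ..."}
VALID_TOKENS = {"api-token-baked-into-the-image"}

def kms_get_secret(token: str, name: str) -> str:
    """Return a stored secret, but only to callers presenting a valid token."""
    if token not in VALID_TOKENS:
        raise PermissionError("invalid KMS token")
    return KMS_STORE[name]

# The container still has to ship a secret (the token) to get its secret.
key = kms_get_secret("api-token-baked-into-the-image", "service-a/tls-key")
```

The private key has moved out of the image, but the token guarding it has moved in, so the bootstrap-trust problem is displaced rather than solved.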

In cloud environments, just as on-premises, a hardware security module (HSM), a dedicated computer with advanced hardware protections, can keep the private key. Instead of revealing the key, it performs cryptographic operations with the secret keys on the caller’s behalf. In theory, attackers cannot access the key itself, only the operations that use it. However, an HSM also authenticates crypto-operation requests with a security token, so again some secret needs to be embedded in the container image. Furthermore, HSMs are not easily scalable, and they’re expensive: you need to deploy enough HSM devices to serve your applications.

Solving the Security Problem with Microservices Identities

One solution is to add software identity to the arsenal: a system that cryptographically validates the software it protects and builds a software identity around it. A self-contained key distribution system lets the software scale freely while removing the headache of individual key distribution and protection from the DevOps equation.

Every software instance gets a unique and secure cryptographic identity regardless of where and how it is being run. This identity is verified by runtime protection mechanisms and proven to the backend, which then assigns the crypto material to the software process. The crypto material is never passed to untrusted software, and the software process is constantly monitored against attacks.

This delivers more robust protection across hybrid and cloud environments without requiring additional hardware investments.

As cloud native technologies evolve, the security infrastructure needs to keep up, if not get ahead of the game, to ensure that data and workloads are always protected. PKI just isn’t going to cut it anymore.

Feature image via Pixabay.
