
Best Practices for Securely Setting up a Kubernetes Cluster

Securely setting up a cluster is just one element that people can learn by becoming a Certified Kubernetes Security Specialist (CKS). CKS certification attests to an individual’s knowledge about cluster hardening, system hardening and Kubernetes supply chain security, among other topics.
Feb 11th, 2021 8:51am by

David Bisson
David Bisson is an information security writer and security junkie. He is a contributing editor to IBM's Security Intelligence and Tripwire's The State of Security blog, and a contributing writer to Bora. He also regularly produces written content for Zix and a number of other companies in the digital security space.

Organizations are increasingly looking to containers to fuel their digital transformations. In 2020, Gartner forecast that worldwide container management revenue would grow from $465.8 million to $944 million by 2024. The global research and advisory firm stated that 75% of global organizations would be running containerized applications in production by 2022 — up from less than 30% in 2020.

These predictions testify to the fact that organizations’ use of containers is growing. As container deployments grow, many organizations can no longer manage their containers manually; hence the growing demand for container management solutions.

Organizations are turning to Kubernetes in particular. A portable, open-source platform, Kubernetes enables organizations to manage their containerized workloads and services using both declarative configuration and automation. This platform provides several benefits to organizations, as explained in Kubernetes’ documentation:

  • Load balancing. Organizations turn to Kubernetes to make sure their container-supported applications remain up. The risk here is that high network traffic could affect containers’ availability. With Kubernetes, organizations can automatically distribute their network traffic to make sure their applications continue to function.
  • Desired-state management. Organizations describe the desired state of their deployed containers, and Kubernetes automatically changes the actual state to that desired state at a controlled rate.
  • Self-healing. Kubernetes automatically monitors the health of organizations’ deployed containers. In support of this, the platform can restart containers that fail, replace containers, as well as kill containers that don’t meet the requirements as specified in a user-defined environment health check.
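The desired-state and self-healing behaviors above come down to a declarative manifest that Kubernetes continuously reconciles against. A minimal sketch (the names, image, and probe endpoint are illustrative assumptions, not from the article):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web                   # illustrative name
spec:
  replicas: 3                 # desired state: keep three pod copies running
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: nginx:1.25   # illustrative image
          livenessProbe:      # health check: restart containers that fail it
            httpGet:
              path: /
              port: 80
```

If a pod crashes or fails its liveness probe, Kubernetes replaces it to bring the actual state back to the declared `replicas: 3`.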

All of the above-mentioned benefits owe their existence to what Red Hat describes as a key advantage of Kubernetes: the cluster. Organizations can’t run Kubernetes without running one. Each cluster consists of two elements: a set of nodes, which run the applications and workloads; and a control plane, which helps to maintain the desired state of the cluster. Together, these two components help clusters to schedule and run containers across organizations’ environments.

Security: A Necessity for Reaping the Benefits of Kubernetes

Organizations need to understand how to set up a cluster if they want to reap the full benefits of Kubernetes. As part of these considerations, they need to secure their clusters. Dark Reading notes that administrators could be inadvertently deploying their clusters in an insecure manner. Depending on their nature, those security weaknesses could be opening organizations up to all kinds of digital risks such as data exposure and denial-of-service (DoS) attacks.

With those types of threats in mind, Kubernetes recommends that organizations begin by asking themselves a series of questions to figure out what type of cluster administration would work best for their needs. For instance, organizations need to determine whether they intend to use a hosted Kubernetes cluster or to host their own cluster. They also need to figure out whether their cluster will live in the cloud or on-premises, and if the latter, they need to select a networking model that they’ll apply to their environments.

Organizations can then move on to digging into the security of their clusters’ components. Take nodes, for instance, or the elements that host groups of containers called “pods.” By default, these pods are non-isolated to the extent that they accept traffic from any source. This poses a problem to organizations, as malicious actors could leverage the compromise of a single pod to move laterally to other parts of the Kubernetes environment.

In response, organizations can use Network Policies to harden their pods. A Network Policy selects pods within a namespace and permits only the connections that its rules allow. Using Network Policies, administrators can create a default isolation policy for all pods within a namespace by creating a Network Policy that denies all ingress traffic. They can similarly augment their security with policies that deny all egress traffic, or both ingress and egress traffic, for all pods created in a namespace.
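As a sketch, the default-deny ingress policy described above looks like the following (the namespace name is an illustrative assumption):

```yaml
# Default-deny ingress: the empty podSelector matches every pod in the
# namespace, and specifying Ingress with no rules rejects all inbound
# traffic to those pods.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: production   # illustrative namespace
spec:
  podSelector: {}
  policyTypes:
    - Ingress
```

Adding `Egress` to `policyTypes` (again with no rules) would likewise deny all outbound traffic, isolating the pods in both directions until more permissive policies are layered on top.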

Organizations can also look to secure the Kubernetes API server, the front end of the control plane that exposes the Kubernetes API and facilitates interaction between all other components. To prevent malicious actors from gaining access to the API server, organizations can run the command “ps -ef | grep kube-apiserver” on the master node and check that the “--authorization-mode” flag exists and is set to a value that includes Kubernetes Role-Based Access Control (RBAC). This will help administrators to control who can access the Kubernetes environments based on user roles within the organization. Additionally, they can use Admission Control modules to reject unapproved requests to access Kubernetes resources.
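A sketch of that check is below. On a real control-plane node you would pipe `ps -ef` into the grep instead of the sample line used here; the sample flags are illustrative, not taken from any particular cluster.

```shell
# Extract the authorization mode from a kube-apiserver process listing.
# (Replace the echo with `ps -ef | grep '[k]ube-apiserver'` on a real node.)
sample='root 1234 1 0 kube-apiserver --authorization-mode=Node,RBAC --secure-port=6443'
echo "$sample" | grep -o -- '--authorization-mode=[^ ]*'
# prints: --authorization-mode=Node,RBAC
```

If the output does not include `RBAC`, or the flag is missing entirely, the API server is not enforcing role-based access control and should be reconfigured.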

How CKS Can Deepen One’s Knowledge of Kubernetes Security

Securely setting up a cluster is just one element that people can learn by becoming a Certified Kubernetes Security Specialist (CKS). Offered by the Cloud Native Computing Foundation (CNCF), CKS certification attests to an individual’s knowledge about cluster hardening, system hardening and Kubernetes supply chain security, among other topics.

In becoming a Certified Kubernetes Security Specialist, candidates can make themselves invaluable members of their organizations’ security workforce. Here’s StackRox with some more information:

The CKS is the third Kubernetes based certification backed by the Cloud Native Computing Foundation (CNCF). CKS will join the existing Certified Kubernetes Administrator (CKA) and Certified Kubernetes Application Developer (CKAD) programs. All three certifications are online, proctored, performance-based exams that will require solving multiple Kubernetes security tasks from the command line. With the massive investment into Kubernetes over the last five years, these certifications continue to be highly sought after by many seeking technical knowledge about Kubernetes. The CKS focuses specifically on Kubernetes’ security-based features such as role-based access control (RBAC) and network policies and utilizing existing Kubernetes functionality to secure your clusters.

Interested parties can learn more about CKS and what the certification process entails by visiting CNCF’s website.

Feature image via Pixabay.
