Cloud Services / Kubernetes / Technology / Sponsored / Contributed

Kubernetes as a Service Using Amazon EKS

9 Jun 2022 8:00am, by

Roshan Shetty
Roshan is a site reliability engineer at Squadcast. He is an open source enthusiast and mostly focuses on building tools to solve enterprise reliability problems. He also loves contributing to various open source projects.

Kubernetes is open source software that helps you deploy and manage your containerized applications.

Kubernetes consists of two major components: the control plane (which decides where to run your pods) and worker nodes (where your workloads run).

As Kubernetes is a complex system, managing these components is challenging, and this is where you can use a Kubernetes as a Service solution like Amazon Elastic Kubernetes Service (EKS).

In this post, we will see how you can set up and manage EKS and take advantage of the native integration of EKS with other AWS services (Amazon CloudWatch, Amazon VPC, etc.).

What Is Amazon Elastic Kubernetes Service (EKS)?

Amazon EKS is a managed Kubernetes service that makes it easy to run Kubernetes on AWS without installing and managing your own Kubernetes cluster. AWS takes care of all the heavy lifting, such as cluster provisioning, upgrades and patching. EKS runs upstream Kubernetes, so you can migrate your existing Kubernetes cluster to AWS without changing your codebase. EKS also runs your infrastructure across multiple Availability Zones, eliminating any single point of failure.

Different Components of EKS

An AWS EKS cluster consists of two primary components:

  • Control plane: consists of nodes that run Kubernetes software such as etcd and the Kubernetes API server. AWS takes care of the scalability and high availability of the control plane, ensuring that two API server nodes and three etcd nodes are always available across three Availability Zones.
  • Data plane: where your applications/workloads run. It consists of the kubelet and kube-proxy.

Amazon EKS completely manages your control plane, but how much or how little of your data plane you manage yourself depends on your requirements. AWS gives you three options for managing your data plane nodes:

  • Unmanaged worker nodes: You will fully manage these yourself.
  • Managed node groups: These worker nodes are partially managed by EKS, but you still control your resources.
  • AWS Fargate: AWS fully manages your worker nodes.

In this competitive cloud market, numerous cloud providers offer managed Kubernetes services. Here is a comparison to help you pick the provider that best meets your needs.

Comparing Amazon’s EKS, Google’s GKE and Azure AKS

Before selecting a managed Kubernetes service, it’s vital to know the strengths and weaknesses of each. All managed services solve your goal of easily deploying your Kubernetes cluster. The first decision is where your existing workload is running. It might be easier to remain with the cloud provider you already use.

Some comparisons between the three managed services:

  • Google Kubernetes Engine (GKE) has been in the market since 2015. Amazon Elastic Kubernetes Service (EKS) and Azure Kubernetes Service (AKS) have been available since 2018.
  • The GKE control plane is free for one zonal cluster; otherwise, it costs about $72 per month. The EKS control plane costs about $72 per month, while the AKS control plane is free.
  • AKS and GKE have easier setup processes. EKS setup is slightly more complicated but can be simplified using tools like eksctl.
  • EKS and GKE can fully manage your worker nodes using features like Fargate and Autopilot, respectively. Currently, AKS doesn’t provide such a feature.

Now that you understand the differences between the three managed offerings, here are some of the primary reasons to use EKS:

  • It is the most widely used managed Kubernetes service.
  • Kubernetes tooling like certificate management and DNS is fully integrated with AWS.
  • You can bring your own Amazon Machine Image (AMI).
  • Tools like Terraform and eksctl are supported for quickly setting up your EKS cluster.
  • Large user community support.

Installing EKS Using eksctl

This section will show how to set up your EKS cluster using eksctl. It is a simple command-line utility that helps you set up and manage the EKS cluster. For more information, check the documentation at https://github.com/weaveworks/eksctl.

Prerequisites

You must fulfill a few prerequisites before installing and setting up the EKS cluster using eksctl.

Installing Kubectl on Linux

  • Download the kubectl binary from Amazon’s S3 bucket.
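For example, for a Kubernetes 1.22 cluster on Linux (amd64) — the exact S3 path varies by Kubernetes version, so check the EKS documentation for the binary matching your cluster version:

```shell
# Download the kubectl binary published by Amazon (version-specific path)
curl -O https://s3.us-west-2.amazonaws.com/amazon-eks/1.22.6/2022-03-09/bin/linux/amd64/kubectl
```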

  • Change the permission to make the binary executable.
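For example:

```shell
# Make the downloaded binary executable
chmod +x ./kubectl
```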

  • Copy the binary to $PATH so that you don’t need to type the complete path when executing the binary. Optionally, you can add it into your bash profile so that it will be initialized during shell initialization.
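One way to do this, using a bin directory in your home folder (any directory on your $PATH works):

```shell
# Copy the binary into a directory on your PATH
mkdir -p $HOME/bin && cp ./kubectl $HOME/bin/kubectl && export PATH=$HOME/bin:$PATH

# Optionally persist the PATH change across shell sessions
echo 'export PATH=$HOME/bin:$PATH' >> ~/.bashrc
```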

  • Verify the version of kubectl installed using the following command:
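```shell
# Print the client version to confirm the installation
kubectl version --client
```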

NOTE: To install kubectl on other platforms, such as Windows or macOS, check the following documentation: https://docs.aws.amazon.com/eks/latest/userguide/install-kubectl.html

Installing eksctl on Linux

  • Download the latest release of eksctl and extract it using the following command:
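Following the eksctl project’s install instructions:

```shell
# Download the latest release for your OS and extract it to /tmp
curl --silent --location "https://github.com/weaveworks/eksctl/releases/latest/download/eksctl_$(uname -s)_amd64.tar.gz" | tar xz -C /tmp
```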

  • Move the downloaded binary to /usr/local/bin or to your $PATH definition:
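```shell
# Move the binary onto your PATH
sudo mv /tmp/eksctl /usr/local/bin
```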

  • Verify the version of eksctl installed using the following command:
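```shell
# Print the installed eksctl version
eksctl version
```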

Creating Your EKS cluster

The next step is to create the EKS cluster with all the prerequisites in place. Run the eksctl create cluster command and pass the following options:

  • eksctl create cluster will create the EKS cluster for you.
  • name gives your EKS cluster a name. If you omit this value, eksctl will generate a random name for you.
  • version will let you specify the Kubernetes version.
  • region is the name of the region where you want to set up your EKS cluster.
  • nodegroup-name is the name of the node group.
  • node-type is the instance type for the node (default value is m5.large).
  • nodes is the total number of worker nodes (default value is 2).
  • nodes-min is to specify the minimum number of worker nodes.
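Putting these options together, a sample invocation might look like this (the cluster name, region and node counts below are placeholders; adjust them for your environment):

```shell
# Create an EKS cluster with a managed node group
eksctl create cluster \
  --name my-eks-cluster \
  --version 1.22 \
  --region us-east-1 \
  --nodegroup-name my-nodegroup \
  --node-type t3.medium \
  --nodes 2 \
  --nodes-min 1
```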

  • You can verify if the EKS cluster is up by executing the command below.
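For example (assuming the placeholder region used above):

```shell
# List EKS clusters in the region to confirm the cluster is up
eksctl get cluster --region us-east-1
```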

  • To update the kubeconfig file to use your newly created EKS cluster as the current context, run the following command:
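Assuming the placeholder cluster name and region from the create step:

```shell
# Add the cluster to your kubeconfig and set it as the current context
aws eks update-kubeconfig --region us-east-1 --name my-eks-cluster
```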

  • To verify if the worker nodes are up and running, use the following:
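```shell
# List the worker nodes and their status
kubectl get nodes
```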

  • Deploy your application:
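As an illustration, you could deploy a stock nginx image with a few replicas (the deployment name and image here are just examples):

```shell
# Create a sample deployment with three replicas
kubectl create deployment nginx-demo --image=nginx --replicas=3
```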

  • Verify if it’s deployed in different nodes in your cluster by using -o wide option:
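```shell
# The NODE column shows which worker node each pod landed on
kubectl get pods -o wide
```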

Amazon CloudWatch Container Insights

Amazon EKS is integrated with other AWS services, such as Amazon CloudWatch, to collect metrics and logs for your containerized applications. CloudWatch Container Insights collects, summarizes and aggregates metrics and logs from your containerized microservices and applications. These metrics include CPU, memory, network and disk utilization. It also provides diagnostic information, such as container restart failures, to help you isolate and resolve issues quickly.

Container Insights runs a containerized version of the CloudWatch agent to discover all the running containers, and it runs a log collector with a CloudWatch plugin as a DaemonSet on each node in the cluster. It then creates aggregate metrics by collecting all performance data.

Installing CloudWatch Container Insights

At this stage, your EKS cluster is up and running. The next step is to install CloudWatch Container Insights to collect your metrics. But first, ensure that the appropriate identity and access management (IAM) policy is attached to your instances. In this case, you need the CloudWatchFullAccess policy on your worker nodes so they can push metrics to CloudWatch.

Figure 1: EKS Worker node EC2 console

 

Figure 2: IAM console with policies attached to worker nodes

 

Figure 3: Attaching CloudWatchFullAccess policy to worker nodes

  • Deploy CloudWatch container insight by running the following command:
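AWS publishes a quick-start manifest that deploys the CloudWatch agent and Fluentd in one step. A sketch of the documented command, substituting your own cluster name and region (check the Container Insights documentation for the current manifest URL):

```shell
# Placeholders: replace with your cluster name and region
ClusterName=my-eks-cluster
RegionName=us-east-1

# Fetch the quick-start manifest, inject the cluster name/region, and apply it
curl https://raw.githubusercontent.com/aws-samples/amazon-cloudwatch-container-insights/latest/k8s-deployment-manifest-templates/deployment-mode/daemonset/container-insights-monitoring/quickstart/cwagent-fluentd-quickstart.yaml | \
  sed "s/{{cluster_name}}/${ClusterName}/;s/{{region_name}}/${RegionName}/" | \
  kubectl apply -f -
```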

  • Verify that the CloudWatch and Fluentd pods are created in the amazon-cloudwatch namespace by running the following command:
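```shell
# List the pods created in the amazon-cloudwatch namespace
kubectl get pods -n amazon-cloudwatch
```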

  • Once Container Insights is configured, you can see various metrics, such as CPU, memory, disk and network statistics, across your EKS cluster.
  • Go to the CloudWatch dashboard at https://console.aws.amazon.com/cloudwatch/home; under Container Insights, click on “Performance monitoring” to view the various metrics.

Figure 4: CloudWatch container insight console to view EKS cluster metrics

 

Figure 5: CloudWatch container insight console to view EKS node metrics

 

Amazon Virtual Private Cloud (VPC) for Pod Networking Using VPC CNI Plugin

AWS EKS supports virtual private cloud (VPC) networking using the AWS VPC Container Network Interface (CNI) plugin for Kubernetes. With this plugin, Kubernetes pods receive IP addresses directly from the VPC network, so a pod has the same IP inside the cluster as it does on the VPC. For more information, check the following link: https://github.com/aws/amazon-vpc-cni-k8s

  • The CNI plugin uses EC2 to provision multiple Elastic Network Interfaces (ENIs) on a host instance, and each interface gets multiple IPs from the VPC pool. It then assigns these IPs to pods, connects each ENI to the veth (virtual Ethernet) port created by a pod, and the Linux kernel takes care of the rest. The advantage of this approach is that each pod has a real, routable IP address allocated from the VPC and can communicate with other pods and AWS services.
  • To implement network policies, EKS uses the Calico plugin. A Calico node agent is deployed on each node in the cluster, which helps propagate routing information among all the nodes using Border Gateway Protocol (BGP).

Identity and Access Management (IAM) for Role-Based Access Control

For EKS, authorization is managed by role-based access control (RBAC) for Kubernetes commands, but for AWS commands, identity and access management (IAM) manages both authentication and authorization. EKS is tightly integrated with the IAM authenticator service, which uses IAM credentials to authenticate the Kubernetes cluster. This greatly helps to avoid managing separate credentials for Kubernetes access. Once an identity is authenticated, RBAC is used for authorization. Here is the step-by-step procedure:

  • Suppose you make a kubectl call to get pods. Your IAM identity is passed along with the Kubernetes call.
  • Kubernetes verifies the IAM identity by using the authenticator tool.
  • The authenticator’s token-based response is passed back to Kubernetes.
  • Kubernetes checks RBAC for authorization. This is where the call is either allowed or denied.
  • The Kubernetes API either allows or denies the request.

Amazon Elastic Container Registry (ECR) Repository

Amazon Elastic Container Registry (ECR) is a fully managed registry for storing container images. Every AWS account comes with a single (default) ECR registry, and you can create one or more repositories in it to store container images. ECR is well integrated with other AWS services like identity and access management (IAM), which you can use to set permissions and control access. You can also use ECR to store other artifacts, like Helm charts.

To push your image to an ECR repository, follow these steps:

  • Authenticate your Docker client to your registry by retrieving an authentication token:
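For example (1234567890 is a placeholder AWS account ID and us-east-1 a placeholder region; substitute your own):

```shell
# Retrieve an auth token and pipe it to docker login
aws ecr get-login-password --region us-east-1 | \
  docker login --username AWS --password-stdin 1234567890.dkr.ecr.us-east-1.amazonaws.com
```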

NOTE: The authentication token is valid only for 12 hours from when it is issued.

  • Build your Docker image by running the command below. Skip this step if your image is already built. For more information on creating a Dockerfile from scratch, check the following link: https://docs.aws.amazon.com/AmazonECS/latest/developerguide/create-container-image.html
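For example, assuming a Dockerfile in the current directory and an image named my-app (a placeholder):

```shell
# Build the image from the Dockerfile in the current directory
docker build -t my-app .
```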

  • Tag the image so that you can push it to the repository.

NOTE: 1234567890 is your AWS account ID. Replace it with your account ID:
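This assumes a repository named my-app already exists in your registry (you can create one with aws ecr create-repository):

```shell
# Tag the local image with the full ECR repository URI
docker tag my-app:latest 1234567890.dkr.ecr.us-east-1.amazonaws.com/my-app:latest
```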

  • Push the newly created image to the ECR registry:
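```shell
# Push the tagged image to ECR
docker push 1234567890.dkr.ecr.us-east-1.amazonaws.com/my-app:latest
```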

Conclusion

Amazon EKS is one of the most widely used managed Kubernetes services. In this post, you learned how to set up an EKS cluster and deploy your workload. One of the primary advantages of using EKS is its integration with AWS services like identity management and virtual private cloud. Also, AWS takes care of all the heavy lifting, like patching, performing upgrades and provisioning your cluster. On top of that, if you use AWS offerings like Fargate, AWS will also manage your worker nodes.

Plug: Use K8s with Squadcast for Faster Resolution

Squadcast is an incident management tool that’s purpose-built for site reliability engineering. It allows you to get rid of unwanted alerts, receive relevant notifications and integrate with popular ChatOps tools. You also can work in collaboration using virtual incident war rooms and use automation to eliminate toil.

The New Stack is a wholly owned subsidiary of Insight Partners, an investor in the following companies mentioned in this article: Docker.

Feature image provided by sponsor