Kubernetes as a Service Using Amazon EKS

Kubernetes is open source software that helps you deploy and manage your containerized applications.
Kubernetes consists of two major components: a control plane (which decides where to run your pods) and worker nodes (where your workloads run).
As Kubernetes is a complex system, managing these components is challenging, and this is where you can use a Kubernetes as a Service solution like Amazon Elastic Kubernetes Service (EKS).
In this post, we will see how you can set up and manage EKS and take advantage of the native integration of EKS with other AWS services (Amazon CloudWatch, Amazon VPC, etc.).
What Is Amazon Elastic Kubernetes Service (EKS)?
Amazon EKS is a managed Kubernetes service that makes it easy to run Kubernetes on AWS without installing and operating your own Kubernetes cluster. AWS takes care of all the heavy lifting, like cluster provisioning, performing upgrades and patching. EKS runs upstream Kubernetes, so you can migrate your existing Kubernetes cluster to AWS without changing the codebase. EKS runs your infrastructure across multiple Availability Zones, eliminating single points of failure.
Different Components of EKS
An AWS EKS cluster consists of two primary components:
- The control plane consists of nodes that run Kubernetes software such as etcd and the Kubernetes API server. AWS takes care of the scalability and high availability of the control plane and makes sure two API server nodes and three etcd nodes are always available across three Availability Zones.
- The data plane is where your applications/workloads run. Each node in it runs the kubelet and kube-proxy.
Amazon EKS completely manages your control plane, but how much or how little of your data plane you manage depends on your requirements. AWS gives you three options for managing your data plane nodes.
- Unmanaged worker nodes: You will fully manage these yourself.
- Managed node groups: These worker nodes are partially managed by EKS, but you still control your resources.
- AWS Fargate: AWS fully manages your worker nodes; you never provision or manage the underlying servers.
In today's competitive cloud market, numerous providers offer managed Kubernetes services. Here is a comparison to help you pick the provider that best meets your needs.
Comparing Amazon’s EKS, Google’s GKE and Azure AKS
Before selecting a managed Kubernetes service, it's vital to know the strengths and weaknesses of each. All of them solve the same core problem of easily deploying a Kubernetes cluster, so the first consideration is where your existing workloads run: it is often easier to remain with the cloud provider you already use.
Some comparisons between the three managed services:
- Google Kubernetes Engine (GKE) has been in the market since 2015. Amazon Elastic Kubernetes Service (EKS) and Azure Kubernetes Service (AKS) have been available since 2018.
- The GKE control plane is free for one zonal cluster; otherwise, it costs about $72 per month. The EKS control plane costs about $72 per month, while the AKS control plane is free.
- AKS and GKE are easier to set up. EKS setup is slightly more involved but can be simplified using tools like eksctl.
- EKS and GKE can fully manage your worker nodes through features like Fargate and Autopilot, respectively. Currently, AKS doesn't provide a comparable feature.
Now that you understand the differences between the three managed offerings, here are some of the primary reasons to use EKS:
- It is the most widely used Kubernetes-managed service.
- Kubernetes tooling like certificate management and DNS is fully integrated with AWS.
- You can bring your own Amazon Machine Image (AMI).
- Tools like Terraform and eksctl are supported for quickly setting up your EKS cluster.
- Large user community support.
Installing EKS Using eksctl
This section will show how to set up your EKS cluster using eksctl. It is a simple command-line utility that helps you set up and manage the EKS cluster. For more information, check the documentation at https://github.com/weaveworks/eksctl.
Prerequisites
You must fulfill a few prerequisites before installing and setting up the EKS cluster using eksctl.
- Kubectl: kubectl is a command-line tool for working with your Kubernetes cluster. For more info, check the documentation at https://docs.aws.amazon.com/eks/latest/userguide/install-kubectl.html.
- AWS CLI: The AWS CLI is a command-line tool used to interact and work with AWS services. Once installed, you can use the aws configure command to set it up. For more info, check the documentation at https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-quickstart.html#cli-configure-quickstart-config.
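For example, a typical interactive setup looks like the following sketch; the key values shown are AWS's documented example placeholders, so substitute your own credentials:

# aws configure
AWS Access Key ID [None]: AKIAIOSFODNN7EXAMPLE
AWS Secret Access Key [None]: wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
Default region name [None]: us-west-2
Default output format [None]: json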
Installing Kubectl on Linux
- Download the kubectl binary from Amazon S3.
# curl -o kubectl https://s3.us-west-2.amazonaws.com/amazon-eks/1.22.6/2022-03-09/bin/linux/amd64/kubectl
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100 44.7M  100 44.7M    0     0  20.1M      0  0:00:02  0:00:02 --:--:-- 20.1M
- Change the permissions to make the binary executable.
# chmod +x ./kubectl
- Copy the binary to a directory in your $PATH so that you don’t need to type the complete path when executing it. Optionally, add the directory to your bash profile so that it is set during shell initialization.
# mkdir -p $HOME/bin && cp ./kubectl $HOME/bin/kubectl && export PATH=$PATH:$HOME/bin
# echo 'export PATH=$PATH:$HOME/bin' >> ~/.bashrc
- Verify the version of kubectl installed using the following command:
# kubectl version --short --client
Client Version: v1.22.6-eks-7d68063
NOTE: To install Kubectl on other platforms like Windows or Mac, check the following documentation: https://docs.aws.amazon.com/eks/latest/userguide/install-kubectl.html
Installing eksctl on Linux
- Download the latest release of eksctl and extract it using the following command:
# curl --silent --location "https://github.com/weaveworks/eksctl/releases/latest/download/eksctl_$(uname -s)_amd64.tar.gz" | tar xz -C /tmp
- Move the downloaded binary to /usr/local/bin or to another directory in your $PATH:
# sudo mv /tmp/eksctl /usr/local/bin
- Verify the version of eksctl installed using the following command:
# eksctl version
0.96.0
Creating Your EKS Cluster
With all the prerequisites in place, the next step is to create the EKS cluster. Run the eksctl create cluster command and pass the following options:
- create cluster will create the EKS cluster for you.
- --name is used to name your EKS cluster. If you omit this value, eksctl will generate a random name for you.
- --version lets you specify the Kubernetes version.
- --region is the name of the region where you want to set up your EKS cluster.
- --nodegroup-name is the name of the node group.
- --node-type is the instance type for the nodes (default value is m5.large).
- --nodes is the total number of worker nodes (default value is 2).
- --nodes-min specifies the minimum number of worker nodes.
# eksctl create cluster --name demo-cluster --version 1.22 --region us-west-2 --nodegroup-name standard-workers --node-type t3.medium --nodes 3 --nodes-min 1
2022-05-13 02:06:30 [ℹ] eksctl version 0.96.0
2022-05-13 02:06:30 [ℹ] using region us-west-2
2022-05-13 02:06:30 [ℹ] setting availability zones to [us-west-2d us-west-2c us-west-2b]
2022-05-13 02:06:30 [ℹ] subnets for us-west-2d - public:192.168.0.0/19 private:192.168.96.0/19
2022-05-13 02:06:30 [ℹ] subnets for us-west-2c - public:192.168.32.0/19 private:192.168.128.0/19
2022-05-13 02:06:30 [ℹ] subnets for us-west-2b - public:192.168.64.0/19 private:192.168.160.0/19
2022-05-13 02:06:30 [ℹ] nodegroup "standard-workers" will use "" [AmazonLinux2/1.22]
2022-05-13 02:06:30 [ℹ] using Kubernetes version 1.22
2022-05-13 02:06:30 [ℹ] creating EKS cluster "demo-cluster" in "us-west-2" region with managed nodes
2022-05-13 02:06:30 [ℹ] will create 2 separate CloudFormation stacks for cluster itself and the initial managed nodegroup
2022-05-13 02:06:30 [ℹ] if you encounter any issues, check CloudFormation console or try 'eksctl utils describe-stacks --region=us-west-2 --cluster=demo-cluster'
2022-05-13 02:06:30 [ℹ] Kubernetes API endpoint access will use default of {publicAccess=true, privateAccess=false} for cluster "demo-cluster" in "us-west-2"
2022-05-13 02:06:30 [ℹ] CloudWatch logging will not be enabled for cluster "demo-cluster" in "us-west-2"
2022-05-13 02:06:30 [ℹ] you can enable it with 'eksctl utils update-cluster-logging --enable-types={SPECIFY-YOUR-LOG-TYPES-HERE (e.g. all)} --region=us-west-2 --cluster=demo-cluster'
2022-05-13 02:06:30 [ℹ] 2 sequential tasks: { create cluster control plane "demo-cluster", 2 sequential sub-tasks: { wait for control plane to become ready, create managed nodegroup "standard-workers", } }
2022-05-13 02:06:30 [ℹ] building cluster stack "eksctl-demo-cluster-cluster"
2022-05-13 02:06:31 [ℹ] deploying stack "eksctl-demo-cluster-cluster"
2022-05-13 02:07:01 [ℹ] waiting for CloudFormation stack "eksctl-demo-cluster-cluster"
2022-05-13 02:07:31 [ℹ] waiting for CloudFormation stack "eksctl-demo-cluster-cluster"
2022-05-13 02:08:31 [ℹ] waiting for CloudFormation stack "eksctl-demo-cluster-cluster"
2022-05-13 02:09:32 [ℹ] waiting for CloudFormation stack "eksctl-demo-cluster-cluster"
2022-05-13 02:10:32 [ℹ] waiting for CloudFormation stack "eksctl-demo-cluster-cluster"
2022-05-13 02:11:32 [ℹ] waiting for CloudFormation stack "eksctl-demo-cluster-cluster"
2022-05-13 02:12:33 [ℹ] waiting for CloudFormation stack "eksctl-demo-cluster-cluster"
2022-05-13 02:13:33 [ℹ] waiting for CloudFormation stack "eksctl-demo-cluster-cluster"
2022-05-13 02:14:33 [ℹ] waiting for CloudFormation stack "eksctl-demo-cluster-cluster"
2022-05-13 02:15:34 [ℹ] waiting for CloudFormation stack "eksctl-demo-cluster-cluster"
2022-05-13 02:16:34 [ℹ] waiting for CloudFormation stack "eksctl-demo-cluster-cluster"
2022-05-13 02:17:34 [ℹ] waiting for CloudFormation stack "eksctl-demo-cluster-cluster"
2022-05-13 02:18:35 [ℹ] waiting for CloudFormation stack "eksctl-demo-cluster-cluster"
2022-05-13 02:20:37 [ℹ] building managed nodegroup stack "eksctl-demo-cluster-nodegroup-standard-workers"
2022-05-13 02:20:38 [ℹ] deploying stack "eksctl-demo-cluster-nodegroup-standard-workers"
2022-05-13 02:20:38 [ℹ] waiting for CloudFormation stack "eksctl-demo-cluster-nodegroup-standard-workers"
2022-05-13 02:21:08 [ℹ] waiting for CloudFormation stack "eksctl-demo-cluster-nodegroup-standard-workers"
2022-05-13 02:21:49 [ℹ] waiting for CloudFormation stack "eksctl-demo-cluster-nodegroup-standard-workers"
2022-05-13 02:22:48 [ℹ] waiting for CloudFormation stack "eksctl-demo-cluster-nodegroup-standard-workers"
2022-05-13 02:24:35 [ℹ] waiting for CloudFormation stack "eksctl-demo-cluster-nodegroup-standard-workers"
2022-05-13 02:24:35 [ℹ] waiting for the control plane availability...
2022-05-13 02:24:35 [✔] saved kubeconfig as "/root/.kube/config"
2022-05-13 02:24:35 [ℹ] no tasks
2022-05-13 02:24:35 [✔] all EKS cluster resources for "demo-cluster" have been created
2022-05-13 02:24:35 [ℹ] nodegroup "standard-workers" has 3 node(s)
2022-05-13 02:24:35 [ℹ] node "ip-192-168-19-59.us-west-2.compute.internal" is ready
2022-05-13 02:24:35 [ℹ] node "ip-192-168-47-155.us-west-2.compute.internal" is ready
2022-05-13 02:24:35 [ℹ] node "ip-192-168-92-182.us-west-2.compute.internal" is ready
2022-05-13 02:24:35 [ℹ] waiting for at least 1 node(s) to become ready in "standard-workers"
2022-05-13 02:24:35 [ℹ] nodegroup "standard-workers" has 3 node(s)
2022-05-13 02:24:35 [ℹ] node "ip-192-168-19-59.us-west-2.compute.internal" is ready
2022-05-13 02:24:35 [ℹ] node "ip-192-168-47-155.us-west-2.compute.internal" is ready
2022-05-13 02:24:35 [ℹ] node "ip-192-168-92-182.us-west-2.compute.internal" is ready
2022-05-13 02:24:38 [ℹ] kubectl command should work with "/root/.kube/config", try 'kubectl get nodes'
2022-05-13 02:24:38 [✔] EKS cluster "demo-cluster" in "us-west-2" region is ready
- You can verify that the EKS cluster is up by executing the command below.
# eksctl get cluster -r us-west-2
2022-05-13 02:26:00 [ℹ] eksctl version 0.96.0
2022-05-13 02:26:00 [ℹ] using region us-west-2
NAME            REGION      EKSCTL CREATED
demo-cluster    us-west-2   True
- To update the kubeconfig file to use your newly created EKS cluster as the current context, run the following command:
# aws eks update-kubeconfig --name demo-cluster --region us-west-2
Added new context arn:aws:eks:us-west-2:123456789:cluster/demo-cluster to /root/.kube/config
- To verify that the worker nodes are up and running, use the following:
# kubectl get nodes
NAME                                           STATUS   ROLES    AGE     VERSION
ip-192-168-19-59.us-west-2.compute.internal    Ready    <none>   7m24s   v1.22.6-eks-7d68063
ip-192-168-47-155.us-west-2.compute.internal   Ready    <none>   7m27s   v1.22.6-eks-7d68063
ip-192-168-92-182.us-west-2.compute.internal   Ready    <none>   7m25s   v1.22.6-eks-7d68063
- Deploy your application:
# kubectl create deployment my-demo-deploy --image=nginx --replicas=3
deployment.apps/my-demo-deploy created
- Verify that the pods are spread across different nodes in your cluster by using the -o wide option:
# kubectl get pods -o wide
NAME                              READY   STATUS    RESTARTS   AGE   IP               NODE                                           NOMINATED NODE   READINESS GATES
my-demo-deploy-85d855f586-d9chq   1/1     Running   0          29s   192.168.92.101   ip-192-168-92-182.us-west-2.compute.internal   <none>           <none>
my-demo-deploy-85d855f586-kqr8n   1/1     Running   0          29s   192.168.53.46    ip-192-168-47-155.us-west-2.compute.internal   <none>           <none>
my-demo-deploy-85d855f586-x7bjj   1/1     Running   0          29s   192.168.18.111   ip-192-168-19-59.us-west-2.compute.internal    <none>           <none>
Amazon CloudWatch Container Insights
Amazon EKS integrates with other AWS services like Amazon CloudWatch to collect metrics and logs for your containerized applications. CloudWatch Container Insights collects, aggregates and summarizes metrics and logs from your containerized microservices and applications. These metrics include CPU, memory, network and disk utilization. It also provides diagnostic information, such as container restart failures, to help you isolate and resolve issues quickly.
Container Insights runs a containerized version of the CloudWatch agent to discover all the running containers. It also runs as a DaemonSet on each node in the cluster, acting as a log collector with a CloudWatch plugin. It then aggregates all the collected performance data into metrics.
Installing CloudWatch Container Insights
At this stage, your EKS cluster is up and running. The next step is to install CloudWatch Container Insights to collect your metrics. But first, ensure that the required identity and access management (IAM) policy is attached to your worker node instances. In this case, you need the CloudWatchFullAccess policy so that your worker nodes can push metrics to CloudWatch.

Figure 1: EKS Worker node EC2 console

Figure 2: IAM console with policies attached to worker nodes

Figure 3: Attaching CloudWatchFullAccess policy to worker nodes
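If you prefer the CLI over the console, you can attach the policy with a single command. The role name below is illustrative; substitute the instance role actually attached to your worker nodes (visible in Figure 2):

# aws iam attach-role-policy \
    --role-name eksctl-demo-cluster-nodegroup-NodeInstanceRole-EXAMPLE \
    --policy-arn arn:aws:iam::aws:policy/CloudWatchFullAccess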
- Deploy CloudWatch Container Insights by running the following command:
# curl https://raw.githubusercontent.com/aws-samples/amazon-cloudwatch-container-insights/latest/k8s-deployment-manifest-templates/deployment-mode/daemonset/container-insights-monitoring/quickstart/cwagent-fluentd-quickstart.yaml | sed "s/{{cluster_name}}/demo-cluster/;s/{{region_name}}/us-west-2/" | kubectl apply -f -
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100 15896  100 15896    0     0   320k      0 --:--:-- --:--:-- --:--:--  323k
namespace/amazon-cloudwatch created
serviceaccount/cloudwatch-agent created
clusterrole.rbac.authorization.k8s.io/cloudwatch-agent-role created
clusterrolebinding.rbac.authorization.k8s.io/cloudwatch-agent-role-binding created
configmap/cwagentconfig created
daemonset.apps/cloudwatch-agent created
configmap/cluster-info created
serviceaccount/fluentd created
clusterrole.rbac.authorization.k8s.io/fluentd-role created
clusterrolebinding.rbac.authorization.k8s.io/fluentd-role-binding created
configmap/fluentd-config created
daemonset.apps/fluentd-cloudwatch created
- Verify that the CloudWatch agent and Fluentd pods are created in the amazon-cloudwatch namespace by running the following command:
# kubectl get all -n amazon-cloudwatch
NAME                           READY   STATUS    RESTARTS   AGE
pod/cloudwatch-agent-5295c     1/1     Running   0          4m54s
pod/cloudwatch-agent-jvxsl     1/1     Running   0          4m54s
pod/cloudwatch-agent-nncjk     1/1     Running   0          4m54s
pod/fluentd-cloudwatch-6q5m5   1/1     Running   0          4m51s
pod/fluentd-cloudwatch-9qp6f   1/1     Running   0          4m51s
pod/fluentd-cloudwatch-f8kqd   1/1     Running   0          4m51s

NAME                                 DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
daemonset.apps/cloudwatch-agent     3         3         3       3            3           <none>          4m54s
daemonset.apps/fluentd-cloudwatch   3         3         3       3            3           <none>          4m51s
- Once CloudWatch Container Insights is configured, you can see various metrics like CPU, memory, disk and network statistics across your EKS cluster.
- Go to the CloudWatch dashboard at https://console.aws.amazon.com/cloudwatch/home; under Container Insights, click on “Performance monitoring” to view the various metrics.

Figure 4: CloudWatch container insight console to view EKS cluster metrics

Figure 5: CloudWatch container insight console to view EKS node metrics
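You can also confirm from the command line that metrics are flowing in. Container Insights publishes its metrics under the ContainerInsights namespace, so a quick check is the following sketch (the output will vary with your cluster):

# aws cloudwatch list-metrics --namespace ContainerInsights --region us-west-2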
Amazon Virtual Private Cloud (VPC) for Pod Networking Using the VPC CNI Plugin
AWS EKS supports virtual private cloud (VPC) networking through the AWS VPC Container Network Interface (CNI) plugin for Kubernetes. With this plugin, pods receive IP addresses from the VPC network itself, so a pod has the same IP address inside and outside the cluster. For more information, check the following link: https://github.com/aws/amazon-vpc-cni-k8s
The CNI plugin uses EC2 to provision multiple elastic network interfaces (ENIs) on a host instance, and each interface gets multiple IP addresses from the VPC pool. The plugin assigns these IPs to pods, connects the ENI to the veth (virtual Ethernet) port created by the pod, and the Linux kernel takes care of the rest. The advantage of this approach is that each pod has a real, routable IP address allocated from the VPC and can communicate directly with other pods and AWS services.
To implement network policies, EKS uses the Calico plugin. A Calico node agent is deployed on each node in the cluster and propagates routing information among all the nodes using Border Gateway Protocol (BGP).
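The VPC CNI plugin itself runs as the aws-node DaemonSet in the kube-system namespace. A quick way to confirm it is present and check which version your cluster is running (a sketch; not required for the setup above):

# kubectl describe daemonset aws-node -n kube-system | grep amazon-k8s-cni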
Identity and Access Management (IAM) for Role-Based Access Control
For EKS, role-based access control (RBAC) manages authorization for Kubernetes commands, while identity and access management (IAM) manages both authentication and authorization for AWS commands. EKS is tightly integrated with the AWS IAM Authenticator, which uses IAM credentials to authenticate to the Kubernetes cluster. This helps you avoid managing a separate set of credentials for Kubernetes access. Once an identity is authenticated, RBAC takes over for authorization. Here is the step-by-step flow:
- Suppose you make a kubectl call to get pods. Your IAM identity is passed along with the Kubernetes API call.
- Kubernetes verifies the IAM identity using the authenticator tool.
- The authenticator passes a token-based response back to Kubernetes.
- Kubernetes checks RBAC for authorization; this is where the call is either allowed or denied.
- The Kubernetes API server then allows or denies the request.
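For example, to grant an additional IAM role access to the cluster, you can add an identity mapping, which writes an entry to the aws-auth ConfigMap that RBAC then evaluates. A sketch with placeholder values for the role ARN, username and group:

# eksctl create iamidentitymapping --cluster demo-cluster --region us-west-2 \
    --arn arn:aws:iam::1234567890:role/eks-developer \
    --username developer \
    --group dev-team

Here dev-team is a Kubernetes RBAC group; a Role or ClusterRole bound to that group (via a RoleBinding or ClusterRoleBinding) controls what the mapped identity may do.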
Amazon Elastic Container Registry (ECR)
Amazon Elastic Container Registry (ECR) is a fully managed registry for storing container images. Every AWS account comes with a single (default) ECR registry, and you can create one or more repositories in it to store container images. ECR is well integrated with other AWS services like identity and access management (IAM), which you can use to set permissions and control access. You can also use ECR to store other artifacts, like Helm charts.
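The steps below assume the repository already exists. If it doesn't, you can create one first; the repository name my-eks-repo matches the one used in the rest of this section:

aws ecr create-repository --repository-name my-eks-repo --region us-west-2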
To push your image to an ECR repository, follow these steps:
- Authenticate your Docker client to your registry by retrieving an authentication token:

aws ecr get-login-password --region us-west-2 | docker login --username AWS --password-stdin 1234567890.dkr.ecr.us-west-2.amazonaws.com
NOTE: The authentication token is valid only for 12 hours from when it is issued.
- Build your Docker image by running the command below. Skip this step if your image is already built. For more information on creating a Dockerfile from scratch, check the following link: https://docs.aws.amazon.com/AmazonECS/latest/developerguide/create-container-image.html

docker build -t my-eks-repo .
- Tag the image so that you can push it to the repository:

docker tag my-eks-repo:latest 1234567890.dkr.ecr.us-west-2.amazonaws.com/my-eks-repo:latest

NOTE: 1234567890 is a placeholder. Replace it with your own AWS account ID.
- Push the newly created image to the ECR repository:

docker push 1234567890.dkr.ecr.us-west-2.amazonaws.com/my-eks-repo:latest
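Optionally, you can confirm that the push succeeded by listing the images in the repository; this sketch should show the digest and tag of the image you just pushed:

aws ecr describe-images --repository-name my-eks-repo --region us-west-2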
Conclusion
Amazon EKS is one of the most widely used managed Kubernetes services. In this post, you learned how to set up an EKS cluster and deploy a workload to it. One of the primary advantages of using EKS is its native integration with AWS services like identity and access management and Amazon VPC. AWS also takes care of all the heavy lifting, like provisioning your cluster, patching and performing upgrades. On top of that, if you use offerings like AWS Fargate, AWS will manage your worker nodes as well.
Plug: Use K8s with Squadcast for Faster Resolution
Squadcast is an incident management tool that’s purpose-built for site reliability engineering. It allows you to get rid of unwanted alerts, receive relevant notifications and integrate with popular ChatOps tools. You can also collaborate using virtual incident war rooms and use automation to eliminate toil.