
Access AWS Services Through a Kubernetes Dual-Stack Cluster

We connected a Kubernetes dual IPv4/IPv6 stack with Amazon Web Services' service APIs
Dec 10th, 2021 10:00am by Saurabh Modi
Feature image via Pixabay.

Saurabh Modi
Saurabh Modi is an IT professional with over a decade of experience, ranging from business intelligence, statistical analysis and application development to production support and Kubernetes cloud infrastructure. He has worked with companies ranging from consultancies to large fintech and corporate firms, using unique and creative solutions to solve problems.

In the first part of this series, “Access AWS Services Through a Kubernetes Dual-Stack Cluster,” we connected a Kubernetes dual IPv4/IPv6 stack cluster to Amazon Web Services‘ service APIs using the AWS cloud-controller-manager (aws-ccm), deployed via an AWS cloud-provider manifest.

In this second part, we will discuss how to deploy aws-ccm using a systemd service file.

Prerequisite: You will need a Kubernetes cluster running on AWS Cloud with k8s dual-stack features enabled.

Note: If you want to use a regular IPv4 cluster, the steps remain the same, but you don’t have to enable the dual-stack feature.

First step: make sure you have Go installed on your machine.

After that, you will have to clone the cloud-provider-aws repository and build the aws-ccm binary from it:

/cloud-provider-aws/cmd/aws-cloud-controller-manager$ go build main.go 

I generally copy main.go into aws-ccm.go and build the binary from that, but it’s entirely up to you:

go build aws-ccm.go

Second step: we will set up aws-ccm using a systemd service file.

The initial steps remain the same as those we discussed in part one.

You will need to make the following changes to your kube-apiserver service file and add these flags if you have not done so:

If you are not running kube-proxy on the host running the API server, you must also make sure the following kube-apiserver flag is enabled:
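The flag values themselves didn’t survive in this excerpt. As a hedged reconstruction based on the standard Kubernetes aggregation-layer setup (the file paths are placeholders to adapt to your cluster), the kube-apiserver flags usually involved are:

```
--requestheader-client-ca-file=/var/lib/kubernetes/front-proxy-ca.pem
--requestheader-allowed-names=front-proxy-client
--requestheader-extra-headers-prefix=X-Remote-Extra-
--requestheader-group-headers=X-Remote-Group
--requestheader-username-headers=X-Remote-User
--proxy-client-cert-file=/var/lib/kubernetes/front-proxy-client.pem
--proxy-client-key-file=/var/lib/kubernetes/front-proxy-client-key.pem
```

And, when kube-proxy is not running on the API server host:

```
--enable-aggregator-routing=true
```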


Let’s create the required certificates for the above flags. Create a new CA certificate for the front proxy; don’t reuse the one we used for the API server.
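The exact commands aren’t reproduced in this excerpt. As a sketch using openssl (the subject names and validity period are assumptions for illustration; the original may have used a different tool), generating a separate front-proxy CA and client certificate could look like this:

```shell
# Generate a dedicated CA for the front proxy
# (do NOT reuse the API server CA)
openssl genrsa -out front-proxy-ca-key.pem 2048
openssl req -x509 -new -nodes -key front-proxy-ca-key.pem \
  -subj "/CN=front-proxy-ca" -days 365 -out front-proxy-ca.pem

# Generate the front-proxy client certificate, signed by that CA
openssl genrsa -out front-proxy-client-key.pem 2048
openssl req -new -key front-proxy-client-key.pem \
  -subj "/CN=front-proxy-client" -out front-proxy-client.csr
openssl x509 -req -in front-proxy-client.csr \
  -CA front-proxy-ca.pem -CAkey front-proxy-ca-key.pem -CAcreateserial \
  -days 365 -out front-proxy-client.pem
```

The CN `front-proxy-client` must then appear in the kube-apiserver’s `--requestheader-allowed-names` list.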

A bunch of certificates will be created:

Don’t forget to reload the kube-apiserver service file:

sudo systemctl daemon-reload

Third step: Cloud Controller Manager Client Certificate

Generate the cloud-controller-manager client certificate and private key:
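The commands aren’t shown in this excerpt. A sketch with openssl follows; the CN value is an assumption, and the throwaway CA here is for following along in isolation only — in a real cluster, sign with the cluster CA (ca.pem / ca-key.pem) you already have from part one:

```shell
# Throwaway cluster CA, for illustration only; in a real setup,
# reuse the existing cluster CA (ca.pem / ca-key.pem) from part one.
openssl genrsa -out ca-key.pem 2048
openssl req -x509 -new -nodes -key ca-key.pem \
  -subj "/CN=kubernetes-ca" -days 365 -out ca.pem

# Client certificate and private key for the cloud-controller-manager
openssl genrsa -out cloud-controller-manager-key.pem 2048
openssl req -new -key cloud-controller-manager-key.pem \
  -subj "/CN=cloud-controller-manager" -out cloud-controller-manager.csr
openssl x509 -req -in cloud-controller-manager.csr \
  -CA ca.pem -CAkey ca-key.pem -CAcreateserial \
  -days 365 -out cloud-controller-manager.pem
```

The CN becomes the user name Kubernetes sees, which is why it has to line up with the kubeconfig and RBAC user later on.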

Fourth step: cloud-controller-manager Kubernetes configuration file

Generate a kubeconfig file for the cloud-controller-manager service:
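The generation steps are missing from this excerpt; a hedged sketch with `kubectl config` follows (the server address is a placeholder for your API server endpoint, and the cert file names assume the previous step):

```shell
# Hypothetical sketch: build cloud-controller-manager.kubeconfig
kubectl config set-cluster kubernetes \
  --certificate-authority=ca.pem --embed-certs=true \
  --server=https://127.0.0.1:6443 \
  --kubeconfig=cloud-controller-manager.kubeconfig

kubectl config set-credentials cloud-controller-manager \
  --client-certificate=cloud-controller-manager.pem \
  --client-key=cloud-controller-manager-key.pem --embed-certs=true \
  --kubeconfig=cloud-controller-manager.kubeconfig

kubectl config set-context default \
  --cluster=kubernetes --user=cloud-controller-manager \
  --kubeconfig=cloud-controller-manager.kubeconfig

kubectl config use-context default \
  --kubeconfig=cloud-controller-manager.kubeconfig
```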

Your systemd service file, cloud-controller-manager.service, should look somewhat like this:
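The unit file itself is missing from this excerpt; a minimal sketch follows, in which the binary path, kubeconfig path and flags are placeholders to adapt to your setup:

```
[Unit]
Description=AWS cloud-controller-manager
After=network.target

[Service]
ExecStart=/usr/local/bin/aws-ccm \
  --cloud-provider=aws \
  --kubeconfig=/var/lib/kubernetes/cloud-controller-manager.kubeconfig \
  --v=2
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
```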

Fifth step: create an RBAC file, aws-ccm-rbac.yaml.

Note: in the RBAC file, the user name should be the same as the one in the kubeconfig default context.
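The file contents aren’t reproduced here. A minimal sketch follows; the rules below are illustrative, not exhaustive (the cloud-provider-aws repository ships a fuller RBAC manifest):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: cloud-controller-manager
rules:
  - apiGroups: [""]
    resources: ["nodes", "nodes/status"]
    verbs: ["get", "list", "watch", "patch", "update"]
  - apiGroups: [""]
    resources: ["services", "services/status"]
    verbs: ["get", "list", "watch", "patch", "update"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "patch", "update"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: cloud-controller-manager
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cloud-controller-manager
subjects:
  - apiGroup: rbac.authorization.k8s.io
    kind: User
    name: cloud-controller-manager  # must match the kubeconfig user
```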

Once your control plane is up and running, start the cloud-controller-manager service. After five to 10 seconds, you can start your node; make sure to deploy a Container Network Interface (CNI) plugin after that so the node reaches the Ready state, otherwise you will get the warning below:

1 node_controller.go:354] Specified Node IP not found in cloudprovider for node "ip-172-31-79-7.ec2.internal"

It won’t error out, though; the warning means the aws-ccm node controller is unable to fetch the node’s information from AWS.

The scenarios I tested for the aws-ccm manifest remain the same for the aws-ccm systemd service.

aws-ccm manifest

Load Balancer as a Service

I tried provisioning a network load balancer as a dual-stack deployment by adding the annotation in the service file, but it did not work for me; I had to manually change a few settings.

Although the load balancer is provisioned, the instance registers as active but unhealthy, so it’s still a work in progress.

It’s on my to-do list and probably warrants its own post.

IPv4 as Preferred Dual-Stack Service

In the kuard k8s service file, if I list IPv4 first in the ipFamilies order:

ipFamilyPolicy: PreferDualStack
ipFamilies:
  - IPv4
  - IPv6
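Putting that in context, a full kuard Service manifest with this ordering might look like the sketch below; the name, selector and ports are assumptions based on the usual kuard demo setup:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: kuard
spec:
  type: ClusterIP
  ipFamilyPolicy: PreferDualStack
  ipFamilies:
    - IPv4   # preferred family comes first
    - IPv6
  selector:
    app: kuard
  ports:
    - port: 80
      targetPort: 8080
```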

IPv6 as Preferred Dual-Stack Service

And if I list IPv6 first:

ipFamilyPolicy: PreferDualStack
ipFamilies:
  - IPv6
  - IPv4

So, this is what’s up with the AWS cloud-controller-manager in a Kubernetes dual-stack cluster.
