Access AWS Services Through a Kubernetes Dual-Stack Cluster

In the first part of this series, “Access AWS Services Through a Kubernetes Dual-Stack Cluster,” we connected a Kubernetes dual IPv4/IPv6 stack cluster to Amazon Web Services’ service APIs with the AWS cloud-controller-manager (aws-ccm), deployed from the AWS cloud-provider manifest.
In this second part, we will discuss how to deploy aws-ccm using a systemd service file.
Prerequisite: You will need a Kubernetes cluster running on the AWS cloud with the Kubernetes dual-stack feature enabled.
Note: If you want to use a regular IPv4 cluster, the steps remain the same, but you don’t have to enable the dual-stack feature.
First step: make sure you have Go installed on your machine.
After that, clone the AWS cloud-provider repository and build the aws-ccm binary from it:
/cloud-provider-aws/cmd/aws-cloud-controller-manager$ go build main.go
I generally copy main.go to aws-ccm.go and build the binary from that; it’s totally up to you.
go build aws-ccm.go
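For reference, here is a minimal sketch of the whole clone-and-build sequence; the install path under /usr/local/bin is my assumption, so adjust it to wherever the systemd unit shown later will point:

# clone the upstream cloud-provider-aws repository
git clone https://github.com/kubernetes/cloud-provider-aws.git
cd cloud-provider-aws/cmd/aws-cloud-controller-manager

# build the binary (optionally after copying main.go to aws-ccm.go, as above)
go build -o aws-cloud-controller-manager main.go

# assumption: install it where the systemd unit's ExecStart will look for it
sudo cp aws-cloud-controller-manager /usr/local/bin/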
Second step: we will set up aws-ccm using a systemd service file.
The initial steps remain the same as those we discussed in part one.
You will need to make the following changes to your kube-apiserver service file, adding these flags if you have not already done so:
--requestheader-client-ca-file=front-proxy-ca.pem \
--requestheader-allowed-names=front-proxy-client \
--requestheader-extra-headers-prefix=X-Remote-Extra- \
--requestheader-group-headers=X-Remote-Group \
--requestheader-username-headers=X-Remote-User \
--proxy-client-cert-file=proxy-client.pem \
--proxy-client-key-file=proxy-client-key.pem \
--enable-aggregator-routing=true
If you are not running kube-proxy on the host running the API server, then you must make sure the following kube-apiserver flag is enabled:
--enable-aggregator-routing=true
Let’s create the required certificates for the above flags. Create a new CA certificate for the front proxy; don’t reuse the one we used for the API server.
sudo cat >> front-proxy-ca-config.json <<EOF
...
EOF

sudo cat > front-proxy-ca-csr.json <<EOF
{
  "CN": "kubernetes",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "Country",
      "ST": "State",
      "L": "Location",
      "O": "Kubernetes",
      "OU": "System"
    }
  ]
}
EOF

sudo cfssl gencert -initca front-proxy-ca-csr.json | cfssljson -bare front-proxy-ca
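The proxy-client certificate is generated the same way and then signed with the front proxy CA. A minimal sketch of how I complete it, assuming the client CN is front-proxy-client (to match --requestheader-allowed-names) and that front-proxy-ca-config.json defines a kubernetes signing profile:

sudo cat > proxy-client-csr.json <<EOF
{
  "CN": "front-proxy-client",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "Country",
      "ST": "State",
      "L": "Location",
      "O": "Kubernetes",
      "OU": "System"
    }
  ]
}
EOF

sudo cfssl gencert \
  -ca=front-proxy-ca.pem \
  -ca-key=front-proxy-ca-key.pem \
  -config=front-proxy-ca-config.json \
  -profile=kubernetes \
  proxy-client-csr.json | cfssljson -bare proxy-client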
A bunch of certificate and key files will be created, including front-proxy-ca.pem, front-proxy-ca-key.pem, proxy-client.pem and proxy-client-key.pem (the file names the kube-apiserver flags above refer to).
Don’t forget to reload the kube-apiserver service file:
sudo systemctl daemon-reload
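Then restart the API server so the new flags take effect; the unit name kube-apiserver.service is an assumption here, so use whatever your cluster calls it:

sudo systemctl restart kube-apiserver
sudo systemctl status kube-apiserver --no-pager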
Third step: Cloud Controller Manager Client Certificate
Generate the cloud-controller-manager client certificate and private key:
sudo cat > cloud-controller-manager-csr.json <<EOF
{
  "CN": "cloud-controller-manager",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "Country",
      "ST": "State",
      "L": "Location",
      "O": "cloud-controller-manager",
      "OU": "System"
    }
  ]
}
EOF

sudo cfssl gencert \
  -ca=ca.pem \
  -ca-key=ca-key.pem \
  -config=ca-config.json \
  -profile=kubernetes \
  cloud-controller-manager-csr.json | cfssljson -bare cloud-controller-manager
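To sanity-check the result, you can inspect the subject of the generated certificate with openssl (the file names follow the cfssljson -bare prefix above):

openssl x509 -in cloud-controller-manager.pem -noout -subject -issuer
# the subject should contain CN = cloud-controller-manager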
Fourth step: cloud-controller-manager Kubernetes configuration file
Generate a kubeconfig file for the cloud-controller-manager service:
kubectl config set-cluster kubernetes \
  --certificate-authority=ca.pem \
  --embed-certs=true \
  --server=https://[::1]:6443 \
  --kubeconfig=cloud-controller-manager.kubeconfig

kubectl config set-credentials cloud-controller-manager \
  --client-certificate=cloud-controller-manager.pem \
  --client-key=cloud-controller-manager-key.pem \
  --embed-certs=true \
  --kubeconfig=cloud-controller-manager.kubeconfig

kubectl config set-context default \
  --cluster=kubernetes \
  --user=cloud-controller-manager \
  --kubeconfig=cloud-controller-manager.kubeconfig

kubectl config use-context default --kubeconfig=cloud-controller-manager.kubeconfig
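Before wiring it into systemd, you can verify the kubeconfig points at the right cluster and user:

kubectl config get-contexts --kubeconfig=cloud-controller-manager.kubeconfig
kubectl config view --kubeconfig=cloud-controller-manager.kubeconfig --minify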
Your systemd service file, cloud-controller-manager.service, should look somewhat like this:
[Unit]
Description=AWS Cloud Controller Manager
Documentation=https://github.com/kubernetes/kubernetes

[Service]
ExecStart=/your-aws-binary-location/aws-cloud-controller-manager \
  --cluster-name=kubernetes \
  --cloud-provider=aws \
  --authentication-kubeconfig=/file-path/cloud-controller-manager.kubeconfig \
  --authorization-kubeconfig=/file-path/cloud-controller-manager.kubeconfig \
  --kubeconfig=/file-path/cloud-controller-manager.kubeconfig \
  --allocate-node-cidrs=true \
  --requestheader-client-ca-file=/file-path/front-proxy-ca.pem \
  --requestheader-allowed-names="front-proxy-client" \
  --requestheader-extra-headers-prefix=X-Remote-Extra- \
  --requestheader-group-headers=X-Remote-Group \
  --client-ca-file=/file-path/ca.pem \
  --cloud-config=/file-path/cloud-config.conf \
  --configure-cloud-routes=false \
  --leader-elect=true \
  --leader-elect-lease-duration="15s" \
  --leader-elect-renew-deadline="10s" \
  --leader-elect-resource-lock="leases" \
  --leader-elect-resource-name="cloud-controller-manager" \
  --leader-elect-retry-period="2s" \
  --use-service-account-credentials="true" \
  --v=2
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
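After dropping the unit file into place, reload systemd and enable the unit. I am assuming the path /etc/systemd/system/cloud-controller-manager.service here; we will actually start the service once the RBAC below is applied:

sudo cp cloud-controller-manager.service /etc/systemd/system/
sudo systemctl daemon-reload
sudo systemctl enable cloud-controller-manager.service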
Fifth step: create an RBAC file, aws-ccm-rbac.yaml
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: cloud-controller-manager
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cloud-controller-manager
subjects:
- kind: User
  name: cloud-controller-manager
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: cloud-controller-manager
rules:
- apiGroups:
  - ""
  resources:
  - events
  verbs:
  - create
  - patch
  - update
- apiGroups:
  - ""
  resources:
  - serviceaccounts
  - serviceaccounts/token
  - configmaps
  - endpoints
  - namespaces
  - secrets
  verbs:
  - create
  - get
- apiGroups:
  - coordination.k8s.io
  resources:
  - leases
  verbs:
  - create
  - get
  - list
  - update
  - watch
- apiGroups:
  - ""
  resourceNames:
  - node-controller
  - service-controller
  - route-controller
  resources:
  - serviceaccounts/token
  - secrets
  verbs:
  - create
  - get
- apiGroups:
  - ""
  resources:
  - events
  verbs:
  - create
  - patch
  - update
- apiGroups:
  - ""
  resources:
  - serviceaccounts
  verbs:
  - create
- apiGroups:
  - coordination.k8s.io
  resources:
  - leases
  verbs:
  - create
  - get
  - list
  - update
  - watch
- apiGroups:
  - ""
  resourceNames:
  - node-controller
  - service-controller
  - route-controller
  resources:
  - serviceaccounts/token
  verbs:
  - create
- apiGroups:
  - ""
  resources:
  - persistentvolumes
  - services
  - secrets
  - endpoints
  - serviceaccounts
  verbs:
  - get
  - list
  - watch
  - create
  - update
  - patch
- apiGroups:
  - ""
  resources:
  - nodes
  verbs:
  - get
  - list
  - watch
  - delete
  - patch
  - update
- apiGroups:
  - ""
  resources:
  - services/status
  verbs:
  - update
  - patch
- apiGroups:
  - ""
  resources:
  - nodes/status
  verbs:
  - patch
  - update
- apiGroups:
  - ""
  resources:
  - events
  - endpoints
  verbs:
  - create
  - patch
  - update
Note: the user name in the RBAC file should be the same one we set in the kubeconfig’s default context (cloud-controller-manager).
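Apply the file and, if you want a quick sanity check, impersonate the user to confirm the binding works (this assumes you kept the user name cloud-controller-manager throughout):

kubectl apply -f aws-ccm-rbac.yaml
kubectl auth can-i list nodes --as=cloud-controller-manager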
Once your data plane is up and running, start the cloud-controller-manager service. After 5-10 seconds you can start your node; make sure to deploy a Container Network Interface (CNI) plugin after that so the node reaches the Ready state, otherwise you will get the warning below:
node_controller.go:354] Specified Node IP not found in cloudprovider for node "ip-172-31-79-7.ec2.internal"
It won’t error out, though; the warning just means the aws-ccm node-controller is unable to fetch the node information from AWS.
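Once the CNI plugin is running and aws-ccm has picked the node up, you can confirm the cloud provider details were filled in. The node name below is the one from the warning above, so substitute your own:

journalctl -u cloud-controller-manager.service -f
kubectl get nodes -o wide
kubectl get node ip-172-31-79-7.ec2.internal -o jsonpath='{.spec.providerID}{"\n"}'
# a populated providerID (aws:///<az>/<instance-id>) means aws-ccm matched the node in AWS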
The scenarios I tested for the aws-ccm manifest in part one behave the same with the aws-ccm systemd service.
Load Balancer as a Service
I tried provisioning a network load balancer as a dual-stack deployment by adding the annotation to the service file, but it did not work for me; I had to change a few settings manually.
Although the load balancer is provisioned, the instance shows as active but unhealthy, so it’s still a work in progress.
It’s on my to-do list and probably warrants its own post.
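For anyone who wants to experiment, here is a sketch of the kind of Service I was testing with. The aws-load-balancer-type: nlb annotation comes from the AWS cloud provider, while the ip-address-type: dualstack annotation is documented for the AWS Load Balancer Controller, so treat this as a starting point rather than a working recipe:

apiVersion: v1
kind: Service
metadata:
  name: kuard-nlb
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-type: "nlb"
    # documented for the AWS Load Balancer Controller; may not be honored by aws-ccm alone
    service.beta.kubernetes.io/aws-load-balancer-ip-address-type: "dualstack"
spec:
  type: LoadBalancer
  ipFamilyPolicy: PreferDualStack
  ipFamilies:
  - IPv4
  - IPv6
  selector:
    app: kuard
  ports:
  - port: 80
    targetPort: 8080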
IPv4 as Preferred Dual-Stack Service
In the kuard Kubernetes Service file, I set IPv4 as the preferred IP family by listing it first:
ipFamilyPolicy: PreferDualStack
ipFamilies:
- IPv4
- IPv6
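For context, here is a minimal kuard Service sketch with that ordering; the name, selector and ports are my assumptions about how kuard is deployed (kuard listens on 8080 by default):

apiVersion: v1
kind: Service
metadata:
  name: kuard
spec:
  ipFamilyPolicy: PreferDualStack
  ipFamilies:
  - IPv4
  - IPv6
  selector:
    app: kuard
  ports:
  - port: 80
    targetPort: 8080

With IPv4 first, kubectl get svc kuard -o jsonpath='{.spec.clusterIPs}' should list the IPv4 address first, and .spec.clusterIP will be the IPv4 one.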
IPv6 as Preferred Dual-Stack Service
And if I set IPv6 as the preferred IP family by listing it first:
ipFamilyPolicy: PreferDualStack
ipFamilies:
- IPv6
- IPv4
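The same check applies here: with IPv6 listed first, the primary cluster IP comes from the IPv6 range.

kubectl get svc kuard -o jsonpath='{.spec.ipFamilies}{"\n"}{.spec.clusterIPs}{"\n"}'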
So, this is what’s up with AWS cloud-controller-manager in Kubernetes dual-stack.