
Ubuntu Server Struggles with Post-Docker Kubernetes Installs

Kubernetes is already a challenging platform to use. To make it just as hard to get up and running, without fail, sends me right back to the simplicity of Docker Swarm.
Aug 6th, 2022 9:00am

I have a bone to pick with someone. I honestly don’t know who to point this ire toward, but there’s a big problem now with using Ubuntu Server as a base for Kubernetes.

Over the past few days, I've attempted, over and over, to get Kubernetes up and running on Ubuntu Server 22.04, and no matter how many times I've tried, it fails. Now, I can get Kubernetes installed on Ubuntu Server without a problem, as I've done so many times before. The only difference is that now, instead of using Docker, I have to use a runtime like containerd. However, when attempting to initialize the cluster, I am (every time) met with an error telling me the control plane never came up.


It doesn't matter whether I'm coming from a fresh install or have just run a sudo kubeadm reset; the initialization times out and never completes. I've attempted this three times (each time with a new instance of Ubuntu Server 22.04) and it has never succeeded.
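For context, the install path I'm describing is roughly the following sketch, not the full walkthrough (the Kubernetes apt repository has to be added per the upstream docs before the kubelet/kubeadm/kubectl step, and exact package versions will vary):

sudo apt-get install -y containerd

sudo apt-get install -y apt-transport-https ca-certificates curl

sudo apt-get install -y kubelet kubeadm kubectl

sudo kubeadm init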

The issue sent me down a rabbit hole that offered some promise: the latest version of containerd reportedly had problems when installed on Ubuntu Server. But even after attempting a new deployment with an older version of containerd, I wound up with the same problem.
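For anyone who wants to repeat that part of the experiment, downgrading containerd through apt looks roughly like this (the version string is a placeholder; list what your repository actually offers first, and note the package is containerd.io if you installed it from Docker's repository):

apt-cache madison containerd

sudo apt-get install -y containerd=<older-version>

sudo apt-mark hold containerd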

Suffice it to say, I've come away from this little experiment frustrated. A Kubernetes cluster on Ubuntu Server 22.04 should be a no-brainer. It's not. I can get a single instance running just fine and even deploy an app with it, but the second I want to go the cluster route, things go seriously awry.

Drilldown

What's interesting about that error is that the kubelet is running. However, when running:

sudo systemctl status kubelet

I see errors like this:

Aug 04 13:56:19 kubecontroller kubelet[550949]: E0804 13:56:19.613305  550949 kubelet.go:2424] "Error getting node" err="node \"kubecontroller\" not found"

Aug 04 13:56:19 kubecontroller kubelet[550949]: E0804 13:56:19.714099  550949 kubelet.go:2424] "Error getting node" err="node \"kubecontroller\" not found"

Aug 04 13:56:19 kubecontroller kubelet[550949]: E0804 13:56:19.814923  550949 kubelet.go:2424] "Error getting node" err="node \"kubecontroller\" not found"

The next rabbit hole had to do with the ~/.kube/config file. Even after re-checking the permissions on that file, I had issues.
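For reference, the standard way that file gets created after a successful kubeadm init (kubeadm itself prints this sequence once the control plane is up) is:

mkdir -p $HOME/.kube

sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config

sudo chown $(id -u):$(id -g) $HOME/.kube/config

With the permissions sorted out, I restarted the kubelet with: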

sudo systemctl restart kubelet

Guess what? Now, kubelet won’t start.

Let the hair pulling commence!

I gave the system a quick reboot to see if it might flush out whatever nastiness remained. Once the machine came back up, I ran the init command again so I could view more troubleshooting information, like so:

sudo kubeadm init
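If you want more detail than the default output, kubeadm also honors a klog-style verbosity flag; something like this turns up the logging:

sudo kubeadm init --v=5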

Guess what? New errors such as:

error execution phase wait-control-plane

k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).Run.func1    cmd/kubeadm/app/cmd/phases/workflow/runner.go:235

k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).visitAll    cmd/kubeadm/app/cmd/phases/workflow/runner.go:421

k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).Run    cmd/kubeadm/app/cmd/phases/workflow/runner.go:207

k8s.io/kubernetes/cmd/kubeadm/app/cmd.newCmdInit.func1    cmd/kubeadm/app/cmd/init.go:153

k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).execute    vendor/github.com/spf13/cobra/command.go:856

k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).ExecuteC    vendor/github.com/spf13/cobra/command.go:974

k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).Execute    vendor/github.com/spf13/cobra/command.go:902

k8s.io/kubernetes/cmd/kubeadm/app.Run    cmd/kubeadm/app/kubeadm.go:50

main.main    cmd/kubeadm/kubeadm.go:25

runtime.main    /usr/local/go/src/runtime/proc.go:250

runtime.goexit    /usr/local/go/src/runtime/asm_amd64.s:1571

Obviously, that’s zero help. And no matter how much time I spend with my friend Google, I can’t find an answer to that which ails me.

Back to the drawing board. Another installation and the same results. And, of course, the official Kubernetes documentation is absolutely zero help.

What’s the conclusion to be drawn and what can you do (when Ubuntu Server is your go-to)?

It’s All about Docker

Once upon a time, deploying a Kubernetes cluster on Ubuntu Server was incredibly simple and rarely (if ever) failed. What's the difference?

In a word… Docker.

The second Kubernetes removed its built-in Docker support (the dockershim), deploying a cluster on Ubuntu Server became an absolute nightmare. With that in mind, what can you do? Well, you can always install microk8s via snap with the command:

sudo snap install microk8s --classic --channel=1.24

Of course, a snap installation can take some time, and snap-packaged apps tend not to be as responsive as a standard installation. However… when in Rome.

Once the installation completes, add your user to the necessary group with:

sudo usermod -a -G microk8s $USER

Change the permissions of the .kube directory with:

sudo chown -f -R $USER ~/.kube

Log out and log back in and then run the command:

microk8s status --wait-ready

Big bada boom, all is working. I can deploy an NGINX app with:

microk8s kubectl create deployment nginx --image=nginx
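To confirm the deployment actually comes up, the usual checks work; they just need the microk8s prefix:

microk8s kubectl get deployments

microk8s kubectl get pods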

But this isn’t a cluster. Ah, but microk8s has you covered. On the controller node, issue the command:

microk8s add-node

You will be given a join command to run on any other machine where you've installed microk8s; it looks like this:

microk8s join 192.168.1.43:25000/bad12d3d8966b646442087d6a1edde436/6407f3e21772

Oh, but guess what? That'll error out on you as well with something like:

Contacting cluster at 192.168.1.43

Connection failed. Invalid token (500).

I rebooted both machines, reran the add-node command again with the --skip-verify flag, and no dice.

I made sure hostnames were set for both machines, that those hostnames were mapped in /etc/hosts, and double-checked that the time was correct on both machines. No deal.
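For what it's worth, the /etc/hosts mapping in question is just a pair of entries like these on each machine (the controller's IP and hostname come from above; the worker's are hypothetical):

192.168.1.43   kubecontroller
192.168.1.44   kubeworker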

However, after the second reboot of both machines, for whatever reason, the node was able to join the controller. The process took far longer than it should have, but I associate that with using the snap version of the service.

After running microk8s kubectl get nodes, I can now see both nodes on my cluster.

Huzzah.

Why Make It Difficult?

To those involved… this shouldn’t be so hard. Seriously. Kubernetes is already a challenging platform to use. To make it just as hard to get up and running, without fail, sends me right back to the simplicity of Docker Swarm.

Sure, I could migrate to an RHEL-based server for my Kubernetes deployments, but Ubuntu Server has been my go-to for years. Don't get me wrong, I don't mind microk8s, but out of the box it won't work with the likes of Portainer (which is my go-to for these sorts of things). For that, I have to enable the Portainer addon with:

microk8s enable community

microk8s enable portainer
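A quick way to see whether the addon actually deployed anything is to list the pods in its namespace:

microk8s kubectl get pods -n portainer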

Outstanding… only not. What's the problem now? After enabling Portainer, you are informed that you access it via a NodePort service, the port of which you can get using the command:

export NODE_PORT=$(kubectl get --namespace portainer -o jsonpath="{.spec.ports[1].nodePort}" services portainer)

But wait… kubectl isn’t installed because I’m using microk8s! I have to modify the command like so:

export NODE_PORT=$(microk8s kubectl get --namespace portainer -o jsonpath="{.spec.ports[1].nodePort}" services portainer)

Then you must run these two commands (again modifying the first to include microk8s):

export NODE_IP=$(microk8s kubectl get nodes --namespace portainer -o jsonpath="{.items[0].status.addresses[0].address}")

echo https://$NODE_IP:$NODE_PORT

The final command will report the address used to access Portainer.

Wouldn’t it be great if that worked? It didn’t.

Guess what? All of this came straight from the official documentation (minus the microk8s prefix on the commands, which was conveniently left out).

To those responsible for these bits of technology, this shouldn’t be so hard. I realize there might be bad blood between Kubernetes and Canonical, but when that spills out to the userspace, the real frustration falls on the heads of admins and developers.

Please, please, please, fix these problems so those who prefer Ubuntu Server can get their Kubernetes on with the same ease they once could.

TNS owner Insight Partners is an investor in: Docker.