My Further Adventures (and More Success) with Rancher
When last we met, I offered my first-run experience with the SUSE Rancher Kubernetes distribution, which did not go well. On top of that, my reporting of those first steps did not sit well with the Rancher community. I get that and respect all of the input I received. Because of that, I decided to give Rancher another go and report on my latest findings.
Suffice it to say, while it was still a mixed bag, I did have more success with the platform.
It’s All about Minimum Requirements
The first person to reach out to me knows a thing or two about a thing or two and made a specific point to say that in no way could I get the latest version of Rancher to spin up properly when using the minimum suggested system requirements. In fact, this person (who shall remain anonymous) informed me that 8GB of memory is the bare minimum to use when deploying Rancher.
With that bit of information in hand, I spun up a new virtual machine with 8GB of RAM and 4 cores. After installing Docker in my usual fashion, I deployed the Rancher container and hoped for the best.
To my surprise, it worked. I now had a running Rancher container and could take the next steps.
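For reference, the deployment followed Rancher's documented single-node Docker install; the exact tag you pull may differ depending on which release you want:

```shell
# Single-node Rancher install as a Docker container.
# --privileged is required by recent Rancher releases;
# ports 80/443 serve the Rancher UI and API.
docker run -d --restart=unless-stopped \
  -p 80:80 -p 443:443 \
  --privileged \
  rancher/rancher:latest
```

Once the container is up, point a browser at the host's IP address and Rancher walks you through setting the admin password.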
It’s All about the Right Versions
This all started at around 9:30 PM last night. I then copied the Docker command and ran it on a MicroK8s cluster I’d had up and running for some time. This cluster was running perfectly, so I had every confidence Rancher would be able to successfully provision and I’d be working with the tool without a problem.
But there’s this guy named Murphy who has a law…
I went about writing some fiction, checking in on Rancher every now and then, only to see that (by 11:00 PM) Rancher still had not provisioned the MicroK8s cluster. When I woke up this morning, one of the first things I checked was the status of the provisioning and, after nearly 12 hours, it still had not succeeded.
Something was wrong.
One person who commented on the original article works at SUSE. In one of his comments, the engineer stated, “There’s a potential version issue between MicroK8s and the Rancher-supported versions.”
Apparently, the latest version of MicroK8s won’t work with Rancher. To get around that, I would need to install MicroK8s version 1.25, which is done with the command:
sudo snap install microk8s --channel=1.25/stable --classic
If you already have an unsupported version of MicroK8s installed, remove it with:
sudo snap remove microk8s
The need to install a specific version of MicroK8s isn't mentioned in the Rancher documentation.
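With the 1.25 channel installed on each node, it's worth confirming the cluster is healthy before handing it to Rancher. These are standard MicroK8s commands:

```shell
# Block until MicroK8s reports itself ready on this node
microk8s status --wait-ready

# Confirm all three nodes have joined and are in the Ready state
microk8s kubectl get nodes
```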
Nonetheless, with a supported version of MicroK8s installed on a three-node cluster, it was time to see whether Rancher could provision it. Here's how that's done.
Log into Rancher with the admin credentials you set when you first logged into the service. Once logged in, click the Create button for clusters (Figure 1).
This time, I added the new cluster with the supported version of MicroK8s using the command displayed by Rancher (Figure 2) and crossed my fingers it would successfully provision.
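Rancher generates that registration command for you, so yours will embed your own server address and token. For registering an existing cluster such as MicroK8s, it generally takes this shape (the placeholders here are illustrative, not my actual values):

```shell
# Run on the existing MicroK8s cluster you want Rancher to manage.
# <RANCHER_SERVER> and <TOKEN> are placeholders for the values
# Rancher generates when you create the cluster in the UI.
microk8s kubectl apply -f https://<RANCHER_SERVER>/v3/import/<TOKEN>.yaml
```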
When the deployment command completed, I clicked Done and waited for the provisioning response. To my surprise, after creating the new cluster, I wound up seeing the following error on the local machine:
Failed to get GlobalRoleBinding for 'globaladmin-user-hbgvw': %!w(<nil>)
A quick Google search clued me in that this is a known bug that shouldn’t affect anything.
At this point, it’s a matter of waiting to see if the cluster successfully provisions. It’s been over 20 minutes so far and nothing (Figure 3).
I decided to delete the failed cluster and hope that the new cluster would be able to provision. After 40 minutes the cluster had still yet to provision.
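When provisioning stalls like this, the cattle-system namespace on the downstream cluster is the first place to look, since that's where Rancher's agent runs. These are standard kubectl checks (the label selector assumes the default agent deployment name):

```shell
# Check whether the Rancher agent pods are running on the MicroK8s cluster
microk8s kubectl -n cattle-system get pods

# Tail the cluster agent's logs for registration or connection errors
microk8s kubectl -n cattle-system logs \
  -l app=cattle-cluster-agent --tail=50
```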
The Local Cluster
Fortunately, this time around I could at least play with the local cluster, which does actually make it very easy to deploy applications. I am guessing, however, that the local cluster is not to be used for production deployments.
Even so, it was at least a means for me to see just how powerful and easy-to-use Rancher actually is. To deploy an app, select your local cluster and then click Apps > Charts. Of course, if this were a production environment, you’d want to make sure to select a provisioned cluster (instead of Local). In the resulting window (Figure 4), you can select from quite a good number of apps that can be installed thanks to Helm.
Select the app you want to install and, on the resulting page, click Install (Figure 5).
You will then be greeted by an installation wizard that allows you to configure the deployment of the app. Which app you choose will determine the steps in the wizard.
With Cassandra, all I had to do was configure the namespace and then I was presented with the YAML file for further customization (if needed). Click Install again and Rancher will do its thing.
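Under the hood, that button click is a Helm install. The CLI equivalent would look something like the following (the repository and chart name assume the Bitnami Cassandra chart, which may differ from the chart Rancher ships):

```shell
# Roughly what Rancher does behind the scenes for a chart install
helm repo add bitnami https://charts.bitnami.com/bitnami
helm install my-cassandra bitnami/cassandra \
  --namespace cassandra --create-namespace
```

The advantage of doing it through Rancher is that the wizard surfaces the chart's values as a form and a YAML editor, so you never have to hunt down the chart's documentation just to set a namespace.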
After the app was installed, I could check the dashboard and see that it was successfully up and running (Figure 6).
After playing around with the Local Cluster, I was finally able to see the value in Rancher. The Helm integration is fantastic, making it incredibly easy to install any of a large number of apps and services.
Now that I’ve seen what Rancher can do, I must say I’m seriously impressed. Even though I was never able to test the MicroK8s cluster, I was able to see just how powerful this platform truly is.
I will say, however, that although Rancher does make managing Kubernetes considerably easier (when compared to the CLI), getting it up and running is not nearly as simple as managing your deployments. I'm fairly certain the problem with provisioning my cluster is on me, but after following all of the advice I've been given and still watching Rancher fail to provision my MicroK8s cluster, I remain convinced the deployment of Rancher could be made easier. But the truth is, once you get past the deployment of the system, Rancher does, in fact, make Kubernetes simple.