Kubernetes 101: Deploy Your First Application with MicroK8s

Kubernetes is challenging. Of that, there is no debate. Not only are there a lot of moving parts that go into deploying a container to a Kubernetes cluster, but so much can go wrong along the way. To complicate matters even further, deploying the Kubernetes cluster can be a hair-pulling affair.
That’s why tools like Canonical’s MicroK8s have been developed. With such software, the process of deploying a Kubernetes cluster is significantly less challenging, so you can focus more on getting up to speed with deploying applications and services to the cluster.
One of the many things that makes deploying applications and services to a Kubernetes cluster challenging is accessing them. Unlike, say, Docker, when you deploy an application or service to a Kubernetes cluster, it’s not automatically available to your network. If you’re on a machine that’s a part of the cluster, you can certainly access that app or service, because that machine will have access to the subnet used by the cluster. Without a bit of extra trickery, however, that application or service is simply not available beyond the cluster. That means you have to manually make it available.
Again… a lot of moving parts.
I’ve already gone through the steps for installing MicroK8s on Rocky Linux and then joining nodes to the controller. By adding those nodes, you create a cluster to which your applications and services can be deployed.
What I’m going to do now is show you how to deploy your first application to the cluster and then make that application accessible outside of the cluster. One thing to note is that deploying an application or service to a MicroK8s cluster is similar to deploying to any other Kubernetes distribution. The biggest difference for users is that each command is prefixed with microk8s, whereas other Kubernetes distributions do not have that requirement.
What You’ll Need
The only thing you’ll need for this is a running MicroK8s cluster with at least one controller and two nodes. These can be deployed to your data center, your test network, or a third-party cloud host. As long as the nodes are connected to the controller, you’re good to go.
If you’re unsure if the nodes are connected, log in to your controller and issue the command:
microk8s kubectl get nodes
You should see the controller and all of your attached nodes listed. If not, make sure to go through the steps to connect your nodes again.
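If you’d rather script that readiness check, the filter below sketches one way to count Ready nodes. The node names and version numbers in the sample output are made up for illustration; on a live cluster you’d pipe the real `microk8s kubectl get nodes` output into the same awk filter.

```shell
# Sample `microk8s kubectl get nodes` output (illustrative values only).
sample_output="NAME         STATUS   ROLES    AGE   VERSION
controller   Ready    <none>   10d   v1.28.3
node-1       Ready    <none>   9d    v1.28.3
node-2       Ready    <none>   9d    v1.28.3"

# Skip the header row, then count rows whose STATUS column reads "Ready".
ready_count=$(printf '%s\n' "$sample_output" | awk 'NR > 1 && $2 == "Ready" { n++ } END { print n + 0 }')
echo "Ready nodes: ${ready_count}"   # prints "Ready nodes: 3" for the sample
```

If the count comes up short of the nodes you joined, revisit the join steps before continuing.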
How to Deploy Your First Application with MicroK8s
Log in to the MicroK8s controller. For this demonstration, we’ll deploy an NGINX web server application. We’ll name this deployment nginx-webserver and use the official NGINX container image for the deployment. The command for this is:
microk8s kubectl create deployment nginx-webserver --image=nginx
The output from that command should look like this:
deployment.apps/nginx-webserver created
Verify the deployment was successful with the command:
microk8s kubectl get pods
You should see something like this in the output:
nginx-webserver-67f557b648-4mfc6 1/1 Running 0 9m59s
Congratulations, your first pod has been deployed to the cluster. What is a pod? A Kubernetes pod is a collection of one or more containers and is the smallest deployable unit of an application. Pods can be composed of multiple, tightly integrated containers, but they often consist of just a single container. In our instance above, we deployed a pod with a single container (NGINX).
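For reference, the imperative create deployment command above is roughly equivalent to the declarative manifest below. This is a sketch: the label key and replica count are assumptions (the imperative command defaults to a single replica), and you’d apply it with microk8s kubectl apply -f followed by the file name.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-webserver
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx-webserver
  template:
    metadata:
      labels:
        app: nginx-webserver
    spec:
      containers:
      - name: nginx
        image: nginx
```

The declarative form becomes useful later, when you want deployments version-controlled and repeatable rather than typed by hand.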
At this point, the NGINX application is running but isn’t accessible. In order to make it accessible, we also have to deploy a service. What we’ll do here is expose our nginx-webserver deployment, using the type “NodePort” on port 80. What is NodePort? Simply put, a NodePort is an open port on every node connected to your cluster. Kubernetes routes incoming traffic on the NodePort to your deployed service or application.
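The expose command in the next step can also be written as a declarative Service manifest, sketched below. The selector label is an assumption (the expose command derives it from the deployment automatically), and because no nodePort field is given, Kubernetes picks one from its default 30000–32767 range.

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-webserver
spec:
  type: NodePort
  selector:
    app: nginx-webserver
  ports:
  - port: 80
    targetPort: 80
```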
To deploy the service, the command will look like this:
microk8s kubectl expose deployment nginx-webserver --type="NodePort" --port 80
The output of the command should look like this:
service/nginx-webserver exposed
If you attempt to access the running container on port 80, such as http://192.168.1.45, you’ll find it inaccessible. What gives? Well, Kubernetes maps the service’s internal port 80 to a randomly assigned external port (the NodePort) on each node. Before we can access the running web server, we have to find out which port it has been mapped to. For that, issue the command:
microk8s kubectl get svc nginx-webserver
The output should look something like this:
nginx-webserver NodePort 10.152.183.105 <none> 80:31508/TCP 3m11s
As you can see, Kubernetes has mapped internal port 80 (the one being used by NGINX) to external port 31508 (the NodePort opened on the cluster’s nodes). So, if you point your web browser to 192.168.1.45:31508, you should see the NGINX welcome screen in your browser.
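If you want to grab that port in a script, the snippet below sketches one way to pull it out of the PORT(S) column. The line it parses is the sample output from above; on a live cluster, `microk8s kubectl get svc nginx-webserver -o jsonpath='{.spec.ports[0].nodePort}'` prints the NodePort directly without any text parsing.

```shell
# Sample `get svc` line from above (illustrative values).
svc_line="nginx-webserver NodePort 10.152.183.105 <none> 80:31508/TCP 3m11s"

# Field 5 is PORT(S) in the form "80:31508/TCP"; splitting on ':' and '/'
# leaves the NodePort as the second piece.
node_port=$(printf '%s\n' "$svc_line" | awk '{print $5}' | awk -F'[:/]' '{print $2}')
echo "http://192.168.1.45:${node_port}"   # prints "http://192.168.1.45:31508"
```

You could then hand that URL to curl or a browser to confirm the NGINX welcome page is being served.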
Of course, you can use the IP address of any one of your nodes. If, however, you use the IP address of your controller, the NGINX site won’t appear. Why? Because the controller schedules the container onto the nodes, not onto itself.
And that’s all there is to deploying your first application to a Kubernetes cluster using MicroK8s.