Securing Windows Workloads

Containers are a great way to package applications with only the minimal libraries required. They guarantee the same deployment experience, regardless of where the containers are deployed.

Container orchestration software pushes this further by providing the foundation needed to run containers at scale.
Both Linux and Windows support containerized applications and can participate in a container orchestration solution.
There are an incredible number of guides and how-to articles on Linux containers and container orchestration, but these resources get scarce when it comes to Windows, which can discourage companies from running Windows workloads.
This blog post will examine how to set up a Windows-based Kubernetes environment to run Windows workloads and secure them using Calico Open Source. By the end of this post, you will see how simple it is to apply your current Kubernetes skills and knowledge to a hybrid environment.
Windows Containers
A container is a lightweight packaging technique. Each container packages an application in an isolated environment that shares its kernel with the underlying host, which makes it bound by the limits of the host operating system. These days, everyone is familiar with Linux containers, a popular way to run Linux-based binaries in an isolated environment.
However, Windows also offers a container solution that allows users to package Windows-based applications in an isolated environment. Depending on your application’s framework and API calls, you can choose from several base images that Microsoft provides to create a Windows container. These base images range from full implementation of Windows APIs and services, to a minimal version with a small footprint. It is worth noting that the build number of these base images must match your host Windows build number to run them on your operating system.
Container Orchestration
After creating a container image, you will need a container orchestrator to deploy it at scale. Kubernetes is modular container orchestration software that manages the mundane parts of running such workloads.
To make this post more interesting, I will share all the commands required to set up a hybrid Kubernetes cluster in Azure. You can open up your Cloud Shell window from the Azure web portal and run the commands if you want to follow along.
If you don’t have an Azure account with a paid subscription, don’t worry. You can sign up for a free Azure account to complete the following steps.
Resource Group
To run a Kubernetes cluster in Azure, you must create multiple resources that share the same life span and assign them to a resource group. A resource group is a way to group related resources in Azure for easier management and accessibility. Keep in mind that each resource group must have a unique name.
The following command creates a resource group named calico-win-container in the australiaeast location. Feel free to adjust the location to a different Azure region.
az group create --name calico-win-container --location australiaeast
Calico for Windows
Calico for Windows is officially integrated into the Azure platform, so every time you add a Windows node, it will come with a preinstalled version of Calico. To check this, make sure the EnableAKSWindowsCalico feature is in a Registered state.
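Assuming the standard az feature list query syntax for inspecting feature registrations, a check along these lines should work:
az feature list -o table --query "[?contains(name, 'Microsoft.ContainerService/EnableAKSWindowsCalico')].{Name:name,State:properties.state}"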
Expected output:
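The exact formatting may vary, but the feature should report as registered:
Name                                               State
-------------------------------------------------  ----------
Microsoft.ContainerService/EnableAKSWindowsCalico  Registered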
If your query returns a Not Registered state, use the following command to enable AKS Calico integration for your account:
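Assuming the standard az feature register syntax, the command looks like this (registration can take several minutes to complete):
az feature register --namespace "Microsoft.ContainerService" --name "EnableAKSWindowsCalico"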
After EnableAKSWindowsCalico becomes registered, you can use the following command to add the Calico integration to your subscription:
az provider register --namespace Microsoft.ContainerService
Cluster Deployment
Note: Azure free accounts cannot create any resources in busy locations. Feel free to adjust your location if you face this problem.
A Linux control plane is necessary to run the Kubernetes system workloads, and Windows nodes can only join a cluster as participating worker nodes.
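A sketch of such a cluster deployment follows; the single-node count and the --generate-ssh-keys flag are assumptions, while the resource group and cluster name match those used in the rest of this post:
az aks create --resource-group calico-win-container --name CalicoAKSCluster --node-count 1 --network-plugin azure --network-policy calico --generate-ssh-keys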
Windows Node Pool
Now that we have a running control plane, it is time to add a Windows node pool to our AKS cluster.
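The node pool name below is an assumption (Windows pool names are limited to six characters); the remaining flags follow the standard az aks nodepool add syntax:
az aks nodepool add --resource-group calico-win-container --cluster-name CalicoAKSCluster --name win1 --os-type windows --node-count 1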
Note: Use windows as the value for the --os-type argument.
Exporting the Cluster Key
Kubernetes implements an API server that provides a REST interface to maintain and manage cluster resources. Usually, to authenticate with the API server, you must present a certificate, username and password. The Azure command-line interface (Azure CLI) can export these cluster credentials for an Azure Kubernetes Service (AKS) deployment.
Use the following command to export the credentials:
az aks get-credentials --resource-group calico-win-container --name CalicoAKSCluster --admin
After exporting the credential file, we can use the kubectl binary to manage and maintain cluster resources. For example, we can check which operating system is running on our nodes by using the OS labels.
kubectl get nodes -L kubernetes.io/os
You should see a similar result to:
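Node names, ages, and versions will differ in your cluster, but the OS column should list both a Linux node and a Windows node, roughly like this:
NAME                                STATUS   ROLES   AGE   VERSION   OS
aks-nodepool1-12345678-vmss000000   Ready    agent   25m   v1.24.9   linux
akswin1000000                       Ready    agent   8m    v1.24.9   windows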
Windows Workloads
If you recall, the Kubernetes API server is the interface we use to manage and maintain our workloads.
We can use the same syntax to create a deployment, pod, service, or any other Kubernetes resource for our new Windows nodes. For example, the same OS label we just used to inspect our nodes can serve as a node selector in our deployments, ensuring Windows and Linux workloads are scheduled onto their respective nodes:
kubectl apply -f https://raw.githubusercontent.com/frozenprocess/wincontainer/main/Manifests/00_deployment.yaml
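Inside that manifest, the piece that pins the web server to Windows nodes is a node selector on the pod template; the fragment looks roughly like this (a sketch, not the full manifest):
  nodeSelector:
    kubernetes.io/os: windows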
Since our workload is a web server built with Microsoft’s .NET technology, the deployment YAML file also includes a LoadBalancer service to expose the HTTP port to the internet.
Use the following command to verify that the load balancer successfully acquired an external IP address:
kubectl get svc win-container-service -n win-web-demo
You should see a similar result.
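The IP addresses and ports below are placeholders; what matters is that the EXTERNAL-IP column has moved from <pending> to a real address:
NAME                    TYPE           CLUSTER-IP   EXTERNAL-IP   PORT(S)        AGE
win-container-service   LoadBalancer   10.0.x.x     20.x.x.x      80:3xxxx/TCP   3m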
Open the “EXTERNAL-IP” value in a browser, and you should see the demo page reporting that the pod can reach the internet.
Perfect! Our pod can communicate with the Internet.
Securing Windows Workloads
By default, Kubernetes permits all traffic unless a NetworkPolicy resource restricts it. While this is convenient for setting up a lab environment, in a real-world scenario it can severely impact your cluster’s security.
First, use the following manifest to enable the Calico API server:
kubectl apply -f https://raw.githubusercontent.com/frozenprocess/wincontainer/main/Manifests/01_apiserver.yaml
Use the following command to get the API server deployment status:
kubectl get tigerastatus
You should see a similar result to:
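Component names and timings may differ slightly, but every component should eventually report AVAILABLE as True:
NAME        AVAILABLE   PROGRESSING   DEGRADED   SINCE
apiserver   True        False         False      2m
calico      True        False         False      15m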
Calico offers two security policy resources, NetworkPolicy and GlobalNetworkPolicy, which together can cover every corner of your cluster. We will implement a global policy, since it can restrict internet addresses without the daunting procedure of explicitly writing every IP/CIDR into a policy. Use the following command to apply it:
kubectl apply -f https://raw.githubusercontent.com/frozenprocess/wincontainer/main/Manifests/02_default-deny.yaml
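For reference, a default-deny GlobalNetworkPolicy scoped to the demo namespace looks roughly like the sketch below; it may not match the applied manifest exactly:
apiVersion: projectcalico.org/v3
kind: GlobalNetworkPolicy
metadata:
  name: default-deny
spec:
  selector: projectcalico.org/namespace == 'win-web-demo'
  types:
  - Ingress
  - Egress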
If you go back to your browser and click the Try again button, you will see that the container is isolated and cannot initiate communication to the internet.
Note: Source code for the workload is available here.
Clean Up
If you have been following this blog post and did the lab section in Azure, please make sure that you delete the resources, as cloud providers will charge you based on usage.
Use the following command to delete the resource group:
az group delete -g calico-win-container
Conclusion
This post has touched on many reasons for running a containerized environment. If offering services at scale or running an agile environment is your cup of tea, I recommend taking a look at Tigera’s certification courses.
Calico courses are self-paced, step-by-step tutorials that prepare you to build containerized environments on different cloud platforms or local test environments. On top of that, you will learn about Calico integrations and security measures that will allow you to build a secure environment from start to finish.
Ready to become an Azure expert? Enroll in our Calico Azure course now.