
Kubernetes for Windows

You can orchestrate Windows container workloads alongside Linux or work with Kubernetes from the Windows desktop.
Dec 27th, 2022

Although most people think of Kubernetes and containers generally as Linux technology, Linux is not the only OS where you can use containers. Once you start running multiple containers and microservices on one or more hosts, you will need the kind of features that a container orchestrator like Kubernetes provides, such as load balancing, high availability, container scheduling, resource management, etc. Although the Kubernetes control plane currently only runs on Linux, you can still run Windows containers on Kubernetes.

Windows on Kubernetes

Windows Server 2016 introduced containers (using job objects and silo kernel objects, whereas Linux uses control groups and namespaces). Work on Windows support for Kubernetes started in 2016, with the stable release shipping in Kubernetes 1.14 in 2019. The goal wasn’t to move the entire control plane to Windows but to offer Windows Server as a compute node for Kubernetes, giving organizations an environment that would let them run all their apps in the same place.

Think of it less as bringing Kubernetes to Windows and more as bringing Windows, .NET, IIS and other Windows programming frameworks to Kubernetes so that Windows developers can use cloud native tools to build and deploy distributed apps while reducing the costs of supporting existing apps and streamlining migration off older versions of Windows as they lose support.

Now you can manage Windows and Linux containers side by side in the same Kubernetes cluster by adding Windows Server worker nodes that can run Windows containers to that cluster: They just have to be running Windows Server 2019 or later (and you need to use a CNI that’s compatible with both Windows and Linux, like Calico or flannel).

For instance, Microsoft runs many of the services that power Office 365 and Microsoft 365 in Windows containers on Azure Kubernetes Service.

Clusters with Windows support will be a mix of Windows and Linux nodes, even if the Linux node is only used for leadership roles like the API server and scheduler. But you can also deploy a Linux container running a reverse proxy or Redis cache and an IIS application in a Windows container in the same cluster or even as part of the same app, and use the same pipelines for deployment and the same tools for monitoring all the different pieces of the app.

This makes Windows containers a good way to modernize applications: You can start by “lifting and shifting” an app into a container, and then add more cloud native features at your convenience.

Windows Containers in Kubernetes

Supporting Windows containers on Kubernetes doesn’t make Windows work like Linux; admins will still be using familiar Windows concepts like ACLs, SIDs, and usernames rather than Linux-style object permissions, userIDs and groupIDs, and they can use \ in file paths the way they’re used to doing.

Linux features like huge pages aren’t available in Windows containers because Windows has no equivalent feature, and you can’t make the root file system read-only the way you can in Linux containers because the Windows registry and system processes need write access.

There are also a number of places where you need a slightly different option on Windows: for example, runAsUserName rather than runAsUser to pick which user a container runs as, and restrictions apply to the container administrator account rather than to the root user.
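For example, choosing which built-in account a container runs as looks something like this in a pod spec (a minimal sketch; the IIS image tag is illustrative and has to match the node’s Windows version):

apiVersion: v1
kind: Pod
metadata:
  name: iis-example
spec:
  securityContext:
    windowsOptions:
      runAsUserName: "ContainerUser"   # the low-privilege built-in account
  nodeSelector:
    kubernetes.io/os: windows
  containers:
  - name: web
    image: mcr.microsoft.com/windows/servercore/iis:windowsservercore-ltsc2022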

Windows Server containers have two user accounts by default (neither of which is visible to the container host): container user and container administrator. Container user is for running workloads that don’t need extra privileges, and it’s definitely the best choice if you’re deploying containers in a multitenant environment. Container administrator lets you install user mode services and software that persist (like IIS), create new user accounts and make configuration changes to the container OS.

You can also create user accounts with the specific permissions you need. Although you can specify file permissions for volumes for Linux containers, they are not currently applied for Windows containers, but there’s a proposal to use Windows ACLs to support that in the next version of Kubernetes.

Generally, identity is one of the places where Kubernetes for Windows is most different because it needs to support Active Directory to give applications access to resources. A Windows app talking to an external database server or file share will likely use a Windows identity for authorization and won’t get access without that AD account. But containers can’t be domain joined.

Instead, workloads can use Group Managed Service Accounts (GMSA), which assign an AD identity to the container and handle password management, service principal name management and delegation to other administrators in a way that can be orchestrated across the cluster. If a node fails and the workload gets migrated to another node in the cluster, as long as all the Windows hosts in the cluster where the pod might land are domain joined, the identity goes with it.
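In the pod spec, that looks something like the sketch below, assuming the GMSA CRD and admission webhook are installed in the cluster and a domain admin has created the GMSACredentialSpec resource (gmsa-webapp1 here is a hypothetical name):

apiVersion: v1
kind: Pod
metadata:
  name: gmsa-example
spec:
  securityContext:
    windowsOptions:
      gmsaCredentialSpecName: gmsa-webapp1  # hypothetical GMSACredentialSpec resource
  nodeSelector:
    kubernetes.io/os: windows
  containers:
  - name: app
    image: mcr.microsoft.com/windows/servercore/iis:windowsservercore-ltsc2022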

You can use Kubernetes with Azure Active Directory through Azure AD workload identity for Kubernetes (for both Windows and Linux container workloads). This enables Azure AD workload identity federation, so you can access resources protected by Azure AD — everything from your own Microsoft 365 tenant resources to Azure services like Azure Key Vault — without needing secrets.
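The setup is roughly a Kubernetes service account tied to an Azure AD application, plus a marker label for the mutating webhook; a minimal sketch, assuming the workload identity webhook is already installed (the client ID is a placeholder, and depending on the release the use label goes on the service account or on the pod):

apiVersion: v1
kind: ServiceAccount
metadata:
  name: workload-identity-sa
  labels:
    azure.workload.identity/use: "true"   # newer releases expect this label on the pod instead
  annotations:
    azure.workload.identity/client-id: "00000000-0000-0000-0000-000000000000"  # placeholder app ID
---
apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  serviceAccountName: workload-identity-sa  # the webhook injects a federated token for this identity
  containers:
  - name: app
    image: mcr.microsoft.com/windows/servercore/iis:windowsservercore-ltsc2022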

Storage and Networking for Windows Containers

Other areas of Kubernetes on Windows are increasingly becoming similar to the way they work on Linux.

Early on, Windows containers on Kubernetes could only access a limited range of storage types. Support for Container Storage Interface (CSI) plugins (introduced in Kubernetes 1.16 and stable since 1.22) through CSI Proxy for Windows means Windows nodes can work with a wide range of storage systems by using existing CSI plugins.
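As a sketch of what that enables, here is a StorageClass and claim using the Azure disk CSI driver as one example (any CSI driver with Windows node support works, and CSI Proxy has to be running on the Windows nodes):

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: windows-disk
provisioner: disk.csi.azure.com       # example: Azure disk CSI driver
parameters:
  skuName: StandardSSD_LRS
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: win-data
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: windows-disk
  resources:
    requests:
      storage: 10Gi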

Similarly, Windows Kubernetes networking has moved from relying on the Host Network Service to using overlay networking to support CNI plugins, kube-proxy and network control planes like Flannel. The updated kube-proxy Next Gen is also being ported to Windows.

If you’re running into problems, this is an excellent list of troubleshooting tips for Kubernetes networking on Windows, and it should help you work out whether a problem lies with Kubernetes or with Windows.

Understanding Isolation in Kubernetes for Windows

Windows containers can use the traditional Linux container isolation model, known in Windows Server as process isolation, where containers share the same kernel with each other (and the host), or they can use Hyper-V isolation, where each container runs in a lightweight VM, giving it its own kernel. That’s similar to the improved isolation Kata containers offer on Linux.

With process isolation, containers share the kernel with other containers on the same host as well as with the host itself, which means the kernel version of the container image has to match that of the host.

On current versions of Kubernetes with Windows Server 2022 or Windows 11, you can keep using an existing Windows Server 2022 or Windows 11 container image even after you update the container host. Otherwise, Windows containers need the OS version of the host and the container image to match down to the build number (which changes with each new version of Windows). For Windows Server 2016 and older versions of Kubernetes, the match had to be even closer: down to the revision number, which changes when you apply Windows updates.

Hyper-V containers would avoid that problem because the kernel would no longer be shared with other containers or even the container host; instead, Hyper-V would load whatever kernel a container needs, giving you backward compatibility so you can move Windows nodes to a new version of the OS without rebuilding your container images and updating apps to use them.

However, Hyper-V containers aren’t currently supported in Kubernetes: There was alpha support using the Docker runtime and Hyper-V’s Host Compute Service in earlier versions of Kubernetes, but it only supported one container per pod. That’s been deprecated, and the work to enable Hyper-V containers with the containerd runtime and v2 of the Host Compute Service is proceeding slowly.

But another long-awaited container option is now available. Although Windows containers don’t support privileged containers, you can get similar functionality with the new HostProcess containers. HostProcess containers have access to everything on the container host, as if they were running directly on it. You don’t want to use them for most workloads, but they are useful for administration, security and monitoring — including managing Kubernetes itself with tasks like deploying network and storage plugins or kube-proxy.

A HostProcess container can access files and install drivers or system services on the host. That’s not a way to deploy server workloads, but it gives you one place where you can run cluster management operations. That means you can reduce the privileges needed for other Windows nodes and containers. Networking and storage components, or tasks like log collection and installing security patches, or certificates can now run in (extremely small) containers that run automatically after you spin up a new node, rather than having to log in manually and run them directly as Windows services on the node.
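A HostProcess pod is marked in the Windows security context; here is a minimal sketch (the command is illustrative only, and because the process runs directly on the host it uses the host’s own PowerShell):

apiVersion: v1
kind: Pod
metadata:
  name: hostprocess-example
spec:
  securityContext:
    windowsOptions:
      hostProcess: true
      runAsUserName: "NT AUTHORITY\\SYSTEM"   # runs with host-level privileges
  hostNetwork: true                           # required for HostProcess pods
  nodeSelector:
    kubernetes.io/os: windows
  containers:
  - name: admin-task
    image: mcr.microsoft.com/windows/nanoserver:ltsc2022   # illustrative base image
    command: ["powershell.exe", "-Command", "Get-Service kubelet"]  # example admin task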

The stable release of HostProcess containers is on track for Kubernetes 1.26, which shipped in December 2022.

Scheduling Windows Containers in Kubernetes

You can deploy Windows nodes to a cluster with kubeadm or the cluster API, and the kubectl commands for creating and deploying services and workloads work the same way for Linux and Windows containers. But you need to do some explicit infrastructure planning (and remember, you’ll be deploying those nodes by interacting with the Kubernetes control plane, running on Linux).

You can’t mix and match container types in a single pod: A pod runs either Windows or Linux containers. Windows nodes can only run Windows containers and Linux nodes can only run Linux containers, so you need to use node selectors to pick which operating system a deployment will run on.

The IdentifyPodOS feature gate, which adds an OS field to the pod spec, is enabled by default as of Kubernetes 1.25, so you can use that field to mark which pods run Windows Server, allowing the kubelet to reject pods that land on a node with the wrong OS, but it’s not yet used for scheduling. It’s still worth setting because it gives you much clearer error messages if a Windows container fails because it ends up on a Linux node (or a Linux container fails because it ends up on a Windows node).
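Putting the two together, a Windows workload would typically set both the OS field (so the kubelet can reject it on the wrong node) and a node selector (so the scheduler puts it in the right place); a minimal sketch:

apiVersion: v1
kind: Pod
metadata:
  name: win-webapp
spec:
  os:
    name: windows               # lets the kubelet reject this pod on a Linux node
  nodeSelector:
    kubernetes.io/os: windows   # actually steers scheduling; the os field alone doesn't
  containers:
  - name: web
    image: mcr.microsoft.com/windows/servercore/iis:windowsservercore-ltsc2022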

If you’re adding Windows nodes to a cluster where you already have Linux workloads deployed, you will want to set taints on the Windows nodes so that if a Linux node fails, its applications won’t be rescheduled onto a Windows node (or vice versa). You can also use taints to mark every Windows node with the OS build it runs, because while you can run multiple Windows Server versions in a cluster, the Windows Server version of the container image and the node need to match. You can simplify that by using a RuntimeClass to encapsulate the taints and tolerations that define the build of Windows that you need, as in the sketch below.
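A RuntimeClass along these lines bundles the node selector and toleration so each workload only has to reference it by name (a sketch: the handler name is an assumption that depends on how containerd is configured on your nodes, and 10.0.20348 is the Windows Server 2022 build):

apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: windows-2022
handler: runhcs-wcow-process    # assumption: must match the handler name in your containerd config
scheduling:
  nodeSelector:
    kubernetes.io/os: windows
    node.kubernetes.io/windows-build: "10.0.20348"  # Windows Server 2022 build
  tolerations:
  - key: os
    operator: Equal
    value: windows
    effect: NoSchedule

Pods then just set runtimeClassName: windows-2022 in their spec instead of repeating the selector and toleration.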

If you’re using Helm charts for deployment, check that they cope with heterogeneous clusters or add taints and tolerations to steer containers to the right nodes.

Another thing to consider when adding Windows nodes to a cluster is increasing the resources you specify in the template. Windows Server containers don’t need significantly more memory once they’re running, because read-only memory pages are shared between containers, but they tend to need more memory to start up successfully, and the first container on a node may take longer to start. Containers will crash if applications need more memory than they have access to, and the Windows background services running inside containers mean the memory allocation will likely need to be larger than for a Linux container. If your templates were specified for Linux containers, increasing the memory allocation will avoid issues for Windows containers, while still giving you much higher density than you would get with virtual machines.

Resource management is slightly different for Windows nodes too. CPU and memory requests in pod container specifications can help avoid overprovisioning a node, but they won’t guarantee resources if a node is already overprovisioned.
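If you’re adapting a Linux-sized template, requests and limits along these lines are a reasonable starting point (the numbers are illustrative, not a recommendation; profile your own workload):

apiVersion: v1
kind: Pod
metadata:
  name: win-sized
spec:
  nodeSelector:
    kubernetes.io/os: windows
  containers:
  - name: web
    image: mcr.microsoft.com/windows/servercore/iis:windowsservercore-ltsc2022
    resources:
      requests:
        cpu: "1"
        memory: 800Mi   # Windows containers typically need more headroom than Linux equivalents
      limits:
        cpu: "2"
        memory: 2Gi     # the container is killed if it exceeds this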

The metrics for operations like pod scaling are the same as for Linux. Node Problem Detector can monitor Windows nodes, although it’s not yet using Windows Management Instrumentation (WMI), so only a few metrics are included. Use Microsoft’s open source LogMonitor tool to pull metrics from the Windows log locations like ETW, Event Log, and custom log files that Windows apps typically use.

Which Windows Server Versions Are Best for Kubernetes?

Because Windows Server versions have end-of-support dates, you may need to think about upgrading the OS version of containers, which is easier than with a virtual machine: You can just edit the dockerfile for the container (and upgrade the node so that it matches), although that doesn’t help with any changes you might need to make to an app when the version of Windows changes.

Windows Server 2016 used to be supported by Kubernetes, but that didn’t allow multiple containers per pod. Windows Server 2019 made significant changes to overlay networking that enabled that, adding support for CNI networking plugins like Calico, so you now need to use Windows Server 2019 or later for Kubernetes pods, nodes and containers. But different Windows builds are supported depending on which version of Kubernetes you’re running.

This was a little more complicated when Windows Server had more frequent Semi-Annual Channel (SAC) releases, but Microsoft now suggests that organizations that want to upgrade Windows Server more quickly to get improvements in container support move to Azure Stack HCI and use Azure Kubernetes Service, so you only need to think about this if you’re already running existing SAC releases. With Kubernetes 1.25, Windows Server 2019, Windows Server 2022 and Windows Server 20H2 (the final SAC release) are supported.

Older SAC releases are supported on Kubernetes 1.17 to 1.19, but the point of using SAC releases was to take advantage of new features more quickly, so most organizations affected by this should be in a position to upgrade to Windows Server 2022. That has smaller base container images and also includes more container features: virtualized time zones for distributed apps, support for running apps that depend on Active Directory without domain-joining your container host (using GMSA), IPv6 support for Windows containers and other networking improvements.

If you’re using GKE, you can’t create new containers using SAC images anymore.

Going forward, you will be able to run a Windows Server 2022 container image on all new versions of Windows 11 and Windows Server until the next Long-Term Servicing Channel release, so you can build a Windows container image now using the Windows Server base OS image, and it will run on releases up to and including Windows Server 2025 (or whatever Microsoft calls the next LTSC release). At that point, Microsoft will add a deprecation scheme, so the base OS image for Windows for that new release will run on the next LTSC after that.

That gives Microsoft more freedom to change the APIs between user and kernel mode as it needs to, while allowing users to run one container image for longer by using process isolation.

Kubernetes Runtimes on Windows

For many developers, Kubernetes is synonymous with Docker containers, but while the Docker runtime has been widely used, containerd has been supported as a Kubernetes runtime since 2018 and has been the interface for Windows containers since Kubernetes 1.18. Using containerd as the runtime will eventually allow Hyper-V isolated containers to run on Kubernetes, giving you a secure multitenant boundary across Windows containers. It is also required for node features like TerminationGracePeriod.

When Mirantis bought Docker Enterprise and renamed it the Mirantis Container Runtime, it also committed to maintaining the dockershim code with Docker: Windows containers using that runtime will still build and run in the same way, but support for them now comes from Mirantis rather than Microsoft. You can also use the Moby runtime for Windows containers.

Kubernetes on the Windows Desktop

If you’re running Kubernetes infrastructure, you likely don’t want the overhead of virtualization. But if you want to work with Kubernetes on your Windows desktop, for development or just to learn the API, you can run Kubernetes in a Linux VM on Hyper-V or in a Linux distro running directly on WSL.

If you’re using Docker for Windows, Rancher Desktop, or minikube and a recent build of Windows 10 or 11, they integrate with WSL 2, so you get better performance, better memory usage and integration with Windows (simplifying working with files). Kind and k3s will both run on WSL 2 or Hyper-V, but you may need some extra steps (and as Kind stands for Kubernetes in Docker, you’ll need that or Rancher Desktop anyway). You can install Docker on WSL 2 without Docker Desktop if you only want Linux containers.

Alternatively, if you’re getting started with Kubernetes on Windows and you want to quickly build your first Windows Kubernetes cluster to try things out — or to create a local development environment — the SIG Windows dev-tools repo has everything you need to create a two-node cluster from scratch, with your choice of production or leading edge Kubernetes versions. This uses Vagrant to create VMs with Hyper-V or VirtualBox, create and start the Kubernetes cluster, spin up a Windows node, join it to the cluster and set up a CNI like Calico.

Is Kubernetes Right for All Windows Apps?

The .NET, ASP.NET and IIS applications many enterprises run can be containerized, as can applications that consume Windows APIs like DirectX (a game server, say), but as always with containers, you need to think about state. The rule of thumb is that you can containerize a Windows app for Kubernetes if critical data like state is persisted outside the process and rebooting the app fixes common errors. If a reboot would lose state, you’ll need to think about rewriting the app or adding extra jobs to the workload. If more than one process can work on shared data, Kubernetes should be a good way to scale the app.
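If rebooting does fix common errors, a liveness probe lets the kubelet do that restart automatically; here’s a sketch for an IIS app, with a generous initial delay since Windows containers can be slow to start:

apiVersion: v1
kind: Pod
metadata:
  name: iis-probed
spec:
  nodeSelector:
    kubernetes.io/os: windows
  containers:
  - name: web
    image: mcr.microsoft.com/windows/servercore/iis:windowsservercore-ltsc2022
    livenessProbe:
      httpGet:
        path: /
        port: 80
      initialDelaySeconds: 60   # allow for slower Windows container startup
      periodSeconds: 30         # restart the container if the app stops responding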

Many IIS and ASP.NET apps have hardcoded configuration in web.config. To migrate to Kubernetes, you’ll want to move application configuration and secrets out of the pods and into environment variables (so the workload knows whether it’s running in a test or production environment, say) referenced in both the web.config file and the YAML file for the application. If you don’t want to rewrite the code, you can do that by calling a PowerShell script from the dockerfile for the container image.
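In the YAML, that means declaring the values as environment variables; a sketch (the image, the APP_ENVIRONMENT variable and the secret names are hypothetical, and a startup script baked into the image would read them and rewrite web.config):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: aspnet-app
spec:
  replicas: 2
  selector:
    matchLabels:
      app: aspnet-app
  template:
    metadata:
      labels:
        app: aspnet-app
    spec:
      nodeSelector:
        kubernetes.io/os: windows
      containers:
      - name: web
        image: registry.example.com/aspnet-app:1.0   # hypothetical image
        env:
        - name: APP_ENVIRONMENT          # hypothetical; read by the startup script
          value: "production"
        - name: DB_CONNECTION_STRING     # pulled from a Kubernetes secret, not web.config
          valueFrom:
            secretKeyRef:
              name: app-secrets
              key: db-connection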

There are tools built into Windows Admin Center to help you containerize existing Windows apps: It has the option to bring containers to Azure, but you can use it to create images and use them on any Kubernetes infrastructure.

Will the Kubernetes Control Plane Ever Come to Windows?

Running Kubernetes means either running your own Linux infrastructure or using a cloud Kubernetes service, but even enterprises with large numbers of Windows Server workloads increasingly have Linux expertise. Although there have been discussions about bringing the Kubernetes control plane to Windows (and even a few prototypes, because most of the components needed to run nodes as leaders can be ported to run on Windows), the broader Kubernetes ecosystem for logging, monitoring and other operations tooling is based on Linux. Even with technologies like eBPF coming to Windows, replacing or migrating all of that to Windows would be a significant amount of work, especially when VMs and WSL can handle most scenarios.

But as Kubernetes is increasingly used at the edge, especially in IoT scenarios where resources are often severely constrained, the overhead of a Linux VM to work with Windows containers can be prohibitive. There are a lot of edge locations where IoT devices and containers collect and process data — an automated food kiosk in a shopping mall, a pop-up store at a festival or an unattended drill head on a small oil field — where Kubernetes would be useful but running a management server is challenging.

Brendan Burns, Kubernetes co-founder and corporate vice president at Microsoft, mentioned in a recent Azure event that while the team had expected that customers would deploy bigger and bigger clusters, instead “people were deploying lots and lots of small clusters.” IoT is likely to make that even more common.

Microsoft’s new AKS-lite Kubernetes distribution designed for edge infrastructure runs on IoT Enterprise, Enterprise, and Pro versions of Windows 10 or 11 on PC class hardware, with Kubernetes or k3s running in a Linux VM (the private preview initially ran only Windows containers using Windows IoT images, although Linux container support is available in the public preview).

The value of Kubernetes is the API it delivers more than the specific Linux implementation that delivers that, and the strong CNCF certification process means that the many Kubernetes distributions compete on the tools and enhancements they include and the choices they make about runtimes, networking and storage to suit particular scenarios, rather than on the fundamentals of Kubernetes. If a scenario like orchestrating IoT containers makes it useful, perhaps a future Windows Kubernetes distribution that doesn’t rely on Linux VMs will join the list.

TNS owner Insight Partners is an investor in: Mirantis, Docker.