SD-WAN (software-defined networking in a wide area network) and Kubernetes are two major technological developments of interest for businesses on the journey toward digital transformation. SD-WAN extends SDN features such as programmable networking and automation to the WAN. Kubernetes, meanwhile, has become the widely adopted orchestrator for containerized applications, offering a solid API architecture, autoscaling, deep monitoring, and load balancing for dynamic, distributed infrastructures.
Many companies are using them together, given that business applications are distributed across different data centers and edge cloud locations. In these setups, different Kubernetes clusters host end-user applications and workloads, and SD-WAN connects all the clusters and end users.
But there are still gaps in this combined solution. SD-WAN is used mostly over the public internet, whose performance varies across different parts of the world. When microservice-based applications are deployed, some microservices may have specific latency requirements, need more bandwidth, and so on. Addressing those needs can be cumbersome for Network Operations (NetOps) teams: for every microservice-specific network requirement, the DevOps team must hand the requirement over to the NetOps team. This manual configuration process costs time and, because it involves human intervention, can lead to misconfiguration.
To address this gap, the Cisco team has introduced an open-source project, Cloud Native SD-WAN (CN-WAN), to improve the integration between SD-WAN and Kubernetes. CN-WAN takes advantage of key features on both sides. SD-WAN solutions have become more advanced, with strong policy intelligence exposed through APIs. Kubernetes, for its part, has added features like declarative deployment through the operator framework, which reconciles the current state of the Kubernetes system with the desired state set in the configuration.
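The reconciliation idea behind the operator framework can be illustrated with a minimal sketch: compare the desired state (from configuration) with the current state and compute the actions needed to converge. The function and data below are illustrative, not the actual CN-WAN operator API.

```python
def reconcile(desired: dict, current: dict) -> list:
    """Return the actions that bring `current` in line with `desired`.

    This is a toy model of an operator's reconcile loop, not real
    Kubernetes controller code.
    """
    actions = []
    # Anything in the desired state that is missing or differs gets created/updated.
    for name, spec in desired.items():
        if name not in current:
            actions.append(f"create {name} with {spec}")
        elif current[name] != spec:
            actions.append(f"update {name} to {spec}")
    # Anything present but no longer desired gets removed.
    for name in current:
        if name not in desired:
            actions.append(f"delete {name}")
    return actions


# Hypothetical bandwidth specs for two services:
desired = {"video": {"bandwidth": "10M"}, "chat": {"bandwidth": "1M"}}
current = {"video": {"bandwidth": "5M"}, "voice": {"bandwidth": "2M"}}
print(reconcile(desired, current))
```

An operator runs this kind of loop continuously, so a change to the declared configuration is eventually reflected in the running system without manual steps.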
CN-WAN’s components work together to pull the networking needs from microservices deployed in the Kubernetes cluster and push them to the SD-WAN software, which renders them as network policies. This way, CN-WAN helps microservice-based applications deliver optimal performance over the SD-WAN on the fly.
On the right-hand side of the image above, you can see microservices that handle services like voice, video, slides and chat. As you can imagine, these microservices might have different bandwidth and latency requirements over the SD-WAN.
The CN-WAN project has three components: the operator, the reader, and the adaptor. These are embedded into Kubernetes and the SD-WAN controllers.
A CN-WAN operator runs in the Kubernetes cluster to monitor the microservices. Each microservice carries specific CN-WAN annotations that denote the network requirements of the service and describe how the SD-WAN should optimize the network traffic for that microservice. These annotations, along with the rest of the configuration information, are registered with a service registry. A CN-WAN reader pulls the service-specific configuration information and annotations from the service registry and pushes them to a CN-WAN adaptor. A CN-WAN adaptor is specific to each SD-WAN solution; it translates the annotations and configuration information into SD-WAN policies that are then enforced in the network.
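As a rough illustration, such network requirements could be declared as annotations on a Kubernetes Service. The annotation keys below are placeholders chosen for this sketch, not the exact keys defined by the CN-WAN project.

```yaml
# Illustrative only: annotation keys are hypothetical, not CN-WAN's real schema.
apiVersion: v1
kind: Service
metadata:
  name: video
  annotations:
    example.cnwan.io/traffic-profile: "video"   # hint for SD-WAN policy selection
    example.cnwan.io/max-latency-ms: "150"      # desired latency bound
spec:
  selector:
    app: video
  ports:
    - port: 8080
```

The operator would watch Services like this one, register the annotated requirements with the service registry, and from there the reader and adaptor would carry them into the SD-WAN controller as policies.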
The key point is that the SD-WAN, or any network, needs to be aware of the varied requirements of applications and their related components, in order to avoid the manual tasks that hamper service delivery.
The code for the CN-WAN project is available on GitHub. For more technical insights, along with an example, check out the Cloud Native SD-WAN session at the upcoming KubeCon+CloudNativeCon North America, to be presented by Cisco’s Alberto Rodriguez-Natal and Google’s Mark Church.
KubeCon+CloudNativeCon is a sponsor of The New Stack.