

For cloud native computing, networking is an essential component, a stack of communications software that allows microservices to talk to one another and to the world at large. The dynamic nature of container-based workloads puts new pressure on the networking layers of this stack, demanding extremely low latency as well as rapid lookups to locate services.
Those operating Kubernetes workloads can look to an emerging class of software called a service mesh, which handles much of the work of service discovery, authentication, and observability. Thus far, Istio, created by Google and IBM in partnership with the Envoy team at Lyft, has been the most talked-about service mesh, though Linkerd and even enhanced API gateways such as Kong or NGINX are being pressed into service mesh duty as well.
The communication protocols themselves must be more nimble too, given the volume of traffic flowing back and forth across a microservices architecture. To this end, Google has devised another technology, gRPC, a remote procedure call framework built on HTTP/2 specifically for low-latency communication. For connected devices on the Internet of Things, MQTT, a lightweight publish/subscribe protocol that originated at IBM, is proving to be a robust choice for low-bandwidth devices.
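To make the gRPC side concrete, the sketch below shows a single round trip in Go. To stay self-contained it leans on the health-checking service that ships with the grpc-go module, so no protoc-generated stubs are needed; the localhost address and port are arbitrary choices for illustration, not anything gRPC prescribes.

```go
// Minimal gRPC round trip in Go, assuming the stock health-checking service
// and an arbitrary local port; not a production setup.
package main

import (
	"context"
	"fmt"
	"log"
	"net"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	"google.golang.org/grpc/health"
	healthpb "google.golang.org/grpc/health/grpc_health_v1"
)

func main() {
	// Server: register the standard health service and start serving.
	lis, err := net.Listen("tcp", "localhost:50051")
	if err != nil {
		log.Fatalf("listen: %v", err)
	}
	srv := grpc.NewServer()
	healthpb.RegisterHealthServer(srv, health.NewServer())
	go func() {
		if err := srv.Serve(lis); err != nil {
			log.Fatalf("serve: %v", err)
		}
	}()
	defer srv.Stop()

	// Client: dial the server over HTTP/2 and make a unary RPC.
	conn, err := grpc.Dial("localhost:50051",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatalf("dial: %v", err)
	}
	defer conn.Close()

	ctx, cancel := context.WithTimeout(context.Background(), time.Second)
	defer cancel()

	resp, err := healthpb.NewHealthClient(conn).Check(ctx, &healthpb.HealthCheckRequest{})
	if err != nil {
		log.Fatalf("health check: %v", err)
	}
	fmt.Println("server status:", resp.GetStatus()) // expect SERVING
}
```

MQTT is similarly compact in practice. The sketch that follows uses the Eclipse Paho Go client to subscribe to a topic and publish a small reading; the broker address, client ID and topic name are hypothetical placeholders, and quality-of-service level 1 (at-least-once delivery) is just one of the protocol's options.

```go
// Minimal MQTT publish/subscribe sketch with the Eclipse Paho Go client.
// Broker address, client ID and topic are placeholders for illustration.
package main

import (
	"fmt"
	"log"
	"time"

	mqtt "github.com/eclipse/paho.mqtt.golang"
)

func main() {
	opts := mqtt.NewClientOptions().
		AddBroker("tcp://broker.example.com:1883"). // hypothetical broker
		SetClientID("sensor-demo")

	client := mqtt.NewClient(opts)
	if token := client.Connect(); token.Wait() && token.Error() != nil {
		log.Fatalf("connect: %v", token.Error())
	}
	defer client.Disconnect(250)

	// Subscribe with QoS 1 and print anything delivered on the topic.
	client.Subscribe("sensors/temperature", 1, func(_ mqtt.Client, msg mqtt.Message) {
		fmt.Printf("received %s on %s\n", msg.Payload(), msg.Topic())
	})

	// Publish a small reading; payloads are plain bytes, which keeps the
	// protocol light enough for constrained, low-bandwidth devices.
	client.Publish("sensors/temperature", 1, false, "21.5")
	time.Sleep(time.Second) // give the broker a moment to deliver
}
```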
Finally, there are challenges around managing the containers themselves. Because containers are a kernel-level technology, they share their host operating system's network stack, and by default its IP address. As a result, container networking is typically handled through an overlay network or some other mechanism that gives each container an address of its own.