
Distributed Fabric: A New Architecture for Container-Based Applications

Jan 17th, 2017 3:00am by Ranga Rajagopalan

There’s a palpable sense of excitement in the application development world around container technology. Containers bring a new level of agility and speed to app development, giving developers the ability to break large monolithic apps into small, manageable microservices that can talk to one another, be tested and deployed more easily, and operate more efficiently as a full application. However, containers also demand a new architecture for the application services managing these microservices and apps, particularly with regard to service discovery: locating and consuming the services those microservices provide.

Container Technology and the Microservices Revolution

Ranga Rajagopalan
Ranga is Chief Technology Officer and co-founder at Avi Networks and has been an architect and developer of several high-performance distributed operating systems, as well as networking and storage data center products. Before his current role as CTO, he was the Senior Director at Cisco's Data Center business unit, responsible for platform software on the Nexus 7000 product line. Ranga joined Cisco through the acquisition of Andiamo Systems, where he was one of the lead architects for the SAN-OS operating system. Ranga began his career at SGI as an IRIX kernel engineer for the Origin series of ccNUMA servers. He has a Master of Science degree in electrical engineering from Stanford University and a Bachelor of Engineering in EEE from BITS, Pilani, India.

Applications were traditionally built as one massive piece of technology, housed on single appliances and managed by the IT department. When new apps or features needed to be built or added, the cumbersome process of managing, configuring and re-architecting the load balancing, security, visibility and communications between clients bogged down the process, drastically delaying advancements and updates in application development.

Today, however, developers can break these monolithic applications into microservices via container-based applications. Apps that were once static and immovable are now divided into multiple lightweight, manageable parts, or containers, which combine to form the building blocks of an application. Containers can fundamentally improve the speed and agility of developing and deploying apps, and companies find that incredibly useful and intriguing. But this also poses new problems and challenges in managing service discovery for these wide-ranging apps.

Developers want to focus on developing their apps without having to account for underlying connectivity and network services concerns. Service discovery presents a networking challenge that can complicate and slow down the adoption of container technology. It is essential to have an architectural approach to proxy and application services that gives organizations a flexible framework of network services for deploying apps built with container technology.

Building the Architecture for Service Discovery

Previously, managing a single application hosted at a single location made service discovery relatively easy: it was simple to locate services or deploy new features in a monolithic application. While the container approach simplifies development by breaking down the application into autonomous functional components, the explosion in the number of endpoints represented by tens or hundreds of containers, along with their ephemeral nature, makes the discovery of services more complicated.

A Distributed Fabric can efficiently and affordably handle the service discovery for containerized apps.

Let’s take a customer visiting a shopping website and placing an item in her cart: as she reaches checkout, both the checkout and billing microservices, for example, must be found and accessed in order for the customer to complete the transaction. When those microservices are distributed across servers, containers, or even locations, locating them can be quite difficult. Finding those dependencies is also critically important as updates — such as new payment services — are developed and deployed to the container cluster.

DNS combined with a load balancer is the most widely used service discovery mechanism today. When a new service is created or updated, a DNS A record maps the service name to a Virtual IP address. The actual Virtual IP address is hosted by a load balancer that accepts incoming connections and spreads out client load amongst application instances.
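The DNS-plus-load-balancer mechanism described above can be sketched in a few lines. This is an illustrative toy only; the service name, VIP, and instance addresses are invented, and a real load balancer would of course handle actual network connections rather than return strings.

```python
import itertools

# DNS A record: service name -> Virtual IP address (made-up values)
dns_a_records = {"checkout.example.com": "10.0.0.100"}

class LoadBalancer:
    """Hosts a VIP and spreads incoming connections round-robin
    across the application instances behind it."""
    def __init__(self, vip, instances):
        self.vip = vip
        self._cycle = itertools.cycle(instances)

    def route(self):
        # Pick the next backend instance for an incoming connection.
        return next(self._cycle)

lb = LoadBalancer("10.0.0.100", ["10.0.1.11:8080", "10.0.1.12:8080"])

vip = dns_a_records["checkout.example.com"]  # client resolves the name
backend = lb.route()                         # LB picks an instance
```

The key point is the two-step indirection: clients only ever learn the stable VIP from DNS, while the load balancer hides which (and how many) instances currently serve it.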

Sometimes the DNS A record may map to multiple Virtual IP addresses, with each Virtual IP address providing the service at a different site. Incoming client requests are directed to a site based on client location, service availability, load, or some combination of these. Traditionally, any change to DNS, such as adding a CNAME record or more sites, required a manual DNS configuration that typically took weeks to resolve.
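The multi-site selection just described amounts to a simple preference function. Here is a hedged sketch with invented region names and VIPs: prefer a healthy VIP in the client's region, otherwise fall back to any healthy site.

```python
# Each site hosts one VIP for the same service name (values invented).
sites = [
    {"region": "us-east", "vip": "10.1.0.100", "healthy": True},
    {"region": "eu-west", "vip": "10.2.0.100", "healthy": True},
]

def pick_vip(client_region, sites):
    """Prefer a healthy VIP in the client's region; otherwise fall
    back to the first healthy site; None if nothing is healthy."""
    healthy = [s for s in sites if s["healthy"]]
    for s in healthy:
        if s["region"] == client_region:
            return s["vip"]
    return healthy[0]["vip"] if healthy else None
```

Real GSLB products weigh load and latency as well, but the location/availability logic is essentially this.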

Similarly, any change to application instances required a manual load balancer configuration that also took weeks to resolve. With these traditional approaches, it is time-consuming, error-prone and difficult to deploy, discover and add microservices or microservice instances.


As containers become more mainstream, developers and companies should be focused on creating innovative apps, not designing the plumbing for those apps.

Architecture Options for Service Discovery

What does a flexible architecture for service discovery look like for microservices applications? Let us explore some load balancer deployment options.

First, companies can choose a centralized load balancer or proxy. In this case, a traditional load balancer appliance sits at the edge of the network, receiving DNS or HTTP requests from clients, reaching out to the various containers and services to locate them, and then responding to those client requests.

While this was utilized traditionally in the days of monolithic applications, and worked quite well in a one-to-one environment, in today’s microservices environment with tens or hundreds of microservices across multiple containers and servers, tromboning traffic out and back in this way becomes extremely cumbersome and inefficient.

Second, companies can take a “sidecar” approach, where a load balancer is placed next to each container. This client-side load balancer takes a very granular approach to service discovery for efficiently locating services. However, in the case of microservices applications, multiple containers could be located across multiple servers (a single server can run up to 50 containers). Attempting to place a load balancer at each of these locations is incredibly cumbersome and can get very expensive, not to mention the challenge of managing and configuring each load balancer.

There is a new approach, however, that can efficiently and affordably handle the service discovery for these apps.

In this approach, called a distributed fabric, a single proxy runs on each node of a microservices cluster. Each proxy serves as a gateway to each interaction that occurs, both between containers and between servers. When a microservice or an external client attempts to access a target microservice, the proxy resolves the DNS lookup request that maps the target service name to its Virtual IP Address.

Subsequently, when the microservice or external client connects to the Virtual IP Address to access the target service, the proxy accepts the connection and spreads load across the instances of the target microservice. With one proxy per server, all of the transactions that must be performed in the containers of that single server go through that proxy in order to reach a service within that server or out on another server.
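The two steps above — name resolution to a VIP, then load spreading at connection time — can be sketched as a single per-node proxy object. All service names, VIPs, and node endpoints below are invented for illustration; a real fabric proxy terminates connections rather than returning endpoint strings.

```python
import itertools

class NodeProxy:
    """One proxy per server: answers service-name lookups and spreads
    connections across a target service's instances, local or remote."""
    def __init__(self, service_table):
        # service name -> (VIP, [instance endpoints across the cluster])
        self._table = service_table
        self._cycles = {name: itertools.cycle(instances)
                        for name, (_, instances) in service_table.items()}

    def resolve(self, service_name):
        # DNS step: map the target service name to its VIP.
        return self._table[service_name][0]

    def connect(self, service_name):
        # VIP step: accept the connection and pick an instance,
        # whether it runs on this server or on another one.
        return next(self._cycles[service_name])

proxy = NodeProxy({
    "billing": ("10.0.0.200", ["nodeA:7001", "nodeB:7001"]),
})
```

Because every container on a server shares this one proxy, adding or moving a billing instance only requires updating the proxy's table, not reconfiguring each client.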

This distributed fabric is managed by a central controller, which is integrated with the container orchestration system, such as Mesos, Docker Universal Control Plane, or Kubernetes. The figure below shows a high-level diagram describing this architecture in a Kubernetes environment across two data centers:


A flexible architecture for service discovery in a Kubernetes environment.

The controller orchestrates both the distributed north-south and east-west load balancers. It queries the Kubernetes master for service port information and creates the “SRV” record in DNS for the service port. It also allocates the virtual IP for the application, creates the A record in DNS, and pushes policies to the distributed load balancers.
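The controller's job described above is essentially a reconciliation loop: read service and port information from the orchestrator, publish SRV and A records, and push a VIP policy to the node proxies. The sketch below uses a plain dict in place of a real Kubernetes API query and an invented VIP allocator; it is not a real client library.

```python
def reconcile(orchestrator_services, allocate_vip):
    """Turn orchestrator state ({service name: port}) into DNS records
    and per-service policies to push to the distributed load balancers."""
    dns, policies = {}, {}
    for name, port in orchestrator_services.items():
        vip = allocate_vip(name)
        dns[f"_{name}._tcp"] = {"SRV": port}         # service-port record
        dns[name] = {"A": vip}                        # name -> VIP record
        policies[name] = {"vip": vip, "port": port}   # pushed to proxies
    return dns, policies

# Hypothetical cluster state and VIP pool:
vips = iter(["10.0.0.50", "10.0.0.51"])
dns, policies = reconcile({"checkout": 8080, "billing": 9090},
                          lambda name: next(vips))
```

Re-running this loop whenever the orchestrator reports a change is what makes discovery automatic: new services get records and policies without any manual DNS or load balancer configuration.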

The controller also ensures that new services added to the container cluster are automatically discovered and that new service proxy instances are placed on newly commissioned nodes in the cluster. The required DNS entries are automatically created or modified depending on the state of the cluster. The distributed fabric can also discover existing service dependencies, including how services must interact as new services are added.

Containers now empower companies and developers to quickly develop apps, test and deploy them, and update them quickly and efficiently. No longer static, the container-based approach presents new and interesting problems for service discovery that cannot be affordably or efficiently solved with the traditional application services architecture. Instead, network architects have found that a lightweight disaggregated data layer of proxies located alongside the container cluster can be useful and highly effective in managing the delivery of network services.

And with that new architecture in place, companies can get down to the business of creating truly innovative apps with container-based environments.

Docker and Mesosphere are sponsors of The New Stack.

Feature image via Pixabay.

TNS owner Insight Partners is an investor in: The New Stack, Docker.