Kubernetes Applications for Multicloud and Hybrid Cloud Environments

To stay ahead of the competition, organizations are constantly looking for ways to drive innovation with speed and agility, while maximizing operational and economic efficiency at the same time. To that end, they have been migrating their applications to multicloud and hybrid cloud environments for quite some time.
Initially, these applications were moved to the cloud using a “lift-and-shift” approach, retaining their original monolithic architecture. However, such monolithic applications are unable to fully exploit the benefits offered by cloud, such as elasticity and distributed computing, and are also difficult to maintain and scale.
Consequently, as the next evolutionary step, organizations have started to rearchitect their existing monolithic applications or develop new ones as containerized applications.
Deploying and managing containerized applications is, however, a complex task, and this is where Kubernetes comes in. Kubernetes (also known as K8s), the container orchestration tool originally developed by Google, has fast become the platform of choice for deploying containerized applications in public and private clouds.
Using K8s, organizations have been able to achieve success in the initial deployment and management of these containerized applications in public and private clouds. They have, however, struggled with the subsequent steps, such as making the Kubernetes applications externally accessible to end users in a simple and automated manner, while still retaining control to ensure secure and reliable access to such applications.
The main reason for this is that the legacy load balancers used to front-end these applications and make them accessible to end users were designed with monolithic applications in mind, and hence cannot keep pace with the agile manner in which Kubernetes applications are deployed.
These load balancers were designed for a deployment process in which network resources for the applications are provisioned manually by network and security teams, a process that can take days, if not weeks, and are then configured manually on the load balancer. Such a process is clearly ill-suited to the pace at which Kubernetes applications are deployed, and so becomes a bottleneck in the overall deployment process.
Further compounding this problem is the fact that when deploying applications in multicloud and hybrid cloud environments, each public cloud provider has its own custom load balancer and management system.
For example, Amazon Web Services has its own Elastic Load Balancing solution, which is different from Microsoft’s Azure Load Balancer. This makes the task of automating application deployment even more complex and time-consuming. It also makes the task of applying a consistent set of policies across the different cloud environments error-prone as each load balancer has its own separate configuration and operation.
So, What’s the Solution?
To keep pace with the deployment of Kubernetes applications, one needs an access solution that enables the load balancer to dynamically manage such applications as they are deployed and scaled.
One way to achieve this is by deploying an ingress controller or connector agent that connects an external load balancer to the Kubernetes applications. Such a connector can monitor the lifecycle of these applications and automatically update the load balancer with the information it needs to route traffic to them. This greatly simplifies and automates the configuration of the external load balancer as new services are deployed within the K8s cluster, thereby eliminating the delays associated with the manual provisioning process.
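As a minimal sketch of this idea, the standard Kubernetes Ingress resource is one common way such routing intent is expressed; a controller watching these resources can then program the external load balancer automatically. The hostname, service name, and annotation key below are hypothetical, not from the source:

```yaml
# Hypothetical Ingress resource: a connector/controller watching this
# object can configure an external load balancer to route traffic to
# the "storefront" Service as soon as it is deployed or scaled.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: storefront-ingress
  annotations:
    # Annotation keys are controller-specific; this one is illustrative only.
    example.com/load-balancer-pool: "external-pool-1"
spec:
  rules:
  - host: shop.example.com        # hypothetical external hostname
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: storefront      # hypothetical Service inside the cluster
            port:
              number: 80
```

Because the Ingress object lives inside the cluster alongside the application, it is created and removed by the same automated pipeline that deploys the application, which is what removes the manual provisioning step from the critical path.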
Besides supporting dynamic and automatic configuration of the external load balancer, the solution should ideally also have the following attributes:
- Cloud-agnostic: The above process works when deployed in a single cloud, but to make it work in multicloud and hybrid cloud environments, the solution should be available in different form factors, such as physical, virtual and container, so it can be deployed in both public and private clouds. A solution that works consistently across the different cloud environments also makes it possible to apply a consistent set of access policies for the application, irrespective of the cloud in which it is running. This leads to a more secure deployment and avoids potential errors in porting configuration from one cloud deployment to another.
- Support for automation tools: The solution should support automation tools, such as Terraform, Ansible and Helm, so that the whole application deployment and Day-N operation process can be automated.
- Flexible licensing model: The solution should offer a software subscription model, enabling organizations to optimize cost by allocating and distributing capacity across multiple sites to adapt to constantly evolving business and application needs.
- Centralized visibility and analytics: Finally, the solution should provide centralized visibility and analytics. This would enable proactive troubleshooting and fast root-cause analysis, thereby leading to a higher application uptime to ensure high end-user satisfaction.
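To illustrate the automation point above, per-environment settings can be captured in a Helm values file so that the same chart deploys the connector consistently in every cloud. The chart name, keys, and endpoint below are hypothetical, shown only to sketch the approach:

```yaml
# Hypothetical Helm values file (values-prod.yaml) for a connector /
# ingress-controller chart. The same chart can then be installed in
# any cloud with, e.g.:
#   helm install connector ./connector-chart -f values-prod.yaml
replicaCount: 2

loadBalancer:
  # Address of the external load balancer the connector registers with
  # (hypothetical endpoint).
  controllerEndpoint: "https://lb.example.com:9443"
  # A named policy profile, applied consistently irrespective of the
  # cloud in which the application runs.
  policyProfile: "standard-secure"

service:
  type: ClusterIP
  port: 80
```

Keeping such values files under version control means the whole deployment, including Day-N changes, becomes a repeatable, auditable operation rather than a manual one.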
Migrating applications to multicloud and hybrid cloud environments as containerized applications has numerous benefits, including greater agility and operational efficiencies. However, legacy load balancers were built for managing monolithic applications and can be a hindrance in deploying containerized applications, inhibiting access to the full benefits of cloud deployment.
In addition, the use of cloud-specific load balancers can add complexity to managing a hybrid cloud infrastructure. By deploying an ingress controller or connector agent that connects to an external load balancer, IT teams can simplify and automate the process of configuring that load balancer as new services are deployed within a K8s cluster.