
Critical Factors for Managing Applications and Kubernetes at the Edge

14 Jun 2021 10:14am, by Mohan Atreya
Mohan Atreya is the vice president of product and solutions at Rafay Systems. He is a seasoned product professional with over 20 years of experience. Prior to Rafay, he held senior product management positions at Okta, Neustar, McAfee and RSA. He earned his Master's in Engineering from the National University of Singapore. In his spare time, Mohan enjoys spending time with his family and tinkering with his telescope.

Business is being driven to the network’s edge because of several advantages it provides. Autonomous vehicles, remote asset monitoring, in-hospital patient monitoring, and real-time defect detection in factories are just a few examples of business applications that leverage the responsive performance, scalability, and reduced latency found at the network edge.

As 5G wireless becomes ubiquitous, and as more connected devices on the Internet of Things (IoT) begin using wireless communications, data volumes and data rates are both increasing. While these two factors are somewhat independent, together they increase the demand for applications at the edge by orders of magnitude.

This demand for speed means that the old model, in which a central database slowly reacts to application queries from a variety of sources, is being replaced by applications and data located at the network edge, where they can respond quickly to a vast flow of inputs. The containerized, microservice applications that support this flow must sit where they can handle it, which means that they, too, must be at the edge.

Kubernetes is the industry's tool of choice for container orchestration; however, moving containers to the edge introduces additional Kubernetes management complications. Deployment, security, and fleet management processes all become exponentially more complex when the number of clusters to be managed is measured in the hundreds.

With these limitations and challenges in mind, several Kubernetes management considerations are important when operating at the edge to ensure your applications are cutting-edge, not bleeding-edge:

  • Automate Deployment: Manually deploying applications to tens or hundreds of locations is a non-starter. Automation is the only effective means of deploying potentially thousands of applications, as well as the distributed Kubernetes clusters beneath them. GitOps is rapidly becoming the standard for this, and it works well at the edge too.
  • Leverage Zero-Trust Security: Edge clusters are essentially mini-clouds. Plan on using zero-trust security concepts throughout your operation to access them using standard authentication and authorization processes. Done properly, you can leverage your company’s central directory for authentication. In addition, all data flows must be encrypted to ensure secure communications among the parts of the complex Kubernetes infrastructure.
  • Enforce Cluster Blueprints: Given the number of clusters that need to be managed, it’s important to create and manage standardized cluster configurations in a centralized manner. Otherwise, cluster configurations change over time and quickly become too onerous to support, or they no longer comply with internal policies and industry regulations.
  • Operate “Fleets” Instead of Individual Clusters: Given the number of clusters at the edge, it is impractical to perform operations one cluster at a time. Organizations should be able to assign labels to clusters and be allowed to perform bulk operations across the fleet. For example, (a) upgrade all clusters with the label “Europe” or (b) deploy the workload to all clusters with the label “California.”
  • Leverage Multitenancy: It may be impractical and operationally cumbersome to manage a fleet of hundreds of clusters in a flat, monolithic hierarchy. Consider organizing the clusters into logically isolated operating domains instead: for example, all clusters in the Western U.S. in one dedicated project and all clusters in the Eastern U.S. in a logically separate, isolated project.
  • Separation of Responsibilities: To operate a large fleet of clusters running hundreds of distinct workloads, it is critical to implement a clear separation of duties using fine-grained, role-based access control, so developers can have unfettered, but audited, access to development environments while operations teams manage the entire fleet.
  • Consume Kubernetes Management as a Service: Managing clusters is hard; managing the solutions that help you manage those clusters isn’t easy either. Consuming Kubernetes Management offerings as a service will reduce the burden your teams will otherwise have to carry just to get the supporting infrastructure up and running.
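The fleet-level operations described above can be sketched in a few lines of Python. This is a hypothetical illustration, not a real fleet-management API: the `Cluster` and `Fleet` types and their methods are invented for the example. It shows the core idea of label-based bulk operations, where a selector picks out every matching cluster and one call acts on all of them.

```python
from dataclasses import dataclass, field

@dataclass
class Cluster:
    # Hypothetical model of one edge cluster with free-form labels.
    name: str
    labels: dict = field(default_factory=dict)
    version: str = "1.20"

class Fleet:
    def __init__(self, clusters):
        self.clusters = list(clusters)

    def select(self, **selector):
        # Return clusters whose labels match every key/value in the selector.
        return [c for c in self.clusters
                if all(c.labels.get(k) == v for k, v in selector.items())]

    def upgrade(self, version, **selector):
        # Bulk operation: upgrade all matching clusters in one call,
        # instead of touching each cluster individually.
        targets = self.select(**selector)
        for c in targets:
            c.version = version
        return [c.name for c in targets]

fleet = Fleet([
    Cluster("store-berlin", {"region": "Europe"}),
    Cluster("store-paris",  {"region": "Europe"}),
    Cluster("store-fresno", {"region": "California"}),
])

# Upgrade all clusters labeled "Europe"; the California cluster is untouched.
upgraded = fleet.upgrade("1.21", region="Europe")
print(upgraded)  # ['store-berlin', 'store-paris']
```

The same selector mechanism would drive the multitenancy idea as well: a "project" is simply a disjoint set of label values, so operations scoped to one project can never touch another project's clusters.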

The edge is often thought of as being a “serverless” computing environment, but that belief doesn’t reflect reality. What makes the edge effective is that it moves the servers and applications closer to the place where the need exists. Placing the servers, along with their applications and data, at the edge as well is a matter of pure efficiency and results in better performance and better user experiences, and we’re particularly seeing this trend in retail and manufacturing verticals.

With that efficiency comes a much higher level of management overhead, given the number of containers and clusters that need to be managed. Taking the steps above can make that complex environment efficient and secure, and deliver on the promise of edge applications.

