LaunchDarkly is sponsoring our coverage of KubeCon+CloudNativeCon 2021.
When Raj Nair founded load balancing company Arrowpoint Communications 20 years ago, load balancing was done based on some simple formulas. He sold Arrowpoint to Cisco in 2000 and went on to found a video delivery company that he sold to Ericsson.
Two years ago, he returned to the load balancing challenge, this time as the founder of Avesha.
To his surprise, the industry hadn’t changed much over the past twenty years. This week, Avesha is demonstrating its new AI-based load balancing technology at KubeCon+CloudNativeCon 2021.
Load balancing still mostly happens at a local level, within a particular cloud or cluster, and uses the same formulas that he helped popularize more than two decades ago.
For example, a load balancer can use a “round-robin” formula, where requests go to each server in turn, and then back to the first one. A “weighted round-robin” is similar except that some servers get more requests than others because they have more available capacity. A “sticky cookie load balancer” is one where all the requests from a particular session are sent to the same server so that, say, customers don’t get moved around in the middle of shopping sessions and lose their shopping carts.
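The three classic formulas can be sketched in a few lines of Python. This is an illustration only, not Avesha's code; the server names and the weight values are hypothetical.

```python
import hashlib
from itertools import cycle

# Hypothetical server pool for illustration only.
servers = ["app-1", "app-2", "app-3"]

# Round-robin: each request goes to the next server in turn,
# then back to the first one.
rr = cycle(servers)
rr_order = [next(rr) for _ in range(6)]

# Weighted round-robin: servers with more capacity appear more
# often in the rotation, so they receive proportionally more requests.
weights = {"app-1": 3, "app-2": 2, "app-3": 1}
wrr = cycle([s for s, w in weights.items() for _ in range(w)])
wrr_order = [next(wrr) for _ in range(6)]

# Sticky sessions: hash the session cookie so every request in a
# session lands on the same server (and the shopping cart survives).
def sticky(session_id: str) -> str:
    digest = int(hashlib.sha1(session_id.encode()).hexdigest(), 16)
    return servers[digest % len(servers)]
```

Note that in all three cases the policy is fixed up front: nothing in the rotation or the weights reacts to how the servers are actually performing.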
“There are a few other variations, but they’re all based on fixed settings,” said Nair. “The state of the art hasn’t moved much in this area.”
A very simple change that would make load balancers immediately more effective is to automatically adjust the weights based on server performance.
“It’s actually a very low-hanging fruit,” he said. “I don’t know why they aren’t all doing this.”
That’s what Avesha started looking at. Then, in addition to server performance, the company also added in other factors, like travel path times.
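The "low-hanging fruit" Nair describes can be sketched simply: derive the weights from measured performance instead of fixing them by hand. This is a minimal illustration under assumed latency numbers, not Avesha's actual algorithm.

```python
# Hypothetical per-server latency measurements in milliseconds;
# in practice these would be refreshed continuously from monitoring.
observed_latency = {"app-1": 20.0, "app-2": 40.0, "app-3": 80.0}

def latency_to_weights(latency: dict[str, float]) -> dict[str, float]:
    """Send faster servers proportionally more traffic."""
    inverse = {s: 1.0 / ms for s, ms in latency.items()}
    total = sum(inverse.values())
    return {s: v / total for s, v in inverse.items()}

weights = latency_to_weights(observed_latency)
# app-1 gets the largest share because it is currently the fastest;
# re-running this as latencies change keeps the weights up to date.
```

Avesha's factoring in of path times would amount to folding network travel time into the measurement each server is scored on.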
The resulting service, the Smart Application Cloud Framework, was launched Tuesday.
Avesha is deployed as an agent that sits in its own container inside a cluster or private cloud. It talks to its fellow agents and to Avesha’s back-end systems over secure virtual private networks.
The back-end system collects information about traffic paths and server performance, then uses machine learning to determine optimal routing strategies.

The specific AI technique used is reinforcement learning. The system makes a recommendation and looks at how the recommendation works in practice, then adjusts its model accordingly.
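Avesha has not published the details of its model, but the recommend-observe-adjust loop Nair describes is the shape of a multi-armed bandit. As an illustrative stand-in, here is an epsilon-greedy sketch with hypothetical servers and simulated feedback, where the reward is whether a request met its performance target:

```python
import random

random.seed(0)  # deterministic for the example

servers = ["app-1", "app-2"]
# Running estimate of reward (fraction of requests meeting their
# latency target) for each routing choice, plus how often it was tried.
estimate = {s: 0.0 for s in servers}
count = {s: 0 for s in servers}

def choose(epsilon: float = 0.1) -> str:
    # Mostly exploit the best-known server, occasionally explore.
    if random.random() < epsilon:
        return random.choice(servers)
    return max(servers, key=lambda s: estimate[s])

def update(server: str, reward: float) -> None:
    # Incremental mean: pull the estimate toward the observed outcome.
    count[server] += 1
    estimate[server] += (reward - estimate[server]) / count[server]

# Simulated feedback: app-2 meets its targets more often, and the
# loop discovers this and shifts traffic toward it.
true_rate = {"app-1": 0.6, "app-2": 0.9}
for _ in range(2000):
    s = choose()
    update(s, 1.0 if random.random() < true_rate[s] else 0.0)
```

The continuous tuning Nair mentions falls out of the loop itself: as congestion shifts the observed rewards, the estimates and therefore the routing shift with them.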
“It is continuously tuning your network,” said Nair. “The network is constantly undergoing lots of changes, with traffic and congestion.”
It also monitors the performance of individual servers; if some are having trouble handling requests, it automatically routes traffic elsewhere.
And it works across all types of deployments — multiple public clouds, private clouds, and edge computing installations.
“The methods currently in use in Kubernetes are static,” he said. “You set fixed thresholds with a lower bound and an upper bound. But nobody even knows how to set those thresholds.”
People wind up guessing, he said, setting some basic targets and then leaving them in place.
“You end up wasting resources,” he said.
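For context, the static bounds Nair is referring to look like this in a typical Kubernetes HorizontalPodAutoscaler manifest. The workload name, replica range, and 70% CPU target below are illustrative guesses of the kind an operator sets once and rarely revisits:

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: demo-app            # hypothetical workload
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: demo-app
  minReplicas: 2            # fixed lower bound
  maxReplicas: 10           # fixed upper bound
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70   # static threshold, chosen by hand
```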
The Avesha technology is more like a self-driving car, he said. There are still parameters and guard rails, but, within those constraints, the system continually optimizes for the desired outcome, whether it be the lowest latency, or maximum cost savings, or even compliance-related data movement restrictions.
“You want your data traffic to be managed in accordance with your policies,” he said. “For example, there might be regulations about where your data is and isn’t allowed to go.”
In internal studies, Avesha has seen improvements of 20% to 30% in the number of requests handled within their performance targets, compared to standard weighted round-robin approaches.
When some clusters have hundreds of thousands of nodes, 30% is a big number, he said.
Companies will see improvements in customer experience, lower bandwidth costs, and less need for manual intervention when things go wrong in the middle of the night.
And it’s not just about the business bottom line, he added. “If you translate that into wasted energy, wasted natural resources, there are lots of benefits,” he said.
For some applications, like video streaming, better performance would translate to competitive advantage, he said. “It’s like the difference between getting high definition and standard definition video.”
There’s no commercial product currently on the market that offers AI-powered load balancing, he said, though some companies probably have their own proprietary technology to do something similar.
“Netflix is an example of a company that’s a leader in the cloud native world,” he said. “I would say there’s a fairly good chance that they’ve already incorporated AI into their load balancing.”
Other large cloud native technology companies with AI expertise may have also built their own platforms, he said.
“Nobody has said anything publicly,” he said. “But it’s such an obvious thing to do that I am willing to believe that they have something, but are just keeping it to themselves.”
There are also some narrow use cases, like that of content delivery networks. CDNs typically deliver content, like web pages, to users. They work by distributing copies of the content across the internet and optimize for the shortest possible distance between the end user and the source of the content.
Avesha’s approach is more general, supporting connections between individual microservices.
“It’s a little bigger than what a CDN is trying to do,” he said. “We are literally at the cutting edge with this.”
AI-Powered Load Balancing as a Feature
At some point, cloud vendors and third-party service providers will begin offering intelligent load balancing to their enterprise customers, either by building their own technology or by buying or partnering with Avesha or any competitors who might appear on the scene.
“One way or the other, you’re going to be able to take advantage of it,” said Nair.
Avesha itself is currently working with partners, he said, including some major industry players, and he is expecting to be making announcements this summer.
But enterprises can also work directly with Avesha and get a jump on the competition, he added. The technology would be of most interest to enterprises that deploy workloads across multiple clouds.
Avesha is currently working with several companies on proof of concept projects.
These are companies that typically are at $50 million in revenues or above in verticals such as media, manufacturing, health care and telecom.
“We have also engaged with some partners who are big cloud players,” he said.
More information, as well as return on investment analyses, will be released in the next few months.
Verizon and AWS Serve Doctors at the Edge
One case study that has been made public was a joint project by Verizon and AWS to help doctors detect and identify polyps in real time.
“The Avesha platform is able to connect the procedure room with the inference models on high-performance GPUs at the cloud edge and the cloud backend that continuously updates the models, resulting in low latency performance with very high accuracy,” the companies said in their report about the project.
KubeCon+CloudNativeCon is a sponsor of The New Stack.