The emerging list of use cases best suited to serverless — cron jobs, data and media processing, and ETL — may be about to change significantly if new player Binaris can build market share with its still-in-alpha platform.
Binaris is an independent functions-as-a-service platform, focused on providing a predictable, low-latency serverless option.
CEO and co-founder Avner Braverman argues that current serverless options offer no predictability in latency. He says latency should be measured at the 99th percentile — that is, an invocation response is guaranteed to return within a specific timeframe, such as 3 milliseconds, 99 times out of 100. Averages mean little because in complex systems a significant share of responses takes much longer, and that puts real-time use cases for serverless off the table.
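The gap between an average and a 99th-percentile figure is easy to see in code. A minimal sketch, using hypothetical latency numbers (not real benchmark data) and the nearest-rank percentile method:

```python
# Compute tail latency from a list of invocation times.
# The sample data below is illustrative, not a real benchmark.

def percentile(samples, pct):
    """Return the value below which `pct` percent of samples fall (nearest rank)."""
    ordered = sorted(samples)
    rank = max(0, round(pct / 100 * len(ordered)) - 1)
    return ordered[rank]

# 100 hypothetical invocation latencies in milliseconds:
# most are fast, but a few tail requests are much slower.
latencies = [2.0] * 95 + [3.0, 10.0, 40.0, 150.0, 900.0]

mean = sum(latencies) / len(latencies)
p99 = percentile(latencies, 99)

print(f"mean: {mean:.2f} ms")  # 12.93 ms -- skewed by the slow tail
print(f"p99:  {p99:.1f} ms")   # 150.0 ms -- what the slowest 1% boundary looks like
```

Here the mean of roughly 13 ms looks acceptable, yet one caller in a hundred waits 150 ms or more — exactly the kind of tail that rules out real-time workloads.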
“If you want to make a game, serverless is a great fit, if it can respond in real time,” Braverman said, noting that much of the data and compute needed for gaming exists only while the user is active. Other sectors, such as ad bidding, and emerging areas like augmented reality would also suit serverless, if (and Braverman stressed it is currently a big if) serverless responded more reliably.
The industrial internet of things is another key use case, said Braverman, pointing to autonomous cars as an example. Beyond managing each individual car, there is a local urban scale where functionality like managing traffic congestion or issuing hazard alerts requires decision-making logic running in individual cars or devices, in some nearby edge fabric, and in cloud data environments. All of this could be managed by serverless functions if latency were addressed, and the model could be repeated in other IoT areas such as rail infrastructure and drone fleets.
“We want to offer a guarantee of speed. Our first 99 percent guarantee will be around three milliseconds latency under load. Today other serverless platforms offer no latency guarantees but measure around 50ms for single function invocations and seconds under load, which will make us over 100x faster. That is what we are targeting for our first generation,” said Braverman.
Braverman said that if you look at current response performance on the four main serverless platforms, it varies widely.
To demonstrate, Binaris has built the open source Faasmark project as a benchmark tool. It was built to measure function invocation latency on the different FaaS/serverless platforms with tests based on invocation method, programming language, memory size and load conditions.
Using Faasmark, Braverman demonstrated the results on a logarithmic graph: warm-start idle (that is, one invocation at a time) 99th-percentile latencies across the serverless platforms, measured in intervals over 24 hours.
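The warm-start idle methodology described above can be sketched in a few lines. This is a generic illustration, not Faasmark's actual implementation; the invocation callable and interval structure are assumptions:

```python
# Sketch of a warm-start "idle" latency test: invoke a deployed function
# serially (one call at a time), record wall-clock latency, and report
# the 99th-percentile figure for the interval. `invoke_fn` is a stand-in
# for however the target platform is actually called (HTTP, SDK, etc.).
import time


def timed_invoke(invoke_fn):
    """Call the function once and return latency in milliseconds."""
    start = time.perf_counter()
    invoke_fn()
    return (time.perf_counter() - start) * 1000


def measure_interval_p99(latency_fn, invocations=100):
    """Serial invocations for one interval; nearest-rank p99 of the samples."""
    samples = sorted(latency_fn() for _ in range(invocations))
    return samples[max(0, round(0.99 * len(samples)) - 1)]

# Repeated once per interval over 24 hours, this yields the kind of
# per-interval p99 series Braverman graphed:
# for hour in range(24):
#     print(hour, measure_interval_p99(lambda: timed_invoke(call_my_function)))
```

Running the same loop under load (many concurrent invocations) rather than idle is what separates the ~50 ms single-invocation figures from the multi-second tail latencies Braverman cites.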
“Our analysis shows you get massive changes for different time slots; overall, the platforms are not as predictable as you would want,” said Braverman. “And the deeper the call chain, the more unpredictability in response time you have.”
Braverman, his co-founder and team come from a background working on Israeli defense projects, where they needed to apply high-performance computing techniques to manage major engineering projects. He says these skills have been crucial in building Binaris.
“This is a massive engineering effort. There is a lot of heavy lifting in getting these latencies down. We are working on deep technologies to get us there,” said Braverman. Binaris used the recent Serverlessconf in New York to come out of stealth mode and get the attention of those who might be thinking they can’t use serverless for their use cases. “We have been to every serverless conference and we have seen the conversation is moving towards seeing cold and warm latency as key issues. Now, everyone is saying, yes, we know latency is a problem. The conversation is definitely steering that way, so we stuck our head out of stealth to drive the conversation and let people know what we are doing.”