For large enterprises, serverless often seems like a tech opportunity that remains just out of reach.
For many, a dependence on legacy infrastructure, an internal culture that favors management over autonomy, and risk aversion toward the cloud can seem to take serverless off the table as an option.
Still, large enterprises are trying to figure out how to adopt serverless patterns within their existing context, driven largely by the growing need to analyze big data streams in real time.
Asaf Somekh, CEO of Iguazio, makers of Nuclio, sees their key value proposition as enabling the enterprise to take up serverless for exactly that use case. “We help enterprise to build the applications that translate real-time data. We help them combine internal and external services and put AI on top, and then move more towards a platform approach, something that has previously just been the luxury of the tech giants,” said Somekh.
Iguazio’s Nuclio offering is an open source serverless platform that works with Docker, Google Container Engine, Azure Container Service, and Kubernetes.
Somekh says when Iguazio built Nuclio, they set out to answer a key question for enterprises wanting to migrate to serverless: “How do you plug in the logic in the best ways in serverless?” He said the answer is basically to let the platform do the work. “You set your triggers, you let devs focus on the business logic and the rest should be done by the platform: the tight coupling with the data services, and the autoscaling. When you mix serverless with data services, the time to market with applications is so much shorter, it is a matter of weeks rather than months. Sometimes days rather than weeks,” he said.
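Somekh’s “set your triggers, focus on the business logic” division of labor maps directly to how a serverless function is written: the handler contains only application code, while triggers, autoscaling, and data bindings live in platform configuration. A minimal sketch in the style of a Nuclio Python handler, with illustrative stand-in classes for the event and context objects the platform would normally supply:

```python
import json

# Illustrative stand-ins for the objects the platform passes in;
# a real Nuclio runtime supplies richer versions of both.
class Event:
    def __init__(self, body):
        self.body = body

class Context:
    pass

def handler(context, event):
    """Business logic only: parse one incoming record and enrich it.

    Triggers (HTTP, message stream, etc.), autoscaling, and
    data-service bindings are declared in platform configuration,
    not in this code.
    """
    record = json.loads(event.body)
    record["processed"] = True
    return json.dumps(record)
```

Locally, the same function can be exercised without any platform at all, e.g. `handler(Context(), Event('{"ride_id": 7}'))`, which is part of what makes the model attractive for testing.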
“It wasn’t that cool three years ago,” Somekh joked. “So we decided to make Nuclio open source so we could create a community around it. We are now getting great contributions from large players, for example, Microsoft are making Azure triggers.”
Somekh gives two use case examples of those using Nuclio while transitioning to serverless: ride-hailing companies and telcos.
Ride-hailing companies collect data from passengers, potential passengers, drivers, and the vehicles themselves. As they scale, they need infrastructure that allows them to collect that data, move it from the collection point to a data center, transform it, and analyze it, all in real time, so that they can allocate drivers and collect passenger payments. It also allows them to redistribute drivers to areas where demand outstrips supply, effectively ‘load balancing’ their physical network of cars and drivers across a city. “Monetizing real-time data and reacting to events can be a lengthy process without serverless and by the time you can do something with the data, the value is too low,” said Somekh.
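The driver “load balancing” described above reduces, at its core, to comparing per-zone demand against per-zone supply as events stream in. A toy sketch of that comparison (the zone names and the shortfall ordering are illustrative, not any ride-hailing company’s actual dispatch logic):

```python
def zones_needing_drivers(demand, supply):
    """Return the zones where rider demand exceeds available drivers,
    sorted by the size of the shortfall (largest gap first).

    `demand` and `supply` map zone name -> current counts, as might be
    maintained by a streaming aggregation over ride-request events.
    """
    shortfall = {z: demand[z] - supply.get(z, 0) for z in demand}
    return sorted((z for z, gap in shortfall.items() if gap > 0),
                  key=lambda z: shortfall[z], reverse=True)
```

The point of the serverless framing is that this function runs per event batch, with the platform handling how often it fires and how it scales.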
Telcos, for their part, need real-time monitoring of their network infrastructure. “With all those networking monitoring functions, dealing with a DDoS attack, for example, can become very reactive when a network gets overloaded,” Somekh explained. Because of this, he said, many telcos want to introduce AI into their platforms so they can monitor their network infrastructure and apply predictive algorithms. But that requires them to ingest sampling data, such as NetFlow data collected from network devices, compare it against historical records of past network failures and attacks, and then predict failures likely to occur within the next hour or two. “That allows network operators to prevent the failures by changing their configurations: it is predictive maintenance on a network.”
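The predictive step Somekh outlines, comparing freshly sampled flow data against historical behavior, can be sketched as a simple statistical threshold check. This is an illustrative stand-in, far simpler than the models a telco would actually deploy:

```python
from statistics import mean, stdev

def flag_risk(history, current, sigma=3.0):
    """Flag a link as at-risk when its current flow volume deviates
    more than `sigma` standard deviations from its historical mean.

    `history` is a list of past per-interval byte (or flow) counts for
    one link; `current` is the latest sample. A real system would feed
    flagged links into a richer predictive model rather than alerting
    on a single threshold.
    """
    mu, sd = mean(history), stdev(history)
    return abs(current - mu) > sigma * sd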
In both instances, cloud services can be used to train the algorithms. Then a serverless workflow can be put in place close to the edge where the data is being collected, and that serverless workflow would apply the models for analysis (in the case of ride-hailing) or prediction (in the case of telcos) as the data is collected.
Once this infrastructure is in place, said Somekh, it is easier to then keep collecting data and update the predictive or analytical model every three to six hours. “It depends on the statistical variance, but there is not a lot of overload in uploading an updated model,” said Somekh.
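Somekh’s “it depends on the statistical variance” criterion for when to push an updated model can be sketched as a drift check: refresh only when the variance of newly collected data has moved materially from what the current model was trained on. The tolerance value here is an arbitrary placeholder, not a figure from Iguazio:

```python
from statistics import pvariance

def should_refresh(training_window, live_window, tolerance=0.5):
    """Return True when the variance of recently collected data has
    drifted by more than `tolerance` (relative change) from the
    variance observed at training time.

    An edge deployment might run this every few hours to decide
    whether to pull a freshly trained model from the cloud.
    """
    base = pvariance(training_window)
    live = pvariance(live_window)
    return abs(live - base) / base > tolerance
```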
To help enterprises get started, Somekh said the first stage of the serverless onboarding process involves mapping an enterprise’s current infrastructure.
“Typically we map with our enterprise clients all the current APIs they are using. Are they using Amazon data services like Kinesis for streaming, DynamoDB for key-value storage, etc? They can use the same APIs with Iguazio. They don’t need to move the data from repo to repo. Usually, it is the developer’s responsibility to move the data, when it has been captured, to Dynamo and then it has to be transformed. Then it would need to be moved to S3 storage. We train our customers that they don’t need to move the data. If it is on-prem, there is a different set of APIs that might be more common in on-prem deployments. But there are always connectors that come on top of our platform, and we help them connect using those.”
Mika Borner, a data analytics management consultant with the Swiss firm LC Systems, said that in his work with large enterprises in finance, automotive, pharmaceuticals, and industrial manufacturing, the trend is toward keeping data centers on-premises while moving, perhaps a little more tentatively than their American counterparts, toward hybrid cloud solutions. National regulations reinforce this caution: both Germany and Switzerland require that customers’ financial data be kept in data centers within the country. (That makes it especially difficult in Switzerland; in Germany, Microsoft and Amazon both offer cloud hosting products that guarantee data stays in the country.)
“Some companies are a bit conservative, they keep most of their most important data on-prem in order to maintain both a regulatory and risk assessment point of view,” said Borner.
“Our customers are looking at tech like AWS Lambda, but these serverless options are not available on-prem. But companies still want to use new technologies, so they are looking at tech like Nuclio and other serverless frameworks,” explained Borner.
Borner says that under some regulatory restrictions, enterprises cannot move large datasets to the cloud, so they need serverless patterns closer to the edge, where the data is produced.
Borner cites one common example from pharmaceuticals and industrial manufacturing: both are fitting cameras to their production lines so they can photograph pills or components as they are made, compare each photo against a reference example of acceptable quality, and divert any items that don’t make the quality cut.
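The comparison step in that pipeline, matching a photo of a freshly made pill or component against a known-good reference, can be sketched as a per-pixel tolerance check. Real inspection systems use far more sophisticated computer vision; this toy version only illustrates the pass/fail gate a serverless function would apply per captured frame:

```python
def passes_quality(reference, sample, pixel_tol=10, max_bad_frac=0.05):
    """Compare a sample image against a golden reference, both given
    as flat lists of grayscale pixel values of equal length.

    Reject (return False) when more than `max_bad_frac` of the pixels
    differ from the reference by more than `pixel_tol`. Both
    thresholds are illustrative placeholders.
    """
    bad = sum(1 for r, s in zip(reference, sample) if abs(r - s) > pixel_tol)
    return bad / len(reference) <= max_bad_frac
```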
Borner says when helping large enterprises with this type of migration to a serverless workflow, the first task is to understand the use case in detail.
“Our enterprise customers know their processes better than we do. So we try to understand what they are doing,” said Borner. “Then we move into requirements gathering. We agree on their top goal, for example, cost reduction. They look at their restrictions, for example, their current technology, how much data is being collected, the sensitivity of data. Serverless is not a hammer for all problems, there are other ways to do analytics, so we want to see what they need for each use case.”
Borner says a general rule of thumb, when a company talks about measuring the quality of the components (or pills or products) it produces, is to listen for whether the goal is framed in units made per day or units made in milliseconds. The first points to batch processing techniques; the second is about real-time data analytics, and is therefore better suited to a serverless model.
“As soon as we go to a real-time process we go to serverless,” said Borner. From there, LC Systems starts building a proof of concept (PoC) or pilot. The firm has often built trusted relationships with its enterprise customers, so while some costing is done at this stage, it is fairly rough: the enterprise expects the PoC itself to clarify what an end solution would cost in production.
“A PoC is about showing serverless’ technical feasibility, but we work closely with the customer so they understand the technology going forward,” said Borner. “We usually have to do a lot of white-boarding to explain serverless, we draw a lot of pictures, we give them some resources, we have some presentations,” he explained. With any serverless migration project at present, whether internally managed or run through consultants, the lead champion will need to take on an educative role to help team members understand serverless.
In fact, Borner said the biggest challenge in any migration project is when team members do not understand each other. This is especially critical in an industrial enterprise setting, where deadlines are often non-negotiable.