The founders of Santa Clara, Calif.-based Nimbella set out to address some of the pain points of serverless computing: getting started, developing stateful applications and moving from one cloud to another.
Of the company’s four co-founders, Anshu Agarwal and Eric Swildens originally worked together at Speedera, a content delivery network company acquired by Akamai. Rodric Rabbah and Perry Cheng were co-creators of the open source serverless platform OpenWhisk at IBM, now Apache OpenWhisk.
“We really want to demystify the cloud for all developers, of diverse backgrounds… Once a developer gets a taste of serverless, it is our thesis that it’s an immensely enjoyable and awesome experience because it lets them focus on their value creation and none of the infrastructure and server details that slow developers down,” said Agarwal, Nimbella’s CEO.
The company released its first product in early access in September and plans to make it generally available in January. It’s an integrated solution for building stateful, stateless, JAMstack, long-running, streaming or high-performance serverless applications that works across clouds and on-premises environments. It aims to provide an easy way to migrate from legacy to modern API-driven, serverless cloud software.
“One of the challenges with adopting serverless computing from all the existing cloud providers is it’s just too hard. The power of the model is very evident to somebody who starts using it. And it’s addicting. Once you start developing that way, I don’t think you really want to go back to the old way of developing,” said Rabbah, Nimbella’s CTO.
“We want to take out a lot of that friction and the processes that slow you down, and make it easy to essentially deploy a function with one click, without even having an account on our cloud,” Rabbah said.
OpenWhisk and More
He said OpenWhisk is just one piece of the equation.
“Our company is not OpenWhisk as a Service. But we do use OpenWhisk as part of the technology stack that we’ve put together to allow you to deploy and develop functions in a serverless way. [We] extend the serverless experience so that you’re not just deploying functions, but you’re building applications which are compositions or orchestrations of functions and APIs, with state management and data flow, so that you can do more compelling, more interesting applications that are front end, back end, that are reactive, eventful, etc.,” he said.
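To make the building block concrete, here is a minimal sketch of what an Apache OpenWhisk-style action looks like in Python. In the OpenWhisk Python runtime, the platform invokes a `main` function with a dictionary of parameters and expects a JSON-serializable dictionary back; the greeting logic here is purely illustrative.

```python
# A minimal Apache OpenWhisk-style action (illustrative).
# The platform calls main() with a dict of parameters and
# expects a JSON-serializable dict as the result.

def main(args):
    name = args.get("name", "world")
    return {"greeting": f"Hello, {name}!"}

# Locally, the action is just an ordinary function call:
print(main({"name": "serverless"}))  # {'greeting': 'Hello, serverless!'}
```

With the OpenWhisk CLI, a file like this can be deployed with `wsk action create`; platforms built on OpenWhisk layer their own tooling on top of that same action model.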
The Nimbella cloud is deployed on top of Kubernetes as a way to normalize the variances in the different clouds and on-premises environments.
There are several components:
- A function orchestrator, built largely on OpenWhisk, that executes functions and allocates the resources to run them.
- A container orchestrator that lets users run containerized applications alongside individual functions with the same experience, while the platform takes care of the underlying technology, implementation details and infrastructure.
- A workflow orchestration engine, built partly on augmented open source technology, that enables long-running workflows on top of the platform’s model.
“That involves essentially having applications be viewed as more classic in the sense that they have a program counter and they have state, they have a stack. And part of managing that [is] the platform’s responsibility, that’s done by the workflow orchestrator,” Rabbah said.
“And then there’s the stateful aspects. So we want applications to essentially have the ability to share state between functions. … This is something that’s actually quite hard to do at user level on existing functions service platforms,” Rabbah said.
He explained it this way:
“Suppose you have a function that’s implementing a web application or e-tail site, you have a concept of a cart, and the cart has contents. You can write your function so that every execution has a three-step process: you read the state from memory, you compute your logic, and then you write the state back to memory. To make things efficient and fast, you want the data to be co-located with the functions.
“When you’re running on a serverless platform, functions have, by definition, transient residency. So as a user, you don’t know where your functions [are] actually running: which pods, which VMs, which geographies, which zones of the cloud. So it’s very difficult for you to do locality optimizations and have your data co-located with your functions.
“So part of our stateful aspect is to actually manage a key-value store and an object store and a file system abstraction that’s co-located with the functions so that you have efficient low-latency communication. And we’re introducing the concept of a memory hierarchy,” he said.
It’s built on open source, using Redis as the key-value store. And it uses S3-compatible APIs for the object store abstractions.
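The three-step cycle Rabbah describes — read state, compute, write state back — can be sketched in a few lines. This is an illustrative pattern, not Nimbella’s API: a plain dictionary stands in for a co-located key-value store such as Redis, and the `add_to_cart` function and key naming are hypothetical.

```python
import json

# Stand-in for a co-located key-value store such as Redis.
store = {}

def add_to_cart(user_id, item):
    # 1. Read the cart state from the store (empty cart if none yet).
    cart = json.loads(store.get(f"cart:{user_id}", "[]"))
    # 2. Compute: apply the business logic.
    cart.append(item)
    # 3. Write the updated state back to the store.
    store[f"cart:{user_id}"] = json.dumps(cart)
    return cart

add_to_cart("u1", "book")
print(add_to_cart("u1", "pen"))  # ['book', 'pen']
```

With a real Redis client the `store.get`/assignment lines would become `GET` and `SET` calls; the point of co-locating the store with the functions is that steps 1 and 3 stay low-latency even though each invocation may land on a different pod.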
For functions doing machine learning and AI in applications such as neural network models, part of what the engine does is manage these blobs of data and make sure they are accessible to the functions in read-only mode, so the data is loaded only once. The platform has a state management engine as part of it. “There are lots of components that come together to give an abstraction layer and a middleware that allows you to build applications on top,” he said.
Other parts include:
- Developer workbench complements IDEs and has powerful plugins to enhance the development experience and enable “live” local development of full-stack applications, so you can test directly in the cloud. It recently added a developer playground, which allows users to create, share and publish APIs in Node, Python, PHP, Java, Go and Swift. For the compiled languages (Go, Swift, Java), the Nimbella cloud takes care of the compilation too.
- Operator dashboard provides a single interface to monitor and manage everything, whether you’re using Nimbella as a hosted service and/or for private clouds and dedicated infrastructure.
- Composition engine allows developers to bring APIs, functions and containers together in a familiar programming abstraction.
- Logging engine provides structured logs that are automatically indexed and available from the workbench or routed to existing logging services such as Elastic or Splunk.
- Identity and Access Management (IAM) integration with common IAM tools such as Auth0.
- Cloud director load balancer automatically provides load balancing between public clouds or public and private clouds.
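The composition engine’s idea of bringing functions together “in a familiar programming abstraction” can be illustrated with a simple sequence combinator, where each function’s output becomes the next one’s input. This is a generic sketch in the spirit of serverless composition tools, not Nimbella’s actual API; `sequence`, `validate` and `uppercase` are hypothetical.

```python
from functools import reduce

def sequence(*fns):
    """Compose functions so each one's output feeds the next, left to right."""
    def composed(args):
        return reduce(lambda acc, f: f(acc), fns, args)
    return composed

def validate(args):
    # Pass the arguments through only if they are well-formed.
    assert "text" in args, "missing 'text' parameter"
    return args

def uppercase(args):
    return {"text": args["text"].upper()}

pipeline = sequence(validate, uppercase)
print(pipeline({"text": "hello"}))  # {'text': 'HELLO'}
```

A platform-level composition engine does the same wiring, but across independently deployed functions, containers and APIs rather than in-process calls.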
Looking to the Edge
Going forward, the company is looking toward edge computing with a familiar programming experience for developers.
“The ability to run containerless functions in the cloud offers economic and other incentives in terms of being more efficient. … So making that accessible and having an opinionated way of choosing which function should run in which underlying resources at the platform level is important,” Rabbah said.
It’s working on integrations with providers like PagerDuty and Twilio to be able to pull in events from various providers and take action on them. The company recently released in early access Commander, a development platform for building and managing custom Slack applications.
Redis Labs is a sponsor of The New Stack.