Gwen Shapira, a product manager at Confluent, has been moving data around for over 20 years. Co-author of “Kafka: The Definitive Guide,” Shapira regularly presents on stream processing, Apache Kafka, data integration and event-driven architectures. At Serverless Days NYC recently, she shared her thoughts on serverless, seeing great benefits for developers in not having to worry about scalability: the serverless cloud provider manages scaling the architecture, allowing developers to focus on their applications.
But for developers working in serverless, Shapira currently sees the technology as akin to a fixie bike. “Hipsters love these bikes because they are simple, but then you hit a hill and you understand why you need gears,” Shapira says.
For Shapira, managing state is that first hill. State refers to data as it exists at a point in time. A serverless workflow is normally viewed as pulling in data in one state, doing something with it, and then, possibly, returning the data in a new state at the end of the process.
For example, consider a workflow that takes in media images and transforms them: the original image is processed into a thumbnail, a standard output size for media use, and into other formats, and those images are then stored back in an S3 bucket. That workflow has not had to manage state, because the images simply started in one form and ended in another. But as serverless begins to be used for more business workflows, there is a need to receive data, transform it and then use the transformed data in a subsequent process. These more complex serverless processes will be required to manage state. She explained: “Stream processing in serverless, where data may be stateful, makes you ask: do I have the tools and background to deal with this complexity?”
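The stateless workflow described above can be sketched as follows. This is a minimal illustration, not a real Lambda handler: the buckets are mocked with dictionaries, the resize step is a placeholder, and all names (`handler`, `thumbnail_size`, the bucket keys) are hypothetical. The point is that each invocation reads an input, produces an output, and keeps nothing between calls.

```python
# Mock object storage standing in for S3 buckets (hypothetical keys).
source_bucket = {"photos/cat.png": b"<original image bytes>"}
dest_bucket = {}

def thumbnail_size(width, height, max_side=128):
    """Scale dimensions so the longer side equals max_side, preserving aspect ratio."""
    scale = max_side / max(width, height)
    return (round(width * scale), round(height * scale))

def handler(event):
    """Stateless: everything the function needs arrives in the event,
    and nothing persists between invocations."""
    key = event["key"]
    data = source_bucket[key]              # read the original image
    thumb = data                           # a real handler would resize here
    dest_bucket[f"thumbs/{key}"] = thumb   # write the derived image back out
    return {"written": f"thumbs/{key}"}
```

Because no invocation depends on what a previous invocation did, the provider can run as many copies in parallel as the event rate demands, which is exactly the property the stateful workflows below give up.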
“In serverless, you need the database to be very fast,” said Shapira. “You need the database to scale as well as the functions and keep the same pay-for-use model. But when you look, you find out that there are exactly zero databases with a pay-for-use model. They mostly have a dual payment model: you pay for the storage, and you pay for use. So there are very few that will scale as well as your serverless functions, and low latency is a pretty big bottleneck. Having to go into large databases with multiple tables, get data, then push another update… that becomes a pretty significant overhead.”
Shapira gives the example of an order request. The order would need to check the warehouse to see if there is enough inventory, lock that inventory to the order so no one else takes it, update the inventory to show the reduced stock, and then move the order to shipping. If there aren’t enough items, the order has to be updated instead. In that sort of situation, it is not as simple as relying on events that trigger functions: the workflow has to maintain the state of the data and update it as it goes. In serverless, just choosing a database that allows state may not be a good enough solution.
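The order flow above can be condensed into a sketch that shows why it is stateful. This is an assumption-laden illustration: the inventory is an in-memory dict, and a `threading.Lock` stands in for whatever locking or conditional-update mechanism the real datastore would provide; the function name and fields are hypothetical.

```python
import threading

inventory = {"widget": 5}     # mock warehouse table
lock = threading.Lock()       # stand-in for the datastore's own locking

def place_order(item, qty):
    """Check stock, reserve it, and hand off to shipping as one guarded step."""
    with lock:  # serialize check-and-decrement so no one else takes the stock mid-check
        if inventory.get(item, 0) < qty:
            return {"status": "rejected", "reason": "insufficient stock"}
        inventory[item] -= qty          # update inventory to show the reduced items
    return {"status": "shipping", "item": item, "qty": qty}
```

The check and the decrement must happen atomically; two concurrent function invocations that each read "5 in stock" and each sell 3 would oversell. That shared, mutable inventory count is the state a purely event-triggered function has nowhere to keep.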
Shapira points to three ways to solve this problem:
- “You can create a highly denormalized data model so that you can pull everything you want in the one call,” she said.
- “You can read more data than you strictly need to,” she suggested. “For example, if you can predict how a function is going to get reused (by using Kafka), then you will understand that if you want one order, you will usually need a lot of other orders, so you can pull all of it and cache it. That is extra useful because it saves you time and money in serverless.”
- “The last pattern is not really possible at the moment,” Shapira warns. “Ideally, serverless cloud providers would allow functions to get updates. Events are not just the actions, they represent a new set of data. But if my database keeps getting updates while my events are still running, I have no way of getting those updates. This is a limitation of all function providers, but something that Kafka updates have been really good at since the dawn of time. So it’s not possible yet but life would be awesome if this was possible.”
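The second pattern, reading more than you strictly need and caching it, can be sketched like this. All names here are hypothetical, and a simple dict plays the database; the idea is one wide read and a cache in place of many narrow reads.

```python
cache = {}                  # warm data carried between invocations
db_reads = {"count": 0}     # counts round trips to the (mock) database

# Mock orders table; in practice this would be a real datastore.
orders_table = {f"order-{i}": {"id": f"order-{i}", "customer": "c1"} for i in range(5)}

def fetch_order(order_id):
    """When one order is needed, prefetch the related orders and cache them,
    so later invocations skip the database round trip entirely."""
    if order_id in cache:
        return cache[order_id]
    db_reads["count"] += 1
    batch = dict(orders_table)   # one wide read instead of many narrow ones
    cache.update(batch)
    return cache[order_id]
```

Fetching several different orders this way costs a single database read, which matters in serverless because both latency and per-request database charges accrue on every call.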
Shapira encourages developers to step back from their code to consider the real world application of their work. “A function is an action to take. But this event is also information. When an order is taken, for example, the fact that this happened means I may need to update the data store in addition to reacting to it. For example, the inventory just got updated, so I may need to alert a customer that the item is now available and then that may mean I also need to update the inventory warehouse.”
Serverless, like API-enabled architectures, requires a closer connection between the business and technical sides of an organization.
Shapira expands on this closer alignment: “When you write code, you focus on the function as the first class object you are working on. The function is the center of the world. But the real thing is the event that is happening in the real world. Sometimes you need a function to handle them, sometimes you need to focus on the database.”
Shapira is excited by the possibility that serverless brings to managing these problems, seeing this as a new way to strip away the non-essentials and focus on handling the real business problem. But she also suggests there is a lot of wisdom to draw from what has already been thought through, especially around event-driven design models in microservices architectures and applying them to serverless. “Event-driven microservices are quite similar, so learn those patterns, they are often fully applicable in this slightly new domain,” she suggested.
The New Stack is a wholly owned subsidiary of Insight Partners. TNS owner Insight Partners is an investor in the following companies: Real.