Microservices appear to be the major topic of discussion at this year’s O’Reilly Software Architecture conference in New York, with developers and system designers in attendance curious as to how to transform their monolithic legacy systems into more nimble microservices-driven ones.
For the set of opening keynotes on Tuesday, a number of speakers explained the basics of how a microservices-driven architecture differs from what we’d previously consider an IT system architecture. Key takeaway? It’s all about events.
The idea of microservices is to break large monolithic apps into smaller sets of coordinating services, so each service can be replaced or scaled up without the heavy lifting of changing the entire monolith.
Even current enterprise systems are driven by events, Richardson said. An airline delays a flight; a pharmacy fills a prescription; a delivery is scheduled. Some events are time-based: an invoice was not paid on time. Events allow separate applications to collaborate: any state change within an application could, in fact, be an event, one that could be consumed by another application. A monitoring service could analyze the stream of events emitted by another application, checking that the pattern of events is normal.
Event-driven design is a way of extending applications without modifying them, Richardson explained.
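The idea of extending an application without modifying it can be sketched in a few lines. This is a minimal illustration, not code from the talk; all the names (the pharmacy example, `publish`, `MonitoringService`) are hypothetical, chosen to match the examples above:

```python
# A minimal sketch of event-driven extension: the pharmacy app emits a
# domain event on each state change, and a monitoring service consumes
# the stream without the pharmacy code knowing the monitor exists.

subscribers = []

def publish(event):
    """Deliver a state-change event to every registered consumer."""
    for handler in subscribers:
        handler(event)

class PharmacyService:
    def fill_prescription(self, rx_id):
        # ... business logic would go here ...
        publish({"type": "PrescriptionFilled", "rx_id": rx_id})

class MonitoringService:
    def __init__(self):
        self.seen = []
        subscribers.append(self.on_event)   # subscribes itself; pharmacy unchanged

    def on_event(self, event):
        self.seen.append(event)             # analyze the event stream here

monitor = MonitoringService()
PharmacyService().fill_prescription("rx-42")
print(monitor.seen[0]["type"])  # PrescriptionFilled
```

Adding a second consumer, say an audit log, means registering another handler; the pharmacy service never changes.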
Getting events from one application to another can be done through some sort of messaging software. For internal communications, an enterprise message broker such as Apache Kafka could do the job, Richardson said. For external communications, a form of HTTP-based transport will be needed, such as WebSockets, Webhooks, or a pub/sub mechanism.
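A broker like Kafka essentially maintains an ordered log per topic that producers append to and consumers read at their own pace. The toy, in-process stand-in below (entirely hypothetical, not the Kafka API) shows that shape: JSON-serialized events in a named topic, read from an offset:

```python
# A toy in-process stand-in for a message broker: producers append
# JSON-serialized events to a named topic, and each consumer reads the
# topic independently starting from its own offset.

import json
from collections import defaultdict

class ToyBroker:
    def __init__(self):
        self.topics = defaultdict(list)          # topic name -> ordered log

    def produce(self, topic, event):
        self.topics[topic].append(json.dumps(event))

    def consume(self, topic, offset=0):
        """Yield events from `offset` onward, as a real consumer would."""
        for record in self.topics[topic][offset:]:
            yield json.loads(record)

broker = ToyBroker()
broker.produce("flights", {"type": "FlightDelayed", "flight": "UA123"})
broker.produce("flights", {"type": "FlightCancelled", "flight": "UA456"})

events = list(broker.consume("flights"))
print(len(events), events[0]["type"])  # 2 FlightDelayed
```

A real broker adds durability, partitioning, and delivery guarantees, but the producer/topic/consumer-offset model is the same.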
While the move to an event-based architecture initially sounds easy, it does require a certain shift in architectural mindset, noted Cornelia Davis, Pivotal senior director of technology and author of the forthcoming book Cloud Native, in her keynote talk. Microservices, by their very nature, are an extreme form of distributed computing, Davis said.
The traditional server request/response model comes from an imperative programming model, while an event-based model is really more of a functional one, she noted. “Functional programming models work really, really well for distributed systems,” she said.
A traditional system may rely on the concept of “retries” should it not initially get all the required material from other services. A web server may not return a requested web page until all the different elements are in place, with the slowest service holding up the final delivery. In an inherently unreliable distributed-systems environment, however, the abstraction of promises may be a better fit than retries. Various components generate their own events, which populate a materialized view for the web server through a serialized stream of events, or changes. “You can think of promises as event-handlers,” Davis said. An event handler will complete a step “if and when I need to,” she said.
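The materialized-view idea can be sketched concretely. Instead of the web server blocking on retries to every backend, each component emits a change event, and a promise-like handler folds it into a view the server can render at any moment. The names below are illustrative, not from the talk:

```python
# A sketch of a materialized view fed by a serialized stream of change
# events: the handler applies each change "if and when" it arrives, and
# the web server simply renders whatever the view holds right now.

view = {}   # the materialized view the web server renders from

def on_change(event):
    """Promise-like event handler: fold one change into the view."""
    view[event["section"]] = event["content"]

# Components publish their own change events, in any order, at any time.
stream = [
    {"section": "header",  "content": "Welcome"},
    {"section": "sidebar", "content": "Deals"},
    {"section": "header",  "content": "Welcome back"},  # later update wins
]
for event in stream:
    on_change(event)

print(view["header"])   # Welcome back
```

The slowest component no longer holds up delivery; its section of the view simply updates later.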
“One of the key things about distributed systems is that you have a whole bunch of independent control loops responsible for their own processing,” she said. One of the favored concepts in this space is Command Query Responsibility Segregation (CQRS), which separates the channel for inserting data into a data store from the channel for querying that data, so that the performance of one is not dependent on the other.
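A minimal CQRS sketch makes the separation concrete: commands append to a write-side event log, an independent loop projects that log into a read model, and queries only ever touch the read model. All the names here (the account example, `project`, and so on) are hypothetical:

```python
# A minimal CQRS sketch: the write channel is an append-only event log,
# the read channel is a projection built from it, and neither blocks
# the other.

event_log = []      # write side: append-only log of accepted commands
read_model = {}     # read side: projection optimized for queries

def handle_command(cmd):
    event_log.append(cmd)               # commands only ever append

def project():
    """Independent control loop folding the full log into the read model."""
    read_model.clear()
    for ev in event_log:
        read_model[ev["account"]] = read_model.get(ev["account"], 0) + ev["amount"]

def query_balance(account):
    return read_model.get(account, 0)   # queries never read the log

handle_command({"account": "alice", "amount": 100})
handle_command({"account": "alice", "amount": -30})
project()
print(query_balance("alice"))   # 70
```

In a real system the projection loop runs asynchronously, so the read model is eventually consistent with the log rather than updated in lockstep.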
“Events can trigger functions, and that is a very natural way of doing functions-as-a-service,” she said.
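The events-trigger-functions pattern Davis describes can be sketched as a dispatcher that maps event types to registered functions, which is roughly what a functions-as-a-service platform does on your behalf. The decorator and event names below are illustrative assumptions, not any platform's actual API:

```python
# A sketch of events triggering functions: register a function against
# an event type, and the dispatcher invokes every matching function
# when an event of that type arrives.

handlers = {}

def on(event_type):
    """Decorator registering a function to run when an event arrives."""
    def register(fn):
        handlers.setdefault(event_type, []).append(fn)
        return fn
    return register

def dispatch(event):
    """Invoke every function registered for this event's type."""
    return [fn(event) for fn in handlers.get(event["type"], [])]

@on("InvoiceOverdue")
def send_reminder(event):
    return f"reminder sent for {event['invoice']}"

results = dispatch({"type": "InvoiceOverdue", "invoice": "INV-7"})
print(results)  # ['reminder sent for INV-7']
```

Adding behavior means registering another function against an event type; nothing that emits events has to change.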
This view was echoed elsewhere at the conference. Duncan DeVore, software engineer at Lightbend, noted that there are two types of system interactions: promises and obligations. “You want promises, not obligations. Obligations diverge to unpredictable outcomes; promises converge to definite outcomes,” he said.