Applications have increasingly relied on event-driven architectures (EDAs) in recent years, especially with the advent of serverless and microservices. Unlike traditional linear architectures, where an event might be handled in the same code that produced it, EDA decouples an event from the actions that follow it. This decoupling lets each part of an EDA process scale independently and, while EDA does not strictly require microservices or serverless, their loose coupling and on-demand nature make them a natural fit.
In the cloud native world, the focus might often be on the serverless side of things, with Knative or Lambda taking the spotlight, but, as the name might imply, event-driven architecture is nothing without events. Apache Camel K takes Apache Camel, the fundamental piece of enterprise integration software that first came around as a sort of codification of the 2003 book Enterprise Integration Patterns, and brings it to Kubernetes, providing EDA with a multitude of event sources, explained Keith Babo, Director of Product Management at Red Hat.
“When they created that book, in the Apache community a couple of folks came along and created this open source project called Apache Camel,” said Babo. “What Apache Camel did is, it took the patterns in that book, and it created its own domain-specific language (DSL) based around those patterns. That made it much simpler to carry out these primitives of integration.”
The patterns Babo refers to include things like content-based routing, transformation, connectivity, claim check, and more. While Camel could deliver all of that functionality for EDA, it was not built for native Kubernetes deployment, and Babo said that was impetus enough. Camel K, then, is an adaptation of Camel for Kubernetes that installs and manages the lifecycle of Camel via a Kubernetes Operator, he said.
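To make one of those patterns concrete, here is a minimal sketch of a content-based router written in Camel's YAML route DSL. The endpoint names and the `type` header are illustrative, not taken from the interview:

```yaml
# Route priority orders and standard orders to different Kafka topics
# based on message content (a content-based router from the EIP book).
- from:
    uri: "direct:orders"
    steps:
      - choice:
          when:
            - simple: "${header.type} == 'priority'"
              steps:
                - to: "kafka:priority-orders"
          otherwise:
            steps:
              - to: "kafka:standard-orders"
```

The DSL expresses the pattern declaratively; the same route could equally be written in Camel's Java or XML DSLs.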
“We thought ‘How do we take this great integration framework, essentially, and build around that and integrate with Kubernetes in a native way, so that you get the best aspects of Kubernetes for scaling, management, monitoring, lifecycle independent delivery, like all of these aspects that Kubernetes excels at? How do we make those native for Camel itself?’ That’s where Camel K came from,” said Babo. “Camel K doesn’t inherently change Camel itself, it introduces the Camel K Operator, and its job is to watch custom resources that are deployed on the platform and then it controls all these elements of lifecycle management.”
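As a sketch of what the Operator watches, a minimal Integration custom resource might look like the following; the integration name and the route it contains are hypothetical:

```yaml
apiVersion: camel.apache.org/v1
kind: Integration
metadata:
  name: timer-demo            # hypothetical name
spec:
  sources:
    - name: route.yaml
      content: |-
        # Fire an event every five seconds and log it
        - from:
            uri: "timer:tick?period=5000"
            steps:
              - setBody:
                  constant: "Hello from Camel K"
              - to: "log:info"
```

Once a resource like this is applied to the cluster, the Camel K Operator picks it up, builds and deploys the integration, and handles its lifecycle from then on.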
Many common implementations of EDA use HTTP and TCP as event sources, but Apache Camel K expands on that list to the tune of more than 300 components.
“In the Knative case, a huge percentage of these serverless architectures are driven by HTTP requests. As we start to scale our usage, we immediately look to some other technologies, like Kafka and messaging and event-driven interfaces, that are actually going to decouple the producer event from the downstream processing of that event,” explained Babo. “We have the ability to deploy a Camel K connector, and have that essentially serve as an ingress point for Knative and serverless. The event sources that Knative can use, Camel K can natively integrate with that to connect Knative or serverless architecture to any upstream system that you want to generate or pull events from.”
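One way this ingress role shows up in practice is a binding resource that connects a Camel K event source to a Knative destination. In the sketch below, the Kamelet, its properties, and the broker name are assumptions for illustration:

```yaml
apiVersion: camel.apache.org/v1alpha1
kind: KameletBinding
metadata:
  name: sqs-to-knative            # hypothetical name
spec:
  source:                         # upstream system producing events
    ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1alpha1
      name: aws-sqs-source        # assumed Kamelet from the catalog
    properties:
      queueNameOrArn: "orders"    # illustrative queue name
  sink:                           # Knative receives the events
    ref:
      kind: Broker
      apiVersion: eventing.knative.dev/v1
      name: default
```

Here Camel K does the protocol-specific work of pulling events from the upstream system, then hands them to Knative eventing for downstream serverless processing.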
Beyond acting as an ingress for Knative, Babo said that Camel K also goes beyond what is offered in the Knative spec by acting as an egress. Camel K can serve as what he referred to as an “event sink”: after data from an external system has been accepted and processed by the serverless architecture, Camel K can route the result out to another system.
“That’s a unique aspect,” he said. “Camel K helps with that. You can use those connectors, not only for ingress or inbound data and events, but you can also use them for egress and outbound data as well.”
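Sketched in Camel's YAML route DSL, the egress direction might look like this: a route subscribes to a Knative channel carrying already-processed events and pushes them on to an external system. The channel, topic, and broker names are illustrative:

```yaml
# Consume events the serverless side has finished processing
# and route them out to a downstream Kafka topic.
- from:
    uri: "knative:channel/processed"
    steps:
      - to: "kafka:downstream-orders?brokers=my-cluster-kafka:9092"
```

The same connector catalog is in play in both directions; only the position of the endpoint in the route changes.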
Babo further explained that, while the core concepts of EDA haven’t changed with the introduction of microservices, the application of EDA patterns has evolved based on new architectural approaches.
“One of the major evolutions is that a lot of event-driven constructs like Java or .Net events, or triggers, or message-driven beans exist inside one process space, so you can’t subscribe to them from other microservices. That’s really the gap that technologies like Apache Kafka help to fill,” said Babo.
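As a rough sketch of how that gap gets filled, the producing service writes to a Kafka topic instead of raising an in-process event, so any other microservice can subscribe. The topic and broker names here are made up:

```yaml
# Service A: publish to a shared topic instead of an in-process event
- from:
    uri: "direct:order-placed"
    steps:
      - to: "kafka:orders?brokers=my-cluster-kafka:9092"

# Service B (a separate process and deployment): subscribe to that topic
- from:
    uri: "kafka:orders?brokers=my-cluster-kafka:9092"
    steps:
      - to: "log:shipping-service"
```

Because the topic lives outside either process, producer and consumer can be scaled, deployed, and failed independently, which is the decoupling Babo describes.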