Supercharging Event-Driven Integrations using Apache Kafka and TriggerMesh

Event-driven integrations give businesses the flexibility they need to adapt and adjust to rapid market and customer preference changes. Apache Kafka has emerged as the leading system for brokering messages across the enterprise. Adding TriggerMesh to Kafka provides a way for messages to be routed and transformed cloud natively across systems. DevOps teams, like the one at PNC Bank, use the TriggerMesh declarative API to define these event-driven integrations and manage them as part of their CI/CD pipeline.
Event-Driven Architecture Basics

Many modern applications are rapidly adopting an event-driven architecture (EDA). An EDA loosely couples independent microservices and provides near real-time behavior. Combined with a cloud native mindset and the use of containers and serverless functions, EDA modernizes the entire enterprise application landscape.
Over the last 10 years, starting with the DevOps movement, great emphasis has been placed on gaining agility and reducing the time to market for new applications. Racing from development to production is seen as a true competitive advantage. With this in mind, breaking monolithic applications into microservices has been seen as a way to deploy independent services faster, giving each microservice its own lifecycle. Packaging each microservice and managing it in production gave rise to the container and Kubernetes era. However, connecting those microservices to each other remains an open problem, and that is where EDA comes in full force. When you adopt an event-driven architecture, you can connect your independent microservices through a messaging substrate like Kafka and gain the agility and scale you have been looking for.
Decoupling Your Application
Decoupling the components of an application into microservices enables them to be deployed independently of each other, meaning that they now have separate lifecycles: they can be developed, packaged, tested and deployed through separate CI/CD pipelines. The advantage is that developers can revise their own service without changing any logic in the other microservices that make up the cloud native application. Essentially, loosely coupled microservices are the libraries of cloud applications, with the benefit that they never have to be recompiled into a monolithic application.
EDA and Containers
Event-driven architectures consist of three main components: producers, consumers and brokers. Producers send a message to the broker when an event occurs (e.g., an update to a database). Consumers receive an event from the broker and take some action (e.g., running a serverless function that performs an ETL operation on the database).
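To make those roles concrete, here is a minimal sketch of a producer and a consumer using the kafka-python client; the broker address, topic name and payload are illustrative placeholders.

```python
import json

from kafka import KafkaConsumer, KafkaProducer

# Producer side: publish a message to the broker when an event occurs,
# e.g. a row in an orders database was updated.
producer = KafkaProducer(
    bootstrap_servers="localhost:9092",
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)
producer.send("orders-db-changes", {"table": "orders", "op": "update", "id": 1234})
producer.flush()

# Consumer side: receive each event from the broker and take some action,
# such as kicking off an ETL step.
consumer = KafkaConsumer(
    "orders-db-changes",
    bootstrap_servers="localhost:9092",
    group_id="etl-workers",
    value_deserializer=lambda b: json.loads(b.decode("utf-8")),
)
for record in consumer:
    print("handling event:", record.value)  # e.g. invoke a serverless ETL function here
```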
The difference between a message and an event can be confusing at first. An event is a notification that a state has changed. A message, however, contains additional information that represents more than just a notification: there is extra data associated with it. An event is like the phone ringing; it tells you a call is happening, but not who is on the line or what they want. A message provides the details of the call, such as who called and a transcription of what was discussed.
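As a rough illustration (the field names and values below are invented for this example), the same occurrence could be represented either way:

```python
# An event is only a notification that a state changed.
event = {
    "type": "customer.address.changed",   # what happened
    "source": "/crm/customers",           # where it happened
    "time": "2021-06-01T12:00:00Z",       # when it happened
}

# A message carries the notification plus the data that describes the change.
message = {
    **event,
    "data": {
        "customer_id": "c-42",
        "old_address": "1 Main St",
        "new_address": "99 Market St",
    },
}
```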
Producers aren’t affected by how the events they produce are going to be consumed, so additional consumers can be added without affecting the producers. Consumers need not concern themselves with how events were produced. Because of this loose coupling, microservices can be implemented in different languages, or use different technologies suited to specific jobs. This makes containers the perfect packaging mechanism for microservices; and in our EDA context, the perfect packaging for producers and consumers of events and messages. Increasingly, cloud native applications managed in Kubernetes will be sets of producers and consumers of events connected by a Kafka messaging substrate, running in Kubernetes or in a cloud service like Amazon MSK or Confluent Cloud.
CloudEvent Specification from CNCF
Imagine now that these messages come from many different services, whether cloud services or on-premises applications. That makes it difficult for one system to understand messages from another. Not only do you need to transform messages, you also need a common understanding of the messages’ metadata.
That’s where a standard, or at least a specification, comes into play. CloudEvents 1.0, a specification championed by the Cloud Native Computing Foundation (CNCF), provides a common way for cloud providers to express events. The spec says that an event is expressed in a common format and carries certain context attributes, such as a source, a type, a subject and a timestamp.
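As a sketch of what that looks like in practice, here is an event built with the CNCF CloudEvents Python SDK (the cloudevents package); the event type, source and payload are made-up placeholders.

```python
from cloudevents.http import CloudEvent, to_structured

# Context attributes defined by the CloudEvents spec.
attributes = {
    "type": "com.example.database.row.updated",  # what kind of occurrence this is
    "source": "/inventory/orders-db",            # the system that emitted the event
    "subject": "orders/1234",                    # the resource the event is about
}
data = {"order_id": 1234, "status": "shipped"}   # the payload carried with the event

event = CloudEvent(attributes, data)  # an id and a timestamp are filled in automatically

# Serialize to the "structured" HTTP content mode: a set of headers plus a JSON body.
headers, body = to_structured(event)
print(headers)
print(body.decode())
```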
An increasing number of systems are implementing event-driven architectures. Cloud providers see this trend, and they all provide a messaging substrate with their own specific features. Amazon Web Services (AWS), for example, provides Kinesis for processing event messages in real time. Other solutions, like the open source distributed event platform Apache Kafka, can integrate with virtually any system and be deployed on any cloud or on-prem. Kafka’s popularity (along with that of its commercial distribution from Confluent) has grown rapidly since its development at LinkedIn. It is now in use in most of the Fortune 100, as it starts displacing enterprise service buses like TIBCO or MuleSoft. The reason for this growth is that Kafka provides a highly scalable, real-time message stream for sharing the events that power event-driven applications across the cloud and the enterprise.
Supercharging Kafka
Kafka is an exceptionally good system for brokering messages and supporting EDA, but that is only the first part of the equation. At TriggerMesh, we have discovered that providing a way for those messages to be routed and transformed into more meaningful events, which can be exchanged cloud natively, is extremely valuable. As Kafka gains popularity, the demand to do more sophisticated things with those real-time event streams is rising. The ability to consume, route and transform event streams into useful messages (not just from Kafka, but from all cloud providers) is the key to long-term success. What we mean by dealing with events cloud natively is that the event flow in your application needs to be described with a powerful declarative API. Kubernetes has shown us how to manage applications at scale with a declarative mindset; doing the same for EDA is the way to “supercharge” Kafka.
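Conceptually, that routing and transformation turns raw Kafka records into well-described events and delivers them to the consumers that care about them. The hand-rolled sketch below (with an invented topic, sink URL and payload) only illustrates the idea; with TriggerMesh this wiring is declared through the API rather than coded by hand.

```python
import json

import requests
from cloudevents.http import CloudEvent, to_binary
from kafka import KafkaConsumer

SINK_URL = "http://broker.example.internal/"   # placeholder for an HTTP event sink

consumer = KafkaConsumer(
    "orders-db-changes",                       # placeholder topic
    bootstrap_servers="localhost:9092",
    value_deserializer=lambda b: json.loads(b.decode("utf-8")),
)

for record in consumer:
    # Transform the raw Kafka record into a CloudEvent with explicit metadata.
    event = CloudEvent(
        {
            "type": "com.example.order.updated",
            "source": f"kafka://{record.topic}/{record.partition}",
        },
        record.value,
    )
    # Route it to the sink using the CloudEvents binary HTTP content mode.
    headers, body = to_binary(event)
    requests.post(SINK_URL, headers=headers, data=body)
```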
GitOps Meets EDA
Imagine being able to define your event producers, consumers, transformations, event stores and routing tables with a declarative API. You would be able to apply the same DevOps mindset to your event-driven applications that you adopted to decouple your monolithic application. This means that your version control system would hold the representation of your event flow, and any change to the declared state of the EDA would be automatically reconciled in your live system.
One example is our customer PNC Bank, which is using Apache Kafka. The bank’s project team gathers events and messages from all sorts of sources, like Jenkins and Bitbucket, and pushes every message to Kafka. The team saw a need for a cloud native integration platform like TriggerMesh to add meaning to events, while leveraging the event streaming capabilities of Apache Kafka under the hood. The reason: they had a set of microservices that needed to be triggered on demand when certain events happened. TriggerMesh provided them with a declarative way to define their event flows without having to go deep into Kafka configuration: no Java coding, no Kafka Connect configuration, no language-specific SDK to produce or consume messages. They adopted the CloudEvents specification, their microservices simply consumed and produced CloudEvents, and TriggerMesh provided the wiring with a declarative API that allowed them to keep using their GitOps pipeline.
PNC Bank understood that this API-driven approach to integration fits well with its DevOps groups, because they can manage integrations the same way they manage their microservices applications. The TriggerMesh declarative API was easy for the DevOps team to integrate into their pipelines.
Conclusion
TriggerMesh abstracts event brokers, event sources and event sinks. For brokers, you can swap in whichever message streaming technology you want: Kinesis, Kafka, Google Pub/Sub, NATS or others.
TriggerMesh harnesses the events flowing through the underlying broker and extends them, so they are ready to use for new scenarios (e.g. real-time streams from cloud providers, Apache Kafka or enterprise service buses). The use case we run into most often is that TriggerMesh provides a way to extend open source Apache Kafka. With Kafka Connect, there is much more low-level sysadmin work, installation and configuration. With TriggerMesh, since it is fully API-driven, the developer stands back, interacts directly with an API and defines a desired state.