TriggerMesh sponsored this post.
An event-driven architecture (EDA) allows developers to create dynamic applications that inject speed and responsiveness into business processes, helping companies operate more efficiently.
As a result, EDAs have sparked interest among organizations pursuing the benefits of digital transformation through modern, cloud-focused application development practices, such as DevOps, and technologies, such as containerized Kubernetes applications.
As its name indicates, an EDA revolves around the generation, transmission and processing of application events, which in turn asynchronously trigger actions in other applications and systems throughout the organization’s infrastructure.
“If a customer makes a purchase, that ‘event’ is published and/or streamed so that it can be consumed by services which are interested in knowing about events of that kind, such as stock control, accounting or fulfillment,” said William Fellows, research director at 451 Research, part of S&P Global Market Intelligence.
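Fellows’ purchase example can be sketched as a minimal in-process publish/subscribe bus. This is an illustrative sketch, not TriggerMesh’s implementation; the `EventBus` class and the service handlers are hypothetical.

```python
from collections import defaultdict

class EventBus:
    """Minimal in-process event bus: services subscribe to event types."""
    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, event_type, handler):
        self._subscribers[event_type].append(handler)

    def publish(self, event_type, payload):
        # Every interested service receives the event independently;
        # the publisher knows nothing about its consumers.
        for handler in self._subscribers[event_type]:
            handler(payload)

bus = EventBus()
log = []

# Downstream services interested in purchase events.
bus.subscribe("purchase", lambda e: log.append(("stock-control", e["sku"])))
bus.subscribe("purchase", lambda e: log.append(("accounting", e["amount"])))
bus.subscribe("purchase", lambda e: log.append(("fulfillment", e["order_id"])))

# The customer's purchase is published once; three services consume it.
bus.publish("purchase", {"order_id": 42, "sku": "A-100", "amount": 19.99})
```

In a real deployment the bus would be a distributed broker and the handlers separate services, but the decoupling principle is the same.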
A Modern Option for the Message Bus
EDAs have emerged as a distributed, scalable and more resilient alternative to what Tom Petrocelli, research fellow at Amalgam Insights, calls the “monolithic message bus.” “We’re re-architecting the methodology for passing messages around that has existed for 30 years,” he said.
EDAs are particularly well suited to high-throughput transaction applications, such as those used for e-commerce. “You can call them events, transactions, or messages: it’s all the same thing,” Petrocelli said. “It’s some piece of data that has to get from one place to another, and you can’t risk losing any. Event meshes are designed for that environment.”
Because services in EDAs are loosely coupled, they are particularly suitable for cloud native microservices architectures in which components can be independently developed and deployed only when needed, Fellows said.
The impact on the business can be significant. “EDA has helped organizations be more efficient, and improve agility and response times, which supports better customer experiences, whether those are internal or external,” Fellows said. Ideal use cases for EDAs include payment processing, website monitoring, IoT, real-time marketing and fraud detection.
For DevOps teams, EDA allows developers to be more agile, especially when creating or updating applications that need to consume data in a timely fashion in order to update or trigger other events, he said.
“Event-driven applications are composed of microservices which can independently scale up and down when required, while the business logic is handled by an event bus which handles event dispatch and consumption,” Fellows said.
EDAs also help control cloud computing costs, according to Petrocelli. “In this cloud native world, I’m paying for the time that things are running,” he said. “If something can be not running and only wake up when needed, especially services that aren’t called upon constantly, then I can save some money.”
A Look Ahead
Looking to the future, Petrocelli foresees improvements in the methodologies EDAs use to find alternate routes for their messages if the ideal path is broken for some reason. “That alternate path optimization is an area where we’re going to see a lot of growth,” he said.
Petrocelli also forecasts improvements in EDA performance optimization, because speeding things up is a big part of their value proposition. “You’re going to start to see them, I think, become a little more lean, a little more optimized than where they began,” he said.
While EDAs are considered relatively easy to develop because they can use whatever languages and technologies are best suited to the job, they can become harder to control as the number of events triggered by any given occurrence grows, Fellows said.
“Moreover, events created within a particular system or application can often only be detected, consumed and acted on within that system. There is currently a great deal of innovation focused on creating integration and transformation services that offer a bridge between different systems,” he said.
The EDA journey is a move from a batch-driven world, where application changes are reported, for example, once per day, to real-time, event-driven applications, according to Mark Hinkle, CEO of TriggerMesh, which provides a cloud native integration platform for enabling EDA.
For example, a Zendesk ticket could be forwarded to the AWS Comprehend natural language processing service to gauge its sentiment, and different actions could be taken depending on whether the customer is angry, happy or indifferent. Or if a data set is uploaded to your cloud storage, the blob object-store change can kick off a Hadoop MapReduce workflow hosted on Kubernetes against the data set, Hinkle said.
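The ticket-routing pattern Hinkle describes might look like the following sketch. The routing function and action names are hypothetical; in practice the `detect_sentiment` argument would wrap a real NLP service call, but here it is stubbed so the example is self-contained.

```python
def route_ticket(ticket, detect_sentiment):
    """Route a support ticket based on detected sentiment.

    `detect_sentiment` stands in for a call to an NLP service
    (such as the sentiment analysis Hinkle mentions); it returns
    a label like "NEGATIVE", "POSITIVE" or "NEUTRAL".
    """
    sentiment = detect_sentiment(ticket["body"])
    if sentiment == "NEGATIVE":
        return "escalate-to-human"   # angry customer: act fast
    if sentiment == "POSITIVE":
        return "send-thank-you"      # happy customer
    return "standard-queue"          # indifferent: normal handling

# Stubbed sentiment detector for illustration only.
fake_nlp = lambda text: "NEGATIVE" if "angry" in text else "NEUTRAL"

action = route_ticket({"body": "I am angry about my bill"}, fake_nlp)
```

Each branch would publish a follow-up event rather than call the next service directly, keeping the services decoupled.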
TriggerMesh acts as a broker in EDAs, allowing developers to create automated workflows between cloud services and/or on-premises applications. It consumes, routes and transforms events — helping its customers reduce the latency between what’s going on in their business and when they know about it — for example, by having real-time inventory adjustments and updates.
“Because it’s cloud- and infrastructure-agnostic, TriggerMesh can consume events running in your data center and coming from your enterprise service bus, and forward them to the cloud to trigger an action, like a Kubernetes workload,” Hinkle said.
Another example: If you have many security events coming from Microsoft Azure, you can use TriggerMesh to forward only the critical ones to Splunk, keeping storage costs down and making data analysis easier, according to Hinkle.
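Filtering of the kind Hinkle describes amounts to dropping events below a severity floor before they reach paid storage. A minimal sketch, with an assumed severity scale (the real event schema would come from the source system):

```python
SEVERITY = {"low": 0, "medium": 1, "high": 2, "critical": 3}

def events_to_forward(events, min_severity="critical"):
    """Keep only events at or above the given severity, so the
    downstream store (e.g. Splunk) only receives what matters."""
    floor = SEVERITY[min_severity]
    return [e for e in events if SEVERITY[e["severity"]] >= floor]

incoming = [
    {"id": 1, "severity": "low"},
    {"id": 2, "severity": "critical"},
    {"id": 3, "severity": "medium"},
]
forwarded = events_to_forward(incoming)  # only event 2 survives
```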
“The promise of cloud native is the ability to integrate cloud services, SaaS and on-premises applications into powerful custom flows that meet your business’s changing needs,” said Sid Rabindran, Director, Technology Partners and Programs at Confluent, a TriggerMesh partner.
The Confluent and TriggerMesh partnership allows customers to integrate events directly into their cluster from any number of popular sources, including Slack, AWS, GitLab, GitHub, and Azure.
It also enables them to build intelligent application flows using any service, running in the cloud or on-premises, benefiting enterprise and cloud developers, architects, and technology strategists interested in hybrid cloud, multicloud and cloud native application flows, according to Rabindran.
GitLab partnered with TriggerMesh to provide its users an easy way to deploy serverless workloads into their Kubernetes clusters from GitLab using Knative, according to Daniel Gruesso, Product Manager, Source Code at GitLab. Additionally, TriggerMesh created and maintains the official GitLab event source for Knative.
“The tools and knowledge provided by TriggerMesh allowed us to quickly ship off our first iteration of GitLab Serverless. We’ve had very positive responses from both the open source community as well as Knative project maintainers,” Gruesso said.
The TriggerMesh team has deep knowledge of serverless architectures and was one of the few teams with experience in Knative. Plus, they have a great professional disposition and share GitLab’s open source DNA, he said.
“They create tools that make it easy to interact with different serverless technologies. Their team is always at the forefront of various serverless solutions from different vendors,” Gruesso said.
TriggerMesh Cloud Native Integration
This month, TriggerMesh launched the production-ready version of its integration and automation platform. Called TriggerMesh Cloud Native Integration Platform 1.0, the offering was beta tested by more than 500 users and can support the most demanding enterprise workloads.
It’s designed to allow users to integrate services, automate workflows, and accelerate the movement of information across the organization. By helping organizations to build event-driven applications out of any on-premises application or cloud service, TriggerMesh boosts digital transformation efforts.
For example, a bank that beta tested the TriggerMesh platform wanted to automate its governance. Today, it has multiple systems watching for anomalous activities, such as more than three wire transfers from an account in a single day.
“They take those governance policies, look at the systems, and if something happens, they flag them and open a ticket in their compliance databases,” Hinkle said.
The bank runs a set of serverless functions that enforce the governance, and those functions are triggered based on event counts coming from its systems. The output flags accounts in the database, either by locking the account or by flagging it for review.
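The governance rule from the bank example can be sketched as a count-based check over a day’s transfer events. The threshold, event shape and function name are assumptions drawn from the example, not the bank’s actual code.

```python
from collections import Counter

WIRE_LIMIT_PER_DAY = 3  # assumed policy threshold from the example

def flag_accounts(transfer_events):
    """Given one day's wire-transfer events, return the accounts that
    exceeded the limit; these would then be locked or flagged for
    review in the compliance database."""
    counts = Counter(e["account"] for e in transfer_events)
    return {acct for acct, n in counts.items() if n > WIRE_LIMIT_PER_DAY}

today = [{"account": "A"}] * 4 + [{"account": "B"}] * 2
flagged = flag_accounts(today)  # account "A" exceeds the limit
```

In the event-driven version, a function like this runs only when transfer events arrive, rather than as a scheduled batch job.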
“In that case, the systems that were siloed and not integrated are now being integrated and managed by serverless functions. And we’re providing the conduit for these functions to be triggered,” Hinkle said.
Key features include a declarative API; event transformation to convert events from one format to another; and new bridges to automate infrastructure and bridge workflows between Salesforce, OracleDB, Zendesk, Datadog, Oracle Cloud Security Logs, Splunk, and many more.
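Event transformation in this sense means mapping a source system’s payload into a common envelope that downstream consumers understand. A minimal sketch, with a hypothetical vendor payload and an envelope loosely modeled on common event-metadata attributes (type, source, id, data); the field names are illustrative, not TriggerMesh’s actual schema.

```python
def transform_event(source_event):
    """Map a vendor-specific payload into a common event envelope,
    so consumers see one format regardless of where the event
    originated."""
    return {
        "type": source_event["event_name"],
        "source": source_event["origin"],
        "id": str(source_event["uid"]),  # normalize ids to strings
        "data": source_event["details"],
    }

raw = {"event_name": "ticket.created", "origin": "zendesk",
       "uid": 101, "details": {"priority": "high"}}
normalized = transform_event(raw)
```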
Its use cases include EDA automation, where, using the TriggerFlow technology, users can connect system events and trigger workloads using rules-based logic. TriggerMesh can also complement robotic process automation (RPA) technology, and help with log data flow.
TriggerMesh Cloud Native Integration Platform 1.0 is offered in two deployment options: self-administered on Kubernetes, or as a fully managed SaaS on TriggerMesh Cloud.
Looking ahead, Hinkle said TriggerMesh plans to increase its contributions to open source projects, in particular by adding connectors for both consumers and producers of events, what he calls a sort of “plug-in architecture.” He also envisions creating more management capabilities for event-driven workflows.
“There are also opportunities for us to automate workflows using machine learning and artificial intelligence,” he said. “That’s a long term vision, but we could potentially do that.”