
The Role of Event-Driven Architectures in Modern Application Workflows

11 Sep 2020 9:31am, by Mark Hinkle

TriggerMesh sponsored this post.

Mark Hinkle
Mark has a long history in emerging technologies and open source. Before co-founding TriggerMesh, he was the executive director of the Node.js Foundation and an executive at Citrix, Cloud.com and Zenoss, where he led their open source efforts.

Event-driven architectures (EDAs) are composed of loosely coupled distributed services connected by asynchronous events. These systems are perfect for today's uncertain times, because new services and data sources can be added or removed on the fly, without disrupting the existing application flows.

Some of the more common EDA use cases include stream processing, data integration and customer journey mapping. Consider data integration: you may have a master product database on-premises, perhaps in Oracle, but you've migrated enterprise resource planning (ERP) to the cloud and you run Salesforce. An event-driven application flow can integrate these systems for the benefit of the entire organization: whenever a new price lands in the product database, it triggers an event that is consumed both by the ERP system for forecasting and by sales for requests for proposal (RFPs) and quotes.

The CloudEvents specification, an industry-led effort, defines an event as:

“A data record expressing an occurrence and its context. Events are routed from an event producer (the source) to interested event consumers. The routing can be performed based on information contained in the event, but an event will not identify a specific routing destination. Events will contain two types of information: the Event Data representing the Occurrence and Context metadata providing contextual information about the Occurrence. A single occurrence MAY result in more than one event.”
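The two kinds of information the definition names — event data and context metadata — can be sketched as a plain Python dict. The sketch below follows the required context attributes from the CloudEvents spec (specversion, id, source, type); the price-change payload and source path are hypothetical examples, not part of the spec.

```python
# A minimal CloudEvent as a plain Python dict: context metadata wrapping
# the event data. Required attributes per the CloudEvents spec:
# specversion, id, source, type.
import json
import uuid
from datetime import datetime, timezone

REQUIRED_ATTRIBUTES = {"specversion", "id", "source", "type"}

def make_event(event_type: str, source: str, data: dict) -> dict:
    """Wrap event data with CloudEvents context metadata."""
    return {
        "specversion": "1.0",
        "id": str(uuid.uuid4()),          # unique per occurrence
        "source": source,                 # where the occurrence happened
        "type": event_type,               # what kind of occurrence
        "time": datetime.now(timezone.utc).isoformat(),
        "datacontenttype": "application/json",
        "data": data,                     # the event data itself
    }

def is_valid(event: dict) -> bool:
    """Check that all required context attributes are present."""
    return REQUIRED_ATTRIBUTES.issubset(event)

event = make_event(
    event_type="com.example.product.price.updated",
    source="/inventory/oracle-master-db",
    data={"sku": "A-1001", "old_price": 19.99, "new_price": 17.49},
)
print(is_valid(event))            # True
print(json.dumps(event, indent=2))
```

Note that the event carries no routing destination, matching the definition above: where it goes is decided by whoever routes and consumes it.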

EDAs consist of three components: Event Producers, Event Brokers (or Buses), and Event Consumers. The rub is that new research finds a massive 93% of companies operate in a multicloud environment, which can make integrating event sources and consumers from different clouds a big challenge.
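A toy in-memory broker makes the three components concrete: producers publish, the broker routes by event type, and consumers subscribe. This is an illustrative sketch, not a production broker; the price-update event and the "erp"/"sales" consumers echo the hypothetical integration scenario above.

```python
# Minimal pub/sub broker: the producer never names a destination, so
# consumers can be added or removed without touching the producer.
from collections import defaultdict
from typing import Callable

class EventBroker:
    def __init__(self) -> None:
        self._subscribers: dict[str, list[Callable[[dict], None]]] = defaultdict(list)

    def subscribe(self, event_type: str, handler: Callable[[dict], None]) -> None:
        self._subscribers[event_type].append(handler)

    def publish(self, event: dict) -> None:
        # Route on the event's type attribute and fan out to every consumer.
        for handler in self._subscribers[event["type"]]:
            handler(event)

received = []
broker = EventBroker()
# Two independent consumers of the same event type (fan-out).
broker.subscribe("price.updated", lambda e: received.append(("erp", e["data"]["sku"])))
broker.subscribe("price.updated", lambda e: received.append(("sales", e["data"]["sku"])))

broker.publish({"type": "price.updated", "data": {"sku": "A-1001", "new_price": 17.49}})
print(received)   # [('erp', 'A-1001'), ('sales', 'A-1001')]
```

In a real multicloud deployment the broker is a managed bus or a system like Kafka, and the producers and consumers may live in different clouds, which is exactly where the integration challenge arises.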

Image credit: David Bell

Serverless

Serverless computing refers to the concept of building and running applications that do not require server management. It describes a finer-grained deployment model, where applications, bundled as one or more functions, are uploaded to a platform and then executed, scaled and billed in response to the exact demand of the moment. Serverless is often confused with Functions-as-a-Service (FaaS), but they are not the same. FaaS is a cloud computing service that allows users to execute a function or single-purpose program, removing the complexity often associated with servers and software stacks. FaaS cloud providers include Amazon Web Services Lambda, Google Cloud Functions, Microsoft Azure Functions, IBM Cloud Functions and Oracle Cloud Fn.

Serverless computing does not mean that we no longer use servers to host and run code; nor does it mean that operations engineers are no longer required. Rather, it refers to the idea that developers no longer need to spend time and resources on server provisioning, maintenance, updates, scaling and capacity planning. Instead, all of these tasks and capabilities are handled by a serverless platform and are completely abstracted away from the developers and IT/operations teams.

Notable characteristics of serverless computing include:

  • Autoscaling, including scaling to zero: Traditional cloud or on-premises applications run — and consume compute, storage, and networking resources — even when they are not in use. With serverless, when the function is not called, all compute and other resources go idle.
  • Usage-based pricing: Hand in hand with scaling to zero, when a function is not being used, you pay nothing. Serverless providers charge per function call.
  • Event-driven: Serverless enables developers to focus on applications that consist of event-driven functions that respond to a variety of triggers.
  • Use Cases: Common serverless use cases include eCommerce, clickstream analytics, contact center, legacy app modernization, and DevOps functions.
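The "event-driven" characteristic above is easiest to see in code. The sketch below uses the two-argument handler shape that AWS Lambda uses for Python functions; the clickstream payload is hypothetical. The platform, not your code, takes care of scaling the function up from zero as events arrive and billing only for the invocations.

```python
# FaaS-style handler: the platform invokes it once per triggering event.
def handler(event, context=None):
    """Count page views per path from a batch of clickstream records."""
    counts = {}
    for record in event.get("records", []):
        path = record.get("path", "/")
        counts[path] = counts.get(path, 0) + 1
    return {"statusCode": 200, "body": counts}

# Simulate one invocation with a small batch of events.
result = handler({"records": [{"path": "/cart"}, {"path": "/cart"}, {"path": "/"}]})
print(result["body"])   # {'/cart': 2, '/': 1}
```

Between invocations the function consumes nothing, which is what makes usage-based pricing and scale-to-zero possible.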

How Does Knative Fit in?

Knative is an open source serverless platform that provides a set of middleware components to build modern, source-centric, container-based applications that can run anywhere: on-premises, in the cloud, or even in a third-party data center. That "container-based" qualifier is the real difference between the FaaS and Knative-based serverless approaches.

In his excellent talk at NDC London 2020, Google Developer Advocate Mete Atamel described Knative as helping to resolve a previous tradeoff that developers had to make between serverless OR containers. With Knative, you get both — the flexibility of containers with the zero-touch provisioning and fast iteration of serverless.

Knative does two things: Serving and Eventing.

  • Serving builds on Kubernetes to support the deploying and serving of serverless applications and functions. Serving provides the autoscaling — including scale to zero — feature of FaaS, as well as fine-grained traffic control using modern network gateways.
  • Eventing provides the building blocks for consuming and producing events that adhere to the CloudEvents specification (developed under the CNCF Serverless Working Group). It includes abstractions for event sources, and decoupled delivery through messaging channels backed by pluggable pub/sub broker services.
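On the wire, Knative Eventing delivers CloudEvents over HTTP. In the spec's "binary" content mode, the context attributes arrive as ce-* request headers and the event data is the request body. The sketch below rebuilds an event dict from such a request; the web framework is omitted and the sample values are hypothetical.

```python
# Parse a binary-mode CloudEvent: ce-* headers carry the context
# attributes, the body carries the event data.
import json

def parse_binary_cloudevent(headers: dict, body: bytes) -> dict:
    """Rebuild a CloudEvent dict from ce-* headers plus a JSON body."""
    event = {
        key[len("ce-"):]: value
        for key, value in headers.items()
        if key.lower().startswith("ce-")
    }
    event["data"] = json.loads(body)
    return event

headers = {
    "ce-specversion": "1.0",
    "ce-id": "1234-5678",
    "ce-type": "com.example.price.updated",
    "ce-source": "/inventory/oracle-master-db",
    "content-type": "application/json",
}
event = parse_binary_cloudevent(headers, b'{"sku": "A-1001"}')
print(event["type"], event["data"]["sku"])   # com.example.price.updated A-1001
```

Because the contract is just HTTP plus a handful of headers, any container that can answer such a request can act as an event consumer, which is how Knative keeps the container-based model language-agnostic.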

Table 1: Comparison of FaaS and Container-based Serverless

FaaS

Major vendors / offers:
  • AWS Lambda
  • Azure Functions
  • Google Cloud Functions
  • Red Hat OpenShift

Key concepts:
  • Event Sources emit events for a particular service/application.
  • Triggers represent what gets executed when an event happens.
  • Events are the objects that contain the data and semantics about what happened.
  • Functions are small units of code that get executed when events happen; they are a special kind of trigger.

Pros:
  • Simple and fast to write and get started.
  • AWS Lambda in particular has been around a while and offers a high level of feature richness and maturity.

Cons:
  • May not support your preferred language.
  • May require rewriting parts of your app.
  • Can lead to lock-in.

Container-based

Major vendors / offers:
  • AWS Fargate
  • Azure Container Instances
  • Google Cloud Run
  • Google Anthos
  • Red Hat OpenShift Container Engine

Key concepts:
  • Serving provides the autoscaling (including scale to zero) feature of FaaS, as well as fine-grained traffic control using modern network gateways.
  • Eventing provides building blocks for consuming and producing events that adhere to the CloudEvents specification.

Pros:
  • Any language.
  • No changes to your code.
  • Very portable.

Cons:
  • A newer approach to serverless, with a still-maturing feature set.

This post is excerpted from the forthcoming TriggerMesh white paper and their guide “What Every CIO Needs to Know about Serverless.” You can download the complete CIO Serverless guide here, which also includes at-a-glance summaries of major serverless options and detailed feature comparisons.

Feature image via Pixabay.
