The CloudEvents Spec Seeks to Bring Uniformity to Event Data Descriptions
Lacking a common way to describe events, developers have had to constantly relearn how to handle events from each new system. The gap has also inhibited the development of libraries, tooling and infrastructure, such as SDKs, event routers or tracing systems, for delivering event data across systems.
CloudEvents grew out of the CNCF Serverless Working Group, though the event data in serverless is no different from any other infrastructure, according to Doug Davis, senior technical staff member at IBM and co-chair of the Serverless WG and the CloudEvents project. CloudEvents was originally accepted into CNCF as a Sandbox project in 2018.
“The event data itself is not service-specific. Anytime you are sending a message that is represented in events, CloudEvents has a role there,” he said.
The specification is designed to improve the interoperability of event systems and portability across services that produce or consume events, enabling them to be developed and deployed independently.
Every system that generates an event typically has its own format, its own way to label things and its own naming conventions. But in a lot of cases, when it comes to basic event processing, middleware-type products like event gateways operate on semantically similar data: things like the type of event, the stream it belongs to, whether it is pushed or pulled, and who produced the event.
“We’re not going to try to force every event out there to conform to our way of naming things,” Davis said. “This is extra metadata outside of the normal event. Think of it as HTTP headers. It’s outside the body, but it’s in a standard location and a standard format, so [users] will know where to find it if they want to do basic middleware routing, filtering, stuff like that.
“That’s what CloudEvents is trying to do — augment or annotate your existing event with well-defined metadata in a well-defined location so the middleware can get its processing done without having to understand every single event that flows through the system.”
It’s meant to be a minimal set of information needed to route a request to the proper component and to facilitate proper processing of the event by that component — though it contains no routing information per se. That data is found elsewhere.
The project started by defining a core set of metadata — four required attributes (id, source, specversion and type) and optional attributes including datacontenttype, dataschema, subject and time. It has since added extension attributes and more. Events are expected to be 64KB or smaller.
CloudEvents also provides a specification for how to serialize the event in various formats, such as JSON, and protocols, such as HTTP, AMQP, MQTT and Kafka.
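To illustrate, a minimal CloudEvent serialized in the JSON format might look like the following sketch; the attribute values here are invented examples, not taken from any real service.

```python
import json

# A minimal CloudEvent in JSON format. The four required attributes are
# id, source, specversion and type; the rest shown here are optional.
event = {
    "specversion": "1.0",
    "type": "com.example.object.created",   # producer-defined event type (example)
    "source": "/my-service/objects",        # identifies the producer (example)
    "id": "a1b2c3d4",                       # unique per source (example)
    "time": "2019-11-06T11:08:00Z",         # optional timestamp
    "datacontenttype": "application/json",  # optional: format of the payload
    "data": {"name": "widget", "size": 42}, # the domain-specific payload
}

print(json.dumps(event, indent=2))
```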
Contributors to the project include AWS, Google, Microsoft, IBM, SAP, Red Hat, VMware and others.
CloudEvents v1.0 has already been implemented in projects like Knative’s Eventing framework, Red Hat’s EventFlow, Eclipse Vert.x and Debezium, SAP’s Kyma, Serverless.com’s Event Gateway, and others. Soon after the spec was created, Microsoft announced native support for CloudEvents for all events in Azure, via Azure Event Grid.
For event providers, such as GitHub or GitLab, CloudEvents could provide greater interoperability with just a couple of HTTP headers to their existing messages, Davis said.
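In the HTTP protocol binding's "binary" content mode, those couple of headers carry the CloudEvents context attributes with a `ce-` prefix, while the provider's original message body stays as the event payload. A minimal sketch, with invented event values:

```python
# CloudEvents "binary" content mode over HTTP: context attributes travel as
# ce-prefixed headers, and the original payload stays in the body untouched.
def to_binary_http(attributes, body):
    headers = {f"ce-{name}": str(value) for name, value in attributes.items()}
    headers["content-type"] = "application/json"  # describes the body itself
    return headers, body

headers, body = to_binary_http(
    {"specversion": "1.0", "type": "com.example.push",
     "source": "/repos/demo", "id": "42"},
    b'{"ref": "refs/heads/main"}',
)
print(headers["ce-type"])  # -> com.example.push
```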
For event consumers, a developer could request, say, all events from GitHub but only push events, and the infrastructure wouldn’t have to distinguish GitHub events from GitLab events, because of the generic syntax.
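Because filtering operates on the standard attributes rather than on provider-specific payloads, a middleware filter can stay fully generic. A minimal sketch, with invented event-type strings and sources:

```python
# Generic middleware filter: it matches on the standard `type` attribute
# and never needs to parse the provider-specific `data` payload.
def filter_events(events, wanted_type):
    return [e for e in events if e.get("type") == wanted_type]

events = [
    {"id": "1", "source": "https://github.example", "specversion": "1.0",
     "type": "com.example.git.push", "data": {"repo": "demo"}},
    {"id": "2", "source": "https://gitlab.example", "specversion": "1.0",
     "type": "com.example.git.push", "data": {"project": "demo"}},
    {"id": "3", "source": "https://github.example", "specversion": "1.0",
     "type": "com.example.issue.opened", "data": {}},
]

# Both push events match, regardless of which provider produced them.
print([e["id"] for e in filter_events(events, "com.example.git.push")])  # -> ['1', '2']
```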
Davis calls Knative the ultimate use case for CloudEvents.
“Regardless of how the event gets delivered into the system — meaning Kafka, HTTP or whatever transport — as long as they can convert to a CloudEvent — the routing infrastructure, the subscription mechanism, those sequence operations that have filtering or the parallel ones doing filtering or any routing at all — all that stuff works on CloudEvents attributes … they can write all that infrastructure without knowing a single thing about the event coming into the system and get all that orchestration done,” he said, where previously it required specialized code for each addition.
The CloudEvents community will be hashing out its next steps at KubeCon, he said. Some ideas floating around are encryption, a workflow document on how to orchestrate functions, and a subscription API, where you could request events from a particular cloud provider.
The Cloud Native Computing Foundation, KubeCon + CloudNativeCon NA and Red Hat are sponsors of The New Stack.
Feature image via Pixabay