
Event-Driven Architecture Is the Wave of the Future

11 Sep 2017 9:00am

Event-Driven Architecture (EDA) is the latest step in the evolution of microservices and serverless technologies. The recent Emit Conference on Event-Driven Architecture held in San Francisco provided a place for cutting-edge geeks to learn about this digital business tool.

We’ve covered the emerging serverless architecture, but EDA is a different paradigm. Event-driven architecture is poised to become a priority for mainstream business, but it is still on the edge of most IT organizations, for those that have it at all, according to Anne Thomas, Vice President and Distinguished Analyst at Gartner, Inc., speaking during one Emit panel. EDA has actually been around for a long time; products like Tuxedo are event-driven. But, she said, that is a tiny slice of what’s out there.

Gartner calls EDA a design paradigm. At its highest level, a software component executes in response to receiving one or more event notifications. A key feature is that the component that sends the notification does not know which component is on the receiving end.
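That decoupling can be sketched in a few lines. This is a minimal, illustrative publish/subscribe example, not code from any of the platforms discussed at the conference; all names are invented:

```python
from collections import defaultdict

class EventBus:
    """A toy event bus: publishers emit notifications without any
    reference to, or knowledge of, the components that receive them."""

    def __init__(self):
        self._handlers = defaultdict(list)

    def subscribe(self, event_type, handler):
        self._handlers[event_type].append(handler)

    def publish(self, event_type, payload):
        # The publisher does not know which handlers (if any) will run.
        for handler in self._handlers[event_type]:
            handler(payload)

bus = EventBus()
bus.subscribe("order.placed", lambda e: print(f"billing saw order {e['id']}"))
bus.subscribe("order.placed", lambda e: print(f"shipping saw order {e['id']}"))
bus.publish("order.placed", {"id": 42})
```

The point is the one Gartner makes: the `publish` call names an event, not a recipient, so new receiving components can be added without changing the sender.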

That, according to Cornelia Davis, Senior Director of Technology at Pivotal Software and a member of the Cloud Native Computing Foundation, requires us to rethink thinking. Davis’ presentation of the same name explained in technical detail how EDA is conceptually different from the client/server architecture most commonly used today. At some point, she said, it requires the programmer to take a leap of faith as events are triggered.

In an effort to understand the shift, after her talk, I asked Davis if the shift to EDA is similar to the shift that had to happen when Java first hit the scene and programmers had to learn to think in object-oriented programming. Is this the same idea, only for the stack?

Not quite, she replied.

Object-oriented programming changed the way you thought about code, she said, but EDA is much more than that. Java created a new model of encapsulation, by creating objects and the methods on them, but it didn’t fundamentally change the imperative model underlying the code.

A piece of EDA by Cornelia Davis at the Emit Conference

But the code was still processed sequentially, said Davis: what is going to happen first, then next, and then next. “It was still ‘start here and end here.’ With EDA, we’re talking about something fundamentally different.” In her talk, she explained that EDA requires the coder to take a leap of faith. That’s something that’s simply not done.
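The contrast Davis draws can be made concrete. The sketch below is illustrative only, with hypothetical function and event names: the imperative version spells out “start here and end here,” while the event-driven version only registers reactions and trusts that something will eventually fire them:

```python
# Imperative style: an explicit sequence with a known start and end.
def process_order_imperative(order):
    validated = {"order": order, "valid": True}   # step 1
    charged = {**validated, "charged": True}      # step 2
    return {**charged, "shipped": True}           # step 3, the known end

# Event-driven style: each function declares only which event it reacts
# to; no function knows what runs before or after it.
handlers = {}

def on(event_type):
    def register(fn):
        handlers[event_type] = fn
        return fn
    return register

@on("order.validated")
def charge(event):
    # Reacts to one event, emits the next; the "sequence" is emergent.
    return ("order.charged", {**event, "charged": True})

@on("order.charged")
def ship(event):
    return ("order.shipped", {**event, "shipped": True})
```

In the second half, the overall flow exists nowhere in the code; it only emerges as events trigger handlers, which is where the leap of faith comes in.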

Indeed, traditional programming excludes any leaps of faith. In fact, if you depend on a leap of faith, you’re doing it wrong.

Actually, what you’ve created, said Rob Gruhl, Senior Manager of the Serverless Platform Team at Nordstrom, who gave a presentation earlier that day, is a “non-intuitive certainty.” It’s not actually a leap of faith, because there is structure and coding around every event. It feels like a leap of faith because it’s so unlike anything developers have previously experienced.

“It feels weird, it’s too easy,” Gruhl said.

Emit Panel on the Future of EDA. Austen Collins, Jason Polites, Chris Anderson and Anne Thomas.

He broke down the progression: serverless frees you up to think in terms of processes. EDA decouples the processes from the stack, so your architecture becomes more flexible. In the future, he suspects, conflict-free replicated data types (CRDTs) may play the same role for connectivity, making connectivity optional so that developers no longer think about network connectivity at all.
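To make Gruhl’s speculation concrete, here is a sketch of the simplest CRDT, a grow-only counter (G-Counter); this is a standard textbook construction, not code from his talk. Each replica increments only its own slot, and merging takes the per-slot maximum, so replicas can keep updating while disconnected and converge whenever they eventually sync:

```python
class GCounter:
    """Grow-only counter CRDT: safe to update offline, merges without conflict."""

    def __init__(self, replica_id):
        self.replica_id = replica_id
        self.counts = {}  # replica_id -> count

    def increment(self):
        # A replica only ever writes to its own slot.
        self.counts[self.replica_id] = self.counts.get(self.replica_id, 0) + 1

    def value(self):
        return sum(self.counts.values())

    def merge(self, other):
        # Per-slot max is commutative, associative, and idempotent,
        # so the order (and repetition) of syncs doesn't matter.
        for rid, n in other.counts.items():
            self.counts[rid] = max(self.counts.get(rid, 0), n)

a, b = GCounter("a"), GCounter("b")
a.increment(); a.increment()   # updates while disconnected
b.increment()                  # concurrent update on another replica
a.merge(b); b.merge(a)         # sync in either order; both converge to 3
```

Because merges cannot conflict, the network becomes something you reconcile with eventually rather than something every operation depends on, which is the sense in which connectivity becomes “optional.”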

The Current State

The EDAs in the marketplace today are driven by very simple applications that don’t involve complex events or event streams, according to the panel, which included Gartner’s Thomas; Jason Polites, Product Manager for Google Cloud Functions; and Chris Anderson, Microsoft senior program manager for Azure Functions.

Polites said the use cases Google sees for EDA focus on the Internet of Things and light Extract, Transform and Load (ETL) jobs. Academia has also been moving towards adoption, because researchers need huge amounts of compute time to analyze massive amounts of data, but that demand is not consistent.

Thomas said they see it mostly for web and mobile back-ends. She sees it having great potential for artificial intelligence and machine learning for analytics. “The challenge,” she said, “is that there are no tools yet to enable people to grok the massive event inter-dependent complexities.”

What is needed, the panel concluded, are frameworks, which Thomas called “really primitive right now.” As we start re-thinking the way we think, in terms of events triggering events and functional models, she said, the systems may become easier.

The Future

So how do we move forward with using EDA to solve more complex challenges?

The infrastructure is really primitive, said Thomas. Moving forward, frameworks will be fundamental. “There’s so much missing with frameworks, but also observability.”

“But,” she said, “the biggest challenge is getting people to understand how big EDA is.” The complexity is enormous and not fully understood.

Anderson said they’re already seeing problems today. The industry needs to start thinking about how these various things work together as it gets more complex. “We’re solving different parts of the problem, but we’re doing it independently,” he said.

He suggested spending time thinking about how to coordinate with the cloud vendors using the tools we already have. “We don’t need a fourth way of deploying,” he said. “We have lots of ways to build apps, but how can we coordinate better and get better output? We don’t want to solve the same problem twice.”

Where’s a Dev to Start?

Take each problem step by step, suggested Anderson. “There’s a lot more good content out there on how to start, a lot more answers than there were a year ago.” He suggested getting started with Martin Fowler’s blog.

Thomas was also practical: focus on the data, she said. “It’s not just about your code. It’s about data and how you partition it, who gets to aggregate it.” And event schemas are critical. If you don’t have a decent large-scale perspective of how this data is coming together, she said, you’re going to run into a lot of problems.
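An event schema along the lines Thomas suggests can be as simple as a declared, validated shape that producers and consumers both agree on. This is a hypothetical example with invented field names, not a schema from any panelist’s platform:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class OrderPlaced:
    """Declared shape for one event type; the schema is the contract
    between whoever emits the event and whoever consumes it."""
    event_type: str
    event_id: str
    occurred_at: str   # ISO-8601 timestamp, e.g. "2017-09-11T09:00:00+00:00"
    order_id: str
    total_cents: int

def validate(raw: dict) -> OrderPlaced:
    # Raises TypeError if required fields are missing or extras sneak in,
    # stopping malformed events before they enter the stream.
    return OrderPlaced(**raw)
```

Agreeing on schemas like this up front is one way to keep the “large-scale perspective of how this data is coming together” that Thomas warns about.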

The Cloud Native Computing Foundation is a sponsor of The New Stack.

Feature image: Gartner’s Anne Thomas at Emit. All photos by T.C. Currie.

