Akka Java Middleware: What Goes Inside the Containers Counts
Back when Jonas Bonér was forming the ideas behind The Reactive Manifesto, based on his work with distributed systems, the move to the cloud was still in its infancy, and containers and Kubernetes were yet to come. Yet Akka, the middleware he created to add layers of abstraction between components, is finding new relevance in the Docker and Kubernetes world.
“The problem historically with Kubernetes and Docker is that people put old stuff, the traditional things, JEE-style of things into containers. Then they put all the old habits, old methods of communication, synchronous protocols, the bad things for managing state, into containers. Then you can only go half way. You have a great model for orchestrating the containers, but the things that run inside them are really sub-optimal. They’re not made for this new world of the cloud,” Bonér said in an interview.
The Reactive Manifesto, which was released publicly five years ago, called for systems to be responsive, resilient, elastic and message-driven.
While Kubernetes and Docker try to solve problems like failure at the infrastructure level, Akka addresses those problems from a programming model perspective: how you build your business logic, how you build workflows. LinkedIn, Verizon, Capital One and Credit Karma all use Akka.
“The essence of applications is how things communicate, how things relate to each other, how information flows between different parts,” Bonér said. “With Akka, we’re trying to build the best programming model for the cloud. It’s cloud-native in the truest sense. It was built to run natively in the cloud before the term ‘cloud native’ was coined.”
Akka is a Scala-based framework running on standard JVMs: a set of open-source libraries for designing scalable, resilient systems that span processor cores and networks. It is the best-known implementation of the actor model, which uses lightweight, isolated “actors” that communicate via asynchronous messaging. The framework lets developers focus on implementing business logic instead of writing low-level code for reliable behavior, fault tolerance and high performance.
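To make the actor model concrete, here is a minimal sketch using only the Java standard library (illustrative names, not Akka's actual API): each actor owns private state plus a mailbox, and the only way to affect that state is to send an asynchronous message.

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// Toy mailbox-style actor (a sketch of the concept, not Akka's API).
// State is private and isolated; callers can only enqueue messages,
// and a single consumer drains the mailbox, so no locks are needed.
class CounterActor {
    private final BlockingQueue<String> mailbox = new LinkedBlockingQueue<>();
    private int count = 0; // isolated state, never shared directly

    // Asynchronous send: the caller never blocks on the actor's processing.
    void tell(String msg) {
        mailbox.offer(msg);
    }

    // Process one message (normally run on the actor's own thread).
    void processNext() throws InterruptedException {
        String msg = mailbox.take();
        if (msg.equals("increment")) {
            count++;
        }
    }

    int count() {
        return count;
    }
}

class ActorSketch {
    public static void main(String[] args) throws Exception {
        CounterActor actor = new CounterActor();
        actor.tell("increment"); // fire-and-forget
        actor.tell("increment");
        actor.processNext();
        actor.processNext();
        System.out.println(actor.count()); // prints 2
    }
}
```

Real Akka actors add scheduling, supervision and distribution on top of this pattern, but the core contract is the same: no shared mutable state, only messages.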
It’s backed by Bonér’s company, Lightbend, which is also behind the Scala language and the Lagom microservices and Play web application frameworks. Akka works on Java as well. It has more than 250 projects on GitHub, as well as a port to .NET. However, it comes with the steep learning curve of Scala, according to RedMonk analyst James Governor, who noted an uptick in Scala in his most recent programming language rankings after declines in the three previous quarters.
Akka’s adoption started slowly, but today it sees around 5 million downloads a month, compared with 500,000 a month two years ago, Bonér said.
“Akka works really well [on] how you build your business logic, how you orchestrate workflows on a more fine-grained level,” Bonér said. “It’s what you put inside the Docker containers. It’s the thing that drives the application logic, the communication patterns. It’s what you put in the box that you orchestrate with Kubernetes. So I think these two different ways of looking at the world complement each other.”
Bonér spoke with The New Stack previously about how reactive programming addresses scale-out issues.
A lot of focus now is on microservices, and there are a lot of frameworks that make it very easy to create a single microservice, he pointed out, but one microservice is not that useful.
“Microservices are only useful when they can collaborate. … As soon as they start collaborating, we need to coordinate across a distributed system. That’s where all the hard things come in — managing communication, managing state. … then you have to stitch everything together yourself, while Akka was built for that from the start, as a fabric for distributed systems.”
Akka embraces a “let it fail” attitude — that losing one actor shouldn’t matter because another will pick up the work — and that actor might even be on another machine. Bonér explained how that works in a containerized world:
One of the key aspects of the Akka model is the notion of location transparency. Each component has a virtual address.
“All you need to do is communicate with this proxy, this virtual address. All communications are then relayed to wherever that actor happens to be. It might be on the same machine, but for scalability reasons, it might have been moved to another machine. As a user, you might never even know that.
“Wherever that actor happens to be is not important to you. That means the system can take a lot of liberty in scaling the system out or up and down, depending on the needs or depending on the load of the application. You can take advantage of that also when it comes to replication or redundancy, but it might spin up three, four instances of it. If one fails, it just points over to a replica — and all that without you even having to know about it,” he said.
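The virtual-address idea he describes can be sketched in a few lines of plain Java (hypothetical names, not Akka’s API): callers hold one stable proxy, which relays each message to whichever replica is currently healthy and fails over transparently when one is lost.

```java
import java.util.List;

// A replica of an actor running somewhere in the cluster; calling a
// replica whose node has been lost throws a RuntimeException here.
interface Replica {
    String handle(String msg);
}

// Sketch of location transparency (illustrative, not Akka's API):
// the caller only ever sees this virtual address, never the replicas.
class VirtualAddress {
    private final List<Replica> replicas;

    VirtualAddress(List<Replica> replicas) {
        this.replicas = replicas;
    }

    String tell(String msg) {
        for (Replica r : replicas) {
            try {
                return r.handle(msg); // relay to the current location
            } catch (RuntimeException nodeLost) {
                // fail over to the next replica, invisibly to the caller
            }
        }
        throw new IllegalStateException("all replicas unavailable");
    }
}

class LocationDemo {
    public static void main(String[] args) {
        Replica down = msg -> { throw new RuntimeException("node lost"); };
        Replica up = msg -> "handled: " + msg;
        VirtualAddress addr = new VirtualAddress(List.of(down, up));
        System.out.println(addr.tell("order-42")); // prints "handled: order-42"
    }
}
```

The point of the sketch is the caller’s view: it sends to one address, and whether the work happens locally, remotely, or on a replica after a failure is the runtime’s business.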
As a layer over Kubernetes, all this can be handled by the runtime itself, without the admin intervening, he said. If a whole node goes down, however, you need a more coarse-grained approach to failure management.
“That can be a more intensive process. It’s best to let the distribution fabric manage it — in this case Akka. Let the runtime manage failure for you. I think this fine-grained management of failure and the coarse-grained management of failure in Kubernetes complement each other.”
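The fine-grained, “let it fail” style of failure management can be sketched as a supervisor that, rather than repairing a failed component in place, throws it away and starts a fresh instance (plain Java with hypothetical names, not Akka’s supervision API):

```java
import java.util.concurrent.atomic.AtomicInteger;
import java.util.function.Supplier;

// Toy "let it fail" supervisor (a sketch, not Akka's supervision API):
// on failure, discard the crashed worker and restart a clean one.
class Supervisor {
    private final Supplier<Runnable> workerFactory;

    Supervisor(Supplier<Runnable> workerFactory) {
        this.workerFactory = workerFactory;
    }

    // Returns how many restarts were needed before the work succeeded.
    int runWithRestarts(int maxRestarts) {
        for (int attempt = 0; attempt <= maxRestarts; attempt++) {
            try {
                workerFactory.get().run(); // brand-new worker each attempt
                return attempt;
            } catch (RuntimeException crash) {
                // expected: drop the broken state and loop to restart
            }
        }
        throw new IllegalStateException("escalate to a parent supervisor");
    }
}

class SupervisorDemo {
    public static void main(String[] args) {
        AtomicInteger failuresLeft = new AtomicInteger(2); // first two workers crash
        Supervisor sup = new Supervisor(() -> () -> {
            if (failuresLeft.getAndDecrement() > 0) throw new RuntimeException("crash");
        });
        System.out.println(sup.runWithRestarts(5)); // prints 2
    }
}
```

In Akka the equivalent decision (restart, resume, stop or escalate) is made by a parent actor’s supervision strategy; the sketch only shows the restart-on-crash shape of that idea.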
And Leo Cheung, principal at Genuine.com, maintains that Akka and the actor model are ideal for IoT applications, noting: “The ‘minimalist’ requirements align well with the actor model in which breaking down business logic into minimal tasks for individual actors to handle is part of the model’s underlying principle.”