WSO2 CEO Tyler Jewell: Ballerina and the End of Middleware
It’s been a busy week for Tyler Jewell, the CEO of enterprise application (EA) integration provider WSO2. This is the week for WSO2’s user conference in San Francisco. WSO2 is a leading provider of open source integration, API management, and identity and access management software, used for messaging, connecting, and governing applications. But with containerization and Kubernetes, the industry appears to be moving towards cloud-native computing, which requires a new set of nimbler tools. That the industry is moving towards cloud-native computing is an “inevitability,” WSO2 founder Sanjiva Weerawarana told us.
While supporting an existing and still growing customer base, WSO2 must also prepare for this new wave of computing. That’s what this week was all about at WSO2Con: The company released a new serverless platform, based on Apache OpenWhisk, as well as a set of nimble EA tools, including a micro-ESB (enterprise service bus) and micro-API Gateway, to work as containerized microservices. And, thinking long-term, the company has launched a programming language, called Ballerina, that will ultimately make it easier for developers to connect their applications with other services, eliminating, or at least minimizing, the need for middleware such as ESBs, so the company asserts.
Good thing Jewell is no stranger to disruption. He took over as CEO of WSO2 last September, after selling his prior company, Codenvy, to Red Hat. Codenvy was the first company to bring a fully functional collaborative cloud-based integrated development environment to market. And Jewell actually got his start two decades ago at BEA, a major middleware and Java vendor still widely deployed around the world.
We caught up with Jewell at WSO2Con this week to hear his thinking about Ballerina, a new “cell-based” architecture for cloud-native computing, and what the future holds for WSO2 and its clients.
You started Codenvy right after leaving Oracle. What was the opportunity there at that time?
So prior to Oracle, I’d spent six and a half years at Quest Software.
And Quest Software, which was eventually acquired by Dell and is now Quest Software again, had a big portfolio of systems management products. I managed their systems management products for Java and .NET. About halfway through my work experience there, I got the opportunity to work in corporate development leading venture investments. We developed an investment thesis specific to middleware and DevOps, and even though it wasn’t really a focus area for Quest, the founder of Quest, Vinny Smith, was a huge fan and encouraged it.
And so we ended up doing a bunch of different investments, and one of them was WSO2. We did an investment at Quest in 2011. The company at that point in time had about $2 million in sales and maybe 50 employees.
I’d gone off to Oracle, and Oracle wasn’t a very enjoyable environment. I got in touch with the founder of eXo Platform, which is a French company, to discuss our common interests around cloud IDEs. We had tried and failed to make a cloud IDE investment at Quest. I couldn’t let go of the idea that cloud IDEs would be powerful and disruptive.
You’d think that that would be a natural thing, but up until that time no one really did it.
In 2011, 2012, no one had done it for the enterprise. And it seemed like somebody was bound to do it. With eXo Platform, we both felt “Hey, this market seems like it’s too good to be true. Let’s do this.” And so we started Codenvy in partnership with them. And that was in mid-2012.
About that same time, Quest was getting acquired by Dell. And then Vinny calls up one day. He was starting a venture capital company and asked if I would like to get involved. I’m like “Well, Vinny, that sounds really interesting, but I’ve just decided to start this company Codenvy. We’re really excited about it. We’re gonna go build this Cloud IDE.” And he says, “Great. Come on board as a partner. Manage our dev ops investments, and we’ll make Codenvy one of our investments as well.”
And so sure enough, he launches Toba Capital, buys back Quest’s investment portfolio from Dell which included WSO2. After the buy back, we pursued a variety of DevOps companies, like Sauce Labs, and got deeply involved in DevOps opportunities. I was involved with four or five different boards with some amazing DevOps companies while we were building Codenvy from 2012 to 2017.
In June of last year, Red Hat bought Codenvy, and I had a short stint on the transition there. And while that was going on, the founders, [WSO2 chief technology officer] Paul Fremantle and [WSO2 chief architect] Sanjiva Weerawarana, had done a great job of building up the business. They started nurturing the idea for Ballerina, and by the middle of last year, they had graduated it from experiment to project. And as their excitement grew, their desire to continue running the business diminished. We had always had a great working relationship through our board work over the past seven years. Sanjiva just asked one day, “You know, we really wanna go do this Ballerina thing. It could be explosive. And would you come run the company for us so we can make it happen together?”
And that was a year ago. I had always believed in WSO2 and was a Ballerina convert. That, combined with the people, made it easy to say “Yeah, why not?”
What was it about WSO2 specifically that you liked? There were a lot of EA vendors out there at the time. But what was it specifically about this company?
Well, first of all, familiarity breeds affection. I never let go of the conviction tied to our original thesis for investing in WSO2 in 2010. If you play that calendar back, the predominant vendors were still WebLogic and WebSphere. Cloud native was not a concept. Service-oriented architecture [SOA] was the predominant architecture. Open source was trending, but not commonplace.
Sanjiva and Paul founded the company because they were offended by the business practices of BEA and IBM. They felt that those business practices led to inappropriate product development. Their idea was “The world needs a new middleware stack. Let’s use open source as a way to build a better one. We can make it faster. We can make it lighter. We can make it simpler. And we can build a business model that is more ethically clean.” We bought into that vision 8 years ago, and I still buy into it today.
There are large legacy vendors with unfair business practices selling a lot of middleware. It’s a $34 billion market. That’s a lot of money exchanging hands, and I’m not sure that money’s being used effectively. There’s a massive opportunity there in modernizing and disrupting the old guard.
And that’s on the one hand. The second is that the people who run this company — there’s 550 now — have done such a phenomenal job of creating an environment that is politics-free. It’s almost like a live human version of working in an open source community. People are really pleasant to work with. They care deeply about the work product. There is not a lot of ladder climbing as a result.
The company got its start in Sri Lanka, and maintains its engineering base there…
There’s deep, rich, and diverse cultural backgrounds that are not really all that influenced by North American styles. There’s a lot more friendliness that goes with it. They don’t bring what I consider some of the bad, aggressive practices that you see in North America.
And then Ballerina. Ballerina’s obviously a big deal.
By the time you signed on as CEO, middleware wasn’t being discussed as much in the press. But it sounds like the company had been growing pretty steadily over the last eight years — the workforce is ten times the size it was. Was that basically the case? They just kept quietly growing?
Yes, WSO2 is a quiet juggernaut, and has for more than four years not only been growing, but growing at increasing rates. This year will be 60% growth after last year’s 50%. This is faster than MuleSoft.
This is happening because integration is everywhere. You can’t avoid it. The press and industry popularize cloud native architectures, but that is an operations construct while integration is an app development construct. The cloud native specialists may not be aware of how substantial integration is for developers. These worlds are going to collide over time. Organizations that dwell on Kubernetes and service meshes view disruption from an infrastructure abstraction while integration is a developer programmatic abstraction.
We built Ballerina, because this integration abstraction is needed as an elastic runtime on top of cloud native systems. We are bringing our expertise around integration and applying it to the orchestration domain. We think that’s a big opportunity.
How did Ballerina come about? What’s the value of it compared to other languages?
So Ballerina came about after doing a ton of integration projects. If you look at how an integration project is traditionally done, you do it with an ESB. And an ESB is basically not an agile construct. You have to deploy the ESB. You have to get a bunch of adapters to connect to different systems. They each have their own life cycle. The logic of actually doing the integration is written by a developer who has to deploy it inside the ESB. And then if those two endpoints are changing, the whole system breaks.
And developers don’t like it.
Developers don’t like it. There’s no flow. So the founders of Ballerina felt that the need that drove ESB adoption – being the glue between endpoints – was going to become a bigger problem, and so there would be a significant demand for solutions to integration problems.
You could use a general-purpose programming language, with all sorts of frameworks and abstractions on top of it, but that requires a very long learning curve. Even in spite of the popularity of Spring Boot, there is a large learning curve. You have to be a Java expert. You have to be a Maven expert. There’s a full series of dependency injection abstractions. Layer upon layer upon layer.
The Ballerina founders said, “Look, there’s no need for all that. Let’s take a step back and look at the integration domain. List out all the use cases that we need to describe, and then let’s create a simple syntax that represents those use cases.”
When you do that, you can cut out unnecessary elements in code and middleware. You cut out the need for a center of excellence to manage it. You cut out the need for complicated adapters that have their own lifecycle. You’ve moved everything into code. Developers who just write code can stay in their flow longer and become more productive.
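Ballerina expresses this idea directly in its language syntax; as a rough, hypothetical sketch in Python (not Ballerina), here is what “integration as code” looks like when the glue between two endpoints is plain, unit-testable code rather than adapter configuration deployed into an ESB. The endpoint callables and field names are illustrative stand-ins, not any real system’s API.

```python
def transform(order: dict) -> dict:
    """Map a source system's order format to a target system's invoice format."""
    return {
        "orderId": order["id"],
        "total": sum(item["price"] * item["qty"] for item in order["items"]),
        "currency": order.get("currency", "USD"),
    }

def integrate(fetch_order, submit_invoice, order_id: str) -> dict:
    """The whole 'integration': fetch from one endpoint, transform, forward to another."""
    invoice = transform(fetch_order(order_id))
    submit_invoice(invoice)
    return invoice
```

Because the endpoints are passed in as functions, the integration logic can be exercised with stubs in an ordinary unit test — the testability point Jewell makes below about code over config.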
Over the past 20 years, if you’ve followed SOA, the premise was config over code, config over code. You deploy the ESB, and then use XML and YAML to configure stuff. Turns out that’s not easily debuggable or maintainable. No one likes XML and YAML.
The reason we still have these long release cycles is config over code; we’ve hit the upper limit of what it can offer us. Twenty years ago we thought that code over config was a failed approach. But that’s because the code hadn’t evolved. The code was still too complicated. And now we’re finding that code over config is fundamentally a better way. It’s more scalable. It’s more maintainable. You can write unit tests against it. You can make the developer more productive. You can be more agile. Everything that we thought config over code was going to be has hit a ceiling.
One thing you talked about in your keynote this week was a new architecture for cloud-native computing, called the cell-based architecture. Could you speak a bit more about that?
So traditional large-scale integration projects are either a layered architecture or a segmented architecture. And layered and segmented architectures focus on having common layers of technology that your data flows through.
These approaches are scalable. They’re very easy to plan, and they offer centralized governance. But they become a deployment layer that your app developers must work through. Even though your app developers may be agile, their products are not agile because they have this centralized architectural dependency.
A cell-based architecture is a composable unit of architecture that is self-contained. It has a data plane, a control plane, a collection of microservices, and DevOps processes that allow it to be independently improved and deployed apart from any other components or cells deployed within the same environment. Development teams can own their cell through its entire lifecycle. They can continue to iterate and improve that cell while other cells are doing the same thing. And the entire system stays alive irrespective of changes made to different cells. The cell is independently scalable. It’s independently deployable. It’s independently governed. It is part of an ecosystem of cells.
When you do that, a development team is freed from all the dependencies and burdens that the organization may have imposed upon it. And it can advance that cell at its own pace, independent of what the other cells are doing.
How do you determine what the lines of demarcation are around the cell? Who does that, and how do they determine it?
All the microservices built within it need to be governed by a series of exposed APIs. So the cell has its own gateway, the cell gateway, which restricts everything coming in on the ingress side. We call that the data plane. On the outbound side, the microservices inside may need to communicate with other cells; they can either ask the cell gateway for discovery or, if they’re aware of the other cell, make a direct connection through some sort of egress, also through the data plane.
There’s a regulated data plane outflow. And then every cell gets its own control plane, which is a way for a governance function or an administrator to come along and observe what is happening within that cell. And if every cell has its own control plane, you can then essentially have a federated control plane that’s looking out across all the cells, so that you can provide centralized governance and command-and-control capability from a common location. So you can change throttling policies. You might change the visibility of certain APIs. You might tell some cells to behave differently, or scale some up or scale some down, all in a completely centralized way.
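The pieces just described — a per-cell gateway exposing a restricted set of APIs, plus a federated control plane that can adjust visibility and throttling across cells — can be modeled minimally. This is an illustrative Python sketch of the concepts only; the class and attribute names are invented and do not reflect any WSO2 product API.

```python
class Cell:
    """A self-contained cell: its gateway only exposes a chosen set of APIs."""

    def __init__(self, name, apis):
        self.name = name
        self.visible_apis = set(apis)  # what the cell gateway exposes at ingress
        self.rate_limit = 100          # requests/sec allowed through the gateway

    def ingress(self, api):
        """Cell gateway check: only exposed APIs are reachable from outside."""
        return api in self.visible_apis


class FederatedControlPlane:
    """Central governance point that reaches into every cell's local control plane."""

    def __init__(self, cells):
        self.cells = {c.name: c for c in cells}

    def set_rate_limit(self, cell_name, limit):
        """Change a cell's throttling policy from the common location."""
        self.cells[cell_name].rate_limit = limit

    def hide_api(self, cell_name, api):
        """Change the visibility of an API in one cell."""
        self.cells[cell_name].visible_apis.discard(api)
```

The key property the sketch captures is that a team iterates inside its own `Cell` while governance acts across all cells from one place.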
The developer team doesn’t care about that. They just care about the iterative approach of building and deploying their cells. So if we ever want to get to the vision that Netflix and Amazon talk about, where you have autonomous, self-directed development teams that own their own technology, completely independent of others, a cell-based architecture is a common pattern that any enterprise can implement to get that same sort of behavior.
And I think that a lot of the predominant discussion out in the marketplace is around microservices: provide a microservice, and then you get reuse from it. The problem is that a microservice is not sufficient. You can have logic. You can have an API there. But a service is not really a service until it can be governed. And in order for a service to be governed, it needs to have some sort of API policy enforcement so that you can control who is coming in and out. You might need to control performance in and out.
There’s probably gonna be some sort of security token that you have to apply, so you need to be able to control and ensure exactly who is able to talk to that service. You need to be able to monitor all the services that are talking to each other and know exactly what they said and how they said it. Without these layers of governance control, a microservice is pointless. Because unless the enterprise can control it, it’s just some code running on a server.
What is a “micro-ESB” for a container?
So a lot of ESBs tend to have been designed for large scale. ESB deployments have traditionally been intended to be clustered, designed to handle large volumes of data and massive data transformations. And so in a classic architecture, you tend to run this in a JVM cluster. JVM clusters tend to have long boot cycles, because the system is preparing itself to operate in a way that it will never come down again.
The system is stationary to maximize the uptime.
In a cloud-native world, ESBs do not need the same overhead. By booting the ESB in containers, you can have an orchestrator monitor your containers and launch new ones whenever you need extra ESB capacity. You start to care more about memory footprint and boot speed. So a micro-ESB lets you run with these boot and memory properties inside of a container. It has millisecond boot times. It cannot handle as much traffic; an individual micro-ESB does not perform as well as a macro ESB, but its fast boot and orchestration give it higher elasticity.
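The elasticity trade-off Jewell describes — many small, fast-booting instances scaled out on demand instead of one large long-lived cluster — reduces to a simple capacity calculation an orchestrator might make. The following is a toy Python model with a made-up per-instance capacity, not a description of any real orchestrator’s algorithm.

```python
MICRO_ESB_CAPACITY = 50  # requests/sec one micro-ESB container can handle (illustrative)

def instances_needed(load):
    """How many micro-ESB containers an orchestrator would keep running
    for a given request load, always retaining at least one instance."""
    return max(1, -(-load // MICRO_ESB_CAPACITY))  # ceiling division
```

Because each instance boots in milliseconds, scaling from `instances_needed(50)` to `instances_needed(120)` worth of containers is cheap, which is what makes the lower per-instance throughput acceptable.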
The company also launched a new serverless offering this week…
We’ve been fascinated with the interest that developers show in being able to just write some function code, deploy it, and suddenly have an app. There’s definitely a curiosity in the enterprise segment about how you would apply that to enterprise integration. And so if you were to design a serverless solution from scratch, one you wanted to run privately with your own resources, you would be using a lot of event-driven architecture behind the scenes.
A lot of the products we already provide for API management, streaming analytics, service buses, and message brokers are foundational to creating a private function-as-a-service capability. So we partnered with the founders of Apache OpenWhisk to design an enterprise serverless solution. Think of it as a function router. It runs on Kubernetes. Developers get a private environment to which they can deploy their functions. And then they get their own private scalability algorithms for running those functions.
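Jewell’s “function router” framing can be illustrated with a minimal dispatch table: events arrive, and the router invokes whichever registered function matches the event type. This is a conceptual Python sketch only; the real offering builds on Apache OpenWhisk and Kubernetes, and the event shape here is invented.

```python
class FunctionRouter:
    """Routes incoming events to registered functions by event type."""

    def __init__(self):
        self.functions = {}  # event type -> deployed function

    def register(self, event_type, fn):
        """'Deploy' a function into the router's private environment."""
        self.functions[event_type] = fn

    def route(self, event):
        """Dispatch one event; unknown event types are rejected."""
        fn = self.functions.get(event["type"])
        if fn is None:
            raise KeyError(f"no function registered for event {event['type']!r}")
        return fn(event["payload"])
```

In a real platform, the router would also decide where and how many copies of each function to run, which is where the per-customer scalability algorithms come in.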
So we put it out for the first time this week, and we’re interested to see how our enterprise customers react. It’s a very different serverless approach than Lambda or Azure Functions would provide.
WSO2 is a sponsor of The New Stack.