Back when we first started saying our applications were running “on the cloud” rather than “on a host,” the cloud platform was typically a language interpreter with network access. These first platforms truly embraced cloud dynamics once they could do something that hosts alone could not: scale up.
In 2003, a developer and part-time Le Mans racer named David Heinemeier Hansson built his company’s Web app, Basecamp, and soon extracted the framework underneath it into a product of its own. Ruby on Rails was a framework designed for building applications that run on the Web. Three years later, a group of veteran executives, including one who led a little-known firm called Salesforce, launched an online platform for hosting Ruby on Rails applications. Its name, Engine Yard, and its bright red train logo told developers everything they needed to know at the time about its purpose.
A few months later, after securing funding from Benchmark Capital, Engine Yard acquired a firm called Orchestra, puzzling industry observers, who wondered why a Ruby on Rails firm would buy a PHP platform. What next? Would PaaS services become capable of running three languages? Four? Six? Who would ever want that many?
This year, cloud technology has rendered moot the entire question of how many languages a PaaS should support. If a PaaS supports containers, then it can run whatever language the customer puts in the container.
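That claim is easy to picture in practice: a container image declares its own language runtime, so the platform only needs to know how to run containers, not languages. A minimal sketch of the idea (the base image, port, and commands here are illustrative assumptions, not taken from any Engine Yard product):

```dockerfile
# Whatever language the customer chooses lives inside the image;
# the PaaS only ever sees a container listening on a port.
FROM ruby:2.2
# Swap the line above for python, golang, openjdk, even gfortran --
# the platform's contract does not change.
WORKDIR /app
COPY . .
RUN bundle install
EXPOSE 8080
CMD ["bundle", "exec", "rackup", "-p", "8080"]
```

The platform’s job reduces to “build the image, run it, route traffic to the exposed port,” which is why the “how many languages?” question dissolves.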
Last week, Engine Yard’s new CEO John R. “Beau” Vrolyk took a huge step toward abandoning all pretense of language restriction, acquiring OpDemand, maker of the Deis orchestration platform. In an interview with The New Stack, he called the establishment of a service around any single language — the founding principle of the company he now leads — “unwise,” in light of recent evolutionary changes to the industry.
Yet in extended excerpts from that same interview, Vrolyk goes even further, suggesting that concentrating any cloud-based service around a single mechanism for distributing functionality — even Docker — may be equally unwise. I asked Vrolyk whether the cloud platform industry as a whole has reached an inflection point, where all of us will need to be language-agnostic, or “polyglot,” to truly make use of Docker-based deployments.
Beau Vrolyk, CEO, Engine Yard: I think a better way to look at it is: Distributed systems with microservices allow you to use whichever language is most appropriate for each part of a complex application.
For example, I’m old enough and have been in this industry long enough to have actually written FORTRAN compilers. And I’ll tell you, I still actually prefer FORTRAN for certain analytic tasks that are very numerically intensive. The semantics are better, and I get better code out of it. Nobody in his right mind would run FORTRAN in a Web application today. But when people are trying to scale extremely large, numerically intensive applications, I see all sorts of code written in strange ways to try and make up for the complexities forced by the semantics and the syntax of some of our more modern languages, when I know the whole thing could be done in five lines of FORTRAN.
To your point, were one writing a big data application that was trying to do certain calculations, or were one trying to solve a problem in weather forecasting, having one of the microservices written in something as archaic as FORTRAN might actually be the best choice. So the best choice of language for an application comes down to the skill sets of the people implementing it, whatever standards they’re forced to comply with in the business in which they operate, and whatever really suits the task at hand.
In some ways, I do definitely agree that we’re approaching a point where being very polyglot, when it comes to languages, makes sense. I’d say that our industry often takes a lot longer to actually change than we on the leading edge believe it will. So while this change may really happen over the next five to seven years, it’s certainly not going to happen in a year, and it’s probably not going to happen by the end of the decade.
Scott Fulton, The New Stack: If we’re building platforms from here on out that enable people to use any language that can be containerized and put into this ubiquitous platform, then what’s our plan for integrating this new management tool that you’ve just acquired [OpDemand Deis] into what now becomes the old system, the old way of doing things? We’re already looking at the end of the lifecycle for the language-specific platform. Does that migration take, as you say, five to seven years? Things I used to think would take five to seven years, in Docker time, end up being that many months.
Beau: I have a very healthy respect for how important it is to keep working applications working. One of the things I think people tend to overlook is that the decisions one makes for a new application, starting in a greenfield space, are unencumbered by any legacy other than one’s predilections for a particular language. Being so unencumbered, you’re free to choose whatever language makes the most sense, addresses the key technical difficulties you’re going to face in offering that application, and operates within the very real constraints of whatever company it is you’re writing this application for. Companies — especially large ones — quite rightfully have programming standards where they try to control the rate of innovation, to avoid going down too many dark alleys and bumping their noses at the end.
I think what’s really going to happen here is, for the foreseeable future, we’ll do plenty of applications which are in our traditional Ruby on Rails space, running on a curated stack of something like MongoDB or Postgres, with a specific version of Linux and the whole stack from bare iron or the VM, all the way up to the application. Some of that will be motivated by tradition and comfort on the part of the application developer, and their management structure. And some of it will be motivated by a very real need to do things to the various layers of the stack which are much more difficult to do if your application is captured inside a container.
What the container does, in many ways, is make the world that the application lives in simple and safe. That’s a good thing for most applications, but there are some applications that need to use the sharper tools of things which are a bit dangerous, where yes, you can make mistakes that cause systems to crash and whole environments to keel over. But the reason you need to do that is because you’re working on a problem that demands that level of efficacy, or that level of accuracy in what it is you’re trying to get at and control.
So some of the more complex database-related applications where performance is a higher priority than ease of programming, or situations where raw performance trumps the ability to deploy an application easily — those sorts of things are going to stay in a much more traditional, curated stack. And they’re going to stay there for a while, because the reason they’re in there is not that the programmer didn’t think of making things easier by putting it in a container. It’s because the performance characteristics are such that they drive the writing and engineering decision to that technical level.
I think what we’ll see is quite a while where we have folks doing applications directly on a curated stack, where they need to be literally side-by-side with the database with minimum possible overhead between them. And then other folks doing a whole lot of other applications, which can very happily live within a container and aren’t meaningfully impacted by the overhead they necessarily incur by being containerized.
There is no free lunch. The safety of the container system definitely comes with some kind of penalty and overhead during runtime. A lot of times, that doesn’t matter.
As to the buildout of the infrastructure for Docker-ized applications, I think part of the reason you’re seeing this entire environment evolve so quickly is because many of the pieces that, frankly, we’ve worked on for 10 or 15 years in other areas are being applied to running microservices in Docker-ized situations. You’ll see schedulers like Mesos, which have a long history — long before anybody knew the word “Docker.” And that kind of scheduling can be applied right now, very directly, as can Kubernetes and others that are clearly applicable but weren’t written for containers. It just so happens they solve a very similar problem, and they have accelerated the adoption of containers way beyond what most people anticipated.
Scott: So you’re saying we’ve had maybe a decade-and-a-half of different pools of separate developments, all of which gained a kind of critical mass until they bled into one another, and suddenly there was this ocean rather than all these little ponds. Maybe people in the tech press didn’t pay attention to them at the time, because they were little ponds.
Beau: I think that’s a pretty good analogy. I wouldn’t use “ocean”; I’d use “big river,” because an ocean implies a certain level of placidity which I don’t see. I see a lot of chaos. The analogy I would use is a bunch of streams pouring into a very large river, which is still quite tumultuous. There are plenty of rocks and bumps and rapids.
We have a long way to go before we have trivialized the current generation of applications.
Feature image via Flickr Creative Commons.