Part two of an interview with Alex Polvi, co-founder of CoreOS, which incorporates containerization, clustering and fault tolerance into its auto-updating technology. The company calls its offering “OS as a service.”
“Everything’s at the table again,” as a consequence of the current speed of solutions across the industry, says Alex in part one of the interview. “We’re re-doing networking; we’re re-doing storage; we’re re-doing our databases. We’re fixing it all…”
Alex Williams: The re-doing of networking, the re-doing of storage… How is that starting to emerge?
Alex Polvi: They’re simplifying and they’re becoming more accessible to everybody. For instance, for a company to run a distributed storage system right now, it’s a pretty complex thing. Containers represent enough of a movement that we’ll see that become more accessible to the common Ops shop. These things will be easier.
Clustering is another one. Clustering is a complex beast that’s been around for a while, and only the most advanced operations shops could do it. I think we’re all going to run little clusters in our environments, whether it’s three servers or 30,000. That’ll be more accessible. I think accessibility is the big thing. It’s making all of these concepts just more accessible to companies.
It goes back to the roots of how Linux has evolved over the past several years. Chris Swan wrote this story in The New Stack about how the concept of Linux distributions is changing. He talked a lot about systemd and its evolution.
He makes a point that your announcement of Rocket was perceived by many to be a direct challenge to Docker, particularly coming on the eve of DockerCon Europe. He wasn’t writing about the politics of ecosystems and VC-funded startups; he was looking at systemd, which he sees lying at the heart of the technical arguments. He states, “There’s been an unholy war raging through the Linux world over systemd for some time. Pretty much everything on a system gets touched by what is selected as the first process on a system and how that impacts everything getting started up.”
Do you agree with that, do you disagree, or do you have a different take on the entire technical basis of the argument?
Alex: I probably need to read the article to know which part of the argument he’s trying to make. My opinion on systemd is: who cares? Obviously some people care; I don’t get it. The reason systemd exists is to boot a Linux server; there’s a set of things you need to do to boot it. The systemd view of the world is, “let’s just provide a good version of those things, for folks to boot.”
It’s actually very similar to Docker and the “batteries included, but removable” type of mentality. It’s a product mindset.
“Batteries included, but removable,” is when you’re thinking about things in terms of a product. The UNIX philosophy says, “think about things in terms of components” — as tools you use to build a product, but not as a product in and of itself.
It’s just a clearer sign of Docker being a product, instead of a component that we all use to build our products with.
That’s the clear delineation. Could some of the things within Docker be broken out, such that they could be used in other people’s products? Sure, but Docker itself is fundamentally a product, with different design characteristics, instead of a set of UNIX philosophy tools.
By the way, I think both need to exist. You need to have products, which are for people and companies to use — a complete solution to something. But, you also need all of the individual components to exist.
I’ll refer back to Chris’ story. He doesn’t say the two need to co-exist, but he says that CoreOS, Red Hat’s Project Atomic, and Ubuntu’s Snappy Core, are all positioned as distributions to run Docker on top of. Do you agree with that? Is that part of what you’re saying here, that the two need to coexist? Does that leave Rocket in a different place than Docker?
Alex: (now reading the story, chuckles) I was just reading some of this for the first time. So, there’s a couple of things going on — which part should we dig into? I think it all just depends on which angle you’re looking at this from. You need different tools for different jobs. Docker, Rocket and systemd all need to exist because they all have different purposes. So where should we start tackling that from?
Just today, Lennart Poettering posted an article about systemd supporting Docker images natively — systemd can now run a Docker image. That’s very timely for this part of the discussion — systemd has all the capabilities Docker has, except for this easy image pull-push thing which they just added. Now systemd and Docker look very similar, which confuses the mess even more.
I don’t want to get too deep in the weeds on systemd, but it speaks to some of the confusion in the market right now.
Alex: The only people who should care about systemd — or not — are people who are building Linux distributions. If you’re using a Linux distribution, it’s because you like the way they set things up. If they use systemd, and you like systemd, then you should use that one. If you like something else, you should use the Linux distribution that chose something else. However, it turned out that all the builders of Linux distributions pretty much unanimously agreed that systemd was the way to do it. Maybe the people in charge of the Linux distros aren’t serving their customers well anymore. Or, maybe systemd is actually okay and we’re just talking about a religious argument now, and not a technical one.
There are very passionate arguments for Docker, but you stood back and looked at it from another perspective. I’m curious what that perspective was when you saw Docker’s emergence.
Alex: The system we want to build utilizes containers. The reason we started CoreOS is because we wanted to fundamentally improve the security of the Internet, and we do that with updates. We think the best way to do an update to an OS is to automatically apply it for the customer. We think we can centrally manage the updates to a server much more effectively than individual IT teams can.
How do you build an OS that updates itself? You package your applications inside of a container; you do that so that you can separate the application dependencies from the host dependencies. That allows us to update you without breaking your applications. That’s the technical requirement for why we need a container at the heart of our whole story.
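The separation Polvi describes can be pictured as a service definition on the host. This is a hypothetical systemd unit, not taken from CoreOS documentation; the image name and paths are illustrative. The point is that the host only knows how to launch the container, while the application and all of its library dependencies live inside the image, so the OS underneath can be auto-updated without touching them.

```
# Hypothetical example: the host unit only references the container image.
# Everything the app depends on ships inside example.com/webapp:1.0, so a
# host OS update cannot break the application's dependencies.
[Unit]
Description=Example web app, isolated from the host filesystem
After=network.target

[Service]
ExecStart=/usr/bin/docker run --rm --name webapp example.com/webapp:1.0
Restart=always

[Install]
WantedBy=multi-user.target
```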
As we build this next generation OS, we don’t build things that already exist and do the job well. One of the tools we selected early on was Docker. As we were building CoreOS, Docker emerged right when we needed it. Docker built a tool for downloading and running a container, and that’s what we needed.
We sat in on Docker early on, contributing back to that community as it emerged. As the story unfolded, we used Docker like a package manager. It’s almost like RPM, or Debian’s .deb, even Snappy. Snappy and Docker, when you think of Docker as a package manager, are competitive. As Docker evolved, it started adding things we didn’t need, and it wasn’t fixing the problems with the original package management that we needed.
So, our package manager wasn’t working that well, because they not only skipped a bunch of things which, in our opinion, they should have built, but also they added features that we didn’t need anymore. That triggered us to release something, with the features that we needed done right, and without the things that we didn’t need. We essentially built the ideal thing that we wanted.
One important characteristic of a container, and one of the things we rely on for our value proposition, is that it’s interoperable with other Linux distributions. I want a RHEL customer to choose us on merit, not because they built their application for CoreOS and it only runs on CoreOS. Containers are a beautiful thing for interoperability because they run mostly the same, whether on RHEL or on CoreOS.
We think that standards need to exist around containers — around how the image is defined, and how that container is run — so that different vendors are free to build the best tool for the job. Outside of getting the security aspects right, another piece that we solved with Rocket was standardization.
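The standardization effort Polvi refers to took shape as the App Container (appc) specification, in which an image is described by a vendor-neutral manifest rather than by any one tool’s internal format. The fragment below is an illustrative sketch in that spirit; the name, version label, and executable path are made up for the example.

```
{
  "acKind": "ImageManifest",
  "acVersion": "0.5.2",
  "name": "example.com/webapp",
  "labels": [
    {"name": "version", "value": "1.0.0"},
    {"name": "os", "value": "linux"},
    {"name": "arch", "value": "amd64"}
  ],
  "app": {
    "exec": ["/usr/bin/webapp"],
    "user": "0",
    "group": "0"
  }
}
```

Because the manifest is just declarative data, any runtime that understands the spec can run the image — which is exactly the interoperability-between-vendors argument being made here.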
Our motivations are those of somebody building a tool that uses a container, not those of a company that’s trying to build a web service with a container platform.
We happen to have a container platform for running some of the random infrastructure we use, but that’s just an implementation detail of ours as a company building a product. Our product needs a UNIXy, composable version of a container. We don’t need a vSphere handed to us.
Are there concerns about fragmentation, due to these competing interests, which might undermine that interoperability?
Alex: There are no technical reasons that the Docker container format and the Rocket container format can’t converge. We are 100% in support of them converging. We want the standard to be well-engineered and well-designed. We’re not going to sit around and just accept the de facto standard if it’s broken. We will put out what we think is well-designed and has been reviewed by a number of folks outside of ourselves who also say it’s well-designed. We are happy to collaborate with Docker on that effort to provide interoperability. But we’re not going to cede it to Docker just because it’s there. We want it to be good — to be a well-defined, solid, technically well-built implementation.
While you give the benefit of the doubt to an early open source project — that it will get straight over time — at a certain point you give up and you build the thing you want instead. We waited about a year and a half. Maybe we should have waited two years; maybe we should have waited four years; we thought we gave it sufficient time before investing in our own set of tools.
Tell us more about Rocket — what have you learned since its launch?
Alex: I think the first thing we learned is: in any situation, you need to manage press. We thought we were doing folks a favor by not blasting, pre-briefing, and getting a lot of press involved up front. I mean, even yourself — you’re the sort of person we would normally approach with these things before we actually announce them. As you could probably vouch, we didn’t talk to you about it before we announced it. That was because we didn’t anticipate it blowing up as big as it did. Our lesson learned on that front was: if the message gets away from us — like it did with people starting to blast “fundamentally flawed” across the Internet — that can result in a discussion that isn’t helpful for anybody. That was one big lesson learned.
I don’t think we screwed up at all in terms of what we actually delivered or how we messaged it on our blog. I think maybe we could have softened the words, but nobody refuted the technical argument of what we made. We have a very specific technical argument that holds true, if you actually read the details and understand it.
I actually read the details — I can’t say I necessarily understood it all.
Alex: (Chuckles) Right.
I was looking to see, “is this thought through?” And it is well thought through.
Alex: At the end of the day, the technical argument holds. What’s happened is: one, Rocket has gotten a life of its own. It’s a healthy open source project — lots of forks, lots of outside contributors, growing very quickly — and that’s good. We’ve also started the discussion around standardization, which again I think everybody wants. Everybody wants the container to be interoperable.
We’ve also triggered some re-ordering of priorities within the Docker project.
This was a very heavy-handed way to do it, but it happened, and it’s good for the customer for this competition to exist.
It reminds me very much of Firefox and Chrome. I was at Mozilla when Chrome was released, and it was like, “What the heck, guys?” For one, Google was paying Mozilla, so there was a business relationship. Mozilla Firefox was very successful. Firefox was the golden child, rooted in a good ethos, “let’s take back the Web,” with very sharp, very good developers. But it kind of lost its way. Chrome came out and did all the things that everybody knew should have been done but, for whatever reason, hadn’t been done by Mozilla.
The net result of this is: the Web got a lot faster, web standards got more strict and better for the user, and both Firefox and Chrome exist in a world where it’s okay for them both to be there — the best tool for the job. That’s what I think will happen with this over time: we will continue to invest in Rocket and make Rocket exactly what we want it to be, and if there’s a place where Docker looks exactly like Rocket, I still think it’s good for them both to exist, just because it’s better for the end user if there’s choice.
Is there compatibility there? How are you going to help the user understand that there’s compatibility and interoperability with both? What do you say to the people who are running Docker and who may be interested in and considering Rocket?
Alex: What you’re saying to Docker is, “support an open standard that’s defined by a community, not just a vendor.” Using our web browser analogy, it’s very much like Netscape. Netscape didn’t originally create the HTTP protocol, but they were the first to implement it in a way that was end-user-friendly — like we were talking about with containers. If we had called it “the Netscape Web” or “the Netscape transfer protocol,” the Internet would be a little bit different right now. Instead, HTTP is de-coupled from the web browser itself, in this analogy, and everybody is on fair ground to build the best tool that interoperates with it. That’s what we want.
We want standardization, and then interoperable tools. We will continue to lobby for that until it happens, I guess. Our standard is out there. It can be adopted. It’s out there; it’s happening now. Is it adopted by Docker? No. Could it be? For sure. There are no reasons except political ones for it not to be.
So what are the next steps for Rocket?
Alex: Make it awesome (laughs). We’re designing it for the production-ready use case and solving all the production-ready problems that Docker has. We think that companies — serious infrastructure companies that care about security and reliability — will choose Rocket, given the current state of the art. We will continue to push down that path and focus on the production-minded operations person.
What do you have to sacrifice with that security that you’re building in?
Alex: There’s always a compromise between security and ease-of-use. We’re trying to mitigate that as much as we possibly can. But, yeah, there are some things. Probably the biggest thing we can do to mitigate it is to set up the security for you when we implement these tools inside of CoreOS itself. But, when you’re using these in assembly mode — where you’re using the components, instead of our full end-to-end solution, which is CoreOS — then you have to be more security-minded about how you set it up.
The way we’re balancing it is by providing some end-to-end solutions that take care of all the security-mindedness for you and then giving you the components, while making the security in the components as easy to deal with as possible through good developer UI.
What are your observations about the way Docker is approaching the security issue? You cite them for developing products; do you see security in Docker becoming a product, or do you see it becoming part of Docker overall?
Alex: I think, and I hope, that Rocket has opened up the conversation enough that these things will get fixed. Again, the line in the blog post was, “the architecture, from a security and composability perspective, running through a monolithic binary, is fundamentally flawed.” That was the line. That gets fixed by Docker de-composing it into a bunch of individual tools — that’s how you fix that security and composability issue.
I hope the net result of all this is that Docker does it, because if they don’t, it is fundamentally flawed.
Fortunately, Solomon (Hykes, creator of the Docker project) is an open-minded guy. He’s on Docker dev right now, talking, having the discussion, and being open about breaking it into a bunch of different tools, because we recognize that this is broken.
That puts you right in the middle of that conversation?
So: 2015. We haven’t done any predictions; I don’t think we really will, except to say that our focus will be on this idea of distributed application development, and how to manage applications across distributed infrastructure and cloud services and multiple data centers. But, that’s a very high level perspective. I’m curious about what you see in 2015 — do you have any particular expectations of the market, what you will be focusing on, and what the community of like-minded people will be focusing on as well?
Alex: I think 2015 will be the year of the production-ready container.
Yeah, I saw your tweet on that.
Alex: I sincerely think that it — among other things, outside of our crazy, little world of infrastructure — will finally get cleaned up and ready. I thought 2014 was going to be the year of the production-ready container, but it’s taken a little bit longer than we expected. Now I think the fire is lit and it’s going.
How does that change the composition of cloud services?
Alex: Cloud services like AWS?
…and Google …if we can move complete containers around pretty much anywhere?
Alex: Well, it’s already a race to the bottom. They have a lot of things going for them in terms of product differentiation right now; they’re not one-for-one. That world of perfect workload mobility between the two clouds is still a ways out. That’s years out, oh my gosh… Containers in each cloud, and you can just move everything around, and you’re just paying for the cheapest, fastest provider? That’s still a ways out, but that is in sight. There’s no question that could happen.
First, the cloud service providers are probably all using containers behind the scenes. I know Rackspace uses some containers. I know Google uses containers; they talk about it a lot. I don’t know what Amazon uses, but they all use containers to build their thing. But in terms of an end user embracing a container, it’s a ways out before my container runs equally well on Amazon or Google, so that I could just switch between them seamlessly.
We’ll be looking to people such as yourself to help us understand these concepts. I look forward to keeping in touch.
Alex: Likewise — and call me on it at the end of the year. Let’s have this discussion again and let’s see if we got a production-ready container.
Sounds good, Alex. Thank you very much for taking some time to talk to us. We covered a lot of ground.