We founded Containership back in 2014 with this mission: Build a product that would simplify the confusing world of cloud automation and DevOps. Build a tool that was easy to install, easy to manage, easy to upgrade, and provided the most important features that the majority of users would require. Let businesses deploy containerized workloads quickly and easily to their provider of choice using a unified workflow. And let these businesses get back to building their products.
Back when our initial work began on the open source project, the world of cluster schedulers was in its infancy. The options were either extremely heavy and complex or were implemented in a way that made them cumbersome to use and fragile to the point of being unstable. Mesos was the only clear option at the time unless you wanted to take your chances with Flynn, Deis, or Fleet from CoreOS.
We also wanted to make sure to avoid going down the path of using systemd to manage service configurations (like Fleet did), and we wanted to avoid the complexity of a system like Mesos with its many different distributed systems all needing to maintain their own quorums, where upgrades would take you to the brink of a heart attack.
Since then, we have expanded significantly on the open source project by introducing our Containers-as-a-Service offering, Containership Cloud. We’ve added logging, metrics, firewalls, fully featured load balancing, and cross-provider snapshot and restore. We’ve gone from supporting just a few major cloud providers to 14. Through all of these product iterations, we have kept simplicity as our goal.
We Read a Lot and Talked to Many People
For a time, the heat was on, as everyone and their mother took a stab at gaining traction in the container scheduler space. Rancher had Cattle; CoreOS had Fleet, which was also used by Deis; Mesosphere was pushing Mesos and its Marathon framework; and Docker acquired Tutum and was working to get its own scheduler, Swarm, into the hearts and minds of users. Then, like a unicorn on a rocket ship, the Kubernetes project burst forth from Google, and everyone else was left in the dust.
Eventually the dust settled, and what was left standing was a Kubernetes project that practically caused a simultaneous nerdgasm every time it was mentioned in DevOps circles. Google’s reputation for scalability, coupled with its experience growing and nurturing open source projects, led to what has become the fastest-growing open source project and, in the minds of many, the clear winner in the orchestration war.
Google’s relative state of openness compared to Docker, plus its greater focus on community — symbolized by ceding control of Kubernetes to the Cloud Native Computing Foundation — gave users confidence that they could build off of the base provided by Kubernetes without fear of it disappearing tomorrow and being hung out to dry.
Raising the Bar for “High-Level”
Kubernetes is at the top of the hype charts, and for good reason, but if you’re not ready to swim in the deep end and are just getting your feet wet with containers, you could drown in options and configuration. Containership Cloud with Kubernetes makes getting started easy and brings self-service to the table for development teams looking to do this thing called DevOps today.
We’ve added first-class support for Kubernetes as an underlying scheduler that is fully supported by Containership Cloud.
You see, Kubernetes on its own is very powerful, but it is rather low-level. You can pick it up and get an environment running on your laptop fairly easily these days, but running it in production still requires a lot of hard work: integrating systems for load balancing, DNS, volume management, and a whole host of other things.
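To make the “low-level” point concrete, here is a minimal sketch of the kind of raw manifest a team writes by hand against the Kubernetes API (the app name and image are hypothetical). Note that the `LoadBalancer` Service type only provisions anything once a cloud provider integration is wired in — exactly the sort of plumbing a platform can handle for you:

```yaml
# Hypothetical example: a hand-written Deployment plus Service.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: example/web:1.0   # hypothetical image
          ports:
            - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  type: LoadBalancer   # requires a cloud integration to actually provision one
  selector:
    app: web
  ports:
    - port: 80
      targetPort: 8080
```

And this is only the workload definition: DNS, persistent volumes, firewalling, and ingress each need their own configuration and supporting infrastructure before a cluster is production-ready.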
By integrating Kubernetes into our own CaaS platform, we are able to provide users with peace of mind. They’ll know they are running on the clear winner in orchestration. They already have confidence in Google’s ability to deliver a stable and scalable project, but now they get to access that power through our simpler interface, making management of services and working in a team environment easy.
But Containership isn’t just a user interface around the Kubernetes API. We bring to the table all of the advanced functionality and features that already existed in our product, now with Kubernetes support. Launch a Kubernetes cluster in minutes on any of our supported providers. Manage firewall rules, load balancers, service discovery, data volumes, and more through the UI, API, or CLI. And best of all, our snapshot feature works exactly the same, allowing you to clone entire Kubernetes clusters to create new environments, implement disaster recovery, or expand and migrate into other availability zones, regions, or cloud providers altogether.
The Path Forward
Our initial release of Kubernetes support marks a milestone in our evolution toward providing the premier multi-cloud platform for developers. As it stands today, we support the features common to our own scheduler and Kubernetes, but we’ll continue to add functionality that helps teams take advantage of the full breadth of options and features the project provides.
As we work toward our goal of blurring and then erasing the lines between the cloud providers and private data centers of the world, Kubernetes will provide a stable, scalable, and community-driven building block that gives users confidence that they are building on top of something open, without ever locking themselves in again.
Feature image: The last spike being driven into the Transcontinental Railroad for the Canadian Pacific Railway, circa 1885, from Library and Archives Canada, in the public domain.
The New Stack is a wholly owned subsidiary of Insight Partners, an investor in the following companies mentioned in this article: Docker.