Recently, I came across a new Containers as a Service (CaaS) offering called Hyper. After signing up and installing their CLI, I liked the overall workflow of managing containerized applications. Having extensively worked on containers, I feel that Hyper got many things right. In a way, this is how Docker should have been designed in the first place.
First things first, Hyper is not a replacement for Docker. It uses Docker images pulled from Docker Hub. But what’s most appealing is the way you launch containers without ever spinning up a VM to host them.
Almost every CaaS deals with the creation of a cluster. Hyper lets you focus on containers without worrying about the size and configuration of the cluster. In other words, Hyper is attempting to do to containers what Amazon EC2 has done to servers. If EC2 is IaaS, Hyper is a true CaaS.
So, how is Hyper different from other offerings? Technically speaking, Hyper replaces the container runtime with the hypervisor. The Docker-compatible API that it exposes talks directly to the hypervisor instead of a container engine running within a host. This sounds similar to what VMware and Microsoft attempted with their hypervisor-based containerization strategies. The key difference with Hyper is that the container does not run on the host kernel. Instead, every container gets its own independent guest kernel. With this approach, the application running inside a container is fully isolated from the host as well as from other containers.
The core technology behind Hyper is an open source project called HyperContainer, a hypervisor-agnostic technology that can run Docker images directly on the underlying hypervisor. The official documentation of HyperContainer defines it as the combination of a hypervisor, a kernel, and a Docker image. HyperContainer currently runs on KVM, QEMU, and Xen. Support for other hypervisors is in the pipeline.
Almost any workload that goes live in a cloud environment needs a few primitives such as a public IP address, persistent storage, and basic logging and monitoring. Even a simple WordPress blog or a Drupal site depends on these core features. Launching a containerized workload in a production environment on IaaS starts with VMs, block storage, virtual IPs, and monitoring. Administrators and DevOps engineers need to manage both the host and the container. Hyper tackles this workflow effectively.
With no hosts and VMs to manage, containers are directly launched from Docker Hub. They can be mounted on SSD-based persistent block storage. Each container can be associated with a public IP. Security groups can be configured to allow or restrict traffic. Volumes support live snapshots for point-in-time backup tasks.
Customers are billed on a per-second basis rather than the traditional hourly or per-minute model.
I wanted to take Hyper for a spin by launching a WordPress application. Below is a screenshot of the commands I ran.
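For readers who want to follow along, the session resembled the sketch below. The commands deliberately mirror Docker’s CLI; the container names, the MySQL password, and the `<allocated-ip>` placeholder are illustrative, so treat this as a rough reconstruction of the workflow rather than a verbatim transcript.

```shell
# Pull the images from Docker Hub -- no VM or host to provision first
hyper pull mysql:5.7
hyper pull wordpress

# Launch MySQL, then WordPress linked to it
hyper run -d --name mysql -e MYSQL_ROOT_PASSWORD=mypassword mysql:5.7
hyper run -d --name wordpress --link mysql:mysql -p 80:80 wordpress

# Allocate a floating public IP, then attach it to the WordPress container
hyper fip allocate 1
hyper fip attach <allocated-ip> wordpress
```

Once the floating IP is attached, the blog is reachable from the public internet with no load balancer or bastion host in between.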
Both MySQL and WordPress containers are backed by an SSD disk that automatically gets mounted when they are launched. Each user account gets an isolated subnet based on a virtual private network. Hyper doesn’t charge customers for the network traffic and bandwidth.
In less than four minutes, I had a publicly accessible WordPress site that was ready to be configured. It is pretty obvious that Hyper uses the Docker workflow, which is one of the reasons I got hooked on it immediately. This workflow is technically the same as launching a CoreOS EC2 instance inside a public subnet of a VPC, backed by an SSD EBS volume and an Elastic IP.
Hyper has a simple dashboard for launching and managing containerized workloads.
When running multiple homogeneous container replicas, users can create a Service, which is an abstraction that routes traffic to all the containers matching specific criteria. This feature gave me a hint that Hyper may be running Kubernetes behind the scenes.
Hyper instantly appeals to two audiences: developers familiar with PaaS, and administrators who use virtual private server (VPS) services.
While Docker has done a phenomenal job of democratizing containers, the gap between Dev and Ops is still wide open. Some of the recent investments from Docker such as native engines for Microsoft Windows and Mac have taken containers much closer to the developers. But the workflow involved in managing a container in a development environment is very different from managing the same in production.
Hyper’s uniqueness lies in preserving the workflow defined by Docker. Developers who have used PaaS in the past and are familiar with Docker will connect with it immediately. They fire the docker build command on the source code, push the image to Docker Hub, and then switch to Hyper to launch the same application in the public cloud. This workflow brings a PaaS-like experience to developers.
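That build-push-run loop can be sketched in a handful of commands. The image name and Docker Hub account below are hypothetical stand-ins; only the final two commands differ from the everyday Docker workflow.

```shell
# Build and tag the application image with the ordinary Docker workflow
docker build -t myaccount/myapp:1.0 .
docker push myaccount/myapp:1.0

# Switch to Hyper and launch the same image in the public cloud --
# the verbs are intentionally identical to Docker's
hyper pull myaccount/myapp:1.0
hyper run -d --name myapp -p 80:80 myaccount/myapp:1.0
```

Because the verbs match, the same muscle memory (and the same CI scripts, with `docker` swapped for `hyper` in the deploy stage) carries over from development to production.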
Hyper will enable new scenarios for building complex CI/CD pipelines that connect multiple environments. Its compatibility with the Docker API and CLI will make it possible to easily extend the DevOps toolchain for managing complex build processes.
Mature container orchestration engines such as Kubernetes, Marathon, and Swarm are designed for managing large microservices-based applications running in production. But not every workload needs the scale and reliability of a web-scale container platform. For them, Hyper offers a simple, intuitive way to run containerized applications in production.
The hyper-scale cloud providers like Amazon, Google, and Microsoft will continue to invest in containers and CaaS. But their target audience is very different from Hyper’s. While enterprises with significant investments in these public clouds will prefer the container management platforms offered by them, a cross-section of developers, system administrators, and businesses will use Hyper.
The competition in the CaaS market is heating up. We will witness a wide range of offerings targeting different use cases, scenarios, and users.
CoreOS, DigitalOcean, and Docker are sponsors of The New Stack.