From Containers to Unikernels and Serverless Architectures
The complexity of managing Docker and containers in production is one of the greater challenges that comes with their adoption. For millions of programmers and plumbers, the issue is about simplicity.
It’s evident that containers represent a major progression from virtual machines. But from a more universal perspective, that progression is one point on a continuum leading to serverless architectures and other, more efficient means of managing complexity, such as unikernel technology.
Building a Containerized Architecture
Rally Software runs a containerized architecture stack that engineer Matt Bajor used at the Strange Loop conference as a case study in the complexities of microservices architectures.
Architectures vary across platforms. The Rally containerized architecture uses dynamic DNS and load balancers to manage traffic, along with etcd for service discovery. There are object stores, memory stores and relational stores. The services are tied together with Kubernetes or Mesos and are managed with configuration management tools, such as Puppet or Chef. The hypervisor serves as a partitioning system for dividing the hardware. On top of the hypervisor sits an operating system, such as CentOS; on top of that, the Docker runtime with shared libraries and the language runtime; and finally, the application and its configuration.
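To make the service-discovery layer concrete, here is a minimal sketch of how services can register and find each other through etcd. It assumes the python-etcd3 client, and the key names and addresses are hypothetical; the talk does not specify Rally’s actual tooling.

```python
# A minimal sketch of etcd-based service discovery, assuming the
# python-etcd3 client; key layout and service names are hypothetical.
import etcd3

etcd = etcd3.client(host="127.0.0.1", port=2379)

# A service instance registers itself under a well-known prefix,
# attached to a short-lived lease so the entry disappears if the
# instance stops renewing it.
lease = etcd.lease(ttl=10)
etcd.put("/services/orders/instance-1", "10.0.0.12:8080", lease=lease)

# A client (or load balancer) discovers live instances by listing
# every key under the prefix.
for value, meta in etcd.get_prefix("/services/orders/"):
    print(meta.key.decode(), "->", value.decode())

# The instance keeps its registration alive while it stays healthy.
lease.refresh()
```

The lease is the important design choice here: a crashed instance simply stops refreshing and falls out of the registry, so the load balancer never needs to be told about failures.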
These systems, as a whole, are effective, but they are quite complex, Bajor said. How the pieces interact at any given time is difficult to predict, and there are many upgrade cadences to keep in sync. A large portion of the stack is redundant: isolation happens at the hypervisor layer, in the user processes and again in the container. And all of this is for a single app on a single server, or a single user.
Heterogeneous and Over-generalized
There are many layers that developers know about implicitly, without an explicit understanding of how they all work. Systems are historically heterogeneous and largely over-generalized. In new architectures, hardware is largely commoditized, with virtual machines running on hypervisors and virtual device drivers.
Bajor makes the point that the Linux kernel and users are natural enemies: considerable complexity is built into Linux to keep apps safe from users, users safe from other users, and apps safe from other apps. That means a lot of code and complexity in the system, including many permission checks in the operating system. These checks have their roots in an era when time-sharing was necessary on larger systems, with many apps and users all working on the same hardware, interacting and coexisting.
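That cost is visible even from userspace. As a simple illustration (not from Bajor’s talk), every file access below causes the kernel to check the process’s credentials against the file’s permissions:

```python
# A simple illustration (not from Bajor's talk): each call below asks
# the kernel to check this process's credentials against file permissions.
import os

path = "/etc/shadow"  # readable only by root on most Linux systems

# os.access() asks the kernel whether the real UID/GID may read the file.
print("readable?", os.access(path, os.R_OK))

try:
    # open() triggers the same check; for a non-root user the kernel
    # refuses with EACCES, surfaced in Python as PermissionError.
    with open(path) as f:
        f.readline()
except PermissionError as e:
    print("kernel denied access:", e)
```

A unikernel serving a single application for a single user can drop this entire class of checks.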
There are also many inefficiencies to manage, Bajor said. A system has virtual drivers, but may also have physical hard drives and even tape. Large amounts of storage and RAM often sit underutilized, tying up resources.
The Linux kernel also has a large attack surface, making it easier to break into the system. Security patching is done by an operations team, which creates incompatibility issues with the developer teams writing the code. The interactions between the two teams can be difficult to track, and each changeset comes in a different shape and size, creating issues such as outages.
There is a long progression of technology with roots in mainframes, the client/server era and now distributed systems. Integrating heterogeneous environments comes down to systems and software compatibility, and customers have long sought efficiencies to make infrastructure more cost-effective. Former VMware CTO Steve Herrod said in an interview that selling VMware’s virtualization technology in its heyday became a matter of helping a customer not buy another server. In those days, it was an easy sale, as customers needed ways to reduce, not increase, the number of machines running in their data centers.
Now, with the advent of containers, we see a real effort to make things simpler. Compatibility is hard, but rich tool ecosystems, increasingly rooted in open source, have made it easier. Microservices are easier to build. Containers are easier to deploy. Now the market is ready for the next phase, built on the premise of efficiency and, perhaps most of all, performance.
Unikernels: Meeting Today’s Performance Needs
Bajor points out that performance is a key value driver for containers, but they carry an associated complexity. For even higher performance gains, there is growing interest in technologies such as unikernels, which, proponents say, simplify the technology stack.
Unikernels are uniquely specialized virtual machines, similar to an application stack: application binaries on top, virtual hardware underneath and, in the middle, a library operating system with its own network stack, Bajor pointed out. Unikernels are self-contained and have far fewer layers than a container stack.
Unikernels implement the bare minimum of traditional operating system functions. There are no permissions, nor is there isolation, Bajor said; they do just enough to enable the application they power. By removing the traditional operating system layer, unikernels shed the unneeded bulk of standard operating system environments, along with its associated attack surface. Unikernels are extremely light, allowing higher density on commodity hardware. They can run services that are born when the need appears and die as soon as the need disappears; some of these transient microservices may have lifespans measured in seconds, or even fractions of a second. They are just-in-time computing services, which exist only when there is work to do, allowing you to maximize the use of your computing infrastructure.
Unikernels have a parallel in the serverless architectures now gaining popularity, seen in services such as AWS Lambda, in which Amazon is investing deeply.
Lambda was devised to run user-generated functions in the cloud, without the user needing to worry about any of the supporting stack that runs those functions. Lambda is a stateless compute service: it runs a user-defined function that collects data from an outside service, works on the data and delivers the output to another service. The code has to be triggered by an external event, such as an incoming call from a mobile app, a web service or another AWS service. A change in an Amazon S3 bucket or DynamoDB table can also trigger a function call.
Lambda is novel in that it strips away all need to worry about any supporting infrastructure. No more maintaining EC2 instances just to run a single function. Infrastructure issues, such as scaling or maintenance, are whisked away in abstraction. Typical Lambda jobs include image conversion, change notifications and file compression — in fact, AWS has prebuilt functions available to handle those specific tasks.
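To make the model concrete, here is a minimal sketch of such a function in Python, triggered by an S3 upload and compressing the new object. The handler shape and S3 event structure are standard; the output bucket name is hypothetical, and this is an illustration rather than one of AWS’s prebuilt functions.

```python
# A minimal, illustrative Lambda handler (not one of AWS's prebuilt
# functions): triggered by an S3 upload, it gzips the object and writes
# the result to a second, hypothetical bucket.
import gzip
import boto3

s3 = boto3.client("s3")
OUTPUT_BUCKET = "my-compressed-output"  # hypothetical bucket name

def handler(event, context):
    # Lambda passes the triggering S3 event in; each record names the
    # bucket and key of the uploaded object.
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]

        body = s3.get_object(Bucket=bucket, Key=key)["Body"].read()
        s3.put_object(
            Bucket=OUTPUT_BUCKET,
            Key=key + ".gz",
            Body=gzip.compress(body),
        )

    # The function holds no state of its own; once it returns, the
    # compute behind it can disappear until the next event arrives.
    return {"compressed": len(event["Records"])}
```

The statelessness is the point: nothing in this code provisions, scales, patches or even names a server.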
Summary
What does this say about the continuum we see as the world develops new technology stacks? There is a new set of application patterns and deployments. We have achieved a degree of compatibility, and the next effort is to get better performance. Containers and unikernels are similar technologies, with unikernels described as “a Docker container on a diet.” Bringing unikernels to Docker could give developers greater familiarity with the technology.
For those building mission-critical systems, unikernels give developers explicit control over the core security areas of their application: developers choose exactly what goes into the resulting image. In comparison, Docker containers have everything they need to run enabled by default. In unikernels, many features are turned off by default, meaning more initial setup and more choices for a project team. According to unikernel proponents, once those choices have been made, the result is a resilient new stack that is secure while remaining customizable to the needs of the project.
What does that mean for containers? In an interview, Docker CTO Solomon Hykes said the ramifications of what Docker is doing are pretty broad.
“Our focus is about continuing to enable a platform that is looking at the whole lifecycle of an application, so our focus will be on adding more and more innovation; our job is to simplify and accelerate that journey of the application lifecycle for organizations.”
But in the meantime, the real win will come to the organizations that can offer the speed that comes with lightweight systems. Doing so will require new thinking about how to transition from monoliths to microservices and all that goes with them, as we move along the continuum to an ever more programmable world.