Docker Basics, Part Zero: Why Should I Care about Containers (and Microservices) Anyway?

Containers (and microservices) are the future of application delivery, at least until the next Great Tech Leap Forward comes along, and Docker is the category killer platform. Companies are adopting Docker at a remarkable rate these days. And, increasingly, all developers — and systems administrators, and cloud administrators — need to at least have a functional grounding in Docker technology. Or, even better, get busy adding it to their tool belts.
Docker has been with us for a while. You might think you personally don’t need to worry about container stuff because it’s a Linux thing, or a DevOps thing, or some other not-my-pay-grade issue. For a long time, this was pretty much true. Within the past six or so months, however, containers have made a great leap forward. Windows Server 2016, released in September 2016, introduced Windows Containers built right into the OS. Then in February, Oracle announced its new Container Cloud Service, optimized for B.Y.O. Docker containers to run seamlessly in the Oracle Public Cloud.
In our new series, Docker Basics, The New Stack will help you get going with Docker, taking you step by step through the Docker learning curve. Step one: understanding where containers came from, what they do, and why you care.

Why you care: Docker’s popularity is growing exponentially.
A Brief History of (Tech) Time
Once upon a time, we had a separate physical server for pretty much any given function: file server, mail server, print server, and so on. Much of the time, these servers sat idle until called upon to do their one special thing. Then virtualization came along, making it possible to use one server for multiple functions, and it was good.
Virtual machines, however, contain both the application and an operating system — meaning that each VM runs not only a copy of the OS, but also a virtual copy of all the hardware the OS needs to run. In short, VMs are more efficient than the old-school single-function server model, but they still take up a lot of system resources. Enter the next generation: containers.
By design, containers pack many more applications into a single physical server than VMs can. A container is a software-defined environment, abstracted from the host system and easily portable. In plain language, this means that individual containers each “contain” an application but rely on a common, compatible underlying operating system layer that doesn’t need to be duplicated for each one the way it does for each virtual machine. The result is a leaner footprint, lower overhead, and significantly better, faster application performance.
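You can see that shared OS layer for yourself. Here’s a minimal sketch, assuming Docker is installed on a Linux host (the tiny alpine image is just a convenient example):

```
# On the host, note the kernel version.
uname -r

# Inside a container, the same command prints the same kernel version:
# the container shares the host's kernel rather than booting its own OS
# the way a VM does.
docker run --rm alpine uname -r
```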
The most helpful metaphor I’ve come across for understanding the difference between VMs and containers — which some people mistakenly call “mini VMs” — comes from a blog post Mike Coleman wrote for Docker. Basically, he says, a virtual machine is like a house: a self-contained edifice with its own electricity, water and security system to deter unwanted visitors. Containers are like apartment buildings. They still have all the necessary utilities and systems, but the resources are shared by all the units. Further, the apartments come in different sizes and configurations, so you rent only the space you need rather than the entire complex. Each apartment is a container, and the shared building infrastructure is the container host.
We Must Protect this House. I Mean, Container
Now that we have our housing metaphor, think of containers as condos for application delivery. The core concept of containerization is virtualizing the operating system so that multiple applications can run concurrently on a single host kernel. In this case, “applications” can also mean services like HTTP servers, DNS, DHCP and more.
In the container world, these are called microservices: compact, lightweight, yet elastic services that are created for and run inside containers.
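To make that concrete, here’s a quick sketch of running one such microservice, using the official nginx image as a stand-in HTTP server (the port mapping and container name are arbitrary choices for illustration):

```
# Launch an HTTP server as a self-contained, disposable microservice.
docker run -d --name web -p 8080:80 nginx

# Verify it's serving...
curl http://localhost:8080

# ...and tear it down just as quickly.
docker rm -f web
```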
Containers have actually been around for a while. LXC (Linux Containers) was introduced nearly a decade ago, but its use was, for obvious reasons, limited mainly to Linux developers. Containers didn’t catch on right away because, although powerful, the technology can also be difficult to use. In the earliest days, implementing a container stack required a level of systems engineering expertise available only to large companies like Facebook and Google (both of which run container-based systems).
This is where Docker comes in — the open source container platform that makes container technology useful to people without PhDs in kernel technology. And that arguably jump-started the current microservices revolution.
Docker containers virtualize the operating system an application runs on, splitting it up and compartmentalizing it. This allows code to be structured as discrete, individual chunks that can run anywhere Linux (and now Windows — for the record, Windows Containers are Docker containers) is running. Containers are the ultimate in portability.
Because a Docker app runs inside a container, and that container can run on any system with Docker installed, developers only have to build an app once — for Docker. There is no need to guess in advance all the various hardware platforms, devices and operating systems where it will run and configure contingency code for each. Everywhere Docker runs, your app runs — and, increasingly, Docker does seem to be everywhere.
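Here’s a minimal sketch of that build-once workflow. The app (a single server.js file), the base image and the image name are all hypothetical, purely for illustration:

```
# A minimal Dockerfile: a base image plus the (hypothetical) app code.
cat > Dockerfile <<'EOF'
FROM node:6-alpine
COPY server.js /app/
CMD ["node", "/app/server.js"]
EOF

# Build the image once...
docker build -t myapp:1.0 .

# ...and run it on any machine where Docker is installed.
docker run -d -p 8080:8080 myapp:1.0
```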
Why You Care
As a developer, you take part in one or more of the stops along the software delivery pipeline: designing the app, writing the actual code, testing, then (hooray!) launch. Development with Docker doesn’t much affect app design, and it simplifies coding (see above).
At testing time, you still use the same testing tools, though with Docker containers it’s easier to maintain a consistent testing environment. When doing development with Docker, you test your app inside a container, and you ship it inside a container. Thus the testing environment is identical to the production environment — making it beautifully likely that your end users won’t discover problems the QA team missed.
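In practice, the flow can be as simple as building one image, running your existing test suite inside it, and shipping that very image. A sketch, with hypothetical image, registry and test-command names:

```
# Build the exact image that will ship to production.
docker build -t myapp:1.0 .

# Run the test suite inside that same image.
docker run --rm myapp:1.0 npm test

# Push the identical, already-tested artifact to a registry.
docker tag myapp:1.0 registry.example.com/myapp:1.0
docker push registry.example.com/myapp:1.0
```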
The real bang for the buck comes at launch time. When packaging an app for production, containers make it easily portable between environments and platforms, where the app runs as a set of modular microservices. Complex applications get split into discrete units — for example, the forward-facing part of the app resides in one container while the database runs in another, as sketched below. This reduces the complexity of managing the app once it’s released into the wild, because a bug in one part does not mean overhauling the entire application. Ditto with updates: only the applicable code, nicely tucked away in its container, needs to be touched, so there are no dependencies to come crashing down unexpectedly in some other, entirely unrelated part of the app.
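A two-container split like that can be wired together with nothing more than the Docker CLI. A sketch, with made-up network, image and container names (and a placeholder database password):

```
# Create a private network so the containers can find each other by name.
docker network create app-net

# The database runs in its own container.
docker run -d --name db --network app-net \
  -e POSTGRES_PASSWORD=changeme postgres:9.6

# The forward-facing part of the app runs in another. Updating the web
# tier later means replacing only this container; the database is untouched.
docker run -d --name web --network app-net -p 80:8080 myapp-web:1.0
```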
Containers are here. Now let’s use them.
Docker is not the only container system out there. It is, however, the most widely used. It is open source, platform agnostic, and has simple tooling. The power of Docker can be seen in Windows Containers, which Microsoft built to work precisely with Docker management tooling.
Up next Friday: Docker 101. Learn the Docker terminology, concepts and tools you’ll need to help that cute blue whale deliver your containerized project that much faster.