The World is Programmable With Containers

Oct 26th, 2015 4:14pm

This is a chapter from our ebook: The Docker and Container Ecosystem. It’s one of several chapters from the ebook that we will post here over the next several weeks.

Docker and container technologies symbolize a new economic reality that puts the developer at the center of the transformation from big machines to application-driven systems. It’s the shift from heavyweight to lightweight technologies, and from human to automated systems, that’s apparent in the Docker and container ecosystem in a number of ways:

  • The Internet is being programmed, and it needs plumbing for things to work.
  • Application development is faster than ever.
  • Open source communities are proliferating and becoming more commercial.
  • Programming languages are making it easier to build software.
  • Demand is growing for automated infrastructure and scaled-out, distributed resources.
  • Performance will increasingly matter more than compatibility.

Container technologies have a long history; Docker is simply a new iteration that makes it easier and more convenient to design, deploy and manage applications. Containers are single processes, parts of larger systems, and they are now evolving into different forms: new types of containers, platforms, open source projects, orchestration systems, service discovery tools and schedulers, along with a shift in market influence.

The shift to Docker and containers is forcing companies to rethink how platforms and orchestration services can manage new, lighter workloads. This indicates a change from virtualized infrastructures to container-centric, distributed resources that abstract away the complexities that have historically come with developing apps on cloud services and hosted environments.

Docker operates on top of the infrastructure and syncs with the developer’s laptop. Docker technologists often describe Docker as a way to build, ship and run applications: an open platform for distributed apps. It works wherever Linux does, which is essentially anywhere, and it also works on Windows. Docker does not require its own operating system; it takes advantage of technology already built into the kernel.

Docker is the work of Solomon Hykes, who founded dotCloud, a platform as a service (PaaS) company. Hykes built Docker as an API that isolates processes. It uses kernel isolation technologies, such as cgroups and namespaces, that allow containers to run independently on the Linux kernel without the overhead of starting up a virtual machine, which makes it easy to move code. Virtualization technology from companies like VMware sits below the operating system and virtualizes the server, not the application. Wherever the virtual machine goes, the operating system has to go with it: it has to be taken down, then booted back up and configured to run with the database and the rest of the stack it depends on.
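
A minimal sketch can make those kernel primitives concrete. The Go program below is illustrative, not Docker's actual code: it re-executes itself inside new UTS, PID and mount namespaces, which is roughly what a container runtime does before layering on cgroups, image filesystems and networking. It assumes a Linux host, root privileges and a shell at /bin/sh.

```go
// Illustrative only: a process isolated with Linux namespaces, the same
// kernel facility Docker builds on. Run on Linux as root, e.g.:
//   go run main.go run /bin/sh
package main

import (
	"fmt"
	"os"
	"os/exec"
	"syscall"
)

func main() {
	if len(os.Args) < 3 {
		fmt.Fprintln(os.Stderr, "usage: main (run|child) <cmd> [args...]")
		os.Exit(1)
	}
	switch os.Args[1] {
	case "run":
		parent()
	case "child":
		child()
	}
}

// parent re-executes this binary inside new UTS, PID and mount namespaces.
func parent() {
	cmd := exec.Command("/proc/self/exe", append([]string{"child"}, os.Args[2:]...)...)
	cmd.Stdin, cmd.Stdout, cmd.Stderr = os.Stdin, os.Stdout, os.Stderr
	cmd.SysProcAttr = &syscall.SysProcAttr{
		Cloneflags: syscall.CLONE_NEWUTS | syscall.CLONE_NEWPID | syscall.CLONE_NEWNS,
	}
	must(cmd.Run())
}

// child now runs as PID 1 in its own namespaces; the hostname change
// below is invisible to the host.
func child() {
	must(syscall.Sethostname([]byte("container")))
	cmd := exec.Command(os.Args[2], os.Args[3:]...)
	cmd.Stdin, cmd.Stdout, cmd.Stderr = os.Stdin, os.Stdout, os.Stderr
	must(cmd.Run())
}

func must(err error) {
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
```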

Virtualization is not independent of container technology. VMware, for example, has developed a platform that uses virtual machines to insulate containers. Photon OS, as it’s now called, serves as the agent that gives VMware’s vSphere management system visibility into the operations inside containers, which means containers that include Photon OS will behave somewhat differently from containers that don’t. VMware has also introduced an alternative to vSphere, dubbed the Photon Platform, which is intended for “cloud-native” containers only: for data centers that intend to deliver software as a service (SaaS) and where vSphere is neither already established nor intended to be.

The premise of an application-centric infrastructure speaks to a shift that is less about the machines and more about the sophisticated software and services that make the world run.

It’s this sophisticated infrastructure that makes it possible for startups to build services faster and cheaper. That’s what makes the new stack significant in so many ways. It allows companies to be far more agile than others using heavyweight technologies that rely on proprietary software and high IT overhead.

The market is now witnessing a change that affects the companies that have historically built technologies that were designed for desktops and data centers.

The New Efficiencies of Immutable Infrastructure

Market Reality: There are billions of people in the world and almost everyone has had some contact with the Internet, even if they may not realize it. There are millions of developers who are building the new foundations for how we live and work. In the meantime, their operations counterparts are doing the plumbing to make the Internet more programmable.

The Result: The arrival of software and systems that speak to efficiency, convenience and performance over compatibility.

Patrick Chanezon is a member of the technical staff at Docker. Presenting at DockerCon in June 2015, he argued that millions of programmers mean new innovations, and that these innovations will change almost everything we know about software today.

“Container technologies have become a software layer to program the Internet” is, in essence, the argument Docker makes. Its technology is a software layer. It’s not a container technology play; it’s a play to be the software platform that programs the Internet for the millions of programmers building services for a world that has an infinite number of programmable nodes. According to this view, anything can be a node. Almost anything can become a digital object that can be programmed.

Docker sees container technologies as a programmable layer that is on top of the physical Internet.

Is it far-fetched to think that containers will be the layer that makes the world programmable? It’s more realistic to think of containers as part of a continuum, as the development of the current market makes evident. Serverless architectures are gaining favor as a way to abstract away the complexities of distributed systems. Unikernels are gaining favor for being far more lightweight than container technologies.

Other companies in the container ecosystem are declaring their own ways to define this evolving continuum. Amazon Web Services is in a strong position with a new registry platform that integrates with its EC2 Container Service. The user specifies which Amazon EC2 Container Registry (ECR) repository to use, and the service retrieves the appropriate images. It integrates with AWS Identity and Access Management (IAM) to simplify authorization and to provide fine-grained control.
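
As a rough illustration of what pulling from such a registry looks like in code, the hedged Go sketch below uses Docker's Go SDK. The account ID, repository name and ECR_AUTH environment variable are hypothetical, the exact SDK types vary by version, and in practice the auth token comes from ECR's GetAuthorizationToken API, scoped by IAM.

```go
// A hedged sketch of pulling an image from an Amazon ECR repository
// through the local Docker daemon, using Docker's Go SDK.
package main

import (
	"context"
	"io"
	"log"
	"os"

	"github.com/docker/docker/api/types"
	"github.com/docker/docker/client"
)

func main() {
	ctx := context.Background()
	cli, err := client.NewClientWithOpts(client.FromEnv)
	if err != nil {
		log.Fatal(err)
	}

	// Hypothetical ECR repository; IAM decides whether this pull is allowed.
	ref := "123456789012.dkr.ecr.us-east-1.amazonaws.com/myapp:latest"
	rc, err := cli.ImagePull(ctx, ref, types.ImagePullOptions{
		RegistryAuth: os.Getenv("ECR_AUTH"), // base64-encoded ECR credentials
	})
	if err != nil {
		log.Fatal(err)
	}
	defer rc.Close()
	io.Copy(os.Stdout, rc) // stream pull progress to the terminal
}
```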

Despite tremendous demand from people using container technologies, infrastructure has not been transformed. Security policies, load balancing, storage management, service discovery, service management, resource management and native container support are largely missing or still inadequate for production workloads.

Virtual machine bloat, large attack surfaces, legacy executables and base-OS fragmentation are common problems, as pointed out by Darren Rush in a look at a post-container world.

The need is for immutable infrastructure: create something and then leave it unchanged. Don’t update it; create something new. Once an image is working, only that working image is deployed. The old version of the image can be kept around if, for example, the environment needs to be rolled back. An entire infrastructure can be timestamped, making it far easier to scale out horizontally, not just through faster deployment, but by actually adding more machines to make processing faster.
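
As a small, hedged sketch of that discipline: the Go program below stamps each release with a timestamped tag and never mutates an old image, so a rollback is simply redeploying the previous tag. The image name and tag scheme are illustrative, and it assumes a local Docker daemon plus Docker's Go SDK.

```go
// A hedged sketch of an immutable-deployment step: every release gets
// its own timestamped tag, and old tags are never touched.
package main

import (
	"context"
	"fmt"
	"log"
	"time"

	"github.com/docker/docker/client"
)

func main() {
	ctx := context.Background()
	cli, err := client.NewClientWithOpts(client.FromEnv)
	if err != nil {
		log.Fatal(err)
	}

	// Stamp the freshly built image; never retag or modify an old one.
	release := fmt.Sprintf("myapp:%s", time.Now().UTC().Format("20060102-150405"))
	if err := cli.ImageTag(ctx, "myapp:latest", release); err != nil {
		log.Fatal(err)
	}
	// Rollback is redeploying the previous timestamped tag, not patching in place.
	fmt.Println("deploy", release)
}
```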

This new generation of immutable infrastructure reflects how difficult mutable environments have become to manage. System administrators managing servers need logins and accounts. They have to manage software with mutable updates that can succeed or fail, technologies that exist in various states of repair or disrepair. Setting up immutable servers that are configured to work once and deployed as is removes many of these issues. It removes the burden of manual updates. Let the machines take control.

How will this change happen? Adrian Cockcroft of Battery Ventures argues that DevOps is the outcome of this sort of transformation, and that essentially means a reorganization for most companies. But with a microservices approach, an immutable infrastructure can allow for steep cost reductions and a high rate of change. Developers can build and deploy services in seconds: Docker packages them and the microservices environment runs them in what amounts to fast tooling that supports continuous delivery of many tiny changes.

These new microservices environments are not easy to manage. Think of the speed involved, the scale needed across continents, regions and zones — then you get a picture of how complex it can be. The flow looks more like a ball of tangled yarn than a traditional flow chart. Failure patterns need to be understood across zones.

The Container Combo

Docker and containers offer portability, speed, configuration and a hub, much like GitHub, according to Cockcroft, who wrote on the topic for The New Stack.

The portability comes through Docker’s packaging, which can describe the packaging of any Linux application or service, Cockcroft wrote. A package that is created and tested on a developer’s laptop using any language or framework can run unmodified on any public cloud, any private cloud or a bare metal server. This is a similar benefit to Java’s “write once, run anywhere” idea, but it is more robust and generalizes to “build anything once, run anywhere.”

Then there is speed. A Docker container can be launched in a second, as opposed to a virtual machine which may take tens of seconds or even minutes. Configuration is not really a matter of concern as each update becomes a new version, or in other words, a new container.

It’s this speed that is most transformative. Speed means a lower barrier for taking risks in trying new ways to speed up app development and management. However, we have barely begun to understand what the outer dimensions of this new capability mean for us all.

“You are going to see a new order of magnitude in terms of swarming of compute running for shorter time periods,” said John Willis in a story from The New Stack earlier this year. Willis and his colleagues later sold their company Socketplane to Docker.

“Now it is a matter of nanocompute. It could go from 1,000 to one billion instances starting and stopping in a week.”

The startup time for a container is around a second. Public cloud virtual machines (VMs) take from tens of seconds to several minutes, because they boot a full operating system every time, and booting a VM on a laptop can take minutes.

Docker also simplifies deployment. Docker’s tooling manages services with higher-level abstractions: rather than hard-coding specific IPs, containers are connected through links, which makes deployment more generic and loosely coupled. A deployment specifies a loose set of services and their connections, a lighter and more flexible abstraction.
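
A hedged sketch of that wiring, using Docker's Go SDK: the web container addresses the database by the alias "db" rather than a hard-coded IP. Container names and images here are illustrative, and the exact ContainerCreate signature varies slightly across SDK versions.

```go
// A hedged sketch of link-style service wiring: inside the "web"
// container, the hostname "db" resolves to the linked database container.
package main

import (
	"context"
	"log"

	"github.com/docker/docker/api/types/container"
	"github.com/docker/docker/client"
)

func main() {
	ctx := context.Background()
	cli, err := client.NewClientWithOpts(client.FromEnv)
	if err != nil {
		log.Fatal(err)
	}

	// Roughly equivalent to: docker run --name web --link mydb:db myapp
	// Assumes a container named "mydb" is already running.
	_, err = cli.ContainerCreate(ctx,
		&container.Config{Image: "myapp"},
		&container.HostConfig{Links: []string{"mydb:db"}}, // alias, not an IP
		nil, nil, "web")
	if err != nil {
		log.Fatal(err)
	}
}
```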

Docker images are shared in a public registry at Docker Hub. This is organized similarly to GitHub, and already contains tens of thousands of images. Because containers are very portable, this provides a very useful cross-platform “app store” for applications and component microservices that can be assembled into applications. Other attempts to build “app stores” are tied to a specific platform (e.g., the AWS Marketplace or Ubuntu’s Juju Charms) or tool (e.g., the Chef Supermarket), and it seems likely that Docker Hub will end up as a far bigger source of off-the-shelf software components and monetization opportunities.

Summary

In all, an application-centric approach has deep roots in the Linux ecosystem. There is a rich history of tooling that has allowed for a market of compatibility. Linux runs everywhere and everything runs on it. But these systems were not built for efficiency. There is a lot of code and a lot of complexity in the system, including permission checks in the operating system that date back to a time when massive monolithic systems were built on single machines.

Today, performance is becoming a key driver of value for containers, but they still carry an associated complexity. And that’s why there is such a diverse ecosystem: it is needed for users to build modern architectures that can take containers from the laptop into distributed environments, environments that can manage any number of microservices that are fast, efficient and running at the highest possible performance.

Docker, IBM and VMware are sponsors of The New Stack.
