
Docker Engine 1.12 Comes with Built-in Orchestration Capabilities

Jun 20th, 2016 9:00am
Feature art: Tetris-themed street art, Seattle.

Rather than relying on external software, Docker Inc. wants to make orchestration a built-in feature of its core container engine.

The upcoming release of the Docker Engine, version 1.12, due in July, will come with built-in orchestration capabilities, allowing users to manage complex containerized applications without additional software, using the same command-line structure and syntax that developers are already familiar with from working with Docker containers.

Previously, these orchestration capabilities have been offered through separate tools, such as Kubernetes or Docker’s own Swarm.

“As Dockerized applications get deployed more broadly, we have to make technology that aligns very closely to what system administrators are used to,” said David Messina, Docker vice president of marketing.


With this optional feature enabled, the Docker Engine software can act as a node in a self-organizing cluster of similar engines running across multiple servers; together they work as a single scalable system to run large fleets of interconnected Docker containers.

The orchestration model of application deployment offers a number of benefits, especially for applications built on a microservices architecture. Both the applications and the container runtime environment can be easily scaled up to meet demand. It also provides a consistent way of running applications across different hardware environments, using Docker as the unifying substrate.

Coming Together

Docker Engine in effect “becomes a node running on a virtual machine or a physical machine,” Messina said. “The primitives for orchestration exist in every engine, including the ability to self-organize and discover other nodes automatically.” The operators choose which engines are the managers and which are the workers.
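As a rough sketch of how that bootstrapping looks in the 1.12 command line (the manager address and join token below are placeholders, and the exact flags shifted between the 1.12 release candidates):

# On the first engine, which becomes a manager:
$ docker swarm init

# On each additional engine, join as a worker using the address and token
# printed by "swarm init" (both shown here as placeholders):
$ docker swarm join --token <worker-token> <manager-ip>:2377

# From a manager, list the nodes and their roles:
$ docker node ls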


The cluster of Docker nodes, or “Swarm” in Docker parlance, is self-healing, in that an application that was running on a failed node is restarted elsewhere. Workload information is captured in a strongly consistent distributed data store embedded within each engine.

“Each engine is acutely aware of the state requirements of the applications and services and is aware of each container that is running on every worker engine,” Messina said.

“There’s no external dependency on a third-party key-value store,” he said, referring to how the standalone Docker Swarm relies on the external etcd distributed key-value store. The distributed store embedded in each Docker Engine is based on the Raft consensus algorithm.

“Because they are all sharing active knowledge about the state when a scheduling request comes in, it is stored in memory, so the scheduling can happen in memory as well, so there is no blocking,” Messina said. “There is a constant reconciliation between manager and nodes, to make sure if a node goes down things will be rescheduled to another worker.”


To automate workflows, developers can use a declarative service deployment API, which defines the services, storage, networking and compute resources an application requires. The API offers the ability to declare abstractions above the container layer, including services, images, scale and ports. Developers can use Docker Compose to declare these requirements and turn them into service requests.

Advanced deployments can also be carried out through the APIs, including canary and blue/green rolling updates. Application-specific health checks can be carried out as well. The administrator chooses how new nodes join a swarm: automatically, manually, or through a cryptographically secured handshake.
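A minimal sketch of what a rolling update looks like with the 1.12 CLI, using the “frontend” service described in the next paragraph; the image tag and the batch settings here are illustrative placeholders:

# Roll a new image out to a running service two tasks at a time,
# waiting ten seconds between batches:
$ docker service update \
    --image myrepo/frontend:2.0 \
    --update-parallelism 2 \
    --update-delay 10s \
    frontend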

In this Docker command (reconstructed below), five instances of a container are spun up as a new service, called Frontend, connected over the “My Overlay” network and communicating externally through TCP port 80. The management software ensures that five copies of the container are always running, and that they are always the latest version of the container.

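A sketch of roughly what such a command looks like in 1.12 CLI syntax; the container image name and the exact spelling of the service and network names are placeholders:

# Create a service named "frontend" with five replicas, attached to an
# existing overlay network and publishing TCP port 80 through the swarm:
$ docker service create \
    --name frontend \
    --replicas 5 \
    --network my-overlay \
    --publish 80:80 \
    myrepo/frontend:latest

Scaling later is a matter of declaring a new desired count, for example “docker service scale frontend=10”; the manager nodes then reconcile the running tasks toward that state.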

The software uses multi-host overlay networking to provide a single networking space for the swarm and offers automatic load balancing. Automated service discovery is carried out through a DNS service. Users can build overlay networks that cross multiple clouds using Docker’s plug-in architecture and third-party software.
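Creating such an overlay network is a single command in the 1.12 CLI; the network name below matches the placeholder used in the service sketch above:

# Create a swarm-scoped overlay network; services attached to it can reach
# each other by name, and any ports they publish are exposed on every node
# through the routing mesh:
$ docker network create --driver overlay my-overlay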

All communication among the nodes is encrypted and carried out over TLS, and a PKI service offers automatic certificate rotation. A general-purpose framework, called cryptographic node identity, provides support for identifying and managing sensitive workloads and networks.

“An engine has to be cryptographically identified to be part of a swarm, and they communicate automatically over TLS,” Messina said.
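Two of the related swarm-management commands in the 1.12 CLI sketch how this is surfaced to operators; the 720-hour expiry below is an arbitrary example value, not a Docker recommendation:

# Rotate the worker join token so previously shared tokens stop working:
$ docker swarm join-token --rotate worker

# Adjust how frequently node TLS certificates are automatically rotated:
$ docker swarm update --cert-expiry 720h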


Although many of the capabilities of Docker Swarm have been embedded within the Docker core engine itself, the company will still maintain and update Swarm for the indefinite future, Messina said.

A release candidate of Docker Engine 1.12 has been issued, and the company suggests trying the new capabilities on the newly launched Docker for AWS and Docker for Azure cloud services, both of which offer integration into their backend storage services.

Commercially supported versions of the Docker Engine can be obtained from Docker, HPE or IBM, and can be run on Red Hat Enterprise Linux, Ubuntu or HPE Linux. The company will offer deeper dives into this new technology at its user conference, DockerCon, being held this week in Seattle.
