Hyper Mixes Containers, Hypervisors, and Something Called ‘Hypernetes’

Nov 1st, 2016 9:12am

A new container technology called Hyper (from the company formerly known as HyperHQ, and not to be confused with Microsoft’s Hyper-V) could conceivably alter the course of containerization. Like dotCloud, which eventually became Docker, Hyper is a containerized workload deployment and hosting service. It’s a PaaS that calls itself a “CaaS” (containers-as-a-service). Its infrastructure is entirely open source and published on GitHub.

Like Docker, Hyper presently supports containers in the Open Container Initiative (OCI) standard format, and according to participants in the CRI-O project — whose goal is to develop a stand-alone, runtime-only container engine for Kubernetes — Hyper’s engineers have become key contributors. By involving itself with OCI, and also with the recent CRI-O project, Hyper has slowly, iteratively garnered the attention of Red Hat, Google, and CoreOS, all of whom have acknowledged its contributions.

“The problem of a VM is not the virtualization; the problem is the machine. It always tries to emulate an entire machine,” — Peng Zhao, CEO, Hyper

Just as the technology behind the dotCloud platform truly revolutionized the data center, the technology behind Hyper could conceivably alter the course of containerization. Like VMware’s VIC, Hyper would not replace the Docker or CoreOS container engine. But it would change the container runtime, altering it in a way that’s compatible with OCI. The product of that alteration is a true virtual machine, compatible with KVM or Xen.

The Rest of the Equation

The Hyper platform introduced itself to the world last year as a way to “make VMs run like containers.”  That phraseology led to some unfortunate misinterpretations of its purpose. Hyper is not a way to extend VMware-style VMs into a container environment, although coexistence is arguably one of its goals.

Peng Zhao, Hyper’s CEO, is perhaps too modest to call himself the father of Hyper — so we will. In an interview with The New Stack, Zhao carefully explained how and why Hyper works the way it does.

“The biggest, most ingenious part of Docker in terms of its idea,” Zhao told us, “it actually pivots us from looking at the app as a server or a machine, to an application-centric perspective.”


“The problem of a VM is not the virtualization; the problem is the machine. It always tries to emulate an entire machine. You get full-blown Linux or Windows; you have every hard drive and device emulated.  But in terms of cloud hosting or services, you don’t need all of these things,” Zhao said.

At the core of a Hyper container is its runtime. Called runV, it’s a variation on the OCI’s runC container runtime that produces a micro-VM manageable by a KVM or Xen hypervisor. Because it’s OCI-compatible, said Zhao, Docker’s own CLI can control runV. If you subscribe to the notion that a container environment lacking a hypervisor is incomplete by design, runV completes it.
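Because runV presents itself to Docker as just another OCI runtime, wiring it in looks like any other runtime registration. The sketch below uses Docker’s documented `runtimes` key in `daemon.json`; the runV binary path is an assumption for illustration, and the config is written to a local directory rather than `/etc/docker` so it can be inspected safely.

```shell
# Sketch: registering runV as an alternative OCI runtime for the Docker daemon.
# The "runtimes" key is Docker's documented registration mechanism; the
# /usr/local/bin/runv path is a hypothetical install location.
mkdir -p ./docker-config
cat > ./docker-config/daemon.json <<'EOF'
{
  "runtimes": {
    "runv": { "path": "/usr/local/bin/runv" }
  }
}
EOF

# After restarting dockerd with this configuration, a workload could be
# started under the hypervisor-backed runtime instead of the default runc:
#   docker run --runtime=runv -d nginx
grep -q '"runv"' ./docker-config/daemon.json && echo "runtime registered"
```

Nothing in the container image changes; the runtime swap happens entirely on the daemon side, which is why the Docker CLI needs no knowledge of the micro-VM underneath.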

“Anybody can use runV to run secure containers in their infrastructures,” he added, noting that the little hypervisor has already earned support from Huawei, China’s premier network equipment maker. He also told us that IBM is using Hyper for its own hypervisor-driven Docker systems (which IBM first demonstrated as a proof-of-concept two years ago). These aren’t just x86 servers, but System z mainframes.


From the container engine’s point of view, a Hyper container is just another container.  Since runV behaves exactly like runC, the engine perceives no difference. What the engine doesn’t know, or doesn’t have to know, is that Hyper instantiated the image by first creating a micro-VM, and then injecting the Docker container image into its memory space. While arguably that image could have come from a container repository, in the context of the platform, the image is just a file.

Once injected into the micro-VM, the container image shares space with a Linux kernel. “There’s no traditional guest OS in that VM,” said Zhao. “There’s no CentOS or Ubuntu or even CoreOS in that VM — there’s only one guest kernel.”

This guest kernel is not what containers typically use. It’s actually smaller than even the miniaturized CoreOS, with even less functionality. He described this component as the “secret sauce” of Hyper (as secret as an open source component is allowed to be).

Although the micro-VM was designed for compatibility with existing open source hypervisors, the platform instantiates them in its own unique way. Zhao described what he calls the KVM VM fork, the purpose of which is to achieve very rapid scalability. Here is where the guest kernel’s primary function becomes clearer.

“Instead of launching each individual Hyper VM from scratch, we have a frozen Hyper VM guest kernel in the host,” he explained. “So when you try to launch a new Hyper VM, we just fork that frozen one and resume it.” Zhao boasts (to the extent that he boasts at all, which isn’t much) that his platform typically unfreezes a Hyper VM in no more than 20 milliseconds.

Fusion Jazz

From Zhao’s point of view, Docker solved the first part of the big problem: condensing virtual workloads to a manageable size. Hyper solves the remainder: packaging those workloads into a VM that the rest of the world can use.

“Right now, people are actually running their containers inside of VMs, on something like EC2 or DigitalOcean,” he remarked. Linux containers require an extra layer of isolation for multitenancy, he argued, that a standard container environment does not provide (VMware also makes this argument).

“But if you can secure the runtime somehow, one way or the other, you can replace VMs with secure containers as the building block for your public infrastructure. That can change a lot of things,” — Peng Zhao, CEO, Hyper

In a VM environment, the hypervisor marshals all transactions between the host operating system and the guest machine. In Hyper, the hypervisor is no different. This way, containers’ dependency upon the Linux kernel hosting them — which is, as some believe, the single most dangerous unexploited vulnerability in container systems today — is eliminated.

The popularity of Docker brought forth a vision of orchestrating workloads as interoperable components, as opposed to managing fake networks that support real applications. But in the midst of solving that problem, Docker created a new one — a dilemma over who or what is in charge of each process, which at one time threatened to fracture the community of containerization developers into shards.

Now, the whole systemd argument — one of the lingering wedges splitting the containerization community that fueled talk of a possible Docker fork a few months back — could conceivably be rendered moot.

Hyper containers managed by existing hypervisors could also reintroduce a missing element in many container environments — one whose absence prevents organizations today from wanting to move them into full production: policy. Applying security policy to containers the same way it is applied to VMs would probably eliminate the need for so-called unprivileged containers, which would stop containers from being run with root privileges by default.

Being managed in a VM environment could mean that containers find themselves less scalable than before. But Hyper may have a solution even for this in the works, which is why the team has been working with the Container Runtime Interface project.

Peng Zhao calls it Hypernetes. It’s an orchestration environment that blends code from Kubernetes with code from OpenStack, fusing Kubernetes’ application-centric perspective with Neutron’s gift for managing software-defined networks.

“Moving forward, we want to make sure the orchestration of the engines is multitenant and secure,” he told The New Stack. “So we’re actually developing a new product called Hypernetes, which is Hyper container plus Kubernetes plus OpenStack components, like their storage and software-defined networking (SDN) components. We’ll use that Hypernetes product to deploy into production our container-native cloud service.”

There have been many times in history when pluralistic open source communities have spent their time staging arguments, while individual open source contributors have quietly resolved them. Let history record, even for those future times when history tends to correct itself, that Hyper may have been the ultimate solution none of us saw coming.

CoreOS, Docker, IBM, Red Hat are sponsors of The New Stack.

Title image of an ancient Chinese dui container (for food), from Hubei Provincial Museum, taken by Zhangmoon618, and licensed through Creative Commons 3.0.

TNS owner Insight Partners is an investor in: The New Stack, Docker.