Guide for 2019: What to Consider About VMs and Kubernetes

This post has been sponsored by Red Hat.
Feb 18th, 2019 10:52am by Joe Fernandes
Feature image via Pixabay.

Joe Fernandes
Joe Fernandes is the vice president of products, cloud platforms for Red Hat, including OpenShift Container Platform, OpenStack and Virtualization. Prior to joining Red Hat, Joe was the director of product management for application quality management solutions at Oracle and served as the director of product management and marketing for Empirix's Web business unit prior to its acquisition by Oracle. Joe has spent the past 15 years helping customers build, test, and manage enterprise applications. He holds a BS in Electrical and Computer Engineering from Worcester Polytechnic Institute and an MBA from Boston College.

In a rundown of guideposts for the Kubernetes community in 2019, the topic of Kubernetes and the return of virtual machines was discussed. Virtual machines are not replacing containers, but rather, VM usage is evolving across multiple layers of the Kubernetes stack.

In recent years, containers have become synonymous with cloud native application architecture. They have redefined the way we package, distribute, deploy, and manage applications. But containers, as we know them today, are themselves a re-emergence of existing Linux technologies combined in a new and more usable way. While many organizations are migrating VM-based applications to containers, virtualization is still pervasive in both the data center and public cloud. We also see virtualization technology coming together with containers and Kubernetes in new ways, providing innovative solutions to new problem sets. In other words, VMs are becoming part of a cloud native architecture, too — this is container-native virtualization.

The bedrock of Kubernetes remains the orchestration and management of Linux containers, to create a powerful distributed system for deploying applications across a hybrid cloud environment. Kubernetes often runs on top of a VM-based infrastructure, and VM-based workloads, in general, remain a large part of the IT mix. Entering 2019 there are three key trends at the intersection of Kubernetes and virtualization that we expect to see playing out, each of which we will examine further:

  1. Kubernetes orchestrating micro-VMs to provide stricter multitenant isolation for untrusted workloads.
  2. Kubernetes orchestrating and managing traditional VM-based workloads (via KubeVirt) alongside container-based workloads.
  3. Kubernetes clusters increasingly being deployed on bare metal servers, as an alternative to Kubernetes on VM-based environments.

None of these is a new idea by itself, but in 2019 we expect the momentum behind each of these trends to take hold. Each capability can be useful on its own, but together they illustrate how Kubernetes continues to evolve and be applied to an even broader array of applications past, present, and future.

Kubernetes Orchestrating Micro-VMs

One of the core considerations when adopting Kubernetes and containers is security and how to ensure that containers run securely in a multi-tenant environment. Containers run as isolated processes on a shared Linux host, and you often run multiple containerized applications in a Kubernetes cluster composed of multiple hosts. There are multiple layers of container security, from the Linux host level to the Kubernetes cluster level, that protect those applications from being exploited. These include Linux kernel-level capabilities like cgroups, namespaces, seccomp and SELinux, which ensure that containers can’t exploit the underlying Linux host or other containers. At the Kubernetes cluster level, features like role-based access controls (RBAC), namespace tenant isolation and pod security policies enable multiple applications to run securely on the same cluster. Most users today are confident in the security these capabilities provide, and as a result, we’ve seen explosive growth in the number of customers running Kubernetes in mission-critical production environments.
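
As a concrete illustration of the cluster-level controls described above, here is a minimal sketch (not from the original article) that uses the official Python kubernetes client to create a pod with a restrictive security context. The namespace and image names are placeholders, and the seccomp annotation reflects the pre-1.19 way of requesting the runtime's default seccomp profile.

```python
# A minimal sketch (illustrative only): creating a pod that runs as non-root,
# drops all Linux capabilities and requests the runtime's default seccomp
# profile via the annotation in use at the time of writing. The namespace and
# image names are placeholders.
from kubernetes import client, config

config.load_kube_config()  # use config.load_incluster_config() when running in-cluster

pod = client.V1Pod(
    api_version="v1",
    kind="Pod",
    metadata=client.V1ObjectMeta(
        name="restricted-app",
        annotations={"seccomp.security.alpha.kubernetes.io/pod": "runtime/default"},
    ),
    spec=client.V1PodSpec(
        containers=[
            client.V1Container(
                name="app",
                image="registry.example.com/app:1.0",  # placeholder image
                security_context=client.V1SecurityContext(
                    run_as_non_root=True,
                    allow_privilege_escalation=False,
                    capabilities=client.V1Capabilities(drop=["ALL"]),
                ),
            )
        ]
    ),
)

client.CoreV1Api().create_namespaced_pod(namespace="team-a", body=pod)
```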

For some users there may be a desire for even stronger multi-tenant isolation, whether that’s due to running untrusted workloads, having stricter security requirements or other reasons. This is where micro-VM based approaches like Kata Containers, Firecracker or gVisor have started to make their mark. Micro-VMs are not like traditional VMs that you might run on VMware, AWS or other providers. Instead, they remix existing hardware-assisted virtualization technologies, like the Kernel-based Virtual Machine (KVM), within the context of application containers to provide a very lightweight virtual machine. Rather than trying to present a full “machine” as in traditional virtualization, this approach focuses on providing just enough VM to successfully execute an application container or function. As a result, you can’t just take a traditional VM and run it in a micro-VM based container, due to the functional differences and limitations of micro-VMs. Instead, micro-VMs aim to provide hard isolation relative to standard Linux containers, while minimizing the trade-offs of traditional VMs in terms of cold start time and performance.
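
As an illustration that is not part of the original article: in practice, a workload typically opts into one of these micro-VM runtimes by naming a RuntimeClass in its pod spec. The sketch below, using the Python kubernetes client with a plain dict manifest, assumes a cluster administrator has already installed a micro-VM runtime and registered a RuntimeClass handler for it; the runtime class name, namespace and image are placeholders.

```python
# Sketch: running a pod under a micro-VM runtime via RuntimeClass.
# Assumes the cluster already has a micro-VM runtime installed (e.g. Kata
# Containers behind a CRI runtime) and a RuntimeClass named "kata" registered
# for it; the name, namespace and image below are placeholders.
from kubernetes import client, config

config.load_kube_config()

pod_manifest = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "isolated-workload"},
    "spec": {
        # Ask the kubelet to launch this pod inside a lightweight VM instead
        # of as ordinary namespaced processes on the shared host kernel.
        "runtimeClassName": "kata",
        "containers": [
            {"name": "app", "image": "registry.example.com/untrusted-app:1.0"}
        ],
    },
}

client.CoreV1Api().create_namespaced_pod(namespace="sandbox", body=pod_manifest)
```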

While technologies like Firecracker, Kata and gVisor have garnered a lot of attention, there is at this time no clear leader in this space in terms of user adoption, and each approach has its own inherent trade-offs. Although we expect that the vast majority of workloads Kubernetes orchestrates will remain standard application containers, we are keeping an eye on this trend as micro-VMs continue to evolve in 2019.

Kubernetes Orchestrating Standard VMs

The Kubernetes orchestration engine provides a more scalable and flexible model for enterprise production workloads. Initially, this came with an implicit qualifier: production workloads packaged as application containers. But through open source projects like KubeVirt we are seeing that this same powerful Kubernetes orchestration engine can be feasibly applied to manage standard virtual machines that would normally run in a cloud or virtualization platform.

In 2019 we expect this trend to continue and turn into a broader mindset change. What was previously a choice between VM-centric and container-centric infrastructure will be moot. Kubernetes will start to enable hybrid operations for containers and virtual machines… and it’ll be running on bare metal environments.

Container-native virtualization is a concept that enables virtual machines to follow the same workflow as container-based applications in Kubernetes. Previously, virtualization stacks were completely separate silos from Kubernetes and cloud native implementations — separate workflows, separate tools, separate teams, etc. But as digital transformation takes hold, the need to unify these disparate technologies, processes and teams becomes paramount. By using container-native virtualization with KubeVirt, enterprises will be able to more effectively integrate their application operations and retain existing IT skills while still embracing modern infrastructure built on Kubernetes.
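
To give a feel for what container-native virtualization looks like in practice, here is a minimal sketch (not from the article) that submits a KubeVirt VirtualMachine custom resource through the generic custom-objects API of the Python kubernetes client. It assumes the KubeVirt operator and its CRDs are already installed; the kubevirt.io/v1alpha3 API version reflects the schema current around the time of writing, and the namespace, disk image and sizing are placeholders.

```python
# Sketch: defining a virtual machine as a Kubernetes object with KubeVirt.
# Assumes KubeVirt's CRDs are installed; the API version (v1alpha3), namespace,
# container disk image and memory request are illustrative placeholders.
from kubernetes import client, config

config.load_kube_config()

vm = {
    "apiVersion": "kubevirt.io/v1alpha3",
    "kind": "VirtualMachine",
    "metadata": {"name": "demo-vm"},
    "spec": {
        "running": True,  # start the VM as soon as the object is created
        "template": {
            "spec": {
                "domain": {
                    "devices": {
                        "disks": [{"name": "rootdisk", "disk": {"bus": "virtio"}}]
                    },
                    "resources": {"requests": {"memory": "128Mi"}},
                },
                "volumes": [
                    {
                        "name": "rootdisk",
                        # A container image that packages a bootable VM disk.
                        "containerDisk": {"image": "quay.io/example/demo-vm-disk:latest"},
                    }
                ],
            }
        },
    },
}

# VirtualMachine is a namespaced custom resource, so it is created through the
# generic custom-objects API rather than a typed client.
client.CustomObjectsApi().create_namespaced_custom_object(
    group="kubevirt.io",
    version="v1alpha3",
    namespace="vms",
    plural="virtualmachines",
    body=vm,
)
```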

Kubernetes on Bare Metal, without the VMs

Even as virtual machines become a bigger part of the Kubernetes workload mix, we see them becoming less popular as part of Kubernetes’ underlying infrastructure. While most Kubernetes platforms are deployed on VM-based infrastructure today, containers have no dependency on VMs to run. We expect interest in running Kubernetes and containers on bare metal to continue to grow.

Running Kubernetes on bare metal will enable applications to take full advantage of the underlying hardware, which is important as customers bring more machine- and performance-sensitive applications to Kubernetes. Running Kubernetes and containers on bare metal can also help organizations reduce VM sprawl and simplify their operations.

With the desire to avoid lock-in to any one provider or vendor, users are focusing on Kubernetes as a common abstraction layer for applications running across physical, virtual, private cloud and public cloud environments. We need to meet users where they are. This means offering Kubernetes across the open hybrid cloud, including on-premises and on bare metal.

Into 2019 and Beyond

While Kubernetes has now been around for several years, innovation continues to accelerate as this next era takes off. Red Hat first got involved in Kubernetes in 2014 as part of the initial project launch and has been offering enterprise Kubernetes in the form of Red Hat OpenShift Container Platform since our version 3.0 (based on Kubernetes 1.0) launched in June 2015. As we move through 2019, we expect these trends to play a significant role in the community ecosystem and for enterprise customers.
