
How Software-Defined Storage Impacts Containers

27 Sep 2016 9:08am

In this episode of The New Stack Makers podcast, we explore how container storage has evolved over the years, the parallels between VMs and containers when running functions, and how open source is shaping the container landscape.

Intel Senior Principal Engineer and System Architect David Cohen joined TNS founder Alex Williams during the 2016 Intel Developer Forum to offer his insight on these topics and more.

This discussion is also available on YouTube.

The conversation began with an overview of how software-defined storage integrates with one’s network and physical storage infrastructure. Cohen explained that the traditional hypervisor often stores data in files nested inside other files, noting that response times improve when those layers are flattened into a single layer.

“The container is a popular compute model. The expectation is you can run them anytime, light up a container very quickly and tear it down just as quick. Rather than having physical servers, or later, VMs that run for long periods of time in static deployments, now I have a dynamic deployment model,” said Cohen.

But as more containers are deployed, stateful storage becomes a necessity. Cohen noted that having even a slight degree of state management will enable developers to better inventory the individual components making up their system.
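That state management typically takes the form of volumes that outlive any individual container. As a rough illustration, here is a minimal sketch using the Docker Python SDK (docker-py); the image, volume name, and paths are illustrative assumptions, not anything discussed in the podcast:

    # Minimal sketch: attaching a named volume so container state survives
    # teardown. Assumes the docker-py SDK is installed and a Docker daemon
    # is running; image and volume names are placeholders.
    import docker

    client = docker.from_env()

    # Create (or reuse) a named volume that outlives any single container.
    client.volumes.create(name="app-state")

    # Run a short-lived container that writes into the persistent volume.
    client.containers.run(
        "alpine:3.4",
        command=["sh", "-c", "date >> /data/runs.log"],
        volumes={"app-state": {"bind": "/data", "mode": "rw"}},
        remove=True,  # the container is torn down, but /data persists
    )

Because the volume is a first-class, named object, it can be listed and inventoried independently of the containers that come and go around it, which is the kind of component-level accounting Cohen describes.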

When it comes to the parallels between containers and VMs, Cohen highlighted how containers queue and process functions. Compared to VMs, containers streamline this queue model, making a function’s inputs and outputs far simpler to reason about.

“In a queue model, basically I get an entry posted to a queue, and a thread wakes up and processes it. With containers, it’s kind of the same thing. When I think of Lambda, I can get a data event that basically acts like a queue event. Thinking about inputs to functions and outputs is much simpler than having to deal with a full-blown VM,” said Cohen.
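Cohen’s queue analogy maps closely onto what a function handler looks like in code. The sketch below is a minimal Lambda-style handler in Python; the event shape mirrors a queue message batch, and process_record is a hypothetical stand-in for application logic:

    # Minimal sketch of a Lambda-style handler: each invocation delivers
    # an event (here shaped like a batch of queue messages), the function
    # wakes up, processes it, and returns. process_record is hypothetical.
    import json

    def process_record(body):
        # Stand-in for real application logic.
        print("processing:", body)

    def handler(event, context):
        # A queue-style event carries one or more records to process.
        records = event.get("Records", [])
        for record in records:
            process_record(json.loads(record["body"]))
        return {"processed": len(records)}

There is no server or VM to manage in this model: the handler only needs to name its input (the event) and its output (the return value), which is exactly the simplification Cohen points to.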

Cohen noted that a common thread in today’s container-based application development ecosystem is the migration of pieces of one’s system away from the compute layer. For systems taking advantage of non-volatile memory, this disaggregation brings a performance boost while also adding flexibility to container deployments.

“We’re seeing [a] disaggregation of devices from compute. For a long time now people have been using direct-attached devices on the compute nodes. With the increase in capacity and raw performance of Non-Volatile Memory Express [NVMe], there’s a really compelling reason to share devices over the network so you can get better utilization of the resource. This allows you to take an NVMe device with huge amounts of random access performance, and partition that up so that it can be accessed in parallel by a bunch of different containers or host machines.”
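One concrete place this disaggregation surfaces on Linux is the kernel’s NVMe-over-Fabrics target, which is configured through configfs. The following is a rough sketch under that assumption; the paths follow the documented nvmet configfs layout, while the NQN, backing device, and network address are placeholders:

    # Sketch: exporting one NVMe namespace over the network via the Linux
    # nvmet configfs interface, so remote hosts and their containers can
    # share it. Assumes the nvmet and RDMA transport modules are loaded
    # and configfs is mounted; all names and addresses are placeholders.
    import os

    CFG = "/sys/kernel/config/nvmet"
    NQN = "nqn.2016-09.example:shared-nvme"

    def write(path, value):
        with open(path, "w") as f:
            f.write(value)

    # 1. Create a subsystem and, for this demo only, allow any host.
    subsys = os.path.join(CFG, "subsystems", NQN)
    ns = os.path.join(subsys, "namespaces", "1")
    os.makedirs(ns, exist_ok=True)
    write(os.path.join(subsys, "attr_allow_any_host"), "1")

    # 2. Back namespace 1 with a local NVMe device (or a partition of it).
    write(os.path.join(ns, "device_path"), "/dev/nvme0n1")
    write(os.path.join(ns, "enable"), "1")

    # 3. Expose the subsystem on a fabric port (RDMA in this sketch).
    port = os.path.join(CFG, "ports", "1")
    os.makedirs(port, exist_ok=True)
    write(os.path.join(port, "addr_trtype"), "rdma")
    write(os.path.join(port, "addr_adrfam"), "ipv4")
    write(os.path.join(port, "addr_traddr"), "192.0.2.10")
    write(os.path.join(port, "addr_trsvcid"), "4420")

    # 4. Link the subsystem into the port to make it reachable.
    os.symlink(subsys, os.path.join(port, "subsystems", NQN))

A remote host would then attach the namespace with nvme connect (from nvme-cli) and see it as a local block device, which can in turn be mounted into containers — the partitioned, shared-over-the-network access pattern Cohen describes.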

Intel is a sponsor of The New Stack.

Feature image via Pixabay.
