Software-Defined Storage or Hyperconverged Infrastructure?

21 May 2018 1:50pm, by Kiran Sreenivasamurthy
Kiran Sreenivasamurthy is vice president of product management at Maxta. Kiran combines technical, marketing and leadership skills in his role of managing products from the concept and release to revenue stage. His previous roles include serving as lead software product manager for HP, product manager for application and data availability specialist Mendocino Software, and a technical marketing engineer for NetApp.

It’s easy to get software-defined storage (SDS) confused with hyperconverged infrastructure (HCI). Both solutions “software-define” the infrastructure and abstract storage from the underlying hardware. They both run on commodity servers and pair well with virtualization. Reporters, analysts, vendors and even seasoned IT professionals talk about them in the same breath.

But there are important distinctions between HCI and SDS, and they come down to how you want to manage your storage. SDS requires deep storage expertise; HCI does not. While there are some differences in capital costs, the bigger gap is in operational costs. Moreover, each solves different problems and is best suited to different use cases.

To start, let’s take a deeper look at what makes HCI and SDS different.

What Is Hyperconvergence, Anyway?

Vendors have certainly added to the confusion on this topic. Many talk about hyperconvergence as compute and storage together on one node, but it’s much more than that. Hyperconvergence simplifies the data center by collapsing compute, storage and storage networking into a single dynamic tier on standard servers — and integrates compute and storage services for virtual machines (VMs) so new infrastructure can be provisioned on-demand. This eliminates the single biggest challenge facing a virtualized infrastructure — dealing with scale and managing storage resources as demand grows.

With hyperconvergence, you don’t need to be a storage specialist. Hyperconvergence pools compute, memory and storage together in a single platform and makes storage available natively within the hypervisor, so all you need to do is add capacity as you go.

So What Is Software-Defined Storage?

SDS can take several different forms. The most prominent creates a shared pool of storage from industry-standard servers, with a software layer that presents the pool as a native storage object (typically a LUN or volume). In doing so, SDS abstracts the management of physical storage: it frees you from legacy storage arrays, or masks them beneath a software layer. Either way, that storage is managed separately from the compute and hypervisor layer.

SDS can be the right approach in many cases. The problem is that vendors with an SDS solution often describe it as hyperconverged. It’s still separate storage, which is a critical consideration when you’re selecting an infrastructure platform.

How Do You Know if It’s SDS or HCI?

Just like traditional storage, SDS resources need to be managed independently of the virtual machine. An SDS solution presents LUNs or volumes that need to be mapped to a datastore in the virtual environment. Each time you provision a VM, you need to make sure storage is available or carve out another datastore. In other words, you need to be a storage expert and manage storage. As we’ll explore, this makes sense for certain workloads, but it’s important not to confuse the two approaches.
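The extra management step shows up clearly if you sketch the provisioning workflow. This is an illustrative Python model, not a real storage API; every name in it is hypothetical:

```python
# Illustrative sketch of the SDS provisioning workflow described above.
# None of these names correspond to a real storage API; they model the
# manual step a storage admin performs before a VM can be created.

def provision_vm_on_sds(datastores, vm_name, needed_gb):
    """Find a datastore with room; otherwise a LUN must be carved first."""
    for ds in datastores:
        if ds["free_gb"] >= needed_gb:
            ds["free_gb"] -= needed_gb
            return f"{vm_name} placed on {ds['name']}"
    # No room: a storage admin must carve a new LUN and map a datastore.
    return f"{vm_name} blocked: carve a new LUN and map a datastore first"

datastores = [{"name": "ds-01", "free_gb": 100}]
print(provision_vm_on_sds(datastores, "web-01", 40))  # fits on ds-01
print(provision_vm_on_sds(datastores, "db-01", 80))   # blocked: needs a new LUN
```

The point of the sketch is the failure branch: when the pool presented to the hypervisor runs out, provisioning stops until someone who understands the storage layer intervenes.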

Hyperconvergence is much more than storage. It’s all about only having to manage the virtual machine. It’s still a software-defined infrastructure but is designed around the virtual machine construct as opposed to a storage construct. In a truly hyperconverged architecture, everything is managed at the virtual machine level. There are no LUNs or volumes to manage separately.
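The one-step nature of VM-level provisioning can be sketched the same way (again illustrative Python, not a real hypervisor API):

```python
# Illustrative sketch of VM-level provisioning in HCI: pooled capacity is
# presented directly to the hypervisor, so creating a VM is a single step.
# All names here are invented for illustration.

def provision_vm_on_hci(cluster, vm_name, needed_gb):
    if cluster["pool_free_gb"] >= needed_gb:
        cluster["pool_free_gb"] -= needed_gb
        return f"{vm_name} provisioned from the shared pool"
    # Growing the pool is an add-capacity operation, not a storage task.
    return f"{vm_name} pending: add a node or disks to grow the pool"

cluster = {"pool_free_gb": 500}
print(provision_vm_on_hci(cluster, "app-01", 120))
```

There is no LUN-to-datastore mapping step to model: the only remedial action is adding capacity, which an IT generalist can do.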

What Infrastructure You Need to Manage Matters

Data center infrastructure exists along a “spectrum of management” — that is, at one end of the spectrum is traditional three-tier infrastructure, with storage, networking and servers. It takes a team of specialists in each area to manage it. At the other end, the infrastructure is a DevOps model with self-provisioning infrastructure — no one needs to worry about day-to-day operations once it’s up and running.

Infrastructure Model             Who Manages It
Traditional three-tier           Teams of storage, network and server specialists
Software-defined storage         Storage specialists alongside virtualization admins
Hyperconverged infrastructure    A single IT generalist
DevOps self-provisioning         Largely self-service once up and running

SDS vs. HCI from an Architecture Perspective

As we discussed earlier, the two approaches have different architectures.

Software-Defined Storage

SDS abstracts storage from the underlying hardware — usually industry-standard servers, though some solutions also pool storage from existing arrays. It then presents the storage objects in a native file or block format that can support bare metal or virtualized workloads. Storage management — performance, capacity and availability services — is done through the SDS interface, while compute resources are managed through the hypervisor management interface.

Hyperconverged Infrastructure

HCI abstracts storage, networking and compute, and presents resources to the hypervisor directly in its native format. In VMware vSphere, for example, storage is simply presented as a datastore object. HCI requires very little resource management within a cluster. To the extent that anything does need to be managed, it’s done primarily through a single console. That allows an IT generalist to manage everything.

How SDS and HCI Impact Capital and Operational Costs

The two architectures have different implications for cost, particularly operational ones. It’s important to understand the differences before you choose one over the other.

SDS Reduces Capital Costs But Can Drive Higher Operational Costs

Most SDS solutions run on industry-standard hardware, and don’t require high-end server hardware. That’s good news since you won’t have to spend on proprietary storage array hardware, and SDS lets you add capacity in smaller increments as you go — often by simply adding more disks to a node.

The biggest capital expense is typically the disks themselves. Some of this is determined by application performance requirements: any workload that requires latency below one millisecond will need at least some expensive NVMe or PCIe flash, likely alongside enterprise-grade SSDs. However, how efficiently SDS uses the storage also has a big impact on capital costs (this is true for HCI as well). Solutions with more efficient compression and de-duplication, and the ability to control settings like the replication factor, will get more mileage out of the hardware.
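The efficiency math is simple enough to sketch. Replication divides raw capacity, while compression and de-duplication multiply effective capacity; the specific ratios below are illustrative assumptions, not vendor figures:

```python
# Rough usable-capacity arithmetic for a software-defined storage pool.
# Replication divides raw capacity; compression/de-duplication multiply
# effective capacity. The ratios below are assumptions for illustration.

def effective_capacity_tb(raw_tb, replication_factor, reduction_ratio):
    # Every block is written replication_factor times, then the combined
    # compression + de-dup ratio stretches what fits on the media.
    return raw_tb / replication_factor * reduction_ratio

# 100 TB raw, 2-way replication, 2.5:1 combined compression + de-dup:
print(effective_capacity_tb(100, 2, 2.5))  # 125.0 TB effective
# Same hardware, 3-way replication, no data reduction: roughly a third.
print(effective_capacity_tb(100, 3, 1.0))
```

The spread between those two configurations is why two solutions on identical hardware can have very different cost per usable terabyte.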

The real cost of SDS is the overhead of managing storage. You still need one or more storage specialists, and it still takes significant time to provision, manage and optimize the storage. In that respect, it’s not much different than storage arrays.

Hyperconvergence Reduces CapEx and OpEx Costs — Unless You Choose Poorly

Like SDS, HCI can use industry standard hardware and cut capital costs by as much as 80 percent compared to traditional three-tier infrastructure. But there’s a danger with HCI that you can end up paying hidden taxes.

Appliance-based solutions sometimes require that you add storage capacity or CPU by adding entire appliances, which limits your flexibility both to buy the right amount of capacity initially and to add capacity later. That means you often end up buying more compute or storage than you need. Appliances are also much more expensive when it comes time for a hardware refresh, since you’re re-buying software licenses that are often bound to the hardware.

Hyperconverged Software Provides the Most Flexibility

Software HCI solutions let you choose the hardware and add capacity or upgrade at your pace. That provides the same benefits as SDS from a CAPEX standpoint. Unlike hardware appliances, you can also easily add compute and storage independently — so you’re never overbuying hardware.

From an operational standpoint, HCI also eliminates storage management. Everything is “under one roof” and an IT generalist can manage all of it. Provisioning more storage requires minimal effort, cutting OPEX costs by as much as 60 percent compared to traditional storage or even SDS.
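Those cost claims can be put into a back-of-the-envelope comparison. Only the 80 percent CapEx and 60 percent OpEx reductions come from the text above; the baseline dollar figures are invented purely for illustration:

```python
# Back-of-the-envelope TCO sketch using the reduction figures cited above
# (up to 80% CapEx and 60% OpEx vs. traditional three-tier infrastructure).
# The baseline dollar amounts are invented for illustration only.

CAPEX_SAVINGS = 0.80  # "as much as 80 percent"
OPEX_SAVINGS = 0.60   # "as much as 60 percent"

def three_year_tco(capex, annual_opex, years=3):
    return capex + annual_opex * years

traditional = three_year_tco(500_000, 200_000)  # hypothetical baseline
hci = three_year_tco(round(500_000 * (1 - CAPEX_SAVINGS)),
                     round(200_000 * (1 - OPEX_SAVINGS)))
print(f"3-year TCO -- traditional: ${traditional:,}  HCI: ${hci:,}")
```

Even at best-case savings the gap is dominated by the recurring OpEx term, which is the article's core argument for eliminating storage management.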

Choose a Solution Based on the Workloads and How It Will Be Managed

HCI is almost always the best choice for virtualized workloads. It’s designed specifically to work with virtualization, and it presents resources natively to the hypervisor. Whether you’re refreshing infrastructure for an existing virtual environment or starting a new project, it makes more sense to invest in HCI than either traditional storage or SDS.

This also holds true for containerized workloads — most of which run within a VM. Because HCI already abstracts the storage layer completely, it is a natural fit for any container workload running on a virtual infrastructure.

SDS is useful for workloads that may not be virtualized. Examples include some IoT or data collection applications that run in edge environments. Databases or other systems that may require a direct connection to storage can also be a good fit for SDS.

Choose the Right Tool for the Job

HCI is the multi-tool of the data center, or (for the amateur carpenters out there) the table saw. It’s an excellent platform for most workloads in an era when just about everything is virtualized. SDS is the Torx wrench set you break out when you have a project that calls for it.

Remember what you’re trying to solve. If you’re looking at HCI and SDS, you’re almost certainly trying to minimize storage complexity and cost, replace traditional storage arrays, and increase flexibility. Both approaches accomplish this, but HCI will deliver better results in a purely virtualized environment.

In the end, you may run them side-by-side in your organization. Some workloads will benefit from SDS (it just needs a Torx wrench), but most of the rest will work fine with HCI — the multitool.

Feature image via Pixabay.

