
Why Cloud Native Storage Requires Tightly-Coupled Containers and Microservices

9 May 2019 1:36pm, by

The Cloud Native Computing Foundation sponsored this post.

Nir Peleg
Nir founded Reduxio and as CTO he has architected its groundbreaking core technology. An accomplished high-tech industry executive and visionary with over 30 years’ experience, he holds over 20 U.S. patents and patents-pending in the areas of computer systems, distributed storage, data deduplication and encryption.

A number of studies have confirmed that cloud-based development and service deployments are driving a significant increase in the adoption of containers.

According to a Cloud Native Computing Foundation (CNCF) customer study, for example, 73% of customers surveyed ran containerized applications in production and the remaining 27% planned to use containers in the future.

Given the survey data, as well as a surge in press coverage (though some may argue Kubernetes has generated its fair share of hype as well), there is no question that containers represent the next wave in infrastructure virtualization. This is because the benefits of containerization are significant: application portability, ease of deployment and configuration, better scalability, infrastructure elasticity, increased productivity, continuous integration and more efficient resource utilization.

In parallel, there has been an evolution in application architecture, from what started a couple of decades ago as service-oriented architecture (SOA) to today's microservices architecture. With a microservices architecture, applications are built as suites of services that communicate with each other using well-defined interfaces. Each microservice is independently deployable and scalable. A microservices-based application is designed with decentralized governance, decentralized data management, infrastructure automation, design for failure and extensibility in mind.

It is also no coincidence that microservices architecture and containers have become tightly linked, since containerization provides distinct, natural boundaries between different microservices. At the same time, using containers does not imply that an application has a microservices architecture; monolithic applications can be containerized, with a container becoming a monolith, or a single logical executable.

A drawback of a monolithic approach to containerization is that the change cycles for all components of the application are now tied together: a change made to a small part of the application requires the entire monolith to be rebuilt and redeployed. Over time, it is often hard to maintain a good modular structure in the monolithic model, and if one part of the application needs to scale, the entire application must scale, which is inefficient.

In other words, the writing is on the wall: many enterprises and developers have by now concluded it is not possible to fully realize the benefits of moving to containers without also adopting a microservices architecture. The mutually reinforcing benefits of adopting an approach to application modernization that is both containerized and microservices-based are too compelling to settle for half-measures, and anything supporting this modernization effort ideally would be similarly comprehensive.

Storage and Data Management

The initial adoption of containers was driven primarily by stateless applications, typically microservices serving as the front end for a stateful back end that was not containerized. Fully moving to a container-based infrastructure requires that both stateless and stateful applications be implemented as containers. For this to happen, the challenges of storage and data management in container environments need to be overcome so that stateful applications can be brought into the containerized world more effectively.

Today, we are in a transitional state in how stateful applications in containerized production deployments are stored and managed. Many rely on external, siloed storage devices that are not an integral part of the cloud/container environment but are mature and provide rich data management capabilities such as disaster recovery, data reduction, erasure coding (as opposed to mirroring) and live tiering.

To truly realize the benefits of containerization, the storage infrastructure must live side-by-side with the compute side of containerized applications in the same environment. This would greatly simplify management, reduce cost and improve resource utilization. Getting to this point requires a new approach.

Rethinking Storage Architecture

The holy grail of storage architecture has always been the separation of the data and control planes to allow independent scaling of data (the data plane) and metadata (the control plane) flows. In addition, separating the planes allows data management operations such as tiering, data mobility, or snapshots to be driven by the control plane without interfering with data path activity.

Storage implementations to date have not effectively separated control and data planes, with cumbersome standards, bolt-on incremental features and nonoptimal data flows often standing in the way. However, the emergence of containers and microservices gives the storage world a chance to leave those behind and start fresh.

Enter Microservices

Microservices architecture principles could quite naturally be applied to container-native storage system design. Control and data path separation, for instance, corresponds well with the “smart endpoints, dumb pipes” microservices design principle.

What might a microservices-based design look like and what benefits could it possibly deliver?

By separating the control and data planes, a microservices-based container-native storage solution would have distinct entities of control (metadata) and data services that scale independently and jointly provide services (IO and data management) in a highly scalable, distributed fashion — much like microservices-based applications. It could be argued that implementing a storage system using microservices not only enables, but actually forces the separation of the control and data planes.
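To make the separation concrete, here is a minimal sketch in Python. All class and method names are invented for illustration and do not describe any particular product: a data-plane microservice stores opaque, content-addressed chunks and knows nothing about volumes, a control-plane microservice maps volumes to chunk locations, and a snapshot becomes a metadata-only operation that never touches the data path.

```python
import hashlib


class DataService:
    """Data-plane microservice: stores opaque chunks, knows nothing about volumes."""

    def __init__(self):
        self.chunks = {}  # chunk id -> bytes

    def put(self, data: bytes) -> str:
        cid = hashlib.sha256(data).hexdigest()
        self.chunks[cid] = data  # content-addressed: identical data stores once
        return cid

    def get(self, cid: str) -> bytes:
        return self.chunks[cid]


class MetadataService:
    """Control-plane microservice: maps volumes to chunk ids on data services."""

    def __init__(self, data_services):
        self.data_services = data_services  # data plane scales independently
        self.volumes = {}  # volume name -> ordered list of (service index, chunk id)

    def write(self, volume: str, data: bytes) -> None:
        idx = len(data) % len(self.data_services)  # trivial placement policy
        cid = self.data_services[idx].put(data)
        self.volumes.setdefault(volume, []).append((idx, cid))

    def read(self, volume: str) -> bytes:
        return b"".join(self.data_services[i].get(c) for i, c in self.volumes[volume])

    def snapshot(self, volume: str, name: str) -> None:
        # Metadata-only operation: nothing moves through the data plane.
        self.volumes[name] = list(self.volumes[volume])
```

Note that the snapshot copies only a list of references; writes made to the source volume afterward do not affect the snapshot, and the data services never participate.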

  • Capacity and Performance Scaling: A microservices-based container-native storage system that efficiently separates data and control paths would provide scaling along multiple axes (capacity, bandwidth, IOPS), allowing capacity and performance to scale up or down as required. The impact of scaling down resources should not be underestimated, since this level of infrastructure resource flexibility allows resources to be efficiently shared across applications;
  • Resiliency: Since microservices can independently fail and restart, resiliency is improved in this type of design as well;
  • Data Management: Many data management operations can be carried out solely by the metadata microservice without affecting the data plane. In other cases, where data needs to be manipulated, operations on the metadata and data can be decoupled to minimize performance issues and increase efficiency;
  • Storage media support:  Since microservices are independent and use a well-defined protocol to communicate, such a system could implement multiple flavors of the data plane microservice, driving multiple media types;
  • Tiering: The metadata microservice could provide further functionality by controlling tiering operations between these media types, resulting in better cost structure and optimal data layout;
  • Data Mobility: Once data and metadata stores are separately maintained by discrete microservices, with multiple metadata entries potentially referring to a common data chunk, objects such as files or volumes can be virtualized as lightweight, metadata-only objects that refer to a common data pool potentially spanning different media types or even geographies. This brings about interesting data mobility capabilities for hybrid cloud and multicloud deployments;
  • Storage Protocol and Application Support: With an application front end as a microservice, it too could be implemented in multiple flavors, supporting different storage access protocols or even application-specific access, delivering greater flexibility.
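As a rough illustration of the tiering, deduplication and data mobility points above (again with invented names, not any product's actual API), a volume can be reduced to an ordered list of chunk hashes against a shared, content-addressed pool: cloning a volume copies only metadata, duplicate chunks are stored once, and moving a chunk between media tiers is a metadata update rather than a data-path operation.

```python
import hashlib


class DataPool:
    """Shared content-addressed pool; chunks may live on different media tiers."""

    def __init__(self):
        self.store = {}  # chunk hash -> (tier, bytes)

    def put(self, data: bytes, tier: str = "ssd") -> str:
        h = hashlib.sha256(data).hexdigest()
        self.store.setdefault(h, (tier, data))  # identical chunks stored once
        return h

    def retier(self, h: str, tier: str) -> None:
        _, data = self.store[h]
        self.store[h] = (tier, data)  # tiering driven by metadata, data untouched

    def get(self, h: str) -> bytes:
        return self.store[h][1]


class Volume:
    """Lightweight, metadata-only object: an ordered list of chunk hashes."""

    def __init__(self, pool, hashes=None):
        self.pool, self.hashes = pool, list(hashes or [])

    def append(self, data: bytes) -> None:
        self.hashes.append(self.pool.put(data))

    def clone(self) -> "Volume":
        return Volume(self.pool, self.hashes)  # copies metadata only, no data

    def read(self) -> bytes:
        return b"".join(self.pool.get(h) for h in self.hashes)
```

The same structure sketches data mobility: shipping a volume to another site means shipping its hash list, with chunks fetched from the common pool on demand.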

While a microservices-based container-native storage system could provide the flexibility, scalability and portability required by the applications and containers it supports, there are additional issues to consider. Maintaining strong consistency, for example, is extremely difficult for a distributed system that needs to deliver performance, while eventual consistency is not an option for many applications. This is a difficult challenge, but it is not impossible to solve, and it should not hinder the pursuit of a microservices-based architecture.


Microservices and containers have evolved to deliver significant value to businesses today, and as more and more applications are implemented as cloud native, the infrastructure supporting those applications needs to evolve as well. The flexibility and extensibility of a microservices-based approach to container-native storage can help create solutions that meet the needs of modern applications while also eliminating the restrictions and limitations of infrastructure. To truly realize the potential for application modernization presented by containers, we need to remove the limitations of storage solutions and embrace a microservices approach.

Feature image via Pixabay.
