
How Persistent Storage Offers Cloud Native Developers More Freedom

Thanks to agile practices and a brave new world of cloud native infrastructure, developer teams can deploy code several times per day, compared to once every several months, or at even longer intervals, under traditional practices.
Mar 18th, 2020 8:03am by Michael St-Jean

Red Hat sponsored this post.

Michael St-Jean
Michael is a principal marketing manager at Red Hat focused on data solutions for the hybrid cloud. He brings expertise in enterprise storage with experience in global alliances, corporate and technical marketing, product management and technical training. With extensive experience in the industry, Michael delivers real-world solutions for many of today's emerging data-driven workloads.

A growing trend for many organizations is for DevOps teams to underpin business goals and strategies. In this shift away from a more transactional and operational approach towards a more strategic software development focus, development teams are playing a key role in differentiating service offerings or even disrupting, and ultimately, transforming their industries.

Consequently, application architects are less concerned with large scale workflows encapsulated in monolithic applications. The question today instead typically revolves around what DevOps teams must do to achieve desired levels of agility by using cloud native platforms at scale to deploy software at cadences that were unheard of not that long ago.


The Benefits of Cloud Native Application Development

As opposed to applications developed using traditional, monolithic application development practices, cloud native applications, thanks to their versatility, can be much smaller, more agile and easily integrated with other applications and services. Many developers can also work on applications or services that are part of a broader ecosystem.

The goal is for deployments to be continually rapid and robust. The ultimate test is whether these agile deployments meet the needs of end users at scale, and do so better than competing offerings. Unforeseen issues can occur in the mad rush to deploy rapidly and consistently in stateless container environments; organizations must therefore prioritize efficiently managing and scaling data and networking in a cloud native world.

Managing Persistent Storage in Containers

A prominent stumbling block I often find that organizations face when making the cloud native shift is how to manage data for stateful applications in ephemeral container environments.

When developing and deploying software for cloud native architectures, developers must remain aware of how the code they create and distribute will interact across an organization’s operations. Containers and microservices offer developers incredible versatility for deployment. They can instantaneously scale up and down code deployments thanks to the statelessness of their underlying architecture. However, when it comes to data placement, maintaining data persistence, stability and security can pose challenges, particularly as application architects use code and microservices that potentially exist in multiple locations.

In the burgeoning DevOps environments that first adopted containers, a simple strategy might have been to attach a Network File System (NFS) share for CI/CD pipelines, Git repositories or applications. Still, as we will see below, requirements for data portability, resilience and dynamic provisioning and deprovisioning can make this route cumbersome and substandard. Similar issues can arise with proprietary cloud storage infrastructures that are not shareable and that introduce potential points of failure and data security risks.
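To make the contrast concrete, here is a minimal sketch of that early, static pattern: a Kubernetes PersistentVolume backed by an NFS export, which an administrator must create and size by hand for each consumer. The server name and export path are placeholders.

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: ci-pipeline-pv
spec:
  capacity:
    storage: 50Gi
  accessModes:
    - ReadWriteMany            # NFS allows many pods to mount the share
  persistentVolumeReclaimPolicy: Retain
  nfs:
    server: nfs.example.com    # placeholder NFS server
    path: /exports/ci-pipeline # placeholder export path
```

Every new volume means another manually managed export, which is exactly the overhead that dynamic provisioning removes.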

In short, having a persistent storage layer in place before your cloud native journey begins can save organizations headaches and backtracking down the road. We explain more about how that can and should work in the next section.

How Persistent Storage Should Work

One way to solve the persistent storage conundrum for application development and deployments in stateless and often diverse environments is to adopt a storage layer that integrates with your container platform.

When developers work with a Kubernetes orchestrator that makes it easier for them to create their resources for a project, the persistent storage layer should ideally consist of a dynamic storage platform. Developers should have confidence that the storage layer also adheres to their data security and resilience requirements for application deployments.
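In Kubernetes, that dynamic behavior is usually expressed as a StorageClass backed by a CSI driver. The sketch below assumes a Rook-Ceph block driver; the provisioner name and parameters depend entirely on which storage platform is actually deployed, and a production class for this driver needs additional parameters (such as CSI secret references). This shows only the shape of the object:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: fast-block
provisioner: rook-ceph.rbd.csi.ceph.com  # assumption: Rook-Ceph RBD CSI driver
reclaimPolicy: Delete
allowVolumeExpansion: true               # lets claims grow later
volumeBindingMode: WaitForFirstConsumer  # provision where the pod is scheduled
parameters:
  clusterID: rook-ceph                   # assumption: default Rook namespace
  pool: replicapool                      # assumption: an existing Ceph pool
```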

With a viable software-defined storage platform, developer teams can define and adjust their data requirements for a project on the fly, rather than completing the process manually with an NFS mount, for example. They also don't need to rely on storage administrators to provision storage on their behalf; they can change their storage configurations as needed.
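Concretely, a developer then requests storage with a PersistentVolumeClaim and, if the class permits expansion, grows it later with a one-line patch. The claim name, size and class name here are all illustrative:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-data                 # hypothetical claim name
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: fast-block   # hypothetical class with allowVolumeExpansion: true
  resources:
    requests:
      storage: 20Gi
```

Later, the same team can resize without filing a ticket: `kubectl patch pvc app-data --type merge -p '{"spec":{"resources":{"requests":{"storage":"40Gi"}}}}'`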

Likewise, for applications storing data over a block protocol, such as SQL or NoSQL databases, some organizations may be tempted to adopt a service provider's proprietary solution. However, this option limits storage availability across different multicloud or regional zones and can lock developers into a single provider's solutions.
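In Kubernetes, a database usually consumes that block storage through a StatefulSet, whose volumeClaimTemplates give each replica its own volume from whatever class the platform provides, rather than from one vendor's API. The image, size and class name below are illustrative:

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: postgres
spec:
  serviceName: postgres
  replicas: 1
  selector:
    matchLabels:
      app: postgres
  template:
    metadata:
      labels:
        app: postgres
    spec:
      containers:
        - name: postgres
          image: postgres:12
          env:
            - name: POSTGRES_PASSWORD
              value: changeme            # use a Secret in practice
          volumeMounts:
            - name: data
              mountPath: /var/lib/postgresql/data
  volumeClaimTemplates:
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        storageClassName: fast-block     # hypothetical block storage class
        resources:
          requests:
            storage: 20Gi
```

Because the claim references only a class name, the same manifest can move between clouds or on-premises clusters as long as an equivalent class exists.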

Open source software-defined storage allows for persistent and portable storage across many different kinds of infrastructures, including bare metal, virtual machines (VMs) and public and private cloud environments. Data federation can take place across hybrid and multicloud environments, so developers can place sensitive data where it needs to be, and integrate applications and microservices from various multicloud deployments.

For large-scale analytics applications, such as artificial intelligence (AI) and machine learning (ML) workloads, data scientists must often manage huge volumes of data from multiple locations and devices, including edge devices and IoT sources. Data aggregation and dissemination, from the device edge to remote staging to core systems, must be delivered seamlessly across the data lifecycle. These workloads often require different storage protocols, from object to block to file, for different types of events. The persistent storage layer must be capable of handling these very dynamic and diverse storage requirements.
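As a sketch, a mixed pipeline might claim file and object storage side by side: a shared filesystem volume for training data, plus a bucket through the ObjectBucketClaim API that object storage operators such as Rook or NooBaa provide. All class and name values here are assumptions:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: shared-datasets            # file: mounted read-write by many pods
spec:
  accessModes: ["ReadWriteMany"]   # requires a shared filesystem back end
  storageClassName: cephfs         # hypothetical file storage class
  resources:
    requests:
      storage: 100Gi
---
apiVersion: objectbucket.io/v1alpha1
kind: ObjectBucketClaim
metadata:
  name: sensor-events              # object: archive of raw IoT events
spec:
  generateBucketName: sensor-events
  storageClassName: object-bucket  # hypothetical object storage class
```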

Ultimately, developer teams must be able to rely on a standardized platform to automate storage management across often diverse and demanding environments, including multicloud, bare metal and VMs, through a single API. The storage layer should also offer distinct failover advantages for when developers need to scale back or redeploy on an as-needed basis. It also needs to be agile enough that developers can provision what they need with near-zero delay.

Cloud native persistent storage offers many of these capabilities and provides significant flexibility and portability for DevOps teams. It can lend agility to software deployments in cloud native environments while empowering developers with the freedom to manage their own storage needs.

Feature image via Pixabay.
