Red Hat has rolled out Container-Native Storage 3.6 as part of its efforts to offer a comprehensive container stack, following up on the release of the Red Hat OpenShift Container Platform 3.6 in August.
“The key piece we’re trying to solve with container-native storage is for storage to become invisible eventually. We want developers to have enough control over storage where they’re not waiting for storage admins to carve out storage for their applications. They’re able to request and provision storage dynamically and automatically,” said Irshad Raihan, Red Hat senior manager of product marketing.
Container-Native Storage 3.6 is especially suited for hybrid cloud environments because it can be deployed both on-premises and in public clouds, and the complete container stack provides a single control plane, cost savings and a more streamlined experience, according to the company. It also enhances application portability.
Among the new features:
- Support for file, block, and object interfaces. The addition of block storage (via iSCSI) provides support for distributed databases and other low-latency workloads like Elasticsearch. Object storage is a technology preview, aimed at customers seeking an AWS-like experience.
- Support for the core container platform components: registry, logging, and metrics. Storage administrators won’t need multiple storage systems; the single integrated platform provides simplified management, procurement, and support.
- Increased persistent volume density, allowing more applications and microservices to be deployed on a single storage cluster.
As microservices grow smaller and more numerous, developers don’t want to deal with performance bottlenecks and degradation, Raihan said. The release supports about 1,000 persistent volumes per cluster, giving developers more room to scale seamlessly without hitting those bottlenecks.
Red Hat also is offering customers a test drive of container-native storage on OpenShift Container Platform. Using a multi-node cluster running in the cloud, customers can explore lab exercises designed to expose them to different administrative and operational tasks.
Dynamic provisioning of storage was one of the first things Red Hat offered in container-native storage. OpenShift admins can define service tiers by workload, latency requirements and other factors, creating storage classes that translate easily for developers, Raihan said.
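As a rough sketch of what that division of labor looks like in practice (the names, endpoint and parameter values here are illustrative, not drawn from Red Hat’s announcement), an admin might define a GlusterFS-backed storage class as a service tier, and a developer would then request storage against it with an ordinary persistent volume claim:

```yaml
# Admin side: a service tier defined as a Kubernetes/OpenShift StorageClass.
# The in-tree glusterfs provisioner talks to a heketi REST endpoint
# (resturl below is a placeholder) to carve out volumes on demand.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: cns-fast
provisioner: kubernetes.io/glusterfs
parameters:
  resturl: "http://heketi.example.com:8080"
---
# Developer side: a claim against that tier. The volume is provisioned
# dynamically when the claim is created; no storage-admin ticket required.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-data
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: cns-fast
  resources:
    requests:
      storage: 5Gi
```

The storage class is the “easily translatable” contract Raihan describes: the admin encodes the tier’s backend details once, and developers only ever reference it by name.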
Storage of the Future
Despite claims otherwise, persistent storage will remain a roadblock for containers for the foreseeable future, James Bottomley, an IBM container evangelist, told those attending the Linux Foundation Vault storage conference earlier this year.
The problem, he said, is the way Linux user namespaces work, and the difficulty in reconciling the file system user ID (fsuid), used by external storage systems, with the user IDs (uids) created within containers.
Object storage will be the future for cloud-native applications, Andrew Boag, managing director of New Zealand open source systems provider Catalyst IT Limited, projected during a presentation on clustered file systems at last year’s OpenStack Summit in Barcelona.
One approach when the amount of data cannot be effectively managed in current setups is to spread the data across multiple servers all within a single namespace, by way of clustered or distributed file systems.
“You would set up a single namespace; it could be multiple volumes. You can set up a multi-cluster environment as well. You would operate within a single namespace within the OpenShift cluster itself. You could have OpenShift pods that are storage and apps — you could have pods serving up storage from the same pods as applications.
“That’s the vision of where we’re headed, as storage-as-a-service — it appears as just another application sitting above the Linux namespace,” he said.
However, customers have been asking for object storage, he said. No date has been set for object storage to go GA.
Red Hat is a sponsor of The New Stack.
Feature image via Pixabay.