Serverless / Storage

Red Hat Brings Simplicity, Serverless to Ceph Storage 4

17 Mar 2020 9:10am

Last week, Red Hat introduced the latest version of Red Hat Ceph Storage, which it says addresses some of the complexity issues the open source storage system has grappled with in the past, while further improving scalability and adding support for serverless operations.

“What we’re doing with Red Hat Ceph Storage 4 is we’re taking Ceph from the world where you needed to have a Ph.D. to be able to operate and even install and set up Ceph effectively. This is a consistent thing that we hear from our customers. We’re improving the overall manageability and the installation experience,” said Pete Brey, marketing manager of hybrid cloud storage at Red Hat.

Red Hat Ceph Storage is built on Ceph, an open source, petabyte-scale distributed storage system that provides object, block and file storage. Unfortunately, while Ceph may handle extreme scale, it is also known for its complexity. In this latest release, however, Red Hat Ceph Storage 4 can be installed via Ansible playbooks in a matter of minutes, compared to the hours or days it took before, Brey said.

Brey said the newfound simplicity, however, is just the beginning, and that Red Hat Ceph Storage 4 also takes aim at its core value: massive scalability.

“One of our key value propositions here is around simplicity, but that’s not actually the most important value proposition. The most important is scalability,” said Brey. “It’s not just capacity scaling, it’s also the ability to scale performance as you scale capacity. These two things are sometimes at odds with one another; as you scale a system, particularly with traditional architectures, performance typically tails off.”

Brey offered the example of a company with five storage administrators handling 20PB of data, and posed that scaling up to 100PB might, on some systems, mean scaling staff linearly as well. Red Hat Ceph Storage 4, he said, instead allows a delegation of tasks that keeps organizations from having to add staff as data volumes grow: the same team can handle 20PB or 100PB.

“We’ve also worked on simplifying to the degree that you can take a more senior administrator and let them delegate. Some of the more mundane tasks, like replacing drives or maintaining volumes — some of the everyday, day-to-day types of operations, you can now delegate to a more junior administrator. Again, we’re trying to take Ceph from a realm of being very specialized, targeted towards a specialized environment to a much broader audience,” said Brey.

Among the other new features in Red Hat Ceph Storage 4 are a unified dashboard to surface and resolve problems more quickly; quality-of-service monitoring for applications in multitenant hosted cloud environments; and integrated bucket notifications to support Kubernetes-native serverless architectures, enabling automated data pipelines via Knative, a containerized serverless platform, and AMQ Streams, Red Hat’s distribution of the Kafka distributed messaging platform.

On this last point, Brey explained that when any change is made to a data bucket, whether an object is added, deleted or otherwise modified, the system can automatically trigger downstream processes based on what happened, using the Amazon S3 API. A doctor’s office, for example, could automatically analyze a new X-ray when it is dropped into a bucket, and could similarly automate the removal of personally identifiable information so that the same data can later be used in research.
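Because the feature rides on the Amazon S3-compatible API that Ceph's RADOS Gateway exposes, subscribing a bucket to a notification topic can be sketched with a standard S3 client call. The sketch below is illustrative, not from the article: the bucket name, topic ARN and configuration ID are assumptions, and in a real cluster the topic would first be created against the gateway and backed by an endpoint such as AMQ Streams.

```python
def subscribe_bucket(s3_client, bucket, topic_arn):
    """Ask the gateway to publish object-created and object-removed
    events for `bucket` to `topic_arn` (e.g. a topic that forwards
    to an AMQ Streams/Kafka endpoint). `s3_client` is assumed to be
    an S3-compatible client, such as a boto3 client pointed at the
    RADOS Gateway endpoint."""
    s3_client.put_bucket_notification_configuration(
        Bucket=bucket,
        NotificationConfiguration={
            "TopicConfigurations": [
                {
                    # Hypothetical names; not from the article.
                    "Id": "pipeline-events",
                    "TopicArn": topic_arn,
                    "Events": [
                        "s3:ObjectCreated:*",
                        "s3:ObjectRemoved:*",
                    ],
                }
            ]
        },
    )
```

From then on, any object added to or removed from the bucket generates an event on the topic with no polling required.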

“You can do pretty much anything you want in the serverless processes,” said Brey. “We think that this is actually a pretty big deal. It’s like an infrastructure that we could layer on top of a lot of different types of workloads.”
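A downstream serverless process of the kind Brey describes would consume S3-style event records and route each one to the appropriate step. This is a minimal sketch under assumptions: the event shape follows the Amazon S3 notification format that Ceph's bucket notifications emulate, and the action names and object keys are invented for illustration.

```python
import json

def handle_event(event_json):
    """Route each S3-style event record to a downstream action:
    newly created objects (e.g. a fresh X-ray) get queued for
    analysis, removals get queued for audit logging."""
    actions = []
    for record in json.loads(event_json).get("Records", []):
        name = record["eventName"]          # e.g. "s3:ObjectCreated:Put"
        key = record["s3"]["object"]["key"]
        if "ObjectCreated" in name:
            actions.append(("analyze", key))       # hypothetical pipeline step
        elif "ObjectRemoved" in name:
            actions.append(("audit-delete", key))  # hypothetical pipeline step
    return actions
```

In a Knative deployment, a function like this would run on demand as events arrive, scaling to zero when the bucket is quiet.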

Red Hat is a sponsor of The New Stack.

Feature image by ArtTower from Pixabay.
