
How to Manage Metadata in a Highly Scalable System

Unless metadata scaling issues are adequately addressed, the systems that hold the metadata will eventually start experiencing problems that may affect business operations and performance.
May 24th, 2022 10:00am by Adi Gelvan

Adi Gelvan
Adi Gelvan is co-founder and CEO of Speedb, a data management startup that provides a drop-in replacement for the RocksDB embedded storage engine. A former IT infrastructure manager with over two decades in management, commercialization and executive sales roles, Adi specializes in leading global software technology companies like Infinidat and SQream to outstanding growth. He holds a dual degree in mathematics and computer science.

Metadata used to have only a minor impact on data center architectures. It is data about data, tucked away somewhere to be retrieved and analyzed, with little effect on operations. But as big data, AI, IoT and 5G applications scale, they amass so much metadata that the traditional relationship between data and metadata has been upended.

Ten years ago, the typical ratio between data and metadata was 1,000:1. For example, a data unit — either a file, block or object — that is 32K in size would carry around 32 bytes of metadata, and existing data engines could handle these volumes quite effectively. Today, when objects are small, the ratio is often closer to 1:10, meaning the metadata can be ten times larger than the data it describes. Many organizations now find that their metadata exceeds the volume of data being stored, and the situation will only get worse as the amount of unstructured data continues to explode.

Such a surge in metadata raises issues of where to store it, how to manage it effectively and, most importantly, how to scale the underlying architecture to support fast-growing metadata volume along with rapidly scaling systems. Unless metadata scaling issues are adequately addressed, the systems that hold the metadata will eventually start experiencing problems that may affect business operations and performance.

Four Answers to Scaling Metadata

The traditional approach to scalability and performance, adding more compute resources and/or deploying tools to monitor and optimize the different layers of the IT stack, cannot be effectively applied to metadata.

Organizations typically manage their metadata with a key-value store (KVS) such as RocksDB, which serves as the storage engine, also called a data engine: the part of the software stack that sorts and indexes data.
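
To make the data engine's role concrete, here is a minimal C++ sketch of an application embedding RocksDB to store one piece of object metadata as a key-value pair. The database path and the key/value names are illustrative assumptions, not taken from any particular product.

    #include <cassert>
    #include <string>
    #include <rocksdb/db.h>

    int main() {
      rocksdb::Options options;
      options.create_if_missing = true;  // create the database on first run

      // Open an embedded key-value store; the path is illustrative.
      rocksdb::DB* db = nullptr;
      rocksdb::Status status = rocksdb::DB::Open(options, "/tmp/metadata_db", &db);
      assert(status.ok());

      // Store one metadata attribute for a hypothetical object.
      status = db->Put(rocksdb::WriteOptions(), "object:42:size", "32768");
      assert(status.ok());

      // Read it back.
      std::string size;
      status = db->Get(rocksdb::ReadOptions(), "object:42:size", &size);
      assert(status.ok() && size == "32768");

      delete db;
      return 0;
    }

Every object the application stores adds entries like these to the engine, which is how metadata volume comes to rival, and then exceed, the data itself.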

However, existing data engines have inherent shortcomings, such as limited capacity, high CPU utilization and significant memory consumption, that can't be solved by simply adding more compute power. At this point, teams usually turn to a series of operational measures, few of which offer effective long-term solutions.

  1. Sharding — This process splits a dataset into logical pieces so that multiple, smaller datasets can be run simultaneously. It's one way to deal with the metadata generated by highly scalable systems, at least in the short term. However, as more data flows into the system, the initial sharding plan often breaks down, and developers must reshard again and again until resharding becomes an activity unto itself (a hash-based sketch follows this list).
  2. Database tuning — Even with NoSQL databases that are flexible and efficient, developers often end up crafting unusual configurations for applications that hit performance issues. When workloads or underlying systems change, those instances run into new and larger performance problems, setting up a seemingly endless loop of retuning as applications grow in size and complexity — a constant, low-level drain on developer time.
  3. Data engine tuning — The basic operations of storage management are commonly executed by the data engine (aka storage engine). Installed as a software layer between the application and the storage layers, a data engine is an embedded key-value store (KVS) that sorts and indexes data. Increasingly, the KVS is also implemented as a software layer within the application to perform on-the-fly operations on live data in transit. This type of deployment is often aimed at managing metadata-intensive workloads and preventing the metadata access bottlenecks that lead to performance issues.
    Data engines are complex constructs, however, and organizations often find there is a skills gap when it comes to tuning and configuring the data engine under the hood of an application to meet specific performance and scalability requirements. Even skilled developers may struggle to get it right (an illustrative tuning sketch follows this list).
  4. Adding resources — The time-honored answer to any performance issue is throwing additional storage resources at the problem. Often this proves to be a temporary fix with costs that can’t be sustained long term. 
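
As a rough illustration of the sharding approach in item 1, the sketch below routes metadata keys to shards by hashing. The shard count, key format and helper name are hypothetical; the point is that changing the shard count later moves most keys, which is the resharding treadmill described above.

    #include <cstddef>
    #include <functional>
    #include <string>
    #include <vector>

    // Hypothetical helper: pick a shard for a metadata key by hashing it.
    std::size_t ShardFor(const std::string& key, std::size_t shard_count) {
      return std::hash<std::string>{}(key) % shard_count;
    }

    int main() {
      const std::size_t kShards = 8;  // assumed initial sharding plan
      std::vector<std::vector<std::string>> shards(kShards);

      // Distribute metadata keys across the shards.
      const std::vector<std::string> keys = {
          "object:1:size", "object:2:size", "object:3:size"};
      for (const auto& key : keys) {
        shards[ShardFor(key, kShards)].push_back(key);
      }

      // Growing to, say, 16 shards changes ShardFor's result for most keys,
      // so nearly all metadata must be redistributed: a reshard.
      return 0;
    }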
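
And as an illustration of data engine tuning (item 3), this sketch adjusts a few RocksDB options that are commonly revisited for metadata-heavy workloads: memtable size, background job count and block cache size. The specific values are placeholders, not recommendations.

    #include <cassert>
    #include <rocksdb/cache.h>
    #include <rocksdb/db.h>
    #include <rocksdb/table.h>

    int main() {
      rocksdb::Options options;
      options.create_if_missing = true;

      // Knobs often revisited as metadata workloads grow; values are placeholders.
      options.write_buffer_size = 64 << 20;  // 64 MB memtable before flush
      options.max_write_buffer_number = 4;   // more in-memory write buffers
      options.max_background_jobs = 4;       // flush/compaction parallelism

      // Larger in-memory cache for frequently read metadata blocks.
      rocksdb::BlockBasedTableOptions table_options;
      table_options.block_cache = rocksdb::NewLRUCache(256 << 20);  // 256 MB
      options.table_factory.reset(
          rocksdb::NewBlockBasedTableFactory(table_options));

      rocksdb::DB* db = nullptr;
      rocksdb::Status status = rocksdb::DB::Open(options, "/tmp/metadata_db", &db);
      assert(status.ok());
      delete db;
      return 0;
    }

The difficulty is that these knobs interact with one another and with the workload, which is exactly the tuning burden and skills gap described above.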

New Options 

The realization that current data architectures can no longer support the needs of modern businesses is driving demand for new data engines designed from scratch to keep up with metadata growth. But as developers look under the hood of the data engine, they face the challenge of enabling greater scale without the usual trade-offs in storage performance, agility and cost-effectiveness. This calls for a new architecture to underpin a new generation of data engines that can handle the tsunami of metadata while still giving applications fast access to it.

Next-generation data engines could be a key enabler of emerging use cases characterized by data-intensive workloads that require unprecedented levels of scale and performance. For example, implementing an appropriate data infrastructure to store and manage IoT data is critical for the success of smart city initiatives. This infrastructure must be scalable enough to handle the ever-increasing influx of metadata coming from traffic management, security, smart lighting, waste management and many other systems without sacrificing performance. This is particularly important for applications that are highly sensitive to response time and latency, e.g., traffic optimization and smart parking.

Metadata growth will continue to escalate as a data center concern, spanning a growing number and variety of data-intensive use cases. Recent moves to open the data engine to innovation provide options for teams focused on enabling applications to scale and grow.
