Researchers Use Machine Learning to Supercharge Data Retrieval
If you’ve ever shopped online, searched an online database, compressed a file, or signed a digital document, then chances are you’ve used something called hashing. Hashing appears in a wide range of applications, including blockchain technology, cryptography, and image processing.
Now an international team of researchers from MIT, Harvard University and the Technical University of Munich (TUM) has found a new way to use machine learning to accelerate hash functions for retrieving data from large databases. More specifically, they’ve found a way to work around the slow retrieval speeds caused by what are called hash collisions.
To retrieve data from a large database, a hash function mathematically transforms a given key (any string of characters) into a compact representative value, called a hash value or hash code. This code also determines where the data is stored, because it points to the record's location in the table.
But this system has limitations. Conventional hash functions scatter keys essentially at random, so the function will sometimes generate the same hash code for different pieces of data, an event known as a collision. Collisions often mean reduced performance, because every key that landed in the same slot must be checked before the right record can be retrieved.
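To make the collision concrete, here is a toy chained hash table. This is a sketch for illustration only: the character-sum hash and the bucket count are arbitrary choices, not the researchers' setup.

```python
from collections import defaultdict

def toy_hash(key: str, num_buckets: int = 8) -> int:
    """Sum the character codes, then wrap into the bucket range."""
    return sum(ord(c) for c in key) % num_buckets

# Insert three keys into a chained table: each bucket holds a list.
table = defaultdict(list)
for key in ["ab", "ba", "cd"]:
    table[toy_hash(key)].append(key)

# "ab" and "ba" contain the same characters, so they sum to the same
# value and land in the same bucket. A lookup in that bucket must scan
# both entries -- that extra scan is the cost a collision imposes.
```

With more keys and fewer buckets, the chains grow longer and lookups slow down accordingly.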
A number of techniques have been developed to prevent collisions, including a class of hash functions known as perfect hash functions, which guarantee that no two keys in a dataset share a code. However, a perfect hash function has to be customized for each dataset, and constructing it can significantly increase computation time.
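A toy sketch of why perfect hashing is dataset-specific: brute-force a seed until this exact key set maps collision-free. Real perfect-hash constructions are far more sophisticated; the seed search here is purely illustrative, and the search loop itself stands in for the construction cost.

```python
def find_perfect_seed(keys, table_size):
    """Brute-force a seed that maps exactly these keys to distinct
    slots; the search itself is the per-dataset construction cost."""
    for seed in range(10_000):
        slots = {hash((seed, k)) % table_size for k in keys}
        if len(slots) == len(keys):   # collision-free for this key set
            return seed
    raise ValueError("no collision-free seed found in search range")

keys = ["apple", "pear", "plum", "fig"]
seed = find_perfect_seed(keys, table_size=8)
# The mapping is guaranteed collision-free for these four keys only;
# inserting a new key may force a fresh (and costly) search, which is
# why perfect hash functions do not scale well.
```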
To address this issue, the researchers developed what they call a learned model: they ran a machine-learning algorithm over an experimental dataset so that the model could learn its characteristics. They discovered that this AI-assisted approach improved computational efficiency and reduced the likelihood of collisions, compared with conventional hashing.
“At one end, traditional hash functions are fast to compute, but suffer from collisions that can reduce query performance,” wrote the team in their paper, which was presented at the 2023 International Conference on Very Large Databases.
“On the other hand, perfect hash functions avoid collisions, but are difficult to construct, and are not scalable, in the sense that the size of the function representation grows with the size of the input data. As an alternative, learned models can potentially provide a better trade-off between computation and collisions.”
To create their learned model, the team employed a machine-learning algorithm to estimate how the data in their sample dataset was distributed. A data distribution describes all the possible values in a dataset and how frequently each one occurs. Knowing the shape of the distribution makes it possible to estimate the probability that a given value appears at a particular position in the dataset, and machine learning speeds this up because a trained model can quickly predict where a key sits.
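A minimal sketch of the idea, assuming a sorted sample stands in for the learned model of the cumulative distribution function (CDF). The paper's models are trained structures such as recursive model indexes; the names and the sampling scheme below are illustrative assumptions.

```python
import bisect

def build_cdf_model(sample):
    """A stand-in 'model': a sorted sample whose ranks approximate
    the cumulative distribution function (CDF) of the data."""
    return sorted(sample)

def learned_hash(key, model, num_buckets):
    # Estimated CDF(key): the fraction of sampled keys <= key.
    cdf = bisect.bisect_right(model, key) / len(model)
    # Scale into the bucket range; min() guards the cdf == 1.0 edge.
    return min(int(cdf * num_buckets), num_buckets - 1)

keys = list(range(0, 1000, 7))        # toy, near-uniform key set
model = build_cdf_model(keys[::10])   # "train" on a 10 percent sample
buckets = [learned_hash(k, model, 16) for k in keys]
# Because the CDF is monotone, keys are spread across buckets in key
# order, keeping occupancy close to uniform when the estimate fits.
```

When the CDF estimate matches the real distribution, keys land nearly evenly across buckets, which is exactly how a learned model trades a little model computation for fewer collisions.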
The team’s experiments demonstrated that compared to traditional hash functions, learned models could reduce the likelihood of hash collisions from 30 percent to 15 percent. Additionally, learned models reduced the computation time by almost 30 percent, and are easier to train and operate compared to perfect hash functions.
Nevertheless, there are limitations: the team noted that if the data is spread too sparsely, using the learned model can actually lead to more hash collisions. The team also investigated the impact of varying the make-up of the learned model by using combinations of different linear sub-models, such as recursive model indexes and radix spline indexes, to approximate the data distribution. Using more of these smaller sub-models increased accuracy, but also increased the time it took to fetch the data.
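The sub-model idea can be sketched as a two-level structure: a root model routes each key to one of several linear sub-models, each fit to its own slice of the key space. This is a simplified illustration of a recursive-model-index-style layout under assumed choices (a linear root split, least-squares leaves), not the paper's exact construction.

```python
def fit_linear(xs, ys):
    """Ordinary least-squares fit of y ~ a*x + b."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var = sum((x - mx) * (x - mx) for x in xs) or 1.0
    a = cov / var
    return a, my - a * mx

def build_rmi(keys, num_submodels):
    """Root model: a linear split of the key range into partitions.
    Leaf models: one linear fit of key -> rank per partition."""
    keys = sorted(keys)
    lo, span = keys[0], (keys[-1] - keys[0]) / num_submodels or 1.0
    parts = [[] for _ in range(num_submodels)]
    for rank, k in enumerate(keys):
        idx = min(int((k - lo) / span), num_submodels - 1)
        parts[idx].append((k, rank))
    models = []
    for part in parts:
        if part:
            xs, ys = zip(*part)
            models.append(fit_linear(xs, ys))
        else:
            models.append((0.0, 0.0))  # empty slice: constant model
    return lo, span, models

def rmi_predict(key, lo, span, models, n):
    """Route the key to its sub-model, then clamp the rank estimate."""
    idx = min(max(int((key - lo) / span), 0), len(models) - 1)
    a, b = models[idx]
    return min(max(round(a * key + b), 0), n - 1)

keys = list(range(100))               # toy data: rank equals key
lo, span, models = build_rmi(keys, num_submodels=4)
preds = [rmi_predict(k, lo, span, models, len(keys)) for k in keys]
```

Adding more sub-models tightens the approximation on each slice, but each extra level of routing and fitting adds work, which mirrors the accuracy-versus-fetch-time trade-off the team observed.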
“At a certain threshold of sub-models, you get enough information to build the approximation that you need for the hash function. But after that, it won’t lead to more improvement in collision reduction,” explained the study’s co-author and MIT CSAIL postdoctoral researcher Ibrahim Sabek in a press release.
The team envisions that their research on learned models can help other experts improve hash functions for other categories of information. They also hope to examine how learned models could be adapted to dynamic databases, in which data is inserted or deleted, without compromising accuracy.
“We want to encourage the community to use machine learning inside more fundamental data structures and algorithms,” said Sabek. “Any kind of core data structure presents us with an opportunity to use machine learning to capture data properties and get better performance. There is still a lot we can explore.”