How a Data Fabric Gets Snow Tires to a Store When You Need Them
What if there were a glut of snow tires in Las Vegas in August at the same time dealers in Maine were trying to ensure they would be well stocked for the upcoming winter season?
Making sure stores have the right inventory at the right time is the crux of demand planning, though the supply chain issues of the past several years have made it even more difficult. The problem eventually came to a head at American Tire Distributors (ATD), which found it impossible to get an end-to-end view of the flow of goods and to identify bottlenecks and inefficiencies in the supply chain.
“We were losing sales because the store owners were unable to answer the customers’ questions as to when exactly they would have the product in stock,” said Ehrar Jameel, director of data and analytics at ATD. The company didn’t want frustrated customers looking elsewhere.
So he wanted to create what he called a “supply chain control tower” for data, much like the control towers at an airport.
“At any given point of time, [they know] where the plane is, from where, what is its altitude, what’s the speed, when it lands, what’s the manifest — all those details,” he said.
“I wanted to give a single vision, a single pane of glass for the business, to just put in a SKU number and be able to see where that product is in the whole supply chain — not just the supply chain, but in the whole value chain of the company.
“[With a map of the United States] I wanted to be able to say, ‘Order number X is waiting in this dock to be moved from that shipping yard or the dock to the mixing center. … And so, in the whole supply chain, where is that product? And when do you anticipate it will come to the store so the store can actually make the sale?”
That meant bringing together data from multiple sources, both within and outside the company, including transportation management systems, warehousing systems and other logistics systems. They included GT Nexus, IBM IMS, Oracle EBS and more.
So ATD turned to Promethium, which provides a virtual data platform automating data management and governance across a distributed architecture with a combination of data fabric and self-service analytics capabilities.
ATD started by focusing on visibility into the timing aspect of the supply chain.
“People change tires for winter, right? When winter comes in, especially for Canada and all the northern areas. So from a timing point of view, it was very, very important for us to build a product and not wait until we have the whole data in place,” Jameel said.
Despite IT constraints, business teams were able to harness Promethium’s automation to create a business catalog and a common language so that anyone within the company can pull the data they need. That part has been live for eight or nine months, with the company looking to have all its systems integrated by summer.
Eliminate Waiting Weeks or Months for Answers
CEO Kaycee Lai, previously president of data catalog company Waterline Data, founded Promethium in 2018. He said that in the 20 years since he began as a data analyst himself, the problems have only grown worse.
“It takes a lot of time, people and effort just for an organization to actually use their data. Between the data and a person consuming it, there’s so many layers and so many tools. And no one person has access to all the tools,” he said. Meanwhile, the amount of data keeps growing.
“You turn around and there’s a new database, a new data warehouse and … the old paradigm of doing analytics is that you always copy data from your databases to your data warehouse, right? ETL, right? People have been doing that for 30 years.”
He said he became obsessed with why companies wait weeks or months to get a question answered, only to find that by the time they do, the data is out of date.
“So with Promethium, we are really focused on [giving] you the fastest way to discover and find all your data instantly … and then make it easy,” he said.
It aims to enable business users to use data without having to have a deep technical understanding of the data infrastructure or to wait for help from data engineers.
What Is a Data Fabric?
TNS contributor Scott Gnau described data mesh as offering distributed processing and governance at the point of data collection while data fabrics offer “a more integrated paradigm wherein processing is pushed to where the data resides while distributed, mission-critical data stores are purposefully woven and integrated through machine learning and automation.”
An enterprise data fabric, he noted, combines data management technologies including database management, data integration, data transformation, pipelining, API management and more.
A Promethium blog post states:
Just as the ‘web’ refers not to a single software platform or piece of hardware but rather a layer of connectivity, so too the data ‘fabric’ refers to the connecting of many pieces of data-related software and hardware into a unified system. A data fabric integrates data that is connected to it through all standard data delivery methods, including streaming, ETL [extract, transform, load], replication, messaging, virtualization or microservices, and connects repositories that might range from relational and NoSQL databases to data warehouses, data marts and even data lakes like Hadoop.
Lai maintains it would take six or seven other tools to provide what Promethium does. It provides cataloging, search, governance, data prep, virtualization, integration and visualization without the need to move or copy data at all.
Using Natural Language
Users can simply type a question and Promethium will automatically find the best-fit data, prepare the data and build the SQL statement automatically without requiring any coding.
It’s built on top of the open source SQL query engine Presto, which allows users to query data wherever it resides. It normalizes queries into an ANSI-compliant standard syntax, whether the data comes from Oracle, Google BigQuery, Snowflake or elsewhere. It integrates with business intelligence tools such as Tableau and can be used to create data pipelines.
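To make the federated-query idea concrete, here is a minimal sketch of what a Presto-style statement spanning two source systems might look like. The catalog, schema, table and column names below are invented for illustration; they are not ATD's or Promethium's actual schemas.

```python
# Illustrative only: catalog/schema/table names are hypothetical.
# A Presto-style engine lets a single ANSI SQL statement join tables
# that live in different source systems (here, an Oracle catalog and
# a warehouse catalog), without copying the data first.
FEDERATED_QUERY = """
SELECT o.order_id, o.sku, w.dock_location, w.estimated_arrival
FROM oracle_ebs.orders.open_orders AS o
JOIN warehouse.logistics.shipments AS w
  ON o.order_id = w.order_id
WHERE o.sku = {sku!r}
"""

def build_sku_lookup(sku: str) -> str:
    """Render the query template for one SKU.

    A real client would use bound parameters rather than string
    formatting; this just shows the shape of the statement.
    """
    return FEDERATED_QUERY.format(sku=sku)
```

The point of the sketch is the `FROM` clause: each table is addressed as `catalog.schema.table`, so the query engine, not an ETL job, is responsible for reaching into each system where the data resides.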
It uses natural language processing and artificial intelligence plus something it calls a “reasoner” to figure out, based on what you asked, what you’re really trying to do and the best data to answer that question.
That’s something ChatGPT can’t do, he maintains. If you have three files called “Revenue,” which is the right one? Promethium learns relevancy based on context, such as how often a file is used. And it allows users to tweak their queries as they get a new idea, rather than having to resubmit a query and wait months for an answer from data that’s probably too old to be relevant, he said.
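The relevancy idea described above, preferring the asset that usage context suggests is right, can be sketched in a few lines. The scoring heuristic here (usage count weighted by recency) is invented for illustration and is not Promethium's actual algorithm.

```python
# Toy sketch of context-based relevancy ranking: among several data
# assets with the same name (e.g. three files called "Revenue"),
# prefer the one used most often and most recently.
# The scoring formula is a hypothetical stand-in.
def rank_by_relevance(candidates):
    """Sort assets by a usage-weighted recency score, best first.

    Each candidate is a dict with 'name', 'usage_count' and
    'last_used_days_ago' keys.
    """
    def score(asset):
        recency = 1.0 / (1 + asset["last_used_days_ago"])
        return asset["usage_count"] * recency

    return sorted(candidates, key=score, reverse=True)

assets = [
    {"name": "Revenue", "usage_count": 3,   "last_used_days_ago": 200},
    {"name": "Revenue", "usage_count": 120, "last_used_days_ago": 2},
    {"name": "Revenue", "usage_count": 45,  "last_used_days_ago": 30},
]
```

Running `rank_by_relevance(assets)` puts the heavily and recently used "Revenue" asset first, which is the kind of disambiguation a name-only lookup cannot make.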
Its Storyteller Engine generates data visualizations as well as explanations in plain language.
The Menlo Park, California-based company, not to be confused with a quantum chemistry platform with the same name, has raised $34.5 million, most recently a $26 million Series A led by Insight Partners in February 2022. Insight Partners also owns The New Stack.
Its customers include Hostess Brands, South Korean food company CJ CheilJedang Corp., HealthEquity and eHealth.