When it comes to data for machine learning (ML) applications, a database system often just doesn’t cut it. You need something bigger, like a data warehouse or data lake. There’s also an emerging class of specialist AI and big data platforms pitching something in between a development platform and a data warehouse.
One such company is Databricks, which bills itself as a “unified platform for data and AI.” It offers large-scale data processing, analytics, data science and other services.
To find out more about Databricks’ strategy in the age of AI, I spoke with Clemens Mewald, the company’s director of product management, data science and machine learning. Mewald has an especially interesting background when it comes to AI data, having worked for four years on the Google Brain team building ML infrastructure for Google.
I started by asking Mewald how Databricks relates to modern database systems, such as Apache Cassandra and MongoDB.
He replied that Databricks is “database agnostic.” The company specializes in large-scale data processing, he said, but the real key to its approach is the data lake concept.
A data lake is a repository of raw data stored in a variety of formats — anything from unstructured data like emails and PDFs, to structured data from a relational database. The term was coined in 2011, as a modern variation of the late-1980s concept of a data warehouse. A key difference: data lakes were designed to deal with the internet and its masses of unstructured data.
In a blog post from January, Databricks extended the data lake idea by coining a new term: the lakehouse. It was described as “a new paradigm that combines the best elements of data lakes and data warehouses.”
It should be noted that, unlike data warehouses, the data lake concept has not been universally accepted in the industry. Business Intelligence analyst Barry Devlin wrote in response to the Databricks post that “while often claimed to be an architecture, the data lake has never really matured beyond a marketing concept.” He wonders, “can the lakehouse do better?”
While “the lakehouse” might be contentious, Databricks does at least have a product that implements the concept: Delta Lake. It aims to ensure the reliability of data across data lakes at massive scale; the technology was open sourced last April.
“A couple of years ago we built a product called Delta Lake,” Mewald told me, describing it as “both a storage format and a transaction layer.”
“It basically gives you similar capabilities of a data warehouse, on top of a data lake,” he continued, “and that’s why the way to think about Databricks is, we are database agnostic; you can ingest data into Databricks and into a delta lake, from any data source. So, let’s say from Cassandra or MongoDB. And then we provide you with this optimized format, an optimized query engine, and transactional guarantees for querying that data for all kinds of use cases and applications.”
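Mewald’s phrase “a storage format and a transaction layer” is the crux of how a data lake gains warehouse-like guarantees. The sketch below is not Delta Lake itself — it is a toy, stdlib-only stand-in illustrating the same basic idea as Delta Lake’s transaction log: every commit is an immutable, numbered log entry, and readers replay the log to get a consistent snapshot of which data files are live. All class and file names here are hypothetical.

```python
import json
import os
import tempfile

class ToyDeltaTable:
    """A toy transaction log over a directory of data files.

    Loosely modeled on the transaction-log idea Delta Lake uses: each
    commit appends one immutable JSON file; a snapshot is computed by
    replaying the log in order.
    """

    def __init__(self, path):
        self.path = path
        self.log_dir = os.path.join(path, "_log")
        os.makedirs(self.log_dir, exist_ok=True)

    def commit(self, add=(), remove=()):
        # One new, never-rewritten log file per commit is what makes
        # the commit atomic from a reader's point of view.
        version = len(os.listdir(self.log_dir))
        entry = {"add": list(add), "remove": list(remove)}
        with open(os.path.join(self.log_dir, f"{version:08d}.json"), "w") as f:
            json.dump(entry, f)
        return version

    def snapshot(self):
        # Replay the log in commit order; the surviving set of files
        # is a consistent view, regardless of concurrent new commits.
        live = set()
        for name in sorted(os.listdir(self.log_dir)):
            with open(os.path.join(self.log_dir, name)) as f:
                entry = json.load(f)
            live -= set(entry["remove"])
            live |= set(entry["add"])
        return sorted(live)

table = ToyDeltaTable(tempfile.mkdtemp())
table.commit(add=["part-0001.parquet"])
table.commit(add=["part-0002.parquet"])
table.commit(add=["part-0003.parquet"], remove=["part-0001.parquet"])
print(table.snapshot())  # ['part-0002.parquet', 'part-0003.parquet']
```

The real Delta Lake adds far more (Parquet storage, schema enforcement, time travel, an optimized query engine), but the append-only log is the piece that turns a pile of files into something with transactional guarantees.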
Machine learning is another key part of Databricks’ offering. The company claims that it “streamlines ML development, from data preparation to model training and deployment, at scale.” To help with this, Databricks released MLflow, an open source framework, and provides a managed version of it in its platform (Janakiram MSV profiled MLflow last year for The New Stack, and also wrote a tutorial for it).
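For readers who haven’t used experiment tracking, the pattern MLflow popularized — record each training run’s parameters and metrics so runs are comparable and repeatable — can be sketched in a few lines. This toy tracker is not MLflow’s actual API (in real MLflow you would call functions such as `mlflow.start_run()` and `mlflow.log_metric()`); it only illustrates the workflow.

```python
class ToyTracker:
    """A toy experiment tracker in the spirit of MLflow's tracking API."""

    def __init__(self):
        self.runs = []

    def log_run(self, params, metrics):
        # Each run records what went in (params) and what came out
        # (metrics), so a model stops being an unrepeatable one-off.
        run = {"id": len(self.runs), "params": params, "metrics": metrics}
        self.runs.append(run)
        return run["id"]

    def best_run(self, metric):
        # Comparing runs on a shared metric is the core payoff of tracking.
        return max(self.runs, key=lambda r: r["metrics"][metric])

tracker = ToyTracker()
tracker.log_run({"lr": 0.1, "epochs": 5}, {"accuracy": 0.81})
tracker.log_run({"lr": 0.01, "epochs": 20}, {"accuracy": 0.90})
best = tracker.best_run("accuracy")
print(best["params"])  # {'lr': 0.01, 'epochs': 20}
```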
I was curious about Mewald’s background at Google, which is known as a pioneer in applying ML to consumer apps — like Gmail, ad personalization, Google Assistant, and YouTube video recommendations. What did he learn there about how ML is being used in modern applications?
Mewald replied that he got to “see any and all applications of machine learning” while working at Google. However, he thinks other companies have now caught up to Google in terms of applying ML — including, not surprisingly, his current employer.
“What I find really exciting about Databricks is that I actually now see the exact same diversity of use cases with Databricks customers. It’s actually a myth that a company like Google is way, way, way ahead in terms of ML applications.”
The developer experience, though, is only getting more complicated — thanks to distributed computing, Kubernetes, DevOps and other currently popular cloud native technologies. Adding machine learning to a developer’s plate only increases the complexity they have to deal with. So I asked Mewald what his advice is to developers when it comes to integrating ML into their apps.
He first noted that “machine learning really is a paradigm shift in how we think about developing.”
“In software,” he continued, “you write code, you write a unit test, and it behaves the same way every time you run it. In machine learning, you write code and there’s this data dependency; and every time you train your machine learning model, it will behave differently because it’s inherently stochastic and the data changes. [So] it’s not as deterministic.”
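Mewald’s point can be made concrete in a few lines of Python. A plain function returns the same answer on every run; a training procedure with random initialization — sketched here as a trivial random search fitting a slope, purely for illustration — can land on different results unless the random seed (and the data) are pinned down.

```python
import random

def plain_function(x):
    # Ordinary software: same input, same output, every time.
    return x * 2

def train_model(data, seed=None):
    # A stand-in for model training: random search for a slope w that
    # fits y = w * x. Random initialization makes each run different
    # unless the seed is fixed.
    rng = random.Random(seed)
    best_w, best_err = None, float("inf")
    for _ in range(100):
        w = rng.uniform(0.0, 5.0)
        err = sum((w * x - y) ** 2 for x, y in data)
        if err < best_err:
            best_w, best_err = w, err
    return best_w

data = [(1, 3.0), (2, 6.0), (3, 9.0)]  # underlying truth: y = 3x

print(plain_function(21) == plain_function(21))                # True: deterministic
print(train_model(data, seed=0) == train_model(data, seed=0))  # True: seed pinned
print(train_model(data) == train_model(data))                  # usually False: unseeded runs diverge
```

This is why ML pipelines track seeds, data versions, and hyperparameters alongside the code — the unit-test mindset alone isn’t enough.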
The problem, Mewald said, is that a lot of developers are using older software engineering tools — some of them created “decades ago” — for ML. So he advises developers tackling ML today to choose “modern developer tools” such as MLflow.
My final question for Mewald was a speculative one. It still seems very early for machine learning, particularly from an application perspective, so what does he think the key challenges will be over the next few years as ML matures?
“Machine learning is where data engineering was 10 years ago,” he replied. “Like, ten years ago if you asked someone to write a program to crunch through terabytes of data, it was a big deal — there were just a handful of people on the planet who could do that.”
Today though, the same task can be done using a tool like Databricks. Or as Mewald put it, you input “a Spark SQL query and it just magically works.”
But ML is still at that awkward stage, where there is a lot of manual work to it and specialist knowledge is required.
“In most cases, when we build machine learning models today it’s a one-off,” he explained. “It’s this like stitched together thing, and maybe it works and they can just get it over the line and then you’re done — but it’s not maintainable and not repeatable.”
So, much like the transition data engineering went through, ML will have to become much more accessible for more people. To achieve that, the tools need to become easier to use. Maybe to the point, Mewald added, where “anyone who can write a SQL query can do machine learning.”
Perhaps by then, the lakehouse concept will have been proven out too — but time will tell whether the industry adopts it.
Feature image via Pixabay.