Multiple Vendors Make Data and Analytics Ubiquitous
In the last few weeks, there has been a glut of news from many of the major players, new and old, in the analytics space. Together, these announcements point to some important trends in the industry as it wraps up the first quarter of 2023. As complicated as the analytics landscape can be, it’s starting to feel like important ideas and standards are asserting themselves and gaining widespread adoption.
Several themes run through the news: the increased criticality and standardization of data lake technology, the continuing importance of AI and machine learning, additional momentum for doing analytics in the cloud, the ongoing relevance of data integration, and the embedding of analytics technologies into mainstream productivity and developer tools. So let’s take a look at announcements from eight different vendors, all from just the last few weeks, and analyze what they mean for the industry when taken together.
Tip of the Iceberg
To illustrate these trends, let’s start in the world of data lakes and lakehouses, where the open source Apache Parquet file format, and its derivatives, like Apache Iceberg and Delta Lake, continue to gain momentum. At its Subsurface event on March 1, data lake/lakehouse player Dremio announced a number of enhancements to its support for the Iceberg table format. These include the ability to copy data into Iceberg tables using a newly supported SQL command, COPY INTO; support for consolidating multiple files into one, using the new OPTIMIZE command in Dremio Sonar (which will also now federate across more data sources); and the addition of a new ROLLBACK command to return a table to a specific prior point in time or snapshot ID. All of these features would seem to bring Iceberg to parity with similar features in the competing Delta Lake format, originally developed by Databricks but now an open source technology governed under the auspices of The Linux Foundation.
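To make the three operations concrete, here is a minimal sketch of what such statements might look like, built as SQL strings from Python. This is illustrative only: the table names and snapshot ID are invented, the helper function is my own, and the exact Dremio syntax may differ from what is shown here.

```python
def rollback_sql(table, snapshot_id=None, timestamp=None):
    """Build an illustrative ROLLBACK statement for returning an Iceberg
    table to a prior state. Hypothetical helper; consult Dremio's SQL
    reference for the actual syntax."""
    if snapshot_id is not None:
        return f"ROLLBACK TABLE {table} TO SNAPSHOT '{snapshot_id}'"
    if timestamp is not None:
        return f"ROLLBACK TABLE {table} TO TIMESTAMP '{timestamp}'"
    raise ValueError("provide either snapshot_id or timestamp")

# The companion commands from the announcement, again purely illustrative:
copy_into = "COPY INTO sales.orders FROM '@source_stage/orders/' FILE_FORMAT 'parquet'"
optimize = "OPTIMIZE TABLE sales.orders"

print(rollback_sql("sales.orders", snapshot_id="4872198537"))
```

The point of the ROLLBACK sketch is the either/or addressing the announcement describes: a table can be rewound to a named snapshot or to a point in time, but a client supplies one or the other, not both.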
As I mentioned, both Delta Lake and Iceberg are essentially derivatives of the Parquet format (though Iceberg can technically bring its capabilities to other file formats as well), which only demonstrates how important Parquet has become in the data lake world. But it looks like it’s becoming important in the graph database world, too. Graph database contender TigerGraph announced, also on March 1, that it is broadening its support for Parquet, including the ability to ingest data in that format. TigerGraph is also adding collaborative editing and viewing capabilities for shared visual graph dashboards, and the company is enhancing its graph data science package, providing better graph embedding via NodePiece and adding support for its own packaged algorithms via pyTigerGraph.
Coincidentally, TigerGraph published a benchmark just this week, apparently under the auspices of the VLDB Endowment, focused on scale in graph analytics, and business intelligence on graph-structured data. In the benchmark, TigerGraph took on a 108-terabyte workload in an AWS EC2 deployment that, according to the company, processed OLAP-style queries on a graph containing 217.9 billion vertices and 1.6 trillion edges. TigerGraph says the benchmark’s 108TB data volumes are “3x the previous world record.” While it’s always wise to consider all benchmarks with healthy skepticism, what’s clear here is that graph technology is taking on increasingly large data volumes and is being used for analytics as well as operational workloads, all in the cloud.
And this intersection with graph data is hardly the only place where AI showed its prowess within the general analytics world this month. For example, Databricks announced a new machine learning model serving capability on March 7. The offering is specifically designed to integrate ML model creation, maintenance and serving within the context of the mainstream analytics performed on the Databricks Lakehouse Platform. Not only does it take care of model deployment and batch scoring/inferencing, but it also sets up API endpoints necessary to make real-time interactive scoring work easily, including for streaming data scenarios. Databricks ML serving also integrates with technologies that have been part of the Databricks platform for some time: the Unity Catalog and Feature Store (performing feature lookups automatically at inference time), as well as MLflow experiment management.
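To make the real-time scoring point concrete: a served model is exposed as a REST endpoint that clients call over HTTPS. The sketch below assembles (but does not send) such a request. The URL shape and the dataframe_records payload key follow the REST convention Databricks documents for model serving, but the workspace URL, endpoint name, token, and feature values are all placeholders; check the current API documentation before relying on any of them.

```python
import json

def build_scoring_request(workspace_url, endpoint_name, records, token):
    """Assemble a real-time scoring request for a model serving endpoint.
    All concrete values passed in are placeholders; the payload shape
    follows Databricks' documented REST convention."""
    return {
        "url": f"{workspace_url}/serving-endpoints/{endpoint_name}/invocations",
        "headers": {
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/json",
        },
        "body": json.dumps({"dataframe_records": records}),
    }

req = build_scoring_request(
    "https://example.cloud.databricks.com",    # placeholder workspace URL
    "churn-model",                             # placeholder endpoint name
    [{"tenure_months": 14, "plan": "basic"}],  # one row of features to score
    "dapi-XXXX",                               # placeholder access token
)
```

A request like this is what makes interactive and streaming scenarios workable: the caller posts a small batch of feature rows and gets predictions back synchronously, with feature lookups (per the Feature Store integration) happening server-side at inference time.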
And speaking of Databricks, it’s one of four companies with which SAP announced partnerships on March 8, in the context of its Datasphere service, a revamped release of what was previously known as SAP Data Warehouse Cloud. Existing DWC customers will automatically see the new Datasphere capabilities; no migration is required.
In addition to Databricks on the data lakehouse side, SAP is partnering with DataRobot for AI goodness, Confluent for more streaming data pizazz, and Collibra for data governance. The goal with these impressive cross-industry partnerships is, in SAP’s own words, “to enrich SAP Datasphere and allow organizations to create a unified data architecture that securely combines SAP and non-SAP data no matter where it is stored.” The partnerships work in both directions, too. With Databricks, for example, customers will be able to bring lakehouse data into Datasphere, and will also be able to bring SAP data (including data from ERP implementations, Concur and Ariba) into the Databricks environment. This is pretty enterprise-y stuff; accordingly, SAP is also partnering with a number of global systems integrators, including Accenture, Deloitte, Capgemini, EY, IBM, and PwC.
Data Integration in the Cloud: Pay as You Go, Merge Ahead
While the word “cloud” may have come out of SAP’s product name, the centrality of the cloud in analytics can’t be overstated. And just as veteran SAP is partnering with Collibra in the realm of data governance and management, another enterprise data management juggernaut, Informatica, is announcing new cloud initiatives of its own. On February 28, the company introduced its Cloud Data Integration (CDI) Free and CDI Paygo offerings. CDI Free builds on the Data Loader product Informatica introduced last year, adding industrial-strength data integration capabilities from Informatica’s classic stack. Usage is free up to 20M rows per month for ELT (extract, load and transform) or 10 processing hours per month for ETL (extract, transform and load), whichever comes first. After that, CDI Paygo (as in “pay as you go”) allows customers to process more data and be billed under a usage-based pricing model.
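The “whichever comes first” metering is simple to express. Here’s a minimal sketch using the limits as stated in the announcement (20 million ELT rows or 10 ETL processing hours per month); the function name and signature are my own, not Informatica’s:

```python
ELT_ROW_LIMIT = 20_000_000  # free ELT rows per month, per the announcement
ETL_HOUR_LIMIT = 10         # free ETL processing hours per month

def within_free_tier(elt_rows_used, etl_hours_used):
    """Return True while monthly usage stays under BOTH limits; crossing
    either one ("whichever comes first") moves the customer onto CDI
    Paygo's usage-based billing. Illustrative sketch only."""
    return elt_rows_used < ELT_ROW_LIMIT and etl_hours_used < ETL_HOUR_LIMIT
```

In other words, the two meters run independently, and the free tier ends as soon as either one is exhausted, regardless of how much headroom remains on the other.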
And Informatica wasn’t the only cloud data integration company making news in the last couple of weeks. On the same day that Informatica shared its news, another player in that space, Talend, announced it was adding AI-powered automation for cloud job management, improved data source connectivity, and additional data observability features for monitoring data quality over time. Recently, sister company Qlik announced it would be acquiring Talend; as both companies are owned by private equity firm Thoma Bravo, it seems likely the deal will go through. Meanwhile, Qlik already has significant data integration technology in its own portfolio, so we’ll have to wait and see how Talend’s newly announced capabilities will fit into the mix.
Cloud Data and Data Marketplace
Next on the news roster is Rockset, a real-time analytics database based on the open source RocksDB project. Rockset can ingest both relational and streaming data, keep it in a proprietary store, and then use an aggressive indexing strategy to take on a combination of data warehouse and data virtualization workloads. On March 1, the company announced new workload isolation capabilities based on a multicluster architecture that separates streaming data ingestion from low-latency query workloads, allowing each to be scaled independently and, according to the company, avoiding the need for multiple database replicas. Rockset describes itself as cloud native, adding itself to the roster of vendors who increasingly see the cloud and analytics as permanently commingled.
Of course, analytics in the cloud can benefit greatly from cloud-based external data feeds, for the purposes of data enrichment. That’s why Alation, a well-known data catalog provider, announced the launch of Alation Data Marketplaces, back on February 22. Beyond data governance, Alation’s take on data catalogs has always been to make data discoverable, accessible and, in a sense, peer-reviewed (within the enterprise). That same ethos seems to have led to the introduction of Data Marketplaces, so that external data can be as accessible as corporate data.
Microsoft Add-Ins Galore
Another way to make data more accessible is to make it available outside core data catalog and analytics interfaces, and inside other applications. That’s what’s behind Alation’s additional announcement of Microsoft Teams support in Alation Anywhere, which now makes data sets discoverable and queryable in Microsoft Teams chat (joining the preexisting support for Slack and Tableau). There’s also Alation Connected Sheets, which now makes data in the catalog accessible from Microsoft Excel, in addition to the previously supported Google Sheets. The integration is tight and quite valuable; you can see a demo of the Teams/Alation Anywhere technology here. Alation also shared a demo of Connected Sheets on Google Sheets with me, and it was indeed impressive.
Finally, Teams and Excel aren’t the only Microsoft tools getting third-party analytics integration, and Alation isn’t the only company doing it. As it turns out, Databricks is getting in on the game too. Since developers are a core Databricks constituency, the company decided to target Microsoft’s Visual Studio Code, creating an extension for the wildly popular multiplatform (and free) developer tool. Essentially, the extension makes VS Code a first-class client for Databricks, giving developers an option beyond the Databricks notebook interface for working with the data in their lakehouse, as well as the ML models they’ve built or are building.
What Does It All Mean?
Open source table formats are growing in popularity and adoption. Graph data is increasingly being used for analytics, in high-performance scenarios. Machine learning and streaming data are increasingly common, and more tightly integrated, in mainstream analytics environments. Behemoths like SAP are sharing more data, in more environments. Data integration is getting cheaper and easier. Enrichment data is more readily available and more easily blended with corporate data. It’s all happening in the cloud, and everyone gets to do analytics in their favorite tools, even if they are collaboration platforms like Slack or Teams, spreadsheets like Excel or Google Sheets, or developer tools like VS Code.
Analytics is becoming more cloud-oriented, more ubiquitous and more embedded, in platforms not focused on analytics exclusively, or even primarily. This means analytics is growing in adoption and deployment, but it’s also “disappearing,” as it burrows into tech platforms of all kinds. That may seem a paradox, but it’s actually quite logical: the most effective infrastructure works unobtrusively so that you don’t even know it’s there, letting you use it without needing to detour or plan in advance. That’s what’s happening with analytics today, and all of the news from Alation, Databricks, Dremio, Informatica, Rockset, SAP, Talend and TigerGraph bears it out.