The Future of Data Integration

Data integration has evolved significantly since data became centralized in data warehouses and lakes. ELT (Extract-Load-Transform) has replaced ETL (Extract-Transform-Load) by giving analysts autonomous access to the data they need in those warehouses.
New reverse-ETL solutions then emerged, enabling operational teams to act on consolidated data from those same warehouses. The industry refers to this new standard as the modern data stack. And while only 30% of the most valuable companies, such as Amazon, Apple, Facebook, and Microsoft, took a data-driven approach in 2008, that share has since grown to 70%.
The issue with “modern” is that today’s modern becomes tomorrow’s outdated. And that leads to the question: how will data integration evolve given the indications we already have? In this article, I will explore the challenges companies will face and what they should do to overcome them.
Current ELT Challenges
Inability to Address All Connector Needs
Let’s begin by looking at the issues companies face today, starting with ELT and reverse-ETL. There are two significant reasons why companies continue building and maintaining more and more connectors in-house.
- Long-tail needs: ELT solutions are unable to keep up with the number of tools that companies use internally. They all plateau at around 150–200 data connectors. The reason is simple: the hard part about data integration is not building the connectors, but maintaining them. Any cloud-based, closed-source solution is constrained by ROI (return on investment): supporting the long tail of connectors isn't profitable, so vendors focus only on the most popular integrations.
- Custom needs: Companies all have different data needs for the same tools they use. If an ELT solution is missing even one API stream for a tool, the company has no choice but to build and maintain that connector itself.
This is true for both ELT and reverse-ETL.
Missing Quality and Data Lineage on Reverse-ETL
In addition to not addressing long-tail and custom needs, reverse-ETL solutions face a further issue. When syncing data back to operational tools, teams need to know where that data comes from (also known as data lineage) and how correct it is. Without these insights and validations, there is an operational risk of emailing the wrong person or taking incorrect actions that could hurt the business. Unfortunately, reverse-ETL solutions don't have access to that information, so data teams must add it to their syncs manually, which requires even more data engineering work.
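To make this concrete, here is a minimal sketch in Python of the kind of lineage metadata and validation a reverse-ETL sync needs before it touches an operational tool. All names here are hypothetical; today, data teams wire this up by hand.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class Lineage:
    """Provenance metadata each synced record should carry (hypothetical schema)."""
    source_system: str        # tool the data was originally extracted from
    warehouse_table: str      # warehouse table the sync reads from
    extracted_at: datetime    # when the upstream extraction ran
    transformations: list[str] = field(default_factory=list)

def safe_to_sync(record: dict, lineage: Lineage, max_age_hours: int = 24) -> bool:
    """Refuse to push stale or unattributed data into an operational tool."""
    age = datetime.now(timezone.utc) - lineage.extracted_at
    if age.total_seconds() > max_age_hours * 3600:
        return False  # stale data risks emailing the wrong person
    return bool(record.get("email"))  # minimal correctness check before the sync
```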
How Today’s Problems Will Get Solved
Consolidation of ELT and Reverse-ETL
Companies should consider merging their ELT and reverse-ETL processes into a single tool. The technological differences between ELT and reverse-ETL are small enough that this consolidation is a predictable next step; some ELT solutions, including Airbyte, have already announced it on their product roadmaps. This consolidation brings several important benefits (a short sketch follows the list):
- Handling ELT gives the platform data lineage at the reverse-ETL level. If the same tool loaded the data into the warehouse, teams automatically know where it comes from, reducing data engineering effort.
- It's easier to monitor all pipelines, in both directions, within one platform, which further reduces data engineering effort.
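As an illustration, a consolidated pipeline definition might look like the following. The configuration keys and tool names are entirely hypothetical; the point is that one definition covers both directions.

```python
# Hypothetical declarative pipeline in which one tool owns both directions.
pipeline = {
    "elt": {
        "source": "salesforce",
        "destination": "snowflake",
        "streams": ["accounts", "opportunities"],
        "schedule": "hourly",
    },
    "reverse_etl": {
        "source": "snowflake",              # the same warehouse the ELT leg loads
        "model": "analytics.active_accounts",
        "destination": "hubspot",
    },
}
# Because both legs live in one definition, the platform can derive lineage
# automatically (every record pushed to HubSpot traces back to the Salesforce
# extraction that produced it) and monitor both legs in one place.
```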
The Advent of Open Source
It is very challenging for closed-source ELT vendors to address the long-tail and custom needs every company has. I believe the only way to address them is through open source, with an active contributor community engaged in publishing their own connectors for the benefit of all. This is how an ELT platform can rapidly grow from 200 connectors to thousands. Using an open-source ELT solution does bring its own challenge: maintaining a high level of reliability across all those connectors. There are two ways this can be solved:
- By abstracting everything that is not specific to the tool being integrated. Data connectors are 90% identical; only the remaining 10% is specific to the source they connect to (see the sketch after this list).
- By incentivizing the community to maintain their contributed connectors through financial or recognition incentives, such as a marketplace where individuals and companies can publish their connectors, much like an app store.
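The 90/10 split is worth making concrete. Below is a minimal sketch, with hypothetical class and method names rather than any vendor's actual SDK, of how a framework can own the shared 90% (the paging loop, iteration, state) so that a contributor only writes the source-specific 10%:

```python
from abc import ABC, abstractmethod
from typing import Any, Iterator, Optional

class BaseConnector(ABC):
    """The ~90% shared by every connector: paging loop, retries, state, schema checks."""

    def read(self) -> Iterator[dict[str, Any]]:
        # Generic extraction loop, identical for every source.
        for stream in self.streams():
            cursor: Optional[str] = None
            while True:
                page, cursor = self.fetch_page(stream, cursor)
                yield from page
                if cursor is None:
                    break

    @abstractmethod
    def streams(self) -> list[str]:
        """Names of the streams this source exposes."""

    @abstractmethod
    def fetch_page(self, stream: str, cursor: Optional[str]) -> tuple[list[dict], Optional[str]]:
        """The ~10% that is specific to one source's API."""

class GreenhouseConnector(BaseConnector):
    """A long-tail connector: a contributor writes only the source-specific part."""

    def streams(self) -> list[str]:
        return ["candidates", "jobs"]

    def fetch_page(self, stream, cursor):
        # A real implementation would call the source API here, adding auth
        # and error mapping; the framework handles everything else.
        return [], None
```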
A New Operating System of Data Pipelines
Warehouses (Snowflake, BigQuery, Redshift, etc.) are fast becoming the operating system for data: the place where all operations on a company's data are performed, with an ecosystem of data apps built on top of them.
The same concept can be applied to data integration, with a platform that provides the following:
- ELT and reverse-ETL with open source connectors that you can customize at will;
- An active data engineering community incentivized to maintain the long tail of connectors;
- Data lineage;
- Observability across all data pipelines;
- Integration with the other tools in the company's data stack to ensure interoperability.
This platform, which I call the operating system of data pipelines, is the future of the current modern data stack. It is what all data teams are striving for: a platform that does it all while remaining customizable enough to address all their needs.
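Expressed as an interface, with entirely hypothetical method names, such a platform would expose something like this:

```python
from typing import Protocol

class DataPipelineOS(Protocol):
    """Hypothetical surface of an 'operating system of data pipelines'."""

    def run_elt(self, source: str, warehouse: str) -> str:
        """Load source data into the warehouse; returns a run id."""

    def run_reverse_etl(self, model: str, destination: str) -> str:
        """Push a warehouse model to an operational tool; returns a run id."""

    def lineage(self, run_id: str) -> list[str]:
        """Upstream sources that produced the data in a given run."""

    def health(self) -> dict[str, str]:
        """Status of every pipeline, for observability in one place."""
```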
Conclusion
With companies' data creation projected to grow at a compound annual rate of 23% through 2025, having an efficient operating system of data pipelines has never been more important. And it's worth it: data-driven organizations' revenues are 70% higher than those of their counterparts. It's time for organizations to overcome these ELT challenges by adopting an operating system of data pipelines.