An Introduction to Apache Parquet
As the amount of data being generated and stored for analysis grows at an increasing rate, developers are looking to optimize performance and reduce costs from every possible angle. At the petabyte scale, even marginal gains and optimizations can save companies millions of dollars in hardware costs when it comes to storing and processing their data.
One project that is an example of these optimization techniques is Apache Parquet. In this article, you will learn what Parquet is, how it works and some of the many companies and projects that are using Parquet as a critical component in their architecture.
What Is Apache Parquet?
Parquet is an open source column-oriented storage format developed jointly by Twitter and Cloudera before being donated to the Apache Software Foundation. Parquet was designed to improve on Hadoop's existing storage formats on several performance fronts, such as reducing the size of data on disk through compression and making reads faster for analytics queries.
Over time more projects and companies adopted Parquet, and it has become a common interchange format for projects that want to make it easier for users to import and export data.
Adopting Parquet makes it easier for new users to migrate or adopt new tools with minimal disruption to their workflow, so it benefits both the users and the companies that want to acquire new users for their product.
Apache Parquet Technical Breakdown
Parquet uses a number of innovative techniques to provide great performance. Before jumping into the details, we can look at the results compared to another file format used for storing data: the humble CSV (comma-separated values) file.
Some numbers from Databricks show the following results when converting a 1 terabyte CSV file to Parquet:
- File size is reduced to 130 gigabytes, an 87% reduction
- Query run time is reduced from 236 seconds to 6.78 seconds, 34 times faster
- The amount of data scanned for a query drops from 1.15TB to 2.51GB, a 99% reduction
- Cost is reduced from $5.75 to $0.01, a 99.7% cost reduction
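Those ratios are easy to sanity-check. Here is a quick sketch using the file-size and query-time figures quoted above:

```python
# Sanity-check the Databricks figures quoted above.
csv_size_gb, parquet_size_gb = 1000, 130
size_reduction = 1 - parquet_size_gb / csv_size_gb
print(f"size reduction: {size_reduction:.0%}")  # size reduction: 87%

csv_seconds, parquet_seconds = 236, 6.78
speedup = csv_seconds / parquet_seconds
print(f"speedup: {int(speedup)}x")  # speedup: 34x
```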
So what is the secret sauce that makes Parquet perform so much better than CSV and other file formats? Let’s look at some of the key concepts and features behind Parquet:
- Run-length and dictionary encoding — Rather than writing the same value to disk again and again, effectively wasting space, Parquet can record a value once along with how many times it repeats (run-length encoding), and can replace repeated values with small integer codes that reference a dictionary of a column's distinct values (dictionary encoding). This results in massive space savings on datasets where values repeat frequently. An example would be CPU monitoring, where every value falls within the 0-100 range for percentage utilization, so the same readings recur constantly.
- Record shredding and assembly — Apache Parquet borrows a technique from Google’s Dremel paper, which allows Parquet to map nested data structures to a column-based layout. The benefit of this is that developers can still treat their data in a more natural nested style while getting the performance benefits of a column-based data structure.
- Rich metadata — Under the hood, Parquet keeps track of a large amount of metadata, which is what makes the strategies above possible. Parquet data is broken into row groups, column chunks and pages. A file can contain several row groups, each row group contains exactly one column chunk per column, and each column chunk contains one or more pages of data. All of this complexity is abstracted away, so developers don't have to deal with it directly.
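To make the encoding idea concrete, here is a minimal pure-Python sketch of dictionary encoding followed by run-length encoding of the resulting codes. It illustrates the concept only; Parquet's actual implementation operates on bit-packed, page-level encodings.

```python
def dictionary_encode(column):
    """Map each distinct value to a small integer code (dictionary encoding)."""
    dictionary = {}
    codes = []
    for value in column:
        if value not in dictionary:
            dictionary[value] = len(dictionary)
        codes.append(dictionary[value])
    return list(dictionary), codes

def run_length_encode(codes):
    """Collapse runs of repeated codes into [code, run_length] pairs."""
    runs = []
    for code in codes:
        if runs and runs[-1][0] == code:
            runs[-1][1] += 1
        else:
            runs.append([code, 1])
    return runs

# A monitoring-style column with long runs of repeated readings.
cpu = [42] * 500 + [43] * 300 + [42] * 200
dictionary, codes = dictionary_encode(cpu)
runs = run_length_encode(codes)
print(dictionary)  # [42, 43]
print(runs)        # [[0, 500], [1, 300], [0, 200]]
```

A thousand raw values collapse to a two-entry dictionary and three run pairs, which is the essence of why repetitive columns compress so well.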
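The record shredding idea can also be sketched. Parquet's real scheme additionally stores repetition and definition levels (per the Dremel paper) so nested and repeated records can be reassembled exactly; this simplified sketch shows only how nested fields map to per-path columns.

```python
def shred(record, prefix="", columns=None):
    """Flatten a nested record into per-path columns (simplified sketch)."""
    if columns is None:
        columns = {}
    for key, value in record.items():
        path = f"{prefix}.{key}" if prefix else key
        if isinstance(value, dict):
            shred(value, path, columns)  # recurse into nested structures
        else:
            columns.setdefault(path, []).append(value)
    return columns

record = {"user": {"name": "ada", "address": {"city": "London"}}, "score": 7}
cols = shred(record)
print(cols)
# {'user.name': ['ada'], 'user.address.city': ['London'], 'score': [7]}
```

Developers keep writing natural nested records while the storage layer sees flat, independently compressible columns.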
All of these features work together to give Parquet its performance characteristics. Boiled down to the simplest level, Parquet provides metadata that lets queries skip work, which reduces compute requirements, and eliminates repeated data points, which reduces storage costs. The result is faster queries and a smaller storage footprint.
Parquet is especially beneficial in the age of cloud computing, where many cloud services charge based on the amount of data you process and scan. Because Parquet keeps additional metadata about the structure of your data, it can dramatically reduce the amount of unnecessary data scanned: instead of paying to scan, process and analyze data that isn't needed to fulfill a query, you only read the data you need.
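The scan-pruning idea can be sketched in a few lines. Assume values are stored in chunks that carry min/max statistics, the way Parquet records statistics for row groups and pages in its file metadata; the function names here are illustrative, not a real Parquet reader API.

```python
def query_greater_than(chunks, threshold):
    """Return values above threshold, scanning only chunks whose
    max statistic exceeds it (a sketch of metadata-based pruning)."""
    scanned = 0
    hits = []
    for chunk in chunks:
        # In Parquet these statistics are precomputed and stored in
        # the file footer, so pruning needs no data reads at all.
        stats = {"min": min(chunk), "max": max(chunk)}
        if stats["max"] <= threshold:
            continue  # pruned: no row in this chunk can match
        scanned += len(chunk)
        hits.extend(v for v in chunk if v > threshold)
    return hits, scanned

chunks = [[1, 2, 3], [10, 20, 30], [4, 5, 6]]
hits, scanned = query_greater_than(chunks, 9)
print(hits, scanned)  # [10, 20, 30] 3  -> only 3 of 9 values touched
```

Scale that pruning up to terabytes and it is exactly why the scanned-data and cost figures earlier drop by two orders of magnitude.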
Companies and Projects Using Apache Parquet
A number of projects support Parquet as a file format for importing and exporting data, as well as using Parquet internally for data storage. Here are just a handful of them and what they can be used for:
- Hadoop is a big data processing tool based on Google’s MapReduce paper. Parquet was originally designed as a file format for working with Hadoop.
- Apache Iceberg is an attempt to bring the simplicity of normal relational database tables to big data scale by allowing processing tools like Spark, Trino and Flink to query and process data from the same storage backend.
- Delta Lake is an open source storage framework for building a data lakehouse style architecture. Delta Lake integrates with common computing tools like Spark, PrestoDB, Flink and Trino to make processing and storing data easy.
- Apache Spark is an analytics engine for processing large amounts of data. It allows for data processing with SQL in addition to high-level tools like a Pandas API.
Apache Parquet and InfluxDB
The InfluxDB time series database is another project that relies heavily on Parquet, specifically in InfluxDB's new columnar storage engine, IOx. InfluxDB uses Parquet to persist data in object storage, which lets data move efficiently between hot and cold storage tiers, giving users better performance while also reducing their storage costs. It also provides better compression ratios than previous iterations of InfluxDB.
InfluxDB works with Parquet by taking data sent in line protocol format and mapping the tags, fields and timestamps defined in line protocol to columns in Parquet. Each column can then be compressed using the compression algorithm best suited to that field's data type. Parquet files are split by time range, so a query reads only the time series data it needs while touching the minimal number of Parquet files. When data is pulled from Parquet files, it is loaded into memory using the Apache Arrow format, which is also column-based, so minimal conversion overhead is incurred.
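Here is an illustrative sketch of that line protocol-to-columns mapping. The parsing and column layout are simplified assumptions for demonstration, not InfluxDB's actual implementation, which also handles escaping, mixed schemas and typed field values.

```python
def line_protocol_to_columns(lines):
    """Split simple line protocol points (measurement,tags fields timestamp)
    into per-column lists, as a columnar store like Parquet would hold them.
    Simplified sketch: assumes uniform, unescaped points."""
    columns = {}
    for line in lines:
        head, fields, ts = line.split(" ")
        measurement, *tags = head.split(",")
        row = {"time": int(ts)}
        for tag in tags:                 # tags, e.g. host=serverA
            key, value = tag.split("=")
            row[key] = value
        for field in fields.split(","):  # fields, e.g. usage=0.64
            key, value = field.split("=")
            row[key] = float(value)
        for key, value in row.items():
            columns.setdefault(key, []).append(value)
    return columns

lines = [
    "cpu,host=serverA usage=0.64 1667260800000000000",
    "cpu,host=serverA usage=0.71 1667260810000000000",
]
cols = line_protocol_to_columns(lines)
print(cols["host"])   # ['serverA', 'serverA']
print(cols["usage"])  # [0.64, 0.71]
```

Once the data is columnar like this, the repetitive tag column is a natural fit for dictionary and run-length encoding, while the numeric field and timestamp columns get type-specific compression.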
When working with data at massive scale, every bit of efficiency can deliver major benefits for your company and users. Parquet is just one of many projects working toward better efficiency. While you may not interact with Parquet directly as a developer, there are pretty strong odds that some of the tools you work with on a regular basis are using Parquet under the hood.