AWS Re:Invent Updates: Apache Spark, Redshift and DocumentDB
LAS VEGAS — Day three of Amazon Web Services' re:Invent 2022 user conference saw a plethora of new data product releases, many unveiled by Swami Sivasubramanian, vice president of AWS data and machine learning, during his keynote.
The theme of the talk was the human brain and the commonalities between data science and neuroscience. In short, people want data environments that work the way the human brain does: connecting, visualizing, and understanding information, and surfacing likely ideas. AWS wants the world to know: its engineers are working on it.
The first portion of the keynote focused on launching and showcasing new products and features as part of four key concepts: tools for every workload, performance at scale, removing the heavy lifting, and reliability and scalability.
Here are some of the products:
Amazon Athena for Apache Spark
“Our Athena customers told us they want to perform this kind of complex analysis using Apache Spark, but they didn’t want to deal with all the infrastructure set up by keeping all the clusters for interactive analysis,” Sivasubramanian said. The solution AWS engineers came up with was to apply Amazon Athena's interactive query service to Apache Spark.
In an accompanying blog post, AWS noted that “with this feature, we can run Apache Spark workloads, use Jupyter Notebook as the interface to perform data processing on Athena, and programmatically interact with Spark applications using Athena APIs. We can start Apache Spark in under a second without having to manually provision the infrastructure.”
The blog post also gives a preview and detailed instructions on how to start using the product right away.
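As a rough illustration of the "programmatically interact with Spark applications using Athena APIs" workflow, the sketch below builds the parameters for starting a Spark session and shows, in comments, how they might be handed to the AWS SDK. The workgroup name, DPU sizes, and the `spark_session_request` helper are illustrative assumptions, not part of AWS's announcement.

```python
# Hypothetical sketch of driving Athena's Apache Spark engine programmatically.
# The workgroup name and DPU counts below are illustrative assumptions.

def spark_session_request(workgroup: str, coordinator_dpu: int = 1,
                          executor_dpu: int = 1,
                          max_concurrent_dpus: int = 20) -> dict:
    """Build the parameters for starting an Apache Spark session on Athena."""
    return {
        "WorkGroupName": workgroup,
        "EngineConfiguration": {
            "CoordinatorDpuSize": coordinator_dpu,
            "DefaultExecutorDpuSize": executor_dpu,
            "MaxConcurrentDpus": max_concurrent_dpus,
        },
    }

params = spark_session_request("my-spark-workgroup")

# With boto3 installed and AWS credentials configured, the session could then
# be started and a calculation submitted roughly like this (untested sketch):
#
#   import boto3
#   athena = boto3.client("athena")
#   session = athena.start_session(**params)
#   athena.start_calculation_execution(
#       SessionId=session["SessionId"],
#       CodeBlock="spark.sql('SELECT 1').show()",
#   )
print(params["WorkGroupName"])
```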
Amazon Redshift Integration for Apache Spark
Amazon’s Redshift data warehouse service was definitely a heavy hitter among new-feature announcements this week, and it also gained an integration with the Apache Spark big data processing software. “This integration enables applications like EMR to access Redshift data and run up to 10x faster, compared to existing Redshift Spark connectors,” Sivasubramanian said.
The blog post written by Channy Yun, a principal developer advocate at AWS, offers more details on the product, which makes the process for building and running Spark applications on Amazon Redshift and Redshift Serverless “easy.” Adding this feature can potentially open data warehouses to a broader set of AWS analytics and machine learning (ML) solutions.
Amazon Redshift Integration for Apache Spark is built on an open source connector project.
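To make the shape of a Spark-on-Redshift workload concrete, here is a minimal sketch. The cluster endpoint, database, table, and IAM role are invented placeholders, and the commented read follows the option names of the open source spark-redshift connector the integration is built on; treat it as an assumption, not the exact AWS API.

```python
# Hypothetical sketch: reading Redshift data from a Spark application on EMR.
# Endpoint, database, table, and IAM role names are illustrative assumptions.

def redshift_jdbc_url(host: str, port: int, database: str) -> str:
    """Assemble the JDBC URL a Spark-Redshift connector expects."""
    return f"jdbc:redshift://{host}:{port}/{database}"

url = redshift_jdbc_url(
    "my-cluster.abc123.us-east-1.redshift.amazonaws.com", 5439, "dev")

# On an EMR cluster with the integration available, a DataFrame could be
# loaded roughly like this (untested sketch):
#
#   df = (spark.read
#         .format("io.github.spark_redshift_community.spark.redshift")
#         .option("url", url)
#         .option("dbtable", "sales")
#         .option("tempdir", "s3://my-bucket/tmp/")
#         .option("aws_iam_role", "arn:aws:iam::123456789012:role/RedshiftRole")
#         .load())
print(url)
```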
Amazon DocumentDB Elastic Clusters
While announcing the news about Amazon DocumentDB Elastic Clusters, Sivasubramanian promised the audience, “This will save developers months of time for building and configuring all these custom scaling solutions. I’m proud to share this new capability that you have today.”
Elastic Clusters add scaling to AWS's DocumentDB offering. A blog post written by Veliswa Boya, a senior developer advocate at AWS, notes that users can scale to “virtually any number of writes and reads, with petabytes of storage capacity.” Elastic Clusters automatically manage the underlying infrastructure and remove the need to create, remove, upgrade or scale instances.
Elastic Clusters use sharding to partition data and integrate with other AWS services in the same way Amazon DocumentDB does today. The blog post has more details about how to get started with Elastic Clusters.
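Since DocumentDB is MongoDB-compatible, sharding a collection amounts to issuing a MongoDB-style admin command with a shard key. The sketch below builds such a command; the database, collection, and key field names are invented for illustration, and the commented pymongo call is an untested assumption about how it would be sent to a live cluster.

```python
# Hypothetical sketch: sharding a collection on a DocumentDB Elastic Cluster.
# Database, collection, and shard-key names are illustrative assumptions.

def shard_collection_command(db: str, collection: str, key_field: str) -> dict:
    """Build the MongoDB-style shardCollection admin command document."""
    return {
        "shardCollection": f"{db}.{collection}",
        "key": {key_field: "hashed"},  # hash-based partitioning on this field
    }

cmd = shard_collection_command("store", "orders", "customerId")

# Against a live cluster, the command could be issued with pymongo
# (untested sketch; the endpoint placeholder is intentionally left generic):
#
#   from pymongo import MongoClient
#   client = MongoClient("mongodb://<elastic-cluster-endpoint>:27017")
#   client.admin.command(cmd)
print(cmd["shardCollection"])
```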
AWS has several other launches listed on its blog as well.