
Amazon Aurora Release Marks Shift to Database as Infrastructure

30 Jul 2015 9:00am

In a further sign of how much the DBaaS market is heating up, Amazon Web Services this week released Amazon Aurora, a MySQL-compatible relational database engine aimed specifically at giving applications and cloud-based Internet architecture easier paths to scale.

Aurora was initially announced at Amazon’s signature re:Invent conference in November last year, but was only available in preview. At the time, customers including MOOC provider Coursera, facial-recognition platform FacialNetwork, and European luxury retailer Kurt Geiger sang the praises of the new database offering, which automatically grows database storage, without disruption, from a starting point of 10 GB up to 64 TB.

Those early adopters have now been joined by fitness program Zumba, climate platform Earth Networks and file-transfer service WeTransfer, all adding their weight to claims about the power and scalability of Amazon Aurora.

Aurora allows customers to create up to 15 read replicas of their database, which can be spread across three Availability Zones, with the database volume replicated six ways in 10 GB segments. Amazon’s Multi-AZ technology automates the restart of databases that fail over, while additional machinery monitors for crashes, separates the database buffer cache from database processing, and continuously scans and repairs data blocks within the database storage.
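In practice, a cluster and its replicas are provisioned as separate API calls. As a rough sketch (the identifiers, instance class and credentials here are placeholders, and the commands assume configured AWS credentials and a default VPC), creating an Aurora cluster with one reader via the AWS CLI looks something like this:

```shell
# Create the Aurora (MySQL-compatible) cluster itself -- this holds the
# shared storage volume that is replicated six ways across three AZs.
aws rds create-db-cluster \
    --db-cluster-identifier example-aurora-cluster \
    --engine aurora \
    --master-username admin \
    --master-user-password 'change-me-please'

# Add a writer instance to the cluster.
aws rds create-db-instance \
    --db-instance-identifier example-aurora-writer \
    --db-cluster-identifier example-aurora-cluster \
    --engine aurora \
    --db-instance-class db.r3.large

# Add a reader; any further instances (up to 15) in the cluster act as
# read replicas and as failover targets for Multi-AZ recovery.
aws rds create-db-instance \
    --db-instance-identifier example-aurora-reader-1 \
    --db-cluster-identifier example-aurora-cluster \
    --engine aurora \
    --db-instance-class db.r3.large
```

Because all instances share the same cluster volume, adding a replica does not copy the data set; the new reader attaches to the existing storage layer.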

In response to market demand, this year has seen most database providers jostling for ways they can offer lower latency, perform real-time transactional processing and manage scalability. Approaches have included sharding and in-memory processing, addressing external factors affecting scalability, and even rewriting a core database product completely to introduce new batch algorithms. So far, there have been no real losers: open source offering Redis has continued to expand, FoundationDB was bought by Apple, and DBaaS offerings like Orchestrate and Compose have been acquired by industry giants CenturyLink and IBM.

It is these two latest acquisitions that set the context for Amazon’s Aurora release. With the surge in Internet architecture needs at global scale, enterprise dev teams are looking for ways to decouple the need for database-management expertise from their application focus. Databases continue to move from being an individual enterprise concern to being viewed as an external service that just works. Hence the growth in hosting providers buying up database products so they can offer a DBaaS alongside their cloud infrastructure.

However, the new Amazon Aurora release inherits some of the obstacles that Amazon already faces as a cloud infrastructure provider. The security concerns around AWS Identity and Access Management (IAM) and the complexity intrinsic to the AWS Management Console that Joe Emison documented in his recent The New Stack article remain key challenges.

Along with that is the trade-off that comes with more granular pricing models. On the one hand, customers are starting to see greater choice in what they pay for, and finer-grained pay-for-what-you-use pricing models. Compose’s founder Kurt Mackey hinted at this possibility around their recent acquisition, while Amazon’s Aurora product already comes with detailed pricing that varies with how much storage you buy in advance, how much on-demand capacity you buy for database instances during testing, and how much backup you store. Geographic pricing will also come into play, especially when Aurora extends to regions such as Asia Pacific, where Japanese consumption taxes come into effect.

IBM is a sponsor of The New Stack.

Featured image: “What is in your Kindle catalog” by Raymond Bryson is licensed under CC BY 2.0.
