Doing DynamoDB Better, More Affordably, All at Once

It’s easy to understand why so many teams have turned to Amazon DynamoDB since its introduction in 2012. It’s simple to get started, especially if your organization is already entrenched in the AWS ecosystem. It’s relatively fast and scalable, especially compared with other low-learning-curve NoSQL options such as MongoDB. And it abstracts away the operational effort and know-how traditionally required to keep a database up and running in a healthy state.
But as time goes on, drawbacks emerge, especially as workloads scale and business requirements evolve. Factors like the lack of transparency into what the database is doing under the hood and the 400 KB limit on item size cause frustration. However, the vast majority of decisions to move away from DynamoDB boil down to two critical considerations: cost and cloud vendor lock-in.
Let’s take a closer look at those two major DynamoDB challenges and at a new approach to overcoming them — a technical shift with a simple migration path.
DynamoDB Challenge 1: Cost
With DynamoDB, what often begins as a seemingly reasonable pricing model can quickly turn into “bill shock,” especially if the application experiences a dramatic surge in traffic.
What influences DynamoDB cost? Data storage, write and read units, deployment region, provisioned throughput, indexes, global tables and backups, to name just some of the many factors. Although (uncompressed) data storage and the baseline number of reads and writes are the primary drivers of a monthly DynamoDB bill, several other factors can send prices skyrocketing. Under a pay-per-operations pricing model, your ability to accurately predict the cost of a workload depends heavily on how much that workload is subject to variability and growth.
Write-heavy, latency-sensitive workloads are typically the main contributors to alarmingly high bills. For example, a single write capacity unit (WCU) covers one standard (non-transactional) write per second for an item of up to 1 KB. If you purchase reserved capacity with a one-year commitment, each 100-WCU block carries a $150 upfront fee plus an hourly charge of $0.0128. So if your workload requires as little as 100,000 writes per second of 1 KB items, the upfront fees alone represent a $150,000-per-year investment to sustain only the baseline writes, before hourly charges and other aspects are considered.
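To make that arithmetic concrete, here is a back-of-the-envelope sketch in Python. The 100-WCU block size, $150 upfront fee and $0.0128 hourly rate are the one-year reserved-capacity figures cited above; everything else is an illustrative assumption, so verify against current AWS pricing before relying on the output.

```python
import math

HOURS_PER_YEAR = 24 * 365  # 8,760

def reserved_write_cost_per_year(writes_per_sec: int,
                                 item_size_kb: float = 1.0,
                                 upfront_per_block: float = 150.00,  # per 100-WCU block, 1-year term
                                 hourly_per_block: float = 0.0128) -> dict:
    """Rough annual cost of DynamoDB reserved write capacity for a steady workload."""
    # One WCU covers one standard write per second of up to 1 KB,
    # so larger items consume proportionally more WCUs (rounded up).
    wcus = writes_per_sec * math.ceil(item_size_kb)
    blocks = math.ceil(wcus / 100)
    upfront = blocks * upfront_per_block
    hourly = blocks * hourly_per_block * HOURS_PER_YEAR
    return {"upfront": upfront, "hourly": hourly, "total": upfront + hourly}

cost = reserved_write_cost_per_year(100_000)  # 100K writes/sec of 1 KB items
print(f"upfront ${cost['upfront']:,.0f} + hourly ${cost['hourly']:,.0f}"
      f" = ${cost['total']:,.0f}/year")
# -> upfront $150,000 + hourly $112,128 = $262,128/year
```

And that estimate covers baseline write throughput alone; reads, storage, global table replication and backups all stack on top.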
That’s the scenario a large media-streaming company faced when reviewing its DynamoDB bills. For a single use case, it had a baseline of half a million write operations per second. Since the use case required multiregional replication, it relied on DynamoDB’s global tables feature. Such a high-throughput workload, combined with replication and other aspects, meant the team was spending millions of dollars per year on just that one use case.
Moreover, teams with latency-sensitive workloads and strict P99 requirements typically rely on DynamoDB Accelerator (DAX) to achieve their SLA targets. Depending on how aggressive those service-level agreements are, caching can easily account for a considerable share of a DynamoDB bill. For example, a small three-node DAX cluster running on r3.2xlarge instances is priced as high as $2,300 per month, or $27,600 a year.
DynamoDB Challenge 2: Cloud Vendor Lock-In
When your organization is all in on AWS, DynamoDB is usually quite simple to add to the larger AWS commit. But what happens to your database if the organization later makes a high-level decision to adopt a multicloud strategy, or even to move some applications on premises?
One of the drawbacks of DynamoDB is that it is proprietary and closed source. Its development, implementation, inner workings and control are confined to the AWS ecosystem. That becomes a distinct pain point the moment you decide to switch to a different platform (cloud or on premises) and need to look for alternatives.
Although AWS provides DynamoDB Local for running DynamoDB on your own machine, that solution is designed primarily for development purposes and is not suitable for production use. And if your organization wants to extend beyond the AWS ecosystem, moving to a different database isn’t easy.
Evaluating a different database requires engineering time and a careful analysis of compatibility between the two solutions. Later in the process, migrating data and users from one solution to the other is not always straightforward, and based on what we’ve heard, AWS is unlikely to assist.
Depending on how large your DynamoDB deployment has grown, your company could be locked in for a long time: re-engineering, testing and moving an entire fleet of DynamoDB tables across various use cases requires extensive planning and, fairly often, a lot of time and effort.
For example, a large AdTech organization decided to switch all of its cloud infrastructure to a different cloud vendor. It needed to migrate from DynamoDB to a database supported in the new cloud environment it had committed to. It was apparent that the database migration could be challenging because 1) the database was supporting a business-critical use case, and 2) the original application developers were no longer with the company, so moving to a different solution could require a major application rewrite. To avoid business disruption, as well as burdensome application code changes, the organization sought out DynamoDB-compatible databases that would offer a smooth path forward.
How ScyllaDB Helps Teams Overcome DynamoDB Challenges
These are just two of the reasons why former DynamoDB users are increasingly moving to ScyllaDB, which offers improved performance over DynamoDB at lower cost and without vendor lock-in.
ScyllaDB allows any application written for DynamoDB to run, unmodified, against ScyllaDB. It supports the same client SDKs, data modeling and queries as DynamoDB, yet you can deploy it wherever you want: on premises, or on any public cloud, AWS included. ScyllaDB provides lower latencies without DynamoDB’s high operational costs. You can run it however you want, via Docker or Kubernetes, or use ScyllaDB Cloud for a fully managed NoSQL database-as-a-service experience.
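To illustrate how small that switch can be, here is a minimal sketch using boto3, the same AWS SDK a DynamoDB application already uses. It assumes a ScyllaDB cluster with the DynamoDB-compatible API (Alternator) enabled on its default port, 8000, and an existing "orders" table keyed by "order_id"; the hostname, credentials and table are hypothetical placeholders.

```python
import boto3

# Point the standard DynamoDB SDK at a ScyllaDB node instead of AWS.
dynamodb = boto3.resource(
    "dynamodb",
    endpoint_url="http://scylla-node.example.com:8000",  # hypothetical ScyllaDB node
    region_name="us-east-1",         # required by the SDK; not used for routing here
    aws_access_key_id="alternator",  # dummy values unless authorization is enforced
    aws_secret_access_key="secret",
)

# From here on, this is unchanged DynamoDB application code.
table = dynamodb.Table("orders")  # assumes the table already exists
table.put_item(Item={"order_id": "12345", "status": "delivered"})
print(table.get_item(Key={"order_id": "12345"})["Item"])
```

Because the wire protocol and SDK stay the same, migration effort concentrates on moving the data itself rather than rewriting application code.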
Reducing Costs: iFood
Consider iFood, the largest food delivery company in Latin America, which moved its Connection-Polling service from PostgreSQL to DynamoDB after passing the threshold of 10 million orders per month. But the team quickly discovered that DynamoDB’s autoscaling was too slow for the application’s spiky traffic patterns.
iFood’s bursty traffic naturally spikes around lunch and dinner times. Slow autoscaling meant the team could not meet those daily bursts of demand without either provisioning an expensive high minimum throughput or managing scaling themselves, which was exactly the operational work they had adopted a fully managed service to avoid.
At that point, they transitioned their Connection-Polling service to ScyllaDB Cloud, keeping the same data model they built when migrating from PostgreSQL to DynamoDB. iFood’s ScyllaDB deployment easily met its throughput requirements and enabled the company to reach its midterm goal of scaling to support 500,000 connected merchants with one device each. Moreover, moving to ScyllaDB reduced the database cost of the Connection-Polling service alone from $54,000 to $6,000.
Freedom from Cloud Vendor Lock-In: GE Healthcare
GE Healthcare’s Edison AI Workbench was originally deployed on the AWS cloud. But when the company took it to its research customers, they said, “This is great. We really like the features and we want these tools, but can we have this Workbench on premises?”
Since DynamoDB was a core component of the solution, the company had two choices: rewrite the Edison Workbench to run against a different data store or find a DynamoDB-compatible database that could be deployed on premises.
The team recognized the challenges involved with the former option. First, porting a cloud asset to run on premises is a nontrivial activity, involving specific skill sets and time-to-market considerations.
Additionally, the team would no longer be able to follow the continuous delivery practices associated with cloud applications. Instead, they would need to plan for periodic releases as ISO disk images while keeping the codebases synchronized between the cloud and on-premises versions. Maintaining a consistent database layer across cloud and on-premises releases was thus vital to the team’s long-term success.
So, they opted for the latter option and moved to ScyllaDB. “Without changing much, and while keeping the interfaces the same, we migrated the Workbench from AWS cloud to an on-premises solution,” explained Sandeep Lakshmipathy, director of engineering at GE Healthcare. This newfound flexibility enabled them to rapidly address the requested use case: having the Edison Workbench run in hospitals’ networks.
How ScyllaDB and DynamoDB Compare on Price and Performance
To help teams better assess whether a move makes sense, ScyllaDB recently completed a detailed price-performance benchmark analyzing:
- How cost compares across both DynamoDB pricing models under various workload conditions, distributions and read:write ratios.
- How latency compares across a variety of workload conditions.
You can read the detailed findings in this comparison report, but here’s the bottom line: ScyllaDB costs are significantly lower in all but one scenario. In realistic workloads, costs would be five to 40 times lower with up to four times better P99 latency.
Here is a consolidated look at how DynamoDB and ScyllaDB compare on cost and performance for just one of the many workloads we tested (based on prices published in Q1 2023). DynamoDB shines with a uniform distribution and struggles with the others; we chose to highlight a case where it shines.
Additionally, here are some results for the more realistic hotspot distribution:
Again, we encourage you to read the complete benchmark report for details on what the tests involved and the results across a variety of workload configurations.
Is ScyllaDB Right for Your Team’s Use Case?
Curious if ScyllaDB is right for your use case? Sign up for a free technical consultation with one of our architects to talk more about your use case, SLAs, technical requirements and what you’re hoping to optimize. We’ll let you know if ScyllaDB is a good fit and, if so, what a migration might involve in terms of application changes, data modeling, infrastructure and so on.
How much could you save by replacing DynamoDB with an API-compatible alternative that offers better performance at significantly lower costs and allows you to run on any cloud or on premises? For a quick cost comparison, take a look at our pricing page and cost calculator. Describe your workload, and we’ll show you estimates for ScyllaDB as well as other NoSQL database-as-a-service (DBaaS) options.