
Improving Price Performance Lowers Infrastructure Costs

Aerospike observed 63% better price performance using an AWS Graviton2 cluster when compared with an equivalent x86 cluster. Check the details here.
Nov 29th, 2022 8:20am

In today's uncertain times, lowering infrastructure costs while maintaining predictable performance at virtually unlimited scale is paramount. Recently, Aerospike and Amazon Web Services (AWS) conducted a benchmark study to investigate price-performance improvements for real-time application development.

The benchmark revealed compelling price-performance results for Aerospike 6 running on Amazon AWS Graviton2 processors. Aerospike observed 63% better price performance using a Graviton2 cluster when compared with an equivalent x86 cluster. Higher transaction throughput rates and lower annual cluster costs drove this price-performance advantage.

Figure 1: Price performance (estimated annual cluster cost / transactions processed per second) was 63% better on the Graviton2 cluster.

The performance test compared Graviton2 and x86 clusters with the same total number of virtual CPUs, each running a read-only, real-time workload on Aerospike that is typical of an Ad Tech customer. Both clusters completed 99% of all transactions in less than one millisecond. The Graviton2 cluster processed 25 million transactions per second (TPS), while the x86 cluster processed 21.1 million, an 18% higher throughput rate for the Graviton2 cluster. The annual cost for the Graviton2 cluster was also 27% less, as shown in Figure 1.

Figure 2: Transaction throughput (TPS rate) was 18% better on the Graviton2 cluster.
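The headline figures can be reproduced with a few lines of arithmetic. This sketch assumes only the throughput numbers and estimated annual cluster costs quoted in this article (the cost figures appear later in the piece):

```python
# Throughput and estimated annual cost figures quoted in the article.
graviton_tps, x86_tps = 25_000_000, 21_100_000
graviton_cost, x86_cost = 72_659, 100_074   # estimated annual cluster cost, USD

throughput_gain = graviton_tps / x86_tps - 1   # ~0.18 -> 18% higher TPS
cost_saving = 1 - graviton_cost / x86_cost     # ~0.27 -> 27% lower annual cost

# Price performance = annual cost per unit of throughput (lower is better).
pp_graviton = graviton_cost / graviton_tps
pp_x86 = x86_cost / x86_tps
pp_advantage = pp_x86 / pp_graviton - 1        # ~0.63 -> 63% better price performance
```

Note that the 63% figure is larger than either the 18% throughput gain or the 27% cost saving alone, because both advantages compound in the cost-per-transaction ratio.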

The technology advances pioneered by Aerospike and AWS can also help firms cut their carbon footprints to support green goals. For example, AWS estimated that the Graviton environment used in this performance test cut carbon emissions by 49% relative to the x86 environment while still meeting the benchmark's aggressive transaction throughput and data-access latency targets.

Aerospike on AWS Graviton2

To demonstrate the cost efficiency of operational workloads on AWS Graviton2 processors, Aerospike benchmarked its server platform on two EC2 topologies: one using Graviton2 processors and another using x86 processors. The goal was to explore, through a side-by-side comparison, how Aerospike's use of Graviton CPUs translates into tangible price-performance benefits. Historically, Aerospike has been known for its ability to extract considerable efficiencies from the processor, storage, and networking improvements of its hardware partners, and the results here were revealing.

Workload and Instances

Aerospike, in conjunction with AWS, ran a CPU-intensive workload with 300 asbench processes connecting to Aerospike Database version 6.2. Each database contained 2 billion unique records. The benchmark clients methodically ramped up the number of transactions executed on the clusters, reaching more than 20 million read-only transactions per second. As the TPS increased, each cluster was monitored to determine the point at which the 99th-percentile latency for those transactions exceeded 1 ms. This recorded "TPS under the 1 ms SLA" was used to compare the two clusters.
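The "TPS under the 1 ms SLA" methodology can be sketched as follows. This is my reconstruction of the measurement logic, not Aerospike's actual harness: `p99`, `tps_under_sla`, and `fake_measure` are hypothetical names, and the toy latency model exists only so the example runs end to end.

```python
# Sketch of the "TPS under the 1 ms SLA" methodology: ramp the offered load
# and record the highest rate at which p99 latency stays within the SLA.

def p99(latencies_ms):
    """99th-percentile latency via nearest-rank on a sorted sample."""
    ordered = sorted(latencies_ms)
    rank = max(0, int(len(ordered) * 0.99) - 1)
    return ordered[rank]

def tps_under_sla(measure, rates, sla_ms=1.0):
    """Return the highest rate whose p99 latency is within the SLA.

    `measure(rate)` stands in for running the benchmark clients at that
    rate and collecting per-transaction latencies in milliseconds.
    """
    best = 0
    for rate in rates:                  # methodically ramp up the load
        if p99(measure(rate)) <= sla_ms:
            best = rate
        else:
            break                       # SLA breached; stop ramping
    return best

def fake_measure(rate):
    """Toy latency model where latency grows with load (illustrative only)."""
    return [0.2 + rate / 40_000_000 + 0.00001 * i for i in range(100)]
```

In the real test, the measurement step would run the asbench clients against the cluster at each target rate instead of the toy model.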

Each cluster ran in a single AWS availability zone within US East, and both clusters contained the same total number of vCPUs. The Graviton2 cluster consisted of three c6gn.16xlarge nodes, each with 64 vCPUs, 128 GiB of memory, and 100 Gbps of network bandwidth. The x86 cluster consisted of two m5n.24xlarge nodes, each with 96 vCPUs, 384 GiB of memory, and 100 Gbps of network bandwidth. Note that the tests ran entirely in memory, so the x86 cluster's extra unused memory for the same data size did not affect processing; only the vCPUs (which were held constant) did.

On each cluster, Aerospike was configured for in-memory storage (i.e., to retain user data and index data in memory). This is one of several Aerospike deployment options and the one best suited to create CPU-heavy workloads, as this test focused on the CPU processing capabilities.
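For reference, an in-memory Aerospike deployment of this kind is typically configured with a namespace stanza along these lines. This is an illustrative aerospike.conf fragment; the namespace name and memory size are my assumptions, not values published with the benchmark:

```
namespace benchmark {
    replication-factor 2        # two copies of each record across the cluster
    memory-size 96G             # index and user data held entirely in DRAM
    storage-engine memory       # in-memory storage, no persistence layer
}
```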

Each Aerospike system used a replication factor of 2, which provides high data availability in most failure scenarios and is often used in production Aerospike environments; because the workload was read-only, however, the replication factor did not influence the TPS.


Using Amazon’s online pricing calculator and other publicly available data, Aerospike and AWS sought a reasonable cost comparison of the two environments. To do so, we considered the hourly cost per node used in each cluster based on prevailing US East rates using the 1-year upfront Linux on-demand pricing structure. For each Graviton2 node, this was $2.7648 per hour. For each x86 node, this was $5.7120 per hour. Assuming round-the-clock daily usage for each cluster resulted in an estimated annual cost of $72,659 for the Graviton2 cluster and $100,074 for the x86 cluster.
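The annual cost arithmetic described above is straightforward to check. A minimal sketch, assuming the quoted hourly rates and round-the-clock usage:

```python
# Annual cost estimate: hourly node rate x node count x hours per year.
HOURS_PER_YEAR = 24 * 365  # 8,760 hours of round-the-clock usage

def annual_cluster_cost(hourly_rate_per_node, node_count):
    """Estimated annual cost for a cluster of identically priced nodes."""
    return hourly_rate_per_node * node_count * HOURS_PER_YEAR

graviton_annual = annual_cluster_cost(2.7648, 3)  # ~ $72,659 (3 Graviton2 nodes)
x86_annual = annual_cluster_cost(5.7120, 2)       # ~ $100,074 (2 x86 nodes)
```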

Aerospike and AWS

Aerospike partners with key cloud and hardware vendors to ensure its platform can leverage new technologies as they emerge. With AWS, this includes exploiting Graviton processors.

Based on the Arm architecture, AWS Graviton processors feature custom silicon and 64-bit Neoverse cores, delivering lower power consumption, stronger price performance, lower latencies, and better scalability than alternatives. Well suited for high-performance computing, machine learning, in-memory caches, and other demanding applications, Graviton is a natural fit for Aerospike. By running Aerospike on AWS Graviton, customers can achieve exceptional price performance for real-time workloads, maximizing cost efficiency without compromising aggressive SLAs or inhibiting future business growth.

Although not showcased in this benchmark, Aerospike also leverages Amazon’s latest Nitro SSD technology (im4gn and is4gen), which can deliver up to 60% lower latencies and up to 75% lower latency variability than AWS i3 and i3en instances. For applications better suited to an all-SSD or hybrid configuration of Aerospike (with indexes in DRAM and user data on SSDs), Aerospike’s ability to efficiently use Nitro SSD technology provides added performance and cost benefits. For more details, see this 2021 presentation from AWS and Aerospike.

Finally, energy-conscious firms may find that running Aerospike on AWS can cut carbon emissions significantly compared with other alternatives. Indeed, a recent IEEE paper that explored infrastructure and energy costs of Aerospike and Cassandra deployed on AWS calculated that Aerospike’s software efficiencies can lower costs and carbon emissions by 80% (see Table 18 in that paper). Furthermore, simply moving from an on-premises infrastructure to a cloud infrastructure can result in substantial energy savings. By one estimation, an AWS infrastructure is 3.6x more energy efficient than the median of US enterprise data centers.

Where Other Approaches Fall Short

Cost-effective operational data management places incredible demands on database infrastructures and IT organizations. Performance, operational ease, elasticity, availability, data consistency, enterprise integration, and cost efficiency are common — and vexing — pressure points.

Many open source and commercial solutions simply can't manage high-volume mixed workloads without critical shortcomings in one or more essential areas. For example, relational DBMSs often integrate well with other software and provide strong data consistency guarantees but can't deliver ultra-fast performance at scale with low TCO. Certain open source and commercial NoSQL systems offer faster, less expensive alternatives to relational DBMSs but suffer from operational complexity, unpredictable performance, and sprawling server footprints as databases grow. Traditional caching systems might offer initial relief but often exhibit erratic latencies at terabyte scale (and beyond), introduce additional application and operational complexity, and drive up TCO.


The latest benchmark from Aerospike and AWS sets a new bar for price performance for real-time workloads. Aerospike on AWS Graviton2 processors delivered 63% better price performance than the x86 environment while processing 21 to 25 million read transactions per second (TPS), with 99% of those transactions completing in less than 1 millisecond. Furthermore, running the workload on the Graviton cluster rather than the x86 cluster cut estimated carbon emissions by 49%. I invite you to learn more in my colleague's blog here.
