Kafka Benchmarking on AWS Graviton2, Graviton3 and AMD

As Kafka scales, cost is a concern, so understanding price/performance value of different platforms is key. Here, two engineers from Aiven benchmark Kafka against the latest x86 and ARM processors.
May 17th, 2023 10:00am

Kafka is popular, there’s no question. Optimized for throughput, it can also scale to very large workloads. As Kafka scales, cost becomes a concern, so price/performance is key for most organizations running Kafka. Price/performance is always a critical measure when weighing infrastructure costs against the desired throughput and performance of the system.

With the increased deployment and usage of ARM architectures, most notably Amazon Web Services‘ Graviton2 and Graviton3, as well as Ampere, there is a need to understand their price/performance. In this analysis, we focus on benchmarking these platforms under various conditions.

There have been some great articles on the benefits of migrating Kafka onto Graviton processors, though most focus on specific types of workloads rather than generic benchmarking. The most in-depth is AWS Graviton2 and gp3 Support for Apache Kafka – DZone, and Honeycomb published good benchmarks from their migration in Tuning Apache Kafka and Confluent Platform for Graviton2 using Amazon Corretto | AWS Developer Tools Blog. None of these look at Graviton3, nor do any of them perform controlled testing, so we figured we would run some experiments.

Benchmark Setup

The setup we selected is an Apache Kafka 3.3 cluster of 3 nodes, each running the same kernel version (6.0.12). The cluster hosts 1 topic with 2000 partitions. Retention is set to 10GB per partition with a retention time of 10 minutes, and every broker hosts the same number of partitions.

In front of this cluster, we will run a scenario of 8 producers and 8 consumers sending an unlimited number of 500-byte messages.

Every benchmark runs for 5 minutes of warm-up followed by 20 minutes of sustained load. Note that these benchmarks are measured at peak performance; however, all instances chosen here are burstable, meaning peak performance can only be maintained for less than an hour.

Every producer, and likewise every consumer, runs on its own client instance; each of these client instances is an m6a.2xlarge node.
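To put the setup parameters in context, here is the back-of-envelope arithmetic they imply. These figures are derived only from the numbers stated above; the even-spread estimate ignores replication, which the article does not specify.

```python
# Storage and load envelope implied by the benchmark setup.
PARTITIONS = 2000
RETENTION_GB_PER_PARTITION = 10
BROKERS = 3
MESSAGE_BYTES = 500

# Maximum data retained across the topic if every partition fills up.
total_retention_gb = PARTITIONS * RETENTION_GB_PER_PARTITION  # 20,000 GB = 20 TB

# With partitions spread evenly over 3 brokers (ignoring replication,
# which the article does not state), each broker holds about a third.
per_broker_gb = total_retention_gb / BROKERS

# How many 500-byte messages make up 1 MB/s of producer traffic.
messages_per_mb = 1_000_000 // MESSAGE_BYTES  # 2,000 messages per MB

print(total_retention_gb, round(per_broker_gb), messages_per_mb)
```

At 2,000 messages per MB, saturating even a few hundred MB/sec of broker bandwidth means hundreds of thousands of messages per second, which is why the client fleet is sized at 8 producers and 8 consumers.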

The nodes picked to compare the three different architectures will be:

  • Intel: m5.2xlarge
    • Specs: vCPUs: 8, RAM: 32 GiB, Physical processor: Skylake 8000 or Cascade 8000 series
    • On-demand price: $.38 per hour
  • Intel: m6i.2xlarge
    • Specs: vCPUs: 8, RAM: 32 GiB, Physical processor: Intel Ice Lake
    • On-demand price: $.38 per hour
  • AMD: m6a.2xlarge
    • Specs: vCPUs: 8, RAM: 32 GiB, Physical processor: AMD EPYC 7R13
    • On-demand price: $.35 per hour
  • AMD: m5a.2xlarge
    • Specs: vCPUs: 8, RAM: 32 GiB, Physical processor: AMD EPYC 7000 series
    • On-demand price: $.34 per hour
  • Graviton2: m6g.2xlarge
    • Specs: vCPUs: 8, RAM: 32 GiB, Physical processor: AWS Graviton2
    • On-demand price: $.31 per hour
  • Graviton3: m7g.2xlarge
    • Specs: vCPUs: 8, RAM: 32 GiB, Physical processor: AWS Graviton3
    • On-demand price: $.33 per hour

The disk setup was made up of 3 EBS volumes with enough extra I/O to not be throttled and allow for maximum throughput.


One of the most interesting things we discovered was the difference in stability of the architectures under heavy load. On the x86 architectures (Intel and AMD), IOWait times climbed until the cluster became unresponsive: the CPU was busy handling IO rather than serving requests properly, which is why some metrics for this scenario appear much lower than others.

This is a direct result of the EBS limits of these instances: once EBS quotas were reached, IO throttling had a very high impact on the machine’s capacity to handle IO requests, increasing IOWait times. We also found that AMD instances with EBS capacity similar to the Intel instances (m6a vs. m5) were less resilient to the IO throttling, and as a result more likely to destabilize the cluster once EBS quotas were fully consumed.

Note that we ran the benchmark 3 times for each architecture, and the failures on AMD architecture instances were consistent.

Requests Per Second, Messages In and Client Latency

Too many of our Kafka customers focus on OS-level metrics, which are good to keep an eye on but not indicative of the application’s actual performance. We will get to those soon, but the most important metrics for benchmarking are requests per second and the number of messages being ingested from producers. We also track the latency induced on the client side from when a message is sent.

One should expect the same number of requests on all tests; a lower number of requests per second indicates induced latency perceived on the producers’ and consumers’ side.

Requests Per Second and Messages In

Some clarification on the metrics measured below. “Messages in” is the rate of incoming messages per topic. It does not count messages being consumed from Kafka, only messages received by Kafka.

RequestsPerSec, on the other hand, counts requests for both consumed and produced messages. In the Network Metrics graph for data leaving the machine (Network Tx), m7g sends far more data over the network than m6i.

In the end, this reflects that slightly more data was produced against m7g, but a lot more data was consumed from it.
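For readers who want to collect the same broker-side metrics, these come from Kafka’s standard JMX MBeans. The object names below are from Kafka’s built-in metric set; how they were scraped in this benchmark is not stated in the article, so the collection mechanism (jmxtrans, the Prometheus JMX exporter, etc.) is left open.

```python
# Kafka's standard JMX object names for the metrics discussed above.
METRICS = {
    # Rate of messages received by the broker (produced messages only).
    "messages_in": "kafka.server:type=BrokerTopicMetrics,name=MessagesInPerSec",
    # Request rates cover both directions: produce and consumer-fetch requests.
    "produce_requests": "kafka.network:type=RequestMetrics,name=RequestsPerSec,request=Produce",
    "fetch_requests": "kafka.network:type=RequestMetrics,name=RequestsPerSec,request=FetchConsumer",
}

for label, objectname in METRICS.items():
    print(f"{label}: {objectname}")
```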

Requests Per Second

Although requests per second showed measurable differences among the highest-performing instances, the data below indicates that throttling and queuing were occurring, as messages-per-second performance was significantly closer between the two.

The data here is interesting: although Graviton3 does very well performance-wise, the Intel Ice Lake architecture performs best under load, with Graviton3 close behind.

Messages In

Every five seconds, we captured the MessagesInPerSec counter and aggregated it over the three machines. This measures the messages coming into Kafka per second. Intel outperformed or essentially matched Graviton processors. AMD did well but didn’t match the throughput.


We observed less stability from the AMD instances compared to the other instance types. The two AMD machine types were unable to continuously handle the pressure put on the cluster, leading to these spiky graphs compared to m7g, m6g, m6i and m5. However, this isn’t the whole story, as you’ll see in the other metrics.

Producer and Consumer Latency

Looking at the latency is critical when measuring how the broker is performing. Here we have three charts breaking down the latency by 99th, 95th and 50th percentiles.

The first set of charts looks at the producer latency:


There is not a lot of variance between these processor architectures; all show quite low latency under load. This is not that interesting; however, the story soon changes.

The second set of charts shows the consumer latency:


These graphs show issues with consumer latency, strongly suggesting a bottleneck on m6a/m6g/m5a, which had much higher latency (here, the bottleneck was EBS bandwidth). Let’s dig into the metrics to determine which subsystems were showing the issue.

CPU Consumption

CPU consumption has been measured separately as 4 metrics:

  1. User Time
  2. System Time (differences were generally small, so we didn’t include the data in the article)
  3. IOWait Time
  4. Total CPU Time (User + System + IOWait + IRQ)
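These per-mode CPU times come from the kernel’s cumulative counters. The article does not say which collector was used; as a sketch of where the user/system/iowait/irq split originates on Linux, here is a minimal parser for the aggregate “cpu” line of /proc/stat (the sample values are illustrative, not from the benchmark).

```python
def cpu_times(stat_line: str) -> dict:
    """Parse the aggregate 'cpu' line from /proc/stat into per-mode jiffies."""
    fields = stat_line.split()
    names = ["user", "nice", "system", "idle", "iowait", "irq", "softirq", "steal"]
    return dict(zip(names, map(int, fields[1:1 + len(names)])))

# Illustrative sample line; on a live box, read the first line of /proc/stat.
sample = "cpu  74608 2520 24433 1117073 6176 4054 0 0 0 0"
times = cpu_times(sample)

# "Total CPU time" in the article's sense: user + system + iowait + irq.
busy = times["user"] + times["system"] + times["iowait"] + times["irq"]
print(times["iowait"], busy)
```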

User Time

Under higher load (8 producers/consumers), the ARM machines showed more stability and lower CPU consumption than both the Intel and AMD architectures. As mentioned above, Intel and AMD instances crashed multiple times during the last phase of the benchmark, leading to these numbers.

IOWait Time

The IOWait times are interesting: as we increase load, both the m6i and m7g instances show lower IOWait times because of their higher network bandwidth, so they are able to process more work than the other instances. Both architectures also have higher memory bandwidth, further reducing IOWait times. Even so, the disk subsystems in all instances were fully saturated relative to total EBS volume bandwidth, as we will show below.

Total CPU time

Total CPU time shows how busy the processor is, including the slices that we’ve reviewed above. This is an overview of the CPU usage including system, IRQ, user and IOWait times.

We see that due to the IOWait times, the processors with the most system time available performed best. Below you can see that overall CPU usage is mostly driven by the higher IOWait times, due to the IO subsystems being saturated.

Interestingly, m6i was the least overloaded processor; most of the others were running near maximum utilization.

Disk Performance Metrics

Here you can find the disk details from the tests showing the throughput of the disk subsystem. We were using the attached EBS volumes which were striped to allow for higher throughput.

Each instance’s disk subsystem was fully saturated. The EBS volume throughput below is the instance-level EBS maximum at burst capacity (which can only be sustained for under an hour), as highlighted below. For reference, sustained performance is listed in Amazon EBS–optimized instances – Amazon Elastic Compute Cloud.

  • m5 = 4750 Mbps / 594 MB/sec
  • m6i = 10000 Mbps / 1,250 MB/sec
  • m6a = 6666 Mbps / 833 MB/sec
  • m5a = 2880 Mbps / 360 MB/sec
  • m6g = 4750 Mbps / 594 MB/sec
  • m7g = 10000 Mbps / 1,250 MB/sec
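The list above pairs megabits per second with megabytes per second; the conversion is simply a divide-by-8, which is worth keeping in mind when comparing EBS limits against Kafka throughput numbers.

```python
# EBS burst throughput limits from the list above, in Mbps.
ebs_burst_mbps = {
    "m5": 4750, "m6i": 10000, "m6a": 6666,
    "m5a": 2880, "m6g": 4750, "m7g": 10000,
}

# Convert megabits/sec to megabytes/sec (8 bits per byte).
for instance, mbps in ebs_burst_mbps.items():
    print(f"{instance}: {mbps} Mbps = {mbps / 8:.0f} MB/sec")
```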

Network Metrics

Note: m7g, m6i and m6a were able to handle approximately the same throughput, whereas m6g, m5a and m5 were showing other bottlenecks.

Note: m7g had the highest outbound traffic, which is due to the higher network bandwidth available on this instance. For reference, the instance network bandwidth limits are:

  • m5 = 10000 Mbps / 1250 MB/sec
  • m6i = 12500 Mbps / 1563 MB/sec
  • m6a = 12500 Mbps / 1563 MB/sec
  • m5a = 10000 Mbps / 1250 MB/sec
  • m6g = 10000 Mbps / 1250 MB/sec
  • m7g = 15000 Mbps / 1875 MB/sec


Finally, wrapping up the benchmarks, we can see there is merit to the performance and stability of the ARM architecture chips versus x86 for Kafka. Consider the price differences between these instances:

  • Intel: m5.2xlarge: $.384 per hour
  • Intel: m6i.2xlarge: $.384 per hour
  • AMD: m6a.2xlarge: $.3456 per hour (11% less than Intel)
  • AMD: m5a.2xlarge: $.344 per hour (12% less than Intel)
  • Graviton2: m6g.2xlarge: $.308 per hour (25% less than Intel)
  • Graviton3: m7g.2xlarge: $.3264 per hour (18% less than Intel)

Another way to slice this data is by bringing in the cost per message and variance from the price/performance leader of the pack, which was the Graviton3 based m7g instances. (For data, see: Benchmarking Kafka : Price/Performance.)

  • Intel: m5.2xlarge: $.109 per 1 billion messages (21% higher cost vs Graviton3)
  • Intel: m6i.2xlarge: $.0962 per 1 billion messages (7% higher cost vs Graviton3)
  • AMD: m6a.2xlarge: $.103 per 1 billion messages (15% higher cost vs Graviton3)
  • AMD: m5a.2xlarge: $.127 per 1 billion messages (41% higher cost vs Graviton3)
  • Graviton2: m6g.2xlarge: $.102 per 1 billion messages (14% higher cost vs Graviton3)
  • Graviton3: m7g.2xlarge: $.090 per 1 billion messages (Price/performance leader)
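The percentage deltas in the two lists above can be reproduced with a few lines. Hourly prices are compared against the $0.384/hr Intel instances and cost per message against the m7g leader; the instance-name mapping follows the order of the instance list earlier in the article. One-point mismatches on a couple of entries come down to rounding of the published figures.

```python
INTEL_HOURLY = 0.384   # $/hr for both Intel instance types
M7G_PER_BILLION = 0.090  # $ per 1 billion messages, the leader

def pct_over(value: float, baseline: float) -> int:
    """How much more expensive `value` is than `baseline`, in whole percent."""
    return round((value / baseline - 1) * 100)

hourly = {"m6a": 0.3456, "m5a": 0.344, "m6g": 0.308, "m7g": 0.3264}
per_billion = {"m5": 0.109, "m6i": 0.0962, "m6a": 0.103, "m5a": 0.127, "m6g": 0.102}

for name, price in hourly.items():
    # "X% less than Intel" in the article = Intel costs X% more than this instance.
    print(f"{name}: Intel is {pct_over(INTEL_HOURLY, price)}% more per hour")

for name, cost in per_billion.items():
    print(f"{name}: {pct_over(cost, M7G_PER_BILLION)}% higher cost vs Graviton3")
```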

The Graviton3 is clearly the price-performance winner of the competition and seems like the right choice for Kafka workloads based on these benchmarks. However, there are several things to note.

In summary, as far as performance goes, Intel Ice Lake is the best performer; however, older Intel architectures are behind Graviton3. AMD architectures fall short but have a similar performance profile to the older Graviton2 CPU. Graviton CPUs are more stable than Intel or AMD architectures when running under heavy load.

We hope to do more benchmarking on other platforms. If you want to see OpenSearch, PostgreSQL or something else, please let us know: tweet @jkowall or @mweaeng, or book a call with us!
