
3 Tips for Benchmarking and Provisioning Google Cloud 

2 Feb 2021 7:33am, by Jessica Edwards
Jessica joined Cockroach Labs to spread the word about a database that will be changing the game for tech companies around the world. She is obsessed with crafting technical stories and has led similar marketing efforts at AppNexus and MediaMath. When Jessica's not at work, she runs along the East River in New York, relishes a good adventure novel and catches live music wherever she can find it.

Engineers on cloud performance teams can spend their entire workday tuning and optimizing cloud configurations. But if that’s not your job description, you’ll need to know best practices for benchmarking and optimization. This article digs into some key recommendations, including tips for getting the most out of Google Cloud Platform (GCP).

In 2018, while executing regular testing in preparation for an upcoming version release, we made a curious discovery: the throughput for Amazon Web Services test clusters was 40% higher than the throughput for GCP test clusters.

This finding led to further internal investigation into cloud performance. We then formalized our testing and results, and shared those findings in what became the 2018 Cloud Report.

Each year, we revisit and fine-tune the benchmarks, selecting tests that are open source and reflective of real-world applications and workloads. Now in its third year, the Cloud Report from Cockroach Labs continues to evolve with input from the open source community, the CockroachDB community and the cloud providers.

One conversation in particular with the GCP team led to some revealing insights into how to benchmark, and ultimately, how to configure GCP optimally for your workloads. We’ve gathered some of those insights below, with the full conversation available here.

#1: Use a Performance Benchmarking Tool on Your Cloud

Performance benchmarking tools like the open source PerfKit Benchmarker (which is maintained by the Google Cloud team) allow anyone to measure the end-to-end time to provision resources in the cloud. PerfKit reports on standard peak performance metrics, including latency, throughput, time-to-complete, and Input/Output Operations Per Second (IOPS).

A benchmarking tool should provide an understanding of what’s happening in an environment, including latency metrics between components in different regions. To that end, PerfKit offers a publicly available dashboard showing cross-region network latency results between all Google Cloud regions. Google’s own all-region to all-region round-trip latency tests use n1-standard-2 machine types and internal IP addresses, and anyone can reproduce the results by running a snippet of code available on the PerfKit site.
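As a rough sketch of what reproducing such a test looks like, the snippet below prints the commands to clone PerfKit Benchmarker and run a cross-zone ping benchmark on n1-standard-2 machines over internal IPs. The flag names follow the PerfKit Benchmarker README as best we recall them; the zones are placeholders, and you should verify the flags against the project's documentation before running anything.

```shell
# Hypothetical invocation of PerfKit Benchmarker (pkb.py) for a GCP
# cross-zone latency test. Flags and zones are illustrative, not definitive.
PKB_CMD='./pkb.py --cloud=GCP --benchmarks=ping --machine_type=n1-standard-2 --zones=us-central1-a'

# Print the steps rather than executing them, since they require GCP credentials.
cat <<EOF
git clone https://github.com/GoogleCloudPlatform/PerfKitBenchmarker.git
cd PerfKitBenchmarker
pip install -r requirements.txt
$PKB_CMD
EOF
```

Swapping `--benchmarks=ping` for another benchmark name (or pointing `--zones` at different regions) is how you would extend this to the all-region latency matrix PerfKit's dashboard shows.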

In addition to tools like PerfKit, there are a number of resources to help GCP users get the best performance out of their product. The blog post, “Performance art: Making cloud network performance benchmarking faster and easier,” and a follow-up report on measuring networking latency in the cloud, can help you get started with Google Cloud benchmarking and data collection.

#2: Read Benchmarking Research

Cockroach Labs set out to better understand customer needs by conducting original research. This process first involved gauging how well CockroachDB performed while running in cloud environments from different providers. When the team discovered a significant difference in performance between AWS and GCP, it published its inaugural cloud report in 2018 to help customers make informed decisions when choosing a cloud provider. The 2021 version of the Cockroach Labs Cloud Report goes even further, using a series of microbenchmarks — CPU, network and storage — and a typical customer workload, a derivative of TPC-C, to compare the performance of AWS, Azure and GCP.

The Cloud Report benchmarks cloud providers against transactional (OLTP) workloads. As the researchers noted in the report and in the reproduction steps, all of the benchmarks were selected with transactional workloads in mind. A machine learning-focused workload may be better served by using a different set of benchmarks to compare cloud performance.

#3: Evaluate Workloads Before Configuring GCP

One of the most common questions when setting up a cloud deployment is: should I use the provider’s default configurations?

When Cockroach Labs set out to benchmark AWS, Azure and GCP, it needed to have enough constant factors between the three providers to ensure accurate results. The team accomplished this by using each provider’s defaults, so that misconfigurations or configuration bias wouldn’t affect the testing outcomes.

For users, default configurations may be ideal for some workloads. Before altering the default machine configurations, consider the types of machines (family, series, machine type, etc.) that are being offered — for example, N2 with Intel versus N2D with AMD — and evaluate whether one may be better suited for your workload. One of the discoveries in the 2021 Cloud Report was that machines running Intel processors performed exceptionally well on single-core tests, but machines running Amazon’s Graviton2 and AMD performed better on the multicore tests.
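One practical way to act on this advice is to provision one Intel-based N2 VM and one AMD-based N2D VM with otherwise identical specs, run your workload on each, and compare. The sketch below builds the `gcloud` commands for such an A/B test; the instance names, machine sizes and zone are placeholders, and the commands are printed rather than executed since they require GCP credentials.

```shell
# Hypothetical A/B comparison of Intel (N2) vs. AMD (N2D) machine types.
# Everything except the machine type is held constant; names and zone
# are placeholders.
CMDS=""
for MT in n2-standard-8 n2d-standard-8; do
  CMDS="$CMDS
gcloud compute instances create bench-$MT --machine-type=$MT --zone=us-central1-a"
done

# Print the commands for review instead of running them.
printf '%s\n' "$CMDS"
```

After both VMs are up, run the same workload on each and compare throughput and latency before committing to one machine family.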

Learn More About Provisioning and Benchmarking

Optimizing and benchmarking your cloud infrastructure involves a lot of nuance and fine-tuning. Our suggestions above offer a starting place. For more advice on benchmarking and provisioning GCP, listen to the full conversation between GCP and Cockroach Labs.


