
How Good Is Your Code Review Process?

You can build a code review process that drives efficiency rather than degrades it, but you’ll need to be intentional.
Aug 17th, 2022 10:00am by Romain Dupas
Feature image via Shutterstock.

Romain Dupas
Romain Dupas is director of software engineering at Code Climate, where he leads a team of passionate engineers committed to helping other engineering teams excel with data-driven insights. Prior to joining Code Climate, Romain served as head of engineering at NASDAQ Private Market for six years. Romain was born and raised in France, where he earned a master’s degree in software engineering.

If your team thinks code review is a waste of time, they’re right. It’s a self-fulfilling prophecy: Those who don’t value code review delay doing it, and their input is often trivial and stylistic in nature. If that becomes a pattern, code review can make your software development process worse, slowing down your software development lead time without improving the quality of your code. 

You can build a code review process that drives efficiency rather than degrades it, but you’ll need to be intentional. First, the organization must be aligned on code review objectives. Second, team leaders and division managers should regularly monitor code review metrics to make sure objectives are met and the process is healthy. Your objectives, and the metrics you use to gauge your success, will vary from project to project. Let’s break it down.  

Alignment: Get on the Same Page

An effective code review process starts with alignment on its objective. As a team, it’s important to determine which outcomes your review process is optimizing for. Is it catching bugs and defects, improving the maintainability of the codebase or increasing stylistic consistency? Maybe it’s less about the code and more about increasing knowledge sharing throughout the team? 

Determining priorities helps your team focus on what kind of feedback to leave or look for. Reviews that are intended to familiarize the reviewer with a particular portion of the codebase will look different from reviews that are guiding a new team member toward better overall coding practices. Once you know what an effective code review means for your team, you can start adjusting your code review activities to achieve those goals.

Reporting: Key Metrics

The metrics that indicate a healthy code review process will differ depending on your goals, but with that caveat, there are a few trends every team lead should monitor. Regularly reporting Time to First Review, Review Coverage, Review Influence and Review Cycles will allow you to quickly diagnose and address problems with your code review process.

1. Speed: Time to First Review

“Time to First Review” is the amount of time, on average, a submitter is left waiting for feedback. When this metric is high, submitters either waste time waiting or are compelled to open up multiple tracks of work. This increases the team’s PR inventory, which in turn hurts the team’s time to market.

When a team is experiencing an elevated Time to First Review, it is crucial to align on code review expectations and on how reviews can be better integrated into day-to-day processes. Teams can start by setting a benchmark: Define what a “low” Time to First Review means across the industry and in the context of your organization. From there, team leaders can dive into individual and team metrics to understand precisely why Time to First Review is high or inconsistent. Three data points that help diagnose this slowdown are Pull Request Size, Workload Balance and Review Speed.

You want to lower this metric as much as you can without it coming at a cost to the focus and productivity of the reviewers. If Time to First Review is a problem, reprioritization or retooling might lead to the desired outcome.
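As a sketch of how this metric might be computed, the snippet below averages the gap between each PR opening and its first review activity. The `(opened_at, first_review_at)` pair shape is a hypothetical one; in practice the timestamps would come from your Git host’s API or an engineering-intelligence tool.

```python
from datetime import datetime, timedelta

def time_to_first_review(prs):
    """Average gap between a PR opening and its first review activity.

    `prs` is a list of (opened_at, first_review_at) datetime pairs,
    a hypothetical shape used here purely for illustration.
    """
    gaps = [first - opened for opened, first in prs]
    return sum(gaps, timedelta()) / len(gaps)

prs = [
    (datetime(2022, 8, 1, 9, 0), datetime(2022, 8, 1, 13, 0)),   # 4 hours
    (datetime(2022, 8, 2, 10, 0), datetime(2022, 8, 3, 10, 0)),  # 24 hours
]
print(time_to_first_review(prs))  # 14:00:00 (i.e., a 14-hour average)
```

A real pipeline would also segment this average by reviewer and by PR size, since those are the dimensions the article suggests diagnosing.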

2. Finding the Sweet Spot: Review Coverage

A healthy code review process is thorough, yet efficient. If your review goal is to improve code quality and identify bugs, it’s important to look at the number of files with comments in the review compared to the number of files changed in the pull request, aka “Review Coverage.” The quantity of changed files that receive at least one comment is a proxy for review thoroughness. There’s no optimal number for Review Coverage — the right number is different for every team — but you want to make sure it’s in line with your expectations. Typically, you want a sweet spot: not so low that it’s clear reviewers are simply rubber-stamping changes and not so high that reviewers nitpick and slow down the process. 
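The ratio described above can be sketched in a few lines. The file lists here are invented for illustration; real inputs would be the changed-file and comment-location data from a pull request.

```python
def review_coverage(changed_files, commented_files):
    """Share of changed files that received at least one review comment."""
    touched = set(changed_files) & set(commented_files)
    return len(touched) / len(set(changed_files))

# Hypothetical PR: 2 of 4 changed files drew at least one comment.
cov = review_coverage(
    ["app.py", "models.py", "views.py", "tests.py"],
    ["models.py", "views.py"],
)
print(f"{cov:.0%}")  # 50%
```

Whether 50% is healthy depends, as noted, on the team’s own expectations and goals.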

3. Measuring Impact: Review Influence

Low Review Coverage isn’t necessarily a bad thing: Maybe your code is just that good. If there is an issue, it can usually be addressed by placing greater team-wide emphasis on review or looking at how reviews are distributed to ensure no one’s getting overloaded. High Review Coverage can be a bit trickier: How can you tell the difference between thoroughness and nitpicking? When you see higher than average Review Coverage, it’s worth looking beyond the number of comments to understand what is actually being said: Is all this feedback actionable? To do this, you can monitor the percentage of comments that result in reply comments or subsequent changes in the code — at Code Climate, we call this metric “Review Influence.” A high number of comments with low influence can indicate that reviewers are providing feedback that isn’t being perceived as actionable or as valuable to the submitter. This may call for a realignment of how code reviews should be conducted. 
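One way to approximate Review Influence is to count the comments that drew a reply or a follow-up code change. The dict shape below (`got_reply`, `led_to_change`) is an assumed one for the sketch; deriving those flags from review threads and subsequent commits is the harder part in practice.

```python
def review_influence(comments):
    """Fraction of review comments that drew a reply or a follow-up change.

    Each comment is a dict with two boolean flags, a hypothetical
    shape standing in for real review-thread and commit data.
    """
    if not comments:
        return 0.0
    influential = sum(
        1 for c in comments if c["got_reply"] or c["led_to_change"]
    )
    return influential / len(comments)

comments = [
    {"got_reply": True,  "led_to_change": False},
    {"got_reply": False, "led_to_change": True},
    {"got_reply": False, "led_to_change": False},
    {"got_reply": False, "led_to_change": False},
]
print(review_influence(comments))  # 0.5
```

A high comment count paired with a low value here is the “lots of feedback, little of it actionable” signal the article describes.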

4. Going in Circles: Review Cycles

Pull requests bouncing back and forth between the author and a reviewer can be a significant drain on resources. A cycle begins with review activity (a comment, a request for changes or an approval) from someone other than the PR author, and ends when the author pushes changes in response. Each time a pull request is passed back and forth, developers are required to context switch and spend more time on one particular line of work. If this happens frequently, the review process can become a bottleneck to shipping and a source of demotivation for engineers who are trying to complete their tracks of work.

The average number of times a PR is passed back and forth — the number of “Review Cycles” — can change for a lot of reasons. Onboarding new developers can cause it to spike, for example. When you see a spike in Review Cycles among an experienced team, however, it’s frequently a sign of misalignment. That could take a lot of forms, including: 

  • Differing ideas about what “done” means.
  • Misalignment around what kinds of changes are expected to come out of a review process.
  • Conflicting opinions about how a solution should be implemented.

If the number of Review Cycles is high for a particular submitter, it might mean that they’re struggling with the codebase or dealing with unclear requirements.
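Counting cycles amounts to counting hand-offs: reviewer activity followed by the author pushing again. The event-tuple shape below is hypothetical; real events would come from a PR’s timeline.

```python
def review_cycles(events, author):
    """Count review cycles on one PR.

    `events` is an ordered list of (actor, kind) tuples, an assumed
    shape for illustration. A cycle is any run of reviewer activity
    followed by the author pushing new changes.
    """
    cycles = 0
    reviewer_acted = False
    for actor, kind in events:
        if actor != author:
            reviewer_acted = True      # comment, change request or approval
        elif kind == "push" and reviewer_acted:
            cycles += 1                # author responded: one full cycle
            reviewer_acted = False
    return cycles

events = [
    ("alice", "push"), ("bob", "request_changes"),
    ("alice", "push"), ("bob", "comment"),
    ("alice", "push"), ("bob", "approve"),
]
print(review_cycles(events, "alice"))  # 2
```

Averaging this count across a team’s PRs gives the Review Cycles trend to watch for spikes.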

Code review is one of the most difficult processes to get right on a software development team. A different ideal balance of thoroughness and speed exists on every team, and that balance might even change for a given team as priorities shift.

Team leaders should monitor their code review on an ongoing basis, but that is just the start. Leaders also need to engage in frequent conversations with the team. These conversations are an opportunity to gather the insight that is necessary to give context to code review metrics and make them meaningful. Only then can the team work together to solve the issue, whether it’s feedback-related or something else. 

In addition, any time there’s a major change to the process or team structure, team leaders must take a hard look and re-evaluate their process — reaffirming goals and expectations. When the team is aligned on the goals and process for review, and the team leader takes proactive steps to ensure the team can keep their code review commitments, it’s possible to build a process that is thorough, helpful and efficient. The effects will be well worth it.
