
The Future Stack of Code Review

Like many engineering practices that have undergone automation, we believe code reviews can be optimized, playing to the disparate strengths of humans and machines.
Oct 4th, 2022 1:00pm
Feature image via Pixabay

In today’s traditional code review processes, humans are a big part of how code is reviewed and ultimately committed to production. A person reads and comments on code to ensure it undergoes a peer review and is aligned with engineering processes and best practices.

Code reviews became an integral part of software delivery when code quality became critical to the business. Peer code reviews were introduced to put additional “eyes” on code, gather input and suggested improvements, and catch bugs early, before the code ships to production.

However, like all aspects of software engineering, the code review has greatly evolved, and today’s review needs to verify many different aspects of the code before it ships to production, including:

  • Being aligned with clean code practices
  • Ensuring readability
  • Verifying quality and preventing potential bugs
  • Checking performance
  • Validating security

Each of these requires its own unique domain expertise, and not every engineer can be an expert in them all. The reality is that so much of our code is boilerplate and repetitive that we’re now witnessing tools like GitHub’s AI pair programmer, Copilot, generate it for us. While it’s debatable how Copilot’s models were trained and whose code they use, this is quite mind-boggling from a coding perspective and changes the game entirely. If a machine can write working code, then having a machine review it should be a no-brainer.

Optimize Processes for Machines and Humans

Like many other engineering disciplines and practices that have undergone automation, we believe future code reviews can also be optimized by creating hybrid processes that play to the disparate strengths of humans and machines.

In the same way that our CI/CD processes and architectures have benefited from tools to automate the many reviews and gates required before shipping and deploying our code to production, code review can also undergo a similar evolution and transformation.

The nature of humans vs. machines highlights another benefit: humans are subjective, while tools are objective. People make quite different decisions and judgment calls that may not be substantiated by data or common patterns, but are instead based on recent experience (recency or anchoring bias, anyone?).

The future of code review, like CI/CD and many other automated engineering processes, should strive to reverse the paradigm: roughly 80% performed by tools and machines, with 20% human validation and intervention.

The Ideal Code Review Process

Today’s code reviews are, for the most part, still manual: waiting on a human to pick up a pull request (PR), review it and then merge it into the codebase to be deployed with the next version. Given the growing number of tasks and disciplines developers are now responsible for, much of this process is outdated and can be optimized for velocity.

We believe that, like other engineering domains that have evolved and realized many benefits in the form of velocity and efficiency, our code review processes can also be reconsidered with the dawn of new and excellent machines and tools. If we were to think about the ideal (future) code review, it would look something like this:

This ideal code review starts manual, which is a good practice for any automation we’d like to apply: start manual, validate the process, and then automate. With each code review, the human reviewer identifies the comments on each PR that can be automated, then implements or trains a tool that can perform that specific validation or check in the next review.
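To make that concrete, here is a minimal sketch of encoding one recurring review comment as an automated check. The TODO-ticket convention (e.g. `JIRA-123`) and the function name are illustrative assumptions, not a specific tool:

```python
import re

# Hypothetical example: a reviewer keeps flagging TODO comments that lack a
# ticket reference. That recurring comment can be encoded once as a check
# that runs on every subsequent PR instead of waiting for a human to spot it.
TODO_WITHOUT_TICKET = re.compile(r"#\s*TODO(?!\s*\([A-Z]+-\d+\))")

def find_unticketed_todos(source: str) -> list[int]:
    """Return 1-based line numbers of TODO comments missing a ticket ID."""
    return [
        lineno
        for lineno, line in enumerate(source.splitlines(), start=1)
        if TODO_WITHOUT_TICKET.search(line)
    ]

snippet = "x = 1  # TODO fix rounding\ny = 2  # TODO(JIRA-123) handle overflow\n"
print(find_unticketed_todos(snippet))  # → [1]: line 1 flagged, line 2 passes
```

Each such check captures one piece of a reviewer’s judgment in a form that never tires and never forgets.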

In this way, human domain expertise is encapsulated in the automation we apply, rather than the automation consisting of simple machine-driven tests that aren’t based on human experience.

Of course, good tools like linters, scanners and more already exist to ensure that repetitive and common errors, misconfigurations and other poor coding practices never reach a PR or production code. A best practice is to run these as automated checks even before the code reaches review.
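A pre-review gate that composes such checks might be sketched as follows; the two checks here stand in for real linters and scanners, and the names and rules are illustrative:

```python
from typing import Callable

# Each check maps source text to a list of findings.
Check = Callable[[str], list[str]]

def trailing_whitespace(source: str) -> list[str]:
    return [f"line {i}: trailing whitespace"
            for i, line in enumerate(source.splitlines(), 1)
            if line != line.rstrip()]

def hardcoded_secret(source: str) -> list[str]:
    return [f"line {i}: possible hardcoded secret"
            for i, line in enumerate(source.splitlines(), 1)
            if "password=" in line.lower()]

def pre_review_gate(source: str, checks: list[Check]) -> list[str]:
    """Run every check; an empty result means the PR may proceed to review."""
    return [finding for check in checks for finding in check(source)]
```

Wiring a gate like this into CI means the human reviewer only ever sees code that has already cleared the repetitive checks.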

Minimizing Noise in Automated Code Reviews

We know what you’re thinking: More automation = more noise. Like all automation and machine-based tooling, this, too, could create too much noise for each PR, which would cause humans to skip them. So how can we prevent automated code reviews from creating too much noise for engineers already suffering from alert fatigue, all while maintaining velocity?

The key to reducing noise is remediation. Information is great, but it doesn’t help if you don’t actually know how to resolve the issue. This is where intelligent auto-remediation comes in (without compromising safety, of course).
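As a minimal sketch, assuming a simple trailing-whitespace rule as the finding (a real remediation engine would of course cover far more), the check can return the fix itself rather than just a report:

```python
def fix_trailing_whitespace(source: str) -> tuple[str, int]:
    """Return (fixed_source, lines_changed): the remedy, not just an alert.

    A bot can attach the fixed text to the PR as a suggested change, so the
    reviewer sees a resolution to accept rather than one more alert to triage.
    """
    fixed_lines, changed = [], 0
    for line in source.splitlines():
        stripped = line.rstrip()
        changed += stripped != line
        fixed_lines.append(stripped)
    return "\n".join(fixed_lines) + "\n", changed
```

Findings that arrive paired with their fix cost the human seconds instead of minutes, which is what keeps the noise tolerable.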

Here we would apply the knowledge previous reviewers embedded into the system through a more conversational interaction, such as a bot that serves as an automated code reviewer, without bypassing human intervention and the final push to production. This covers the “hard skills” perspective: quality, styling, bugs and misconfigurations.

But there is also the “soft skills” value that code review brings to engineering organizations, which can’t and shouldn’t be overlooked, and might even provide the greatest value of all. I recently asked on Twitter what the purpose of code reviews is, and was surprised by some of the feedback.

While we can always leverage tools, robots and machines for repetitive and simple tasks, these tools can’t provide true learning and mentoring, which, for some engineering managers, is one of the most important aspects of code review.

One way to use both the humans and the robots in the process, deriving the utmost value from each, is to leave the “nitpicking” and minor fixes, from typos to APIs that lack important headers, to be enforced by machines and automated scans.
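Taking the API-headers example, a machine-enforced nit might look like the following sketch; the required header set is an assumption for illustration, not a complete policy:

```python
# Headers every API response must carry to pass the automated nit check.
# This set is an illustrative assumption, not an exhaustive security policy.
REQUIRED_HEADERS = {"X-Content-Type-Options", "Strict-Transport-Security"}

def missing_security_headers(response_headers: dict[str, str]) -> set[str]:
    """Return which required headers the response failed to set."""
    return REQUIRED_HEADERS - set(response_headers)
```

A check like this never needs a human to leave the same “please add the security headers” comment twice.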

The human in the process can then provide input and insight aimed at greater improvement, upskilling and mentorship, based on human expertise: comments that help create more performant, elegant and clean code, rather than just confirming it works as it should and has no typos or misconfigurations.

When we add tools that give us greater context about the criticality of this piece of code to our systems, have the machines provide the repetitive fixes, and let the humans provide the added layer of insight to how these fixes can affect our systems as a whole, we can grow and learn and gain greater perspective from the review.

Imagine how much more useful this is to the engineer whose code is being reviewed and the greater trust they’ll have in the review process when they receive such a holistic overview of both the code and its context within the systems.

Jit and the Next-Gen of Code Review

One of the core things that security as code (SaC) ultimately enables is automation, and this is the engineering mentality that Jit is striving to take an active part in driving. By exposing security plans as code, the security gates that are now critical as part of the code review process are much easier to automate and ultimately resolve with minimal human intervention. This frees up humans to review the truly complex problems and leaves the repetitive manual checks to machines.

Another area that is gaining momentum is re-examining PR processes in general, and rethinking whether all PRs should receive the same human attention. Today there are many tools looking to streamline PR management, like LinearB’s gitStream or MergeQueue.

These tools enable you to create smart rules to skip human review on PRs that don’t require the same level of scrutiny — config and version updates, documentation edits and such. These should also become integral parts of the future code review.
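The logic of such a rule might be sketched like this; it is an illustration in the spirit of those tools, not the actual gitStream or MergeQueue syntax:

```python
import fnmatch

# Illustrative low-risk patterns; a real rule set would be tuned per repo.
SKIP_REVIEW_PATTERNS = ["*.md", "docs/*", "package-lock.json"]

def requires_human_review(changed_files: list[str]) -> bool:
    """A PR skips human review only if every changed file is low risk."""
    return not all(
        any(fnmatch.fnmatch(path, pattern) for pattern in SKIP_REVIEW_PATTERNS)
        for path in changed_files
    )
```

A docs-only PR sails through automatically, while any PR touching application code still queues for a human.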

Manual code review processes can also be automated to handle the parts machines are better equipped to fix than humans.

With the growing complexity of software delivery, let’s reserve our human time for the places where humans can provide the most value and harness machines for the repetitive tasks they excel at.

TNS owner Insight Partners is an investor in: Pragma, Jit.