DevOps Quality Metrics Ranked: From Overrated Distractions to Hidden Gems
Tricentis sponsored this post.
DevOps dramatically changed how we develop and deploy software. With continuous everything, knowing whether each new release will ultimately enhance or undermine the overall user experience is essential. Yet most of today’s go/no-go decisions still hinge on quality metrics designed for a different era.
Every other aspect of application delivery has been scrutinized and optimized for DevOps. Why not re-examine quality metrics as well?
Are “classic” metrics like the number of automated tests, test case coverage, and pass/fail rate still important in the context of DevOps, where the goal is immediate insight into whether a given release candidate has an acceptable level of risk? What other metrics can help us ensure that the steady stream of updates is truly fit for production?
To provide the DevOps community an objective perspective on what quality metrics are most important for DevOps success, Tricentis commissioned Forrester to research the topic. The results are published in a new ebook, “Forrester Research on DevOps Quality Metrics that Matter: 75 Common Metrics—Ranked by Industry Experts.”
How We Determined What Metrics Really Matter
Here’s a look at the process:
- Survey 603 global enterprise leaders responsible for their firms’ DevOps strategies;
- From that sample, identify the firms with mature and successful DevOps adoptions (157 met Forrester’s criteria for this distinction);
- Learn what quality metrics those experts actually measure, and how valuable they rate each metric that they regularly measure;
- Use those findings to rate and rank each metric’s usage (how often experts use the metric) and value (how highly experts value the metric);
- Compare the DevOps experts’ quality metric usage vs. that of DevOps laggards. If there was a significant discrepancy, the metric is considered a “DevOps differentiator.”
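The final step above can be sketched in a few lines of code. Note that this is an illustrative sketch only: the post does not state what usage gap Forrester counted as “significant,” so the 10-percentage-point threshold and the example figures below are assumptions.

```python
# Sketch of the "DevOps differentiator" check: flag metrics that DevOps
# experts measure significantly more often than DevOps laggards.
# The 10-point gap threshold is an assumption for illustration.

def is_differentiator(expert_usage: float, laggard_usage: float,
                      gap_threshold: float = 0.10) -> bool:
    """Return True if experts' usage rate exceeds laggards' by the threshold.

    expert_usage, laggard_usage: share of each group measuring the metric (0-1).
    """
    return (expert_usage - laggard_usage) >= gap_threshold

# Hypothetical figures: 27% of experts vs. 15% of laggards track a metric.
print(is_differentiator(expert_usage=0.27, laggard_usage=0.15))
```

With a 12-point gap against a 10-point threshold, this hypothetical metric would be flagged as a differentiator.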
The 75 DevOps quality metrics were divided into four categories:
- Build;
- Functional validation;
- Integration (API) testing;
- End-to-end regression testing.
For each category of quality metrics, we came up with a heat map showing usage vs. value rankings. We also plotted the data for each metric into a quadrant with four sections:
- Hidden gem: Metrics that are not used frequently by DevOps experts, but are consistently rated as valuable by the organizations who measure them;
- Value-added: Metrics that are used frequently by DevOps experts and consistently rated as valuable by the organizations who measure them;
- Distraction: Metrics that are not used frequently by DevOps experts, and not rated as valuable by the organizations who measure them;
- Overrated: Metrics that are used frequently by DevOps experts, but not rated as valuable by the organizations who measure them.
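The four quadrants above boil down to a simple two-axis rule, sketched below. The cutoffs separating “frequently used” from “not frequently used” and “valuable” from “not valuable” are not disclosed in this post, so the 50% thresholds here are assumptions for illustration.

```python
# Illustrative sketch of the usage-vs-value quadrant classification.
# The 0.5 cutoffs are assumptions; Forrester's actual cutoffs are not
# published in this post.

def classify_metric(usage: float, value: float,
                    usage_cutoff: float = 0.5,
                    value_cutoff: float = 0.5) -> str:
    """Place a quality metric in one of the four quadrants.

    usage: share of DevOps experts who regularly measure the metric (0-1).
    value: share of measuring organizations that rate it valuable (0-1).
    """
    if value >= value_cutoff:
        return "value-added" if usage >= usage_cutoff else "hidden gem"
    return "overrated" if usage >= usage_cutoff else "distraction"

# Hypothetical metric: rarely measured but highly rated by those who do.
print(classify_metric(usage=0.27, value=0.80))
```

The hypothetical metric lands in the “hidden gem” quadrant: low usage, high rated value.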
The 20 Most Important Metrics
Globally, the following 20 metrics were ranked as the most valuable by the DevOps experts who actually measure them.
Quality metrics from the “Build” category ranked as follows:
“DevOps Differentiators” are metrics that DevOps experts/leaders measure significantly more than DevOps “laggards.”
Forrester provided the following commentary on Build metrics:
“When measuring builds, unit testing done well matters. Counting unit tests is a waste of time but understanding change impact matters. Tracking ‘unit’ tests prioritized by risk is the key. As the code base evolves, developers and testers need immediate feedback about change impact. This feedback is significantly more actionable if prioritized by level of risk.
“Sixty-three percent of these firms consider the number of unit tests prioritized by risk as one of their top desired metrics. But far fewer can actually do so — while 34% of advanced DevOps firms track the number of unit tests run, only 27% prioritize by risk. And less advanced DevOps firms use it even less — just 15% can track the metric today. Other important metrics tracked in builds focus on ensuring code quality — like the number of successful code builds (61%), unit test pass/fail rate (60%), and total number of defects identified (59%).”
Here’s a quick look at how Build metrics are positioned vs. one another — based on the raw data collected from DevOps experts. DevOps Differentiators are highlighted in green.
The heat maps and quadrant rankings for the other three categories (Functional Validation, Integration [API] Testing, and End-to-End Regression Testing), definitions of all 75 metrics, and some fun lists (most overrated, top hidden gems…) are available in the complete DevOps Quality Metrics ebook.
3 Key Takeaways
As either a teaser for (or recap of) the complete ebook:
- Understanding of business risk is a critical factor in DevOps success. Once organizations reframe the way they think about risk, they also alter their quality metrics to help them better understand the level of risk in their release cycle;
- DevOps experts focus more on contextual metrics (e.g., requirements coverage, risk coverage) while others focus on “counting” metrics (e.g., number of tests);
- DevOps experts are more likely to measure the user experience across an end-to-end transaction while others rely on application-specific or team-specific metrics.
Ultimately, this underscores the fact that DevOps success requires much more than increased automation and a shiny new toolset. A broader transformation is required to align on business risk and release with confidence. It’s not easy, but the effort truly pays off by enabling the team to deliver better software faster.
Feature image via Pixabay.