Tricentis sponsored this post.
Surprisingly, most organizations claim to deliver software with “acceptable business risk,” even though most do not actually measure that risk.
This is one of the key, and most surprising, findings from a new Tricentis-commissioned Forrester report, “What Separates DevOps + Agile Leaders from Laggards?,” based on a survey of more than 600 enterprise leaders responsible for their firms’ Agile and DevOps initiatives.
Most firms (80%), for example, believe they deploy software within acceptable levels of business risk, but paradoxically, fewer than a quarter say their QA and testing processes completely cover business risk. Only 15% of respondents say their test suites reliably indicate whether a release falls within acceptable business risk for their organization.
Here is an excerpt from the Forrester report that details the lack of visibility into acceptable risk and the resulting implications (the complete report can be downloaded from the Tricentis site):
Automating the software development life cycle is imperative for accelerating the speed and frequency of releases. However, without an accurate way to measure and track quality throughout the software development life cycle, automating the delivery pipeline could increase the risk of delivering more defects into production. And if organizations cannot accurately measure business risk, then automating development and testing practices can become a huge danger to the business.
For this study, we use the following definition of business risk: any application shortcoming that impairs the end user’s (or customer’s) expected experience and ultimately erodes confidence in the business. Business risk is different for every application and is compounded by the complexity of each organization’s architecture and transaction dependencies. If firms are unable to measure relative risk for each application as software is being designed, developed, integrated, and tested, then, ultimately, they will not know the risk that a release candidate carries. Tolerance for risk is solely dependent on senior management. No matter where a firm sets the bar for risk, firms must be able to continuously measure risk to ensure final products are delivered within levels of risk tolerance.
Firms looking to use DevOps to automate their software delivery must understand and accurately measure business risk. But even more advanced Agile+DevOps firms struggle to get an accurate view of risk through the software delivery pipeline. When comparing firms following Agile+DevOps testing best practices, we see that:
- Risk relevance is not on par with quality or speed. Most firms have not yet made the connection between speed, quality, and risk. Overall, the importance of risk in customer-facing software lags behind the established goals of delivering quality software on time and on budget. Only about a third of respondents say it’s very important for success that customer-facing software is delivered within acceptable business risk. Among firms that follow best practices, this number jumps to 50%, but risk still lags behind quality, on-time delivery, and on-budget delivery;
- Firms are confident in their abilities to deliver within acceptable risk … Most firms believe they deliver customer-facing products within acceptable business risk. A staggering 80% of respondents say they can often or always do this;
- … but admit there are gaps in their testing processes … Fewer than a quarter of firms think that their QA and testing processes completely cover business risk in all phases of testing. Although those firms that follow Agile+DevOps best practices do better, fewer than half (38%) cover risk completely in all testing phases;
- … and acknowledge the shortcomings of their test suites. Most firms recognize that their test suites do not always give them a good indication of business risk. In fact, just 15% of respondents say their test suites always do so today — and nearly 40% say they get a good indication only sometimes or less often. Even more advanced Agile+DevOps firms see the limitations here: Fewer than one-third of them say their test suite always gives them a good indication of business risk.
Given that most firms, even the ones following continuous testing best practices, admit that their software testing processes have risk gaps and do not always give accurate measures of business risk, it stands to reason that the 80% who say they always or often deliver within acceptable risk may be overestimating their capabilities. And, given the critical importance of automating the software delivery pipeline to deliver faster, being able to say with certainty that a software release is both high-quality and within acceptable business risk is a crucial part of not just delivery automation, but also overall software delivery success.
The ‘Geek Gap’
This is a classic geek gap. Business leaders assume risk is defined in business terms, while the technical team works from a very different, technical definition of risk. This mismatch is the primary cause of overconfidence. For example, suppose a tester reports that 100% of the test suite ran with an 80% pass rate. That figure alone gives no indication of the risk associated with the release: the 20% that failed could cover absolutely critical functionality, such as security authentication, or something trivial, such as a rarely used UI customization option.
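The gap is easy to see in code. Below is a minimal, hypothetical sketch (the test names and risk weights are invented for illustration, not taken from the report) that contrasts a raw pass rate with a risk-weighted view of the same results:

```python
# Hypothetical sketch: why a raw pass rate can hide business risk.
# Test names and risk weights below are invented for illustration.
from dataclasses import dataclass

@dataclass
class TestResult:
    name: str
    passed: bool
    risk_weight: int  # 1 = trivial, 10 = business-critical

results = [
    TestResult("ui_theme_toggle", passed=True, risk_weight=1),
    TestResult("report_export", passed=True, risk_weight=2),
    TestResult("avatar_upload", passed=True, risk_weight=2),
    TestResult("tooltip_text", passed=True, risk_weight=1),
    TestResult("login_authentication", passed=False, risk_weight=10),
]

# The headline number: fraction of tests that passed.
pass_rate = sum(r.passed for r in results) / len(results)

# The risk-weighted view: how much business risk the passing tests cover.
total_risk = sum(r.risk_weight for r in results)
covered_risk = sum(r.risk_weight for r in results if r.passed)
risk_coverage = covered_risk / total_risk

print(f"Pass rate:              {pass_rate:.1%}")      # 80.0%
print(f"Risk-weighted coverage: {risk_coverage:.1%}")  # 37.5%
```

The suite reports an 80% pass rate, yet because the one failing test guards authentication, the risk-weighted coverage is only 37.5% — exactly the kind of blind spot the survey respondents describe.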
Many organizations are severely over-testing, running massive test suites against minor changes and inflicting delays on themselves for no good reason. To start, organizations must step back and genuinely assess the risks associated with each component of their application portfolio. Once risk is better understood, they can identify practices that mitigate those risks much earlier in the process. This is imperative for both speed and accuracy.
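One common way to act on such a risk assessment is change-based test selection: map each test to the components it exercises, rate each component's business risk, and run only the relevant tests, highest risk first. A minimal sketch, with invented component names and ratings:

```python
# Hypothetical sketch of risk-based test selection.
# Component names, risk ratings, and test mappings are invented.

component_risk = {
    "auth": 10,
    "payments": 9,
    "reporting": 4,
    "ui_theme": 1,
}

# Which components each test exercises.
test_components = {
    "test_login": {"auth"},
    "test_checkout": {"payments", "auth"},
    "test_monthly_report": {"reporting"},
    "test_dark_mode": {"ui_theme"},
}

def select_tests(changed, risk_threshold=0):
    """Return tests touching a changed component, highest risk first."""
    picked = []
    for test, comps in test_components.items():
        touched = comps & changed
        if touched:
            risk = max(component_risk[c] for c in touched)
            if risk >= risk_threshold:
                picked.append((risk, test))
    return [t for _, t in sorted(picked, reverse=True)]

# A minor UI change does not trigger the full suite:
print(select_tests({"ui_theme"}))  # ['test_dark_mode']
# An auth change pulls in every auth-dependent test:
print(select_tests({"auth"}))      # ['test_login', 'test_checkout']
```

The point is not this particular function but the ordering of work it implies: the risk assessment comes first, and the test suite is scoped to the change rather than run wholesale against every commit.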
Feature image via Pixabay.
The New Stack is a wholly owned subsidiary of Insight Partners, an investor in the following companies mentioned in this article: Tricentis.