CI/CD / Culture / DevOps

Test Automation for Software Development

5 Oct 2021 4:30am

Automating software and security testing is an ongoing process, yet full automation may never be reached. In SmartBear Software’s “2021 State of Software Quality | Testing” report, the percentage of organizations that conduct all tests manually rose from 5% in 2019 to 11% in 2021. This does not mean that automation is not happening. On the contrary, both manual and automated tests are being conducted.

The biggest challenge to test automation is no longer dealing with changing functionality but not having enough time to create and conduct tests. Testers are not being challenged by demands to deploy more frequently but instead to test more frequently across more environments. Testing of the user interface layer has also become more common: 50% now conduct some automated usability testing, up from just 34% in 2019.

The remainder of the article provides additional highlights on this and two other reports that highlight DevSecOps metrics and practices. The ability to actually enforce the security policies being declared in policy-as-code implementations may be a key to not only automating the identification of problems, but also their resolution.
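To make the policy-as-code idea concrete, here is a minimal, hypothetical sketch in Python of what "enforcing" a declared policy can look like: each policy pairs a check that identifies a violation with a fix that remediates it automatically. All names (`Resource`, `enforce`, the policy IDs) are illustrative and not drawn from any specific tool.

```python
# Hypothetical policy-as-code sketch: policies are declared as data, and an
# enforcement pass both identifies violations and remediates them.
from dataclasses import dataclass, field

@dataclass
class Resource:
    name: str
    config: dict = field(default_factory=dict)

# Each policy pairs a violation check with an automatic remediation.
POLICIES = [
    {
        "id": "no-public-buckets",
        "check": lambda r: r.config.get("public_access") is True,
        "fix": lambda r: r.config.update(public_access=False),
    },
    {
        "id": "encryption-required",
        "check": lambda r: not r.config.get("encrypted", False),
        "fix": lambda r: r.config.update(encrypted=True),
    },
]

def enforce(resources):
    """Fix every violation found and return (resource, policy id) findings."""
    findings = []
    for r in resources:
        for p in POLICIES:
            if p["check"](r):
                p["fix"](r)  # auto-remediate instead of only reporting
                findings.append((r.name, p["id"]))
    return findings

if __name__ == "__main__":
    fleet = [
        Resource("logs", {"public_access": True, "encrypted": True}),
        Resource("backups", {"encrypted": False}),
    ]
    for name, policy in enforce(fleet):
        print(f"fixed {policy} on {name}")
```

Real implementations typically express the checks in a dedicated policy language and hook enforcement into CI/CD or the cloud control plane, but the shape is the same: detection and resolution live in the same declared policy.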

SmartBear Software’s “2021 State of Software Quality | Testing”

  • Test Automation Coverage Dropped: Fewer organizations can claim to have automated more than three-quarters of their application and API tests. Those automating less than 75% of their tests rose from 24% in the 2019 study to 37% in 2021. Oftentimes, automating one test provides more time to manually test for something else.
  • Lack of Time Surges as a Test Automation Challenge: More than twice as many respondents said a lack of time is the biggest challenge for test automation, up from 17% in the 2019 study. Testers are not being challenged because applications are being deployed more frequently but because they are being asked to test more frequently across more environments. Performance testing of APIs and web services, the UI layer and databases all became more common in 2021.
  • Usability Testing Gains Prominence: Currently, 29% are doing performance testing of the UI layer, compared to just 9% in 2019. Automation of usability tests is much more common today, with 50% doing some automated testing as compared to just 34% in 2019. The rising prominence of usability is being driven by two factors: first, an increased focus on the end user by SREs; second, an improvement in the provisioning of synthetic data for test environments. Creating synthetic data used to be the biggest challenge to automating UI tests (dropping from 18% in 2019 to 5% in 2021), but validating that appropriate use cases are being tested is now by far the biggest challenge, jumping from 11% to 33%.
  • We’re Being Cautious Analyzing the Data: 81% of the 2,092 respondents are involved in testing, which is up from 66% in the 2019 study. The percentage of participants coming from North America (41% to 25%), Internet and Web Services industries (39% to 26%) and companies with 1,000 or more employees (36% to 27%) all dropped significantly since the last study. We did not write about questions in which these changes may have had a large impact. In addition, we are not reporting on a few charts for which we found discrepancies between the current report’s data and what was in the 2019 version.

Chart titled "Test Automation coverage decreased dramatically."

Source: “SmartBear’s 2021 State of Software Quality | Testing”


Cloud Security Alliance’s “State of Cloud Security Risk, Compliance, and Misconfigurations”

Sleuth and LaunchDarkly’s “Hyperdrive: A Continuous Delivery Report”

  • Sleuth is one of several companies that track the DORA metrics, which are at the core of Google’s “Accelerate State of DevOps” reports, the latest of which was published last week. So it probably isn’t a coincidence that, along with LaunchDarkly, Sleuth published its own report based on a survey of over 200 software developers. The findings align with those in several recent reports we’ve read.
  • Team Processes Not Scapegoated: Over 60% of respondents who do not use feature flags say team processes are blamed when deployments get behind schedule, but that drops to less than 20% among everyone else. The onus instead shifts to executives, with more than 70% of developers blaming them for falling behind.

The New Stack is a wholly owned subsidiary of Insight Partners, an investor in the following companies mentioned in this article: LaunchDarkly.

Participate in The New Stack surveys and be the first to receive the results of our original research.