Test Automation for Software Development
Automating software and security testing is an ongoing process in software development, and truly full automation may never be reached. In SmartBear Software’s “2021 State of Software Quality | Testing,” the percentage of organizations that conduct all tests manually rose from 5% in 2019 to 11% in 2021. This does not mean that automation is not happening. On the contrary, both manual and automated tests are being conducted.
The biggest challenge to test automation is no longer dealing with changing functionality, but not having enough time to create and conduct tests. Testers are not being challenged by demands to deploy more frequently, but to test more frequently across more environments. Testing of the user interface layer is more common, and 50% currently conduct some automated usability testing, up from just 34% in 2019.
The remainder of the article provides additional highlights on this and two other reports that highlight DevSecOps metrics and practices. The ability to actually enforce the security policies being declared in policy-as-code implementations may be a key to not only automating the identification of problems, but also their resolution.
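Policy-as-code enforcement of the kind described here can be sketched in a few lines: policies are declared as data, and the same declarations drive both the detection of misconfigurations and their automated remediation. The following is a minimal, hypothetical Python sketch, not any vendor's implementation; the rule names and config shape are invented for illustration.

```python
# Minimal policy-as-code sketch: declare rules as data, then use the same
# rules to both detect and remediate misconfigurations. All rule names,
# keys and the config shape below are hypothetical.

# Each policy names a config key, the value it must have, and whether an
# automated fix is allowed.
POLICIES = [
    {"key": "public_read_access", "required": False, "auto_fix": True},
    {"key": "encryption_at_rest", "required": True, "auto_fix": True},
]

def enforce(config: dict) -> list:
    """Report violations and, where permitted, remediate them in place."""
    findings = []
    for policy in POLICIES:
        actual = config.get(policy["key"])
        if actual != policy["required"]:
            findings.append(f"violation: {policy['key']} = {actual}")
            if policy["auto_fix"]:
                # Remediation step: rewrite the offending setting.
                config[policy["key"]] = policy["required"]
    return findings

bucket = {"public_read_access": True, "encryption_at_rest": True}
print(enforce(bucket))   # reports the public-access violation
print(bucket)            # config has been remediated in place
```

The point of the pattern is that the declaration is the enforcement: because the policy is machine-readable, the gap between "we agree on the policy" and "we can enforce it" that the Cloud Security Alliance report describes can be closed automatically.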
SmartBear Software’s “2021 State of Software Quality | Testing”
- Test Automation Coverage Dropped: Fewer organizations can claim to have automated more than three-quarters of their application and API tests. Those automating less than 75% of their tests rose from 24% in the 2019 study to 37% in 2021. Oftentimes, automating one test frees up time to manually test something else.
- Lack of Time Surges as a Test Automation Challenge: More than twice as many respondents said a lack of time is the biggest challenge for test automation, up from 17% in the 2019 study. Testers are not being challenged because applications are being deployed more frequently, but because they are being asked to test more frequently across more environments. Performance testing of APIs and web services, the UI layer and databases all became more common in 2021.
- Usability Testing Gains Prominence: Currently, 29% are doing performance testing of the UI layer, compared to just 9% in 2019. Automation of usability tests is much more common today, with 50% doing some automated testing as compared to just 34% in 2019. The rising prominence of usability is being driven by two factors: first, an increased focus on the end user by SREs; second, an improvement in the provisioning of synthetic data for test environments. Creating synthetic data used to be the biggest challenge to automating UI tests (dropping from 18% in 2019 to 5% in 2021), but validating that appropriate use cases are being tested is now by far the biggest challenge (jumping from 11% to 33%).
- We’re Being Cautious Analyzing the Data: 81% of the 2,092 respondents are involved in testing, up from 66% in the 2019 study. The percentage of participants coming from North America (41% to 25%), the internet and web services industries (39% to 26%) and companies with 1,000 or more employees (36% to 27%) all dropped significantly since the last study. We did not write about questions where these changes may have had a large impact. In addition, we are not reporting on a few charts for which we found discrepancies between the current report’s data and the 2019 version.
Cloud Security Alliance’s “State of Cloud Security Risk, Compliance, and Misconfigurations”
- Team Alignment Needed in Enforcement: 30% of organizations have IT operations, development and security teams that are aligned on both what their security policies are and how to enforce them with DevSecOps. You can call this group “elite” because 56% of them detect a configuration error within a day, and 54% can remediate an error within a day. The bulk of the cloud security respondents (49%) say there is agreement about what the security policies are, but not about how to enforce them. Fewer among this group can detect an error within a day (41%), and even fewer (24%) can remediate one. For the remainder, the numbers drop further.
- Training and education are used by 60% to improve the resolution of misconfigurations. Forty percent also say they use security-verified Infrastructure-as-Code templates, but probably only for limited use cases.
- VMware’s CloudHealth financed this 1,000+ person survey and report.
- Previously on The New Stack
- Shadow, Zombie and Misconfigured APIs Are a Security Issue (August 2021)
- Cloud Engineers Try Policy-as-Code to Cure Misconfiguration Woes (July 2021)
- Misconfiguration Worries Grow (April 2021)
- Culture, Vulnerabilities and Budget: Why Devs and AppSec Disagree (October 2020)
- Reality Check on Automated Security Testing (October 2018)
Sleuth and LaunchDarkly’s “Hyperdrive: A Continuous Delivery Report”
- Sleuth is one of several companies that track the DORA metrics, which are at the core of Google’s “Accelerate State of DevOps” reports. Google published the latest of those reports last week, so it probably isn’t a coincidence that Sleuth, along with LaunchDarkly, has published its own report based on a survey of over 200 software developers. The findings are in alignment with those we’ve read in several recent reports.
- Team Processes Not Scapegoated: Over 60% of respondents who do not use feature flags say team processes are blamed when deployments get behind schedule, but that drops to less than 20% among everyone else. The onus instead shifts to executives, with more than 70% of developers blaming them for falling behind.