Tricentis sponsored this post.
Like Lucy and Ethel struggling to keep pace at the chocolate factory in the classic 1950s U.S. sitcom “I Love Lucy,” many software testers have been struggling to keep pace with accelerated delivery processes when along comes the supervisor proclaiming, “You’re doing splendidly… Speed it up!”
As expectations for testing change, legacy testing platforms just aren’t keeping up. They take a “heavy” approach to testing: they rely on brittle scripts, deliver slow end-to-end regression test execution and produce an overwhelming number of false positives. As a result, organizations have achieved limited success with test automation; the overall test-automation rate averages just 8% across enterprises. In a polling question we asked during industry webinars and trade shows, respondents overwhelmingly reported that the results of test automation to date have been “So-So.”
Traditional Testing Doesn’t Work
Recent changes across the industry are demanding more from testing while making test automation even more difficult to achieve. There are several reasons for this:
- Application architectures are increasingly distributed and complex, embracing cloud, APIs, microservices, etc., and creating virtually endless combinations of protocols and technologies within a single business transaction.
- Thanks to agile, DevOps and continuous delivery, many applications are now released anywhere from every two weeks to thousands of times a day. As a result, the time available for test design, maintenance and especially execution decreases dramatically.
- Now that software is the primary interface to the business, an application failure is a business failure. Even a seemingly minor glitch can have severe repercussions if it impacts the user experience. As a result, application-related risks have become a primary concern for even non-technical business leaders.
Software testers face increasingly complex applications, yet they are expected to deliver trustworthy go/no-go decisions at the new speed of modern business. More of the same traditional testing approaches won’t get us there. We need to transform the testing process as deliberately and markedly as we’ve transformed the development process. This requires more effective test automation… and more. It requires a different approach: continuous testing.
What Is Continuous Testing?
Continuous Testing is the process of executing automated tests as part of the software delivery pipeline in order to obtain feedback on the business risks associated with a software release as rapidly as possible. Continuous Testing does not require any specific type of testing approach (shift left, shift right…) or testing tools. However, it does require that:
- Actionable feedback is delivered to the right stakeholder at the right time.
- Testing occurs across all phases of the software delivery pipeline.
Test automation is designed to produce a set of pass/fail data points correlated to user stories or application requirements. Continuous Testing, on the other hand, focuses on business risk and providing insight on whether the software can be released. To achieve this shift, we need to stop asking “are we done testing” and instead concentrate on “does the release candidate have an acceptable level of business risk?”
Here are five key attributes of Continuous Testing:
- Assesses business risk coverage as its primary goal.
- Establishes a safety net that helps the team protect the user experience.
- Requires a stable test environment to be available on demand.
- Seamlessly integrates into the software delivery pipeline and DevOps toolchain.
- Delivers actionable feedback appropriate for each stage of the delivery pipeline.
Comparing Continuous Testing with Test Automation
The main differences between Continuous Testing and test automation can be grouped into three broad categories: risk, breadth and time.
Businesses today have not only exposed many of their internal applications to the end-user, they also have developed vast amounts of additional software that extends and complements those applications. For example, airlines have gone far beyond exposing their once-internal booking systems. They now let customers plan and book complete vacations, including hotels, rental cars and activities. Exposing more and more innovative functionality to the user is now a competitive differentiator — but it also increases the number, variety and complexity of potential failure points.
Large-scale software failures have such severe business repercussions that application-related risks are now prominent components of a business’s public financial filings. Given that notable software failures resulted in an average 4.06 percent decline in stock price (equating to an average loss of $2.55 billion in market capitalization), it’s not surprising that business leaders are taking note and expecting IT leaders to take action.
If your test cases weren’t built with business risk in mind, your test results won’t provide the insight needed to assess risks. Most tests are designed to provide low-level details on whether user stories are correctly implementing the requirements — not high-level assessments of whether a release candidate is too risky to release. Would you automatically stop a release from taking place based on test results? If not, your tests aren’t properly aligned with business risks.
To be clear: We’re not suggesting that fine-grained tests aren’t valuable; we’re stating that more is needed to stop high-risk release candidates from going out into the wild.
Here are some ways that testers can address risk:
- Understand the risks associated with the complete application portfolio.
- Map risks to application components and requirements (which are then mapped to tests).
- Use a test suite that achieves the highest possible risk coverage with the fewest test cases.
- Report status in a way that shows risk exposure from business, technical, performance and compliance perspectives.
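The risk-mapping and coverage ideas above can be sketched as a simple weighted-coverage calculation. The following is a minimal illustration, assuming hypothetical risk names, weights and test names (none of them come from any particular tool):

```python
# Hypothetical business risks, each with a weight, mapped to the
# automated tests that cover them.
risk_weights = {"checkout": 40, "login": 30, "search": 20, "profile": 10}
risk_to_tests = {
    "checkout": ["test_payment", "test_cart"],
    "login": ["test_auth"],
    "search": ["test_query"],
    "profile": ["test_avatar"],
}

def risk_coverage(passed_tests):
    """Percentage of total risk weight whose tests all passed."""
    covered = sum(
        weight
        for risk, weight in risk_weights.items()
        if all(t in passed_tests for t in risk_to_tests[risk])
    )
    return 100 * covered / sum(risk_weights.values())

def go_no_go(passed_tests, threshold=90):
    """Release gate: block the release if risk coverage falls below threshold."""
    return "GO" if risk_coverage(passed_tests) >= threshold else "NO-GO"

# A failing payment test leaves 40% of business risk uncovered:
print(go_no_go({"test_auth", "test_query", "test_avatar", "test_cart"}))  # NO-GO
```

The point of the sketch is the shift in the question being asked: instead of counting passed tests, the gate weighs which risks those passes actually cover.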
Even if a business steers clear of the large-scale software failures that make headlines, seemingly minor glitches can still cause trouble. If any part of the user experience fails to meet expectations, you risk losing that customer to a competitor. You also risk brand damage if that user decides to air the issues on social media.
Just knowing that a unit test failed or a UI test passed doesn’t tell you whether the overall user experience is impacted by recent application changes. To protect the end-user experience, run tests that are broad enough to detect when an application change inadvertently impacts functionality which users have come to rely on.
Here are some ways to address testing breadth:
- Define and execute complete end-to-end tests that exercise the application from the user’s perspective.
- Provide integrated support for all the technologies involved in critical user transactions (web, mobile, message/API-layer, SAP and packaged apps, etc.).
- Use service virtualization to simulate dependent components that are needed to exercise complete end-to-end transactions but aren’t available or configurable for repeated testing.
- Ensure that tests and service virtualization assets are populated with realistic and valid data each and every time the tests are executed.
- Perform exploratory testing to find user-experience issues that are beyond the scope of automated testing (e.g., usability issues).
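To make the service-virtualization bullet concrete, here is a bare-bones sketch using only the Python standard library: an in-process HTTP stub stands in for an unavailable downstream service by returning a canned response. The endpoint and payload are invented for illustration; real service-virtualization tools record and replay far richer behavior.

```python
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen

# A stand-in for an unavailable downstream service: it answers one
# hypothetical endpoint with a canned JSON payload so end-to-end tests can run.
class StubBookingService(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/flights/LH123":
            body = json.dumps({"flight": "LH123", "status": "ON_TIME"}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

    def log_message(self, *args):  # silence per-request logging
        pass

def start_stub(port=0):
    """Start the stub on an ephemeral port; return (server, port)."""
    server = HTTPServer(("127.0.0.1", port), StubBookingService)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server, server.server_address[1]

if __name__ == "__main__":
    server, port = start_stub()
    with urlopen(f"http://127.0.0.1:{port}/flights/LH123") as resp:
        print(json.load(resp)["status"])  # ON_TIME
    server.shutdown()
```

Because the stub’s responses are fixed, the end-to-end test can run repeatedly with valid, predictable data, which is exactly what the data-readiness bullet above calls for.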
Now that the speed at which organizations ship software has become a competitive differentiator, the vast majority of organizations are turning to Agile and DevOps to accelerate their delivery processes.
When automated testing emerged, it focused on testing internal systems that were built and updated according to waterfall development processes. Systems were entirely under the organization’s control, and everything was completed and ready by the time the testing phase began. Now that Agile processes are becoming the norm, testing must begin in parallel with development; otherwise, the user story is unlikely to be tested and deemed “done done” within the extremely compressed iteration time frame (often two weeks).
If your organization has adopted DevOps and is performing Continuous Delivery, software may be released hourly — or even more frequently. In this case, feedback at each stage of the process can’t just be “fast”; it must be nearly instantaneous. If quality is not a top concern for your application (e.g., if there are minimal repercussions to doing a rollback when defects are discovered in production), running some quick unit tests and smoke tests on each release might suffice. However, if the business wants to minimize the risk of faulty software reaching an end-user, you need some way to achieve the necessary level of risk coverage and testing breadth — fast.
For testing, there are several significant impacts:
- Testing must become integral to the development process (rather than a “hygiene task” tacked on when development is complete).
- Tests must be ready to run almost as soon as the related functionality is implemented.
- The organization must have a way to determine the right tests to execute at different stages of the delivery pipeline (smoke testing upon check-in, API/message layer testing after integration and end-to-end testing at the system level).
- Each set of tests must execute fast enough that it does not create a bottleneck at the associated stage of the software delivery pipeline.
- A way to stabilize the test environment is needed to prevent frequent changes from causing an overwhelming number of false positives.
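Determining the right tests for each pipeline stage (the third bullet above) can be as simple as tagging tests with a tier and filtering by stage. A hypothetical sketch, with invented stage and test names:

```python
# Hypothetical mapping of pipeline stage to the test tiers fast enough
# to run there without becoming a bottleneck.
STAGE_TIERS = {
    "commit": {"smoke"},
    "integration": {"smoke", "api"},
    "system": {"smoke", "api", "e2e"},
}

tests = [
    {"name": "test_health", "tier": "smoke"},
    {"name": "test_booking_api", "tier": "api"},
    {"name": "test_checkout_flow", "tier": "e2e"},
]

def select_tests(stage):
    """Return the names of tests eligible to run at the given stage."""
    tiers = STAGE_TIERS[stage]
    return [t["name"] for t in tests if t["tier"] in tiers]

print(select_tests("commit"))  # ['test_health']
print(select_tests("system"))  # all three tiers run at the system level
```

In practice the tags would live in the test framework (markers, labels or suites) and the CI configuration would apply the filter, but the selection logic is the same.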
Based on the criteria described above, here are some ways for testers to address time pressures:
- Identify which test cases are critical for addressing top business risks.
- Define and evolve tests as the application constantly changes.
- Rebalance the test pyramid so that most tests execute at the API layer, which is at least 100 times faster than UI test execution.
- Integrate tests into the delivery pipeline.
- Run tests distributed across multiple VMs, networked machines or the cloud as appropriate.
- Enlist service virtualization and synthetic data generation/TDM so that testing doesn’t need to wait on data or environment provisioning.
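The distributed-execution bullet can be illustrated locally with Python’s `concurrent.futures`; the same principle, splitting a suite across workers to cut wall-clock time, is what scales out to multiple VMs or cloud runners. The test functions here are stand-ins that just sleep:

```python
import time
from concurrent.futures import ThreadPoolExecutor

# Stand-in tests; in practice these would invoke real test runners.
def make_test(name, duration):
    def run():
        time.sleep(duration)  # simulate test work
        return (name, "pass")
    return run

suite = [make_test(f"test_{i}", 0.1) for i in range(8)]

# Spreading the suite across 4 workers cuts wall-clock time roughly
# fourfold, the same effect as distributing tests across machines.
start = time.perf_counter()
with ThreadPoolExecutor(max_workers=4) as pool:
    results = [f.result() for f in [pool.submit(t) for t in suite]]
elapsed = time.perf_counter() - start

print(f"{len(results)} tests in {elapsed:.2f}s")  # ~0.2s instead of ~0.8s
```

Threads suffice here only because the stand-in tests sleep; CPU-bound or process-isolated tests would use separate processes or machines instead.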
Continuous Testing > Test Automation
If you only take one idea away from this section, we hope that it’s this:
Test automation ≠ continuous testing
Continuous testing > test automation
Even teams that have achieved a fair level of success with traditional test-automation tools hit critical roadblocks when their organizations adopt modern architectures and delivery methods:
- They can’t create and execute realistic tests fast enough or frequently enough.
- Constant application change results in an overwhelming number of false positives and a seemingly never-ending amount of test maintenance.
- They can’t provide instant insight on whether the release candidate is too risky to proceed through the delivery pipeline.
It’s important to recognize that no tool or technology can instantly “give” you continuous testing. Like agile and DevOps, continuous testing requires changes across people, processes and technology. However, trying to initiate the associated changes in people and processes when your technology is not up to the task will be an uphill battle from the start… and ultimately a losing one.
Feature image by Tanja Schulte from Pixabay.
The New Stack is a wholly owned subsidiary of Insight Partners, an investor in the following companies mentioned in this article: Tricentis.