Test Gaps Can Lead to Undetected Defects
Automated tests are an industry standard for software products nowadays. In the past, it was necessary to have a tester try out functionality manually — they had to click through an application for hours or even days to ensure that everything worked according to requirements. Today, testing teams script all of their tests so they can run automatically every time a new version is released.
As a result, testing times even for complex applications have decreased from weeks to minutes. But testers’ workloads haven’t gone down, because the number of tests has increased: the time testers once spent clicking around in manual testing is now spent coding test suites. Humans aren’t computers, so when they are stressed or overworked, they tend to forget things. Back in the day, that might have meant a missed click here or there; now, it could mean a missing test case or two.
While the jury is still out on how much test coverage a codebase needs, it’s clear that untested code has a higher chance of containing bugs than tested code. So, while test coverage might not be a perfect metric, it’s at least one that can be collected easily and used to get a feel for the overall quality of your codebase.
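As a minimal sketch of how simple this metric is to compute, the functions below (hypothetical helpers, not from any particular coverage tool) turn raw covered/total line counts into a percentage and check it against a chosen threshold:

```python
def line_coverage(covered_lines: int, total_lines: int) -> float:
    """Return line coverage as a percentage (0.0 when there is nothing to cover)."""
    if total_lines == 0:
        return 0.0
    return 100.0 * covered_lines / total_lines


def meets_threshold(covered_lines: int, total_lines: int, threshold: float = 80.0) -> bool:
    """True if the coverage percentage reaches the given threshold."""
    return line_coverage(covered_lines, total_lines) >= threshold


print(line_coverage(372, 465))        # 80.0
print(meets_threshold(372, 465, 80))  # True
```

In practice, a tool such as coverage.py or JaCoCo produces the raw counts; the point is that the metric itself is trivial to derive and compare over time.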
Frequent Changes Mean More New Code
While all applications undergo code updates from time to time, new applications in particular tend to accumulate a high number of changes in a short time. Since a new application hasn’t yet acquired product-market fit, it might need to iterate heavily on features or UI design.
Frequent changes to code burden the testers, who need to get all this new code up to standard. Usually, one unit of code — be it a class or a function — needs more than one test case, so the workload of a tester can be much higher than that of the programmer who wrote the code in question.
Codebases that have many changes over time and, as a result, a high quantity of new code, present a higher potential for untested code. These so-called “test gaps” can lead to undetected defects in your codebase, and you don’t want those to land in production.
Testers create test gaps — usually inadvertently — for a variety of reasons. Sometimes it’s simply stress or being overworked that causes them to forget about a test they intended to write. In other instances, they had to prioritize: there wasn’t time to test every line of code, so they took a calculated risk (perhaps using impact analysis (IA), discussed below) to decide which tests to write in the time available.
Regardless, test gaps affect your code quality and are likely to hurt user experience.
Change Impact Analysis
Change impact analysis (IA) is a collection of approaches that check the impact a code change has on your application. IA can focus on different aspects of software and the software creation process.
Some IA approaches try to analyze the creation process from design all the way through to implementation and testing, checking that everything from end to end is coherent. Such approaches can answer questions like: Does this implementation change still satisfy our design goal?
Other IA approaches answer more technical questions and look at aspects like the dependency graph of software. For example, if a code change would lead to increased dependencies, it might be worth reevaluating whether you can make that code work with fewer dependencies by changing some of a feature’s requirements.
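One way to picture the dependency-graph flavor of IA is to ask: if this module changes, which other modules are transitively affected? The sketch below uses a hypothetical reverse dependency graph (the module names are invented for illustration) and a breadth-first traversal to find every dependent:

```python
from collections import deque

# Reverse dependency graph: module -> modules that depend on it directly.
# These module names are hypothetical, purely for illustration.
DEPENDENTS = {
    "payments": ["checkout", "invoicing"],
    "checkout": ["web_ui"],
    "invoicing": ["reports"],
    "web_ui": [],
    "reports": [],
}


def impacted_modules(changed: str) -> list[str]:
    """Return every module that transitively depends on `changed`."""
    seen: set[str] = set()
    queue = deque([changed])
    while queue:
        module = queue.popleft()
        for dependent in DEPENDENTS.get(module, []):
            if dependent not in seen:
                seen.add(dependent)
                queue.append(dependent)
    return sorted(seen)


print(impacted_modules("payments"))  # ['checkout', 'invoicing', 'reports', 'web_ui']
```

A result like this tells you which parts of the system need retesting after a change — and a surprisingly large impact set is a hint that the change is worth restructuring.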
Incorporating IA into your coding process is a valuable strategy that can minimize risks, reduce testing costs and time, and improve user experience by limiting bugs and errors. But with various approaches to IA, it can seem like an overwhelming task to begin implementing it.
An Ideal IA Solution
An ideal IA solution keeps track of your code changes and checks whether new code is covered by your tests. It reads your code coverage reports and compares them against the new code.
In a high-velocity environment where new changes are pushed multiple times a day, code reviews aren’t easy. Especially with big pull requests, some changes can go unnoticed. Test Gap Analysis ensures that all your pull requests pass a specific coverage threshold before they even reach a human reviewer. This saves on review time — and time means money — by giving your team a critical head start on the process. You can be sure that the creator of the new code has successfully run the code through enough tests for it to be viable and worth further time investment.
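Such a gate can be a very small CI step. The sketch below (the 80% threshold is an assumed policy, not a recommendation from any particular tool) fails the build when a pull request’s coverage falls below the bar, blocking it before it reaches a human reviewer:

```python
import sys

THRESHOLD = 80.0  # minimum acceptable coverage for a pull request (assumed policy)


def gate(coverage_percent: float, threshold: float = THRESHOLD) -> None:
    """Exit nonzero so the CI job fails and the PR is blocked before review."""
    if coverage_percent < threshold:
        print(f"Coverage {coverage_percent:.1f}% is below the {threshold:.1f}% gate.")
        sys.exit(1)
    print(f"Coverage {coverage_percent:.1f}% passes the gate.")


gate(92.5)  # prints: Coverage 92.5% passes the gate.
```

In a real pipeline, the coverage percentage would come from the test runner’s report rather than a literal value.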
It’s also a great help for your implementers. They can write a feature or fix a bug, and an ideal solution will tell them which parts of their code still need tests. This heads-up removes the cognitive burden of hunting down untested code, reducing the process to the push of a button. Programmers are free to focus on other tasks, and human error is removed from this part of the testing process.
And the benefits don’t end with code coverage. After your developers write the required tests, you can use this solution to gain much-needed insights into those tests and to debug them. It lets you monitor your GitHub Actions and helps you detect tests that are slow, flaky, or have other problems that might need fixing. It also integrates with well-known tools like Jira Service Management and ServiceNow, so you can keep using the tools you already know, but with even better end results.
An ideal solution allows you to feel confident that all of your code is tested and every test works correctly — and you’re not wasting precious time dealing with lousy test performance.
Automated tests are a crucial part of software development and maintenance. This is particularly relevant for high-velocity projects where the codebase is subject to dozens of daily changes, because every change is a potential source of errors.
Coverage metrics can help to ensure that test cases for critical code paths aren’t forgotten and that your code quality is up to the high standard that your customers have come to expect.
Foresight Test Gap Analysis checks code coverage every time someone submits a pull request to your repositories, so you can keep your codebase test gap-free and your customer experience optimized.