
Insights on Integration Tests with Foresight

An integration test should use real dependencies because the point is to check the integration between them.
Jan 23rd, 2023 11:30am

As a skilled and diligent software engineer, you know how important tests are. Research shows that the cost of fixing a defect grows exponentially the later it is found in the software life cycle. But increased costs aren’t the only issue. The tests become more complex too.

Compare a simple assertion in a unit test with an error-prone manual check: the unit test takes a few milliseconds, while the manual check could take minutes. For this reason, the testing pyramid was invented. The idea is simple: Use more inexpensive tests and fewer expensive ones.

In this blog post, we’ll talk about the middle layer of the pyramid — integration tests. We’ll also demonstrate how Foresight can help you spot issues in integration tests with real services — no mocks!

Integration tests are perfectly balanced between price, speed and coverage. Because integration tests are written by developers, oftentimes using the same tech stack as the rest of the project, they’re easy to integrate into the SDLC, cheap to automate and cover relatively large portions of the functionality.

However, testers and QA engineers say that fewer than half of their projects have them.

Issues with Integration Testing

Why do so few teams use integration tests?

One likely reason is that making a good integration test requires much more effort than making a unit test. But the real problem is all the dependencies, not the tests themselves.

Software never runs alone. It uses a database to store its data, and it communicates with other software. So when you go beyond unit testing, you have to provide these dependencies to the system under testing. Often developers will use mocks for the integration tests because they’re faster, easier to configure and more reliable. But using mocks contradicts the very idea of integration testing. An integration test should use real dependencies because the point is to check the integration between them.

Historically, the first approach was to run the test in a controllable, dedicated environment. As you can imagine, this doesn’t work well for big projects. Different teams might deploy different versions of the application and its data into the testing environment, and sharing versions almost always leads to conflict.

To get around this, you may try to manage all of the dependencies for the test manually. But your setup phase will rapidly outgrow the test and become a nightmare to maintain and keep in sync with the real world.

Eventually, tools like Docker, Docker Compose and even Kubernetes were developed to simplify the setup of integration testing. Today, these are generally considered the de facto tools for integration test isolation.

Those solutions are generic and work with any technology. But Testcontainers, a technology beloved by many Java developers, takes things a step further. It’s basically Docker but wrapped into a convenient API.

And yet, even with top-notch tools like Testcontainers, it’s easy to screw up integration tests. A single flaw in the configuration could lead to abnormal test execution times or flaky tests.

Instead of blaming developers and leaving them to find their own bugs, Thundra has developed tooling that provides improved test observability. Let’s check it out.

Integration Tests Insights with Foresight

Imagine a simple quotes API. It stores quotes in Redis for simplicity and speed. It’s a very simple Spring Boot application, consisting of five classes, including a configuration, an entity and a Spring Data interface.

This means only two of the classes contain the code: the service and the controller. The project is configured to run some checks with GitHub Actions.

Figure 2: Demo project architecture

The code for this demo is available on GitHub.
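To make the architecture concrete, here is a sketch of what the entity and Spring Data interface might look like. The `Quote` field names and the exact annotations are assumptions for illustration, not the demo repository's actual code:

```java
// Hypothetical sketch of the demo's entity and repository.
// Field names and class layout are assumptions, not the repo's code.
import org.springframework.data.annotation.Id;
import org.springframework.data.redis.core.RedisHash;
import org.springframework.data.repository.CrudRepository;

@RedisHash("quotes")
class Quote {
    @Id
    private String id;
    private String text;

    public String getId() { return id; }
    public String getText() { return text; }
    public void setText(String text) { this.text = text; }
}

// Spring Data generates the Redis-backed implementation at runtime.
interface QuotesRepository extends CrudRepository<Quote, String> {
}
```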

Now, we want to test the persistence logic, and the application should be able to save quotes into Redis. Of course, we could just mock the QuotesRepository, but, as we’ve stated previously, it wouldn’t be a good integration test. It would be a unit test.

Here’s a sample of what the integration test could look like:
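This sketch assumes JUnit 5 with the Testcontainers extension; class names other than `QuotesRepository`, the `Quote` fields, the Redis image tag and the Spring property wiring (omitted for brevity) are assumptions:

```java
import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.junit.jupiter.api.Assertions.assertNotNull;

import java.util.stream.IntStream;

import org.junit.jupiter.api.Test;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.test.context.SpringBootTest;
import org.testcontainers.containers.GenericContainer;
import org.testcontainers.junit.jupiter.Container;
import org.testcontainers.junit.jupiter.Testcontainers;
import org.testcontainers.utility.DockerImageName;

@SpringBootTest
@Testcontainers
class QuotesPersistenceIT {

    // Note: a non-static field means Testcontainers starts a
    // fresh Redis container for every test method.
    @Container
    private final GenericContainer<?> redis =
            new GenericContainer<>(DockerImageName.parse("redis:7-alpine"))
                    .withExposedPorts(6379);

    @Autowired
    private QuotesRepository quotesRepository;

    @Test
    void savesQuotes() {
        IntStream.range(0, 10).forEach(i -> {
            Quote quote = new Quote();
            quote.setText("Quote #" + i);
            // Saving through the repository hits the real Redis container.
            assertNotNull(quotesRepository.save(quote).getId());
        });
        assertEquals(10, quotesRepository.count());
    }
}
```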


Here, we’re just saving quotes 10 times and doing some checks. A more sophisticated test should check for the corner cases, but for the sake of simplicity, all the data here differ only by the index.

To collect these metrics with Foresight, follow the docs. Below are the YAML snippets from the GitHub workflow file for collecting workflow and test metrics.
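The steps look roughly like the following sketch. The action names and input parameters are paraphrased from the Foresight docs and may differ by version; check the docs for the exact syntax:

```yaml
# Sketch of the Foresight steps in the GitHub workflow file.
# Action names and inputs are assumptions based on the Foresight docs.
- name: Collect workflow telemetry
  uses: thundra-io/foresight-workflow-kit-action@v1
  with:
    api_key: ${{ secrets.THUNDRA_API_KEY }}

- name: Build and run tests
  run: mvn -B verify

- name: Send test results to Foresight
  if: always()
  uses: thundra-io/foresight-test-kit-action@v1
  with:
    api_key: ${{ secrets.THUNDRA_API_KEY }}
    test_format: JUNIT
    test_framework: JUNIT
    test_path: ./target/surefire-reports
```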


As you see, Foresight provides you with a reusable GitHub Action that will send execution statistics to the Foresight backend.

Now, if you trigger a build, you’ll see that even this single test takes about a minute to finish. If you try to bump up the number of tests (by changing .range(0, 10) to .range(0, 1000)), you’ll notice that the execution time grows linearly, and you’ll have to wait about 10 minutes. Obviously, it doesn’t scale.

Let’s take a deeper look at the execution time breakdown:

Figure 4: Test execution times breakdown in Foresight

We see the execution time was 18 seconds for just 10 tests. It should take no more than five seconds, because the test talks to a local Redis container started by Testcontainers, not a remote Redis database. Looking at the logs for each step in the workflow, we see that the Redis server is initialized several times during the test execution.

Figure 5: Redis servers are initialized several times during the tests

It would be better to cache the container between executions, and Testcontainers supports that! They even have a few strategies for doing so, but the simplest one is to add a static modifier to the container field:
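Assuming the same container field as before (image tag is an illustrative assumption), the change is a one-liner:

```java
// With JUnit 5 and the Testcontainers extension, a static @Container
// field is started once per test class instead of once per test method.
@Container
private static final GenericContainer<?> redis =
        new GenericContainer<>(DockerImageName.parse("redis:7-alpine"))
                .withExposedPorts(6379);
```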


Commit the change, and the workflow will be triggered automatically. The drilldown is now completely different from the first time, and the total time for running the test is now only two seconds.

Checking the recent change under “Change impact analysis” in Foresight, we see that the code initializing Redis now uses a “private static final” field instead of the “private final” field used previously.

Now the Redis server is initiated only once during the tests, which significantly reduces the tests’ execution time.

Figure 8: Redis server is initiated only once

A skeptical reader may ask why we don’t just rely on the logs, since all the numbers are supposedly already there. But that’s simply not true.

Logs may give you some timestamps, and you could do the calculations yourself, but execution drill-down is not usually logged. It may be accessed from the test reports, but first, those reports should be explicitly enabled, then uploaded, and finally, aggregated. It bears mentioning that every testing framework has its own report format. Foresight does all the heavy lifting for you, plus it unifies the representation. Some problems are much easier to solve when you have all the data right in front of you.

There is no better way to learn than by trying it yourself. Create a free Foresight account, clone the repository, configure the integration (basically, set your own keys in the repo secrets), and play with different options.

Summary

Integration testing is a complicated and nuanced topic. The complexity is inherent to the nature of an integration test. It’s tempting to turn them back into unit tests, but you’ll know you’re not writing good tests, and your applications will show it.

With metrics and deep insights into your CI pipelines, Foresight gives developers confidence in their tests.

TNS owner Insight Partners is an investor in: Pragma, Docker.