Trivago Reduces Software Test ‘Flakiness’ with Aggressive Monitoring
Test flakiness is one of the main challenges of automated testing. It occurs when test failures are triggered by issues unrelated to the application under test. It's a tricky problem to solve: How do you troubleshoot failures that aren't necessarily failures?
To combat the issue, online travel broker Trivago added monitoring practices to its testing workflow. As a result, it reduced the number of end-to-end tests that failed from flakiness to just under 1%.
The project team achieved a 99% average success rate on Selenium-based testing by getting incredibly organized with test reports and beefing up monitoring and alerting. A centralized test report server monitors test runs, offers one-click links to test reports, and sends alerts that notify QA engineers and developers when a test fails for multiple URLs.
The Setup
Trivago iterated on its Elasticsearch, Logstash and Kibana (ELK) stack with two goals in mind: to better identify the cause of end-to-end testing failures and, more specifically, to eliminate test flakiness.
Trivago’s tech stack, detailed below, includes everything from end-to-end test execution to reporting and monitoring:
- GitHub Actions (GHA) workflows.
- Custom runners on Google Cloud (GCP) for test jobs.
- Trivago’s own Cluecumber Maven plugin for test reports generation.
- Google Cloud Storage (GCS) bucket for test report storage.
- A test framework plugin that produces Kafka log entries from Cluecumber JSON report files.
- Kafka message broker.
- Logstash for processing Kafka messages.
- Elasticsearch for data storage.
- Kibana for data visualization.
- Grafana and Slack for alerting on recurring test failures.
Centralized report access was key for this approach. The tests are triggered through GitHub Actions and the jobs are executed with custom runners on GCP.
Trivago's first attempt at this was to store the reports as zipped artifacts in the GitHub Actions workflow. All the downloading, unzipping and difficulty linking to dashboards made this approach more hassle than it was worth.
Trivago ended up storing the reports on a "test report server": a GCS bucket in Google Cloud Storage, with the pages served as webpages through a gcs-proxy application. The server is organized by repository, workflow and run ID.
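The repository/workflow/run-ID layout can be sketched as a simple path-building helper. All names here are illustrative assumptions, not Trivago's actual bucket or proxy configuration.

```python
# Sketch of the report server layout described above: reports are grouped
# by repository, workflow and run ID. Names are hypothetical examples.

def report_path(repository: str, workflow: str, run_id: str) -> str:
    """Build the object path for a test report inside the GCS bucket."""
    return f"{repository}/{workflow}/{run_id}/index.html"

# The gcs-proxy would then serve this object as a regular webpage.
path = report_path("hotel-search", "e2e-tests", "4711")
# → "hotel-search/e2e-tests/4711/index.html"
```

Keeping the path deterministic is what makes one-click linking from dashboards possible: any tool that knows the repository, workflow and run ID can construct the report URL without a lookup.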
After test execution, the workflow runs like this:
- Logging test results (from Cluecumber JSON files) to Kafka.
- Generating a test report as an HTML page with attachments via the Cluecumber plugin.
- Uploading the generated test report folder to the “test report server” GCS bucket.
- Sharing the link to the test report as a GitHub status badge, comment on a PR, message to Slack, etc., depending on workflow requirements.
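The logging step above can be sketched as turning one Cluecumber-style JSON result into a flat record suitable for a Kafka message. The field names are assumptions for illustration; the article does not publish the plugin's actual schema.

```python
import json

# Hedged sketch: flatten a Cluecumber-style scenario result into a JSON
# string that could be published to a Kafka topic. Field names are
# illustrative assumptions, not the real plugin's schema.

def to_kafka_record(scenario: dict, base_url: str, run_id: str) -> str:
    record = {
        "run_id": run_id,
        "base_url": base_url,           # tied to the PR/branch name
        "scenario": scenario["name"],
        "status": scenario["status"],   # e.g. "passed" or "failed"
        "duration_ms": scenario.get("duration_ms", 0),
    }
    return json.dumps(record)

msg = to_kafka_record(
    {"name": "hotel search", "status": "failed", "duration_ms": 8200},
    base_url="pr-101.example",
    run_id="4711",
)
```

Logstash would then consume such messages from Kafka and index them into Elasticsearch for the Kibana and Grafana views described below.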
Kibana: Kibana provides a visual reporting tool and connects those visuals to the report links for one-click access to the actual reports, with execution details, screenshots and video recordings:
Grafana: Grafana is used to scan for flakiness in overnight test runs on the main application branch by counting base URLs. Since the base URL is tied to the PR/branch name, if a test fails on more than a predetermined threshold number of URLs, an alert is sent to the appropriate people via the appropriate channels. The query looks like the image below:
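The logic behind that query can be sketched as follows: a scenario that fails across many distinct base URLs (i.e., many unrelated branches) is likely flaky, not a real regression in one change. The threshold value and record shape here are illustrative assumptions.

```python
from collections import defaultdict

# Sketch of the flakiness check: count the distinct base URLs on which
# each scenario failed; failures spread across many unrelated branches
# point to flakiness rather than a genuine regression.
# The threshold and record fields are illustrative assumptions.

def flaky_scenarios(failures: list, threshold: int = 3) -> list:
    urls_per_scenario = defaultdict(set)
    for f in failures:
        urls_per_scenario[f["scenario"]].add(f["base_url"])
    return [s for s, urls in urls_per_scenario.items()
            if len(urls) >= threshold]

failures = [
    {"scenario": "search", "base_url": "pr-101.example"},
    {"scenario": "search", "base_url": "pr-102.example"},
    {"scenario": "search", "base_url": "pr-103.example"},
    {"scenario": "login",  "base_url": "pr-101.example"},
]
print(flaky_scenarios(failures))  # → ['search']
```

In the real setup this aggregation happens in Elasticsearch, with Grafana evaluating the result and routing alerts to Slack.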
The Trivago project team implemented straightforward alerting and monitoring and found success. They increased trust in test automation by putting fast, detailed feedback within easy reach.