
Trivago Reduces Software Test ‘Flakiness’ with Aggressive Monitoring

By adding monitoring practices to its testing workflow, online travel broker Trivago has reduced the share of end-to-end tests failing from flakiness to just under 1%.
Apr 5th, 2023 12:00pm

Test flakiness is one of the main challenges with automated testing. It happens when test failures are triggered by issues unrelated to the application being tested. It's a tricky problem to solve: How do you troubleshoot failures that aren't necessarily failures?

To combat the issue, online travel broker Trivago added monitoring practices to its testing workflow. As a result, it reduced the number of end-to-end tests that failed from flakiness to just under 1%.

A recent blog post by Trivago web test automation engineer Giuseppe Donati explained that the team worked by process of elimination: figuring out what each failure is, or at least what it isn't.

The project team achieved a 99% average success rate on its Selenium-based tests by getting rigorously organized with test reports and beefing up monitoring and alerting. A centralized test report server monitors processes, offering one-click linking to test reports and alerting that notifies QA engineers and developers when a test fails across multiple URLs.
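As a rough illustration of the headline metric, the flaky-failure rate is simply the share of test executions that failed for reasons unrelated to the application. The run counts below are made up for the example, not Trivago's numbers:

```python
# Hypothetical nightly test runs: total executions and how many failures
# were classified as flaky (environment, timing, infrastructure, etc.).
runs = [
    {"total": 200, "flaky_failures": 1},
    {"total": 180, "flaky_failures": 2},
    {"total": 220, "flaky_failures": 2},
]

flaky = sum(r["flaky_failures"] for r in runs)
total = sum(r["total"] for r in runs)
rate = 100 * flaky / total

print(f"flaky failure rate: {rate:.2f}%")  # → flaky failure rate: 0.83%
```

A rate "just under 1%", as in this example, is what the article reports Trivago achieving.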

The Setup

Trivago iterated on its Elasticsearch, Logstash and Kibana (ELK) stack with two goals in mind: to better identify the causes of end-to-end test failures and, more specifically, to eliminate test flakiness.

Trivago’s tech stack, detailed below, includes everything from end-to-end test execution to reporting and monitoring:

  • GitHub Actions (GHA) workflows.
  • Custom runners on Google Cloud (GCP) for test jobs.
  • Trivago’s own Cluecumber Maven plugin for test reports generation.
  • Google Cloud Storage (GCS) bucket for test report storage.
  • A test framework plugin that produces Kafka log entries from Cluecumber JSON report files.
  • Kafka message broker.
  • Logstash for processing Kafka messages.
  • Elasticsearch for data storage.
  • Kibana for data visualization.
  • Grafana and Slack for alerting on recurring test failures.

Centralized report access was key for this approach. The tests are triggered through GitHub Actions and the jobs are executed with custom runners on GCP.

Trivago’s first attempt at this was to store the records as zipped artifacts in the GitHub Actions workflow. All the downloading, unzipping, and difficulty linking to dashboards made this approach more hassle than it was worth.

Trivago ended up storing the reports on a "test report server" backed by a Google Cloud Storage (GCS) bucket. The reports are served as webpages through a gcs-proxy application, organized by repository, workflow, and run ID.
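The repository/workflow/run-ID layout makes report links trivially composable. A minimal sketch of how such a link might be built, assuming an illustrative host name and an `index.html` entry point (neither is confirmed by the article):

```python
# Compose a report URL on the "test report server", assuming the
# <base>/<repository>/<workflow>/<run_id>/ layout described above.
# The host and file name are placeholders, not Trivago's actual values.
def report_url(repository: str, workflow: str, run_id: int,
               base: str = "https://reports.example.internal") -> str:
    return f"{base}/{repository}/{workflow}/{run_id}/index.html"

print(report_url("trivago-web", "e2e-tests", 4711))
# → https://reports.example.internal/trivago-web/e2e-tests/4711/index.html
```

A deterministic layout like this is what lets the rest of the pipeline (badges, PR comments, Slack messages, Kibana links) point at a report without any lookup step.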

After test execution, the workflow runs like this:

  • Logging test results (from Cluecumber JSON files) to Kafka.
  • Generating a test report as an HTML page with attachments via the Cluecumber plugin.
  • Uploading the generated test report folder to the “test report server” GCS bucket.
  • Sharing the link to the test report as a GitHub status badge, comment on a PR, message to Slack, etc., depending on workflow requirements.
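The first step above, flattening test results into log entries bound for Kafka, can be sketched as follows. The JSON shape here is a simplified Cucumber-style structure assumed for illustration, not the actual Cluecumber schema, and the entries are printed rather than sent to a real Kafka producer:

```python
import json

# Simplified Cucumber-style report: features contain scenarios ("elements"),
# which contain steps with pass/fail results. This shape is an assumption.
report_json = """
[{"name": "Search hotels",
  "elements": [
    {"name": "filter by price",
     "steps": [{"result": {"status": "passed"}},
               {"result": {"status": "failed"}}]}]}]
"""

def to_log_entries(report: str, run_id: int, base_url: str) -> list[dict]:
    """Flatten a JSON test report into one log entry per scenario."""
    entries = []
    for feature in json.loads(report):
        for scenario in feature["elements"]:
            statuses = [s["result"]["status"] for s in scenario["steps"]]
            entries.append({
                "run_id": run_id,
                "base_url": base_url,
                "feature": feature["name"],
                "scenario": scenario["name"],
                # A scenario fails if any of its steps failed.
                "status": "failed" if "failed" in statuses else "passed",
            })
    return entries

# In the real pipeline these entries would be published to Kafka and picked
# up by Logstash; here we just print them.
for entry in to_log_entries(report_json, 4711, "https://pr-101.example.test"):
    print(entry["scenario"], entry["status"])  # → filter by price failed
```

Tagging every entry with the base URL is what makes the Grafana flakiness query described later possible.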

Dashboards Used

Kibana: Kibana provides a visual reporting tool and connects visuals to the report links for one-click-access to the actual records, with execution details, screenshots, and video recordings:

The Kibana visual reporting tool and easy access to report links.

Grafana: Grafana is used to scan overnight for flakiness on the main application branch by counting distinct base URLs per failing test. Since the base URL is tied to the PR/branch name, a test that fails on more than a predetermined threshold number of URLs triggers an alert to the appropriate people via the appropriate channels.
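The logic behind that query can be sketched in a few lines: a test failing across many distinct base URLs, i.e. many unrelated branches and PRs, is likely flaky rather than a real regression. The threshold value and data below are assumptions for illustration:

```python
from collections import defaultdict

# Assumed threshold: the article does not give Trivago's actual value.
FLAKINESS_THRESHOLD = 3

def find_flaky_tests(failures, threshold=FLAKINESS_THRESHOLD):
    """failures: iterable of (test_name, base_url) pairs from failed runs.

    A test failing on more distinct base URLs than the threshold is
    flagged as likely flaky, mirroring the Grafana alert logic above.
    """
    urls_per_test = defaultdict(set)
    for test_name, base_url in failures:
        urls_per_test[test_name].add(base_url)
    return sorted(
        name for name, urls in urls_per_test.items() if len(urls) > threshold
    )

failures = [
    ("search_filters", "https://pr-101.example.test"),
    ("search_filters", "https://pr-102.example.test"),
    ("search_filters", "https://pr-103.example.test"),
    ("search_filters", "https://pr-104.example.test"),
    ("checkout_flow", "https://pr-101.example.test"),
]
print(find_flaky_tests(failures))  # → ['search_filters']
```

Here `search_filters` fails on four distinct URLs and is flagged, while `checkout_flow`, failing on a single URL, is treated as a possible real failure.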


The Trivago project team implemented straightforward alerting and monitoring and found success, increasing trust in its test automation by putting feedback within easy reach.

More details are available on Trivago's ELK stack and its Cluecumber tool.
