
The Journey to Fully Autonomous Testing: Are We There Yet?

The future of fully autonomous testing is bright. Here's how to cut down on the busywork quality engineers are facing today.
Sep 1st, 2021 7:28am by David Colwell
Featured image via Pixabay

I’m often asked if and when fully autonomous testing could become a reality. That’s a topic I love discussing, but before delving into that, let’s take a closer look at the two words that make up that term.

Autonomous, meaning “without human intervention,” is pretty simple. “Testing” is more difficult, however. The investigative, inquisitive nature of testing does not lend itself to automation. What I am about to describe is best categorized as “autonomous checking.” With that in mind, let’s continue.

Describing Becomes Deciding

David Colwell
David has been in the testing industry for over a decade. He has worked as a technical test lead and head of QA at DX Solutions, a two-time BRW Fast 100 company, as well as in a variety of other technical roles at Tricentis. He is currently vice president, AI & Machine Learning at Tricentis.

With advanced tooling like vision-based test automation and other intelligent automation engines, the problems of automated checking have shifted from “How do I reliably automate this interface?” to higher-level problems. Humans are still overwhelmingly responsible for creating the automated checks: describing what inputs to fill in, what buttons to click, etc. This is the first horizon.

The shift to autonomy is best defined as “describing becomes deciding.” With approaches such as smart impact analysis, this is already the case. You don’t need to describe which tests to run; you just need to decide if the tool’s recommendations suit your needs. This is great in closed systems such as SAP, Salesforce and ServiceNow where these offerings shine. With the help of artificial intelligence, this trend will expand well beyond this into the realm of bespoke/custom applications.
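To make “describing becomes deciding” concrete, here is a minimal sketch of change-based test selection in the spirit of smart impact analysis. The test names, module names and coverage map are all hypothetical; real tools derive this mapping from code analysis or usage telemetry rather than a hand-written dictionary.

```python
# Which application modules each automated check exercises.
# (Hypothetical data; a real tool would compute this mapping.)
COVERAGE = {
    "test_checkout_flow": {"cart", "payments"},
    "test_profile_update": {"accounts"},
    "test_invoice_export": {"payments", "reporting"},
}

def recommend_tests(changed_modules):
    """Return the checks that touch any changed module.

    The tool *describes* a candidate set; a human still *decides*
    whether the recommendation suits the release at hand.
    """
    return sorted(
        name for name, modules in COVERAGE.items()
        if modules & set(changed_modules)
    )

print(recommend_tests({"payments"}))
# ['test_checkout_flow', 'test_invoice_export']
```

The point of the sketch is the division of labor: the machine proposes the run set, and the engineer’s job shrinks to accepting or adjusting it.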

So, great! Does this mean getting green-light printouts from the machine is the future? Is that true automation? Well, not so fast. You see, these closed systems not only have defined processes; they also have defined outcomes (the oracle). Not so with bespoke applications. While the actions to take can often be derived generically by observing people performing them, it’s not always possible to extract the “why” component. When a user executes a transaction, their eyes flick to the top of the screen to double-check that the “amount” value is correct. That validation is never captured, so the automated process misses the point of the check, which was to determine not only that the transaction was processed, but that it was processed correctly.
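The missing-oracle problem can be shown in a contrived example. Everything here is a hypothetical stand-in for a recorded user flow: a replayed check confirms only that the flow completed, while the human’s glance at the “amount” field was an extra assertion that action capture never records.

```python
def process_transaction(amount):
    # Deliberate bug for illustration: a fee is silently added, so the
    # resulting amount no longer matches what the user entered.
    return {"status": "processed", "amount": amount + 1.50}

def replayed_check(amount):
    """What naive action replay verifies: the flow completed."""
    return process_transaction(amount)["status"] == "processed"

def check_with_oracle(amount):
    """What the human actually verified with that glance at the screen:
    the transaction completed *and* the amount is correct."""
    result = process_transaction(amount)
    return result["status"] == "processed" and result["amount"] == amount

print(replayed_check(100.00))     # True  -- the bug slips through
print(check_with_oracle(100.00))  # False -- the explicit oracle catches it
```

Both checks execute the same actions; only the second encodes the “why,” which is exactly the part that cannot be inferred from watching clicks alone.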

Our Autonomous Future

This is not a bleak outlook, however. While fully autonomous checking might still be quite a way off, the trend of “describing becomes deciding” will remove a ton of the busywork that bogs down quality engineers today. Parsing through generated scenarios, injecting validations and deciding which to run is a much more pleasant job than worrying about why the Login button doesn’t have a stable ID field.

With that said, there are a few things to watch out for:

  1. Beware of Test Case Spam

If you embark on an autonomous testing endeavor, and your team comes back with a tool or process that generates thousands of tests, beware. You still need to parse through these tests, inject validations and debug them if they “fail.” The motto of “fewer, targeted tests” has been a good guide for the past 20 years, and it remains so now.
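One way to act on “fewer, targeted tests” is to prune a generated suite down to a minimal set that still covers everything the full suite did. The sketch below uses a greedy set-cover heuristic over a hypothetical suite; a real tool would drive this from coverage or risk data rather than hand-labeled requirements.

```python
# Hypothetical generated suite: test name -> requirements it covers.
GENERATED_SUITE = {
    "gen_test_001": {"login", "cart"},
    "gen_test_002": {"login"},
    "gen_test_003": {"cart", "checkout"},
    "gen_test_004": {"checkout"},
}

def prune_suite(suite):
    """Greedy set cover: repeatedly keep the test that covers the most
    still-uncovered requirements, until nothing is left uncovered."""
    uncovered = set().union(*suite.values())
    kept = []
    while uncovered:
        best = max(suite, key=lambda t: len(suite[t] & uncovered))
        kept.append(best)
        uncovered -= suite[best]
    return sorted(kept)

print(prune_suite(GENERATED_SUITE))
# ['gen_test_001', 'gen_test_003'] -- half the suite, same coverage
```

Two tests survive with the same requirement coverage as four, which means half as many scenarios to validate, debug and maintain.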

  2. Investigate the How

When you are told that your tests can be automatically generated, dig a bit into how this happens. AI is not magic. If something appears to be magical, it is most likely a fabrication. Your team should be able to tell you that the process examines usage patterns, parses existing (accurate) definitions or has some other source of how it defines the test. “Shaking up the app and generating tests from it” is still firmly in the world of magical thinking.

  3. Ask About Maintenance

Tests that fail must be investigated, updated or discarded. Having a thousand tests is like having a thousand smoke detectors. If you own an entire high-rise apartment building, that’s probably justified. If you own a house, you will spend two hours switching them all off when you burn the toast. Ask how failing tests will be maintained to determine whether autonomy will actually save you time in the long run.
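The smoke-detector analogy can be put into rough numbers. This is a back-of-envelope model, and every figure in it (suite size, flake rate, triage time) is an illustrative assumption, not a benchmark.

```python
def weekly_triage_hours(num_tests, flake_rate, minutes_per_failure):
    """Expected hours per week spent investigating spurious failures."""
    return num_tests * flake_rate * minutes_per_failure / 60

# 1,000 generated tests, 2% flaking per week, 15 minutes each to triage:
print(weekly_triage_hours(1000, 0.02, 15))  # 5.0 hours of pure upkeep

# 50 targeted tests with the same flake rate and triage cost:
print(weekly_triage_hours(50, 0.02, 15))    # 0.25 hours
```

Even with a modest flake rate, the large generated suite burns most of a workday per week on triage alone, which is the cost to weigh against whatever the autonomy saved up front.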

Despite this, the future of autonomous checking appears to be very bright. At Tricentis, our goal is to devise a method for generating the best — and fewest — tests necessary to achieve the desired level of assurance. We look forward to continuing on this journey.

TNS owner Insight Partners is an investor in: Pragma, Tricentis.