
AI: Short-Term Overhype, but Underhyped for the Long Haul

Solving the communications problem when it comes to AI is a far more complex undertaking. But I firmly believe that it is possible.
Jun 3rd, 2022 10:00am
Feature image via Pixabay.

Claus Topholt
Claus Topholt is co-founder and chief product officer at Leapwork, a no-code test automation company.

There is no doubt about it: Artificial intelligence technology has the potential to open up a whole new world of exciting opportunities. AI has already been successfully applied to a dizzying array of use cases, from chatbots and natural language processing (NLP) to conquering Go and bridge and automating many aspects of IT.

But that’s more a function of the exponential growth of computing power than of new breakthroughs in core AI technologies. At its core, AI is advanced statistical data analysis, and the mathematical models haven’t changed much in the last two to three decades. For instance, when we refer to artificial neural networks, the technology sounds like it works like the neurons in the human brain. But that is very much not the case — in reality, it’s more accurate to describe neural networks as sequential matrix multiplications.
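To make that last point concrete, here is a minimal sketch of a two-layer "neural network" forward pass. The weights are random and nothing is trained; the point is simply that the computation is two matrix multiplications with a nonlinearity between them.

```python
# Sketch: a neural network forward pass is sequential matrix
# multiplications plus a simple nonlinearity. Weights are random,
# purely for illustration.
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    # Elementwise nonlinearity applied between the two matrix products.
    return np.maximum(0, x)

# Two layers: 4 inputs -> 8 hidden units -> 2 outputs.
W1 = rng.standard_normal((4, 8))
W2 = rng.standard_normal((8, 2))

x = rng.standard_normal(4)   # one input vector
hidden = relu(x @ W1)        # first matrix multiplication
output = hidden @ W2         # second matrix multiplication

print(output.shape)  # (2,)
```

Stack more `W` matrices and you have a "deep" network; the mathematics stays the same.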

Because AI sounds magical, we vastly overestimate its potential in the short term. More computing power alone won’t address the key challenges that prevent AI from, for example, enabling truly autonomous software testing. But at the same time, once we do achieve the necessary breakthroughs — which will take some time — most people will have underestimated the impact that AI will have on IT and the world at large.

Specifically, the challenge I see as most limiting the capabilities of AI is one of human-to-AI communication. It should be possible to tell a computer what you want it to do without having to explain in any technical detail how to do it. Essentially, we need to be able to give an AI the requirements for a task, and then the AI can handle the rest.

Declarative vs. Imperative Models of Communication with Machines

To illustrate, I’ll use an example from my own industry: AI and quality assurance in software testing. Despite a great deal of automation in the development of software, testing is still a very manual process. In fact, a recent GitLab survey of more than 4,000 developers found that testing is the No. 1 reason for release delays.

As one respondent to the survey succinctly said: “Testing delays everything.” There’s a lot of talk in the industry about fully automated QA using AI, meaning that a tester simply feeds the requirements to the AI and the platform takes over all testing from there.


That’s not going to happen any time soon because, except in narrow situations, testers currently lack the ability to communicate to the AI what the software is expected to do without delving into code and highly technical configuration. In other words, for a specific test, it’s possible to do this in a visual, flow-chart manner with a no-code platform, but teaching an AI the requirements for an entire software product is well beyond current technological capabilities. If we could break down that language barrier, the AI would be able to test the software by itself.

Here’s a good way to think about the problem.

We need to be able to communicate with AI in a declarative rather than imperative format. For example, a declarative way of communicating with AI would be “I want to click that button,” and the AI would perform the action.

If we did so in an imperative manner, which is largely how we have to do this today, the language would need to be extremely technical: “I want to find the button in the HTML page using XYZ statement and perform an action called the left mouse click using this JavaScript.”
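The contrast can be sketched in code. Everything below is hypothetical — the "page" is a plain dictionary standing in for a real DOM, and names like `find_by_xpath` and `click` are invented for illustration, not a real framework's API.

```python
# Hypothetical sketch of imperative vs. declarative test actions.
# The "page" is a dictionary standing in for a DOM.

page = {
    "//button[@id='submit']": {"label": "Submit", "clicked": False},
}

# Imperative: the tester spells out *how* to locate and act.
def find_by_xpath(page, xpath):
    return page[xpath]

def send_left_mouse_click(element):
    element["clicked"] = True

button = find_by_xpath(page, "//button[@id='submit']")
send_left_mouse_click(button)

button["clicked"] = False  # reset so the declarative version acts fresh

# Declarative: the tester states *what* they want ("click that button");
# the tool works out locators and events on its own.
def click(page, label):
    for element in page.values():
        if element["label"] == label:
            element["clicked"] = True
            return
    raise LookupError(f"No element labelled {label!r}")

click(page, "Submit")
```

The declarative call carries only the tester's intent; all the locator and event mechanics live inside the tool.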

To use an analogy, it’s like the difference between a traditional car and a self-driving one. The normal car is imperative in nature. The driver must turn the wheel, apply the brakes, operate the gas pedal and much, much more to arrive at the store. A true self-driving car, on the other hand, is declarative. The driver simply says, “I want to go to city hall,” and the car takes care of the rest.

Teaching Testers to Code — a Failed Experiment

I first recognized this problem when I was working at an investment bank, where I covered system architecture, continuous delivery, live systems troubleshooting and performance optimization of their social trading platform.

Testing was absolutely vital because the bank depended on high-volume rapid trading, and poor software quality ran the risk of quite literally bankrupting the institution. We wanted to speed up the testing process without compromising quality. I thought a potential solution could be to make a simplified programming language to build tests so testers could set them up on their own, without having to involve programmers.

Our testers were highly skilled, and they were subject matter experts with a deep understanding of the complexity of our software. But their expertise wasn’t coding. Forcing programming onto them was not the right approach.

If I was going to help them build their own tests, I’d have to empower them to do so in a more intuitive way. Then, one day, we were using a whiteboard to draw a flow chart, and it was such an incredibly clear way of expressing something with very complex dependencies. That, I saw, was the difference between an imperative and a declarative model for communicating with computer systems.
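The whiteboard insight can be sketched as code, too: the test is expressed as a flow of named steps (pure data, the kind of thing a flow chart captures), and a generic runner executes it. The step names and the tiny "system under test" here are invented for illustration.

```python
# Sketch: a test as a declarative flow of named steps, executed by a
# generic runner. Steps and the toy system state are illustrative only.

def login(state):
    state["logged_in"] = True

def add_to_cart(state):
    assert state.get("logged_in"), "must log in first"
    state.setdefault("cart", []).append("item-1")

def checkout(state):
    assert state.get("cart"), "cart must not be empty"
    state["order_placed"] = True

STEPS = {"login": login, "add_to_cart": add_to_cart, "checkout": checkout}

# The "flow chart": an ordered list of step names. From the tester's
# point of view, this is the whole test — no code required.
flow = ["login", "add_to_cart", "checkout"]

state = {}
for step in flow:
    STEPS[step](state)

print(state["order_placed"])  # True
```

The tester edits the `flow` list; the implementations behind the step names belong to the tool.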

Solving the communications problem when it comes to AI, however, is a far more complex undertaking. But I firmly believe that it is possible. And it seems that model-based testing may help us solve part of the puzzle. In model-based testing, the requirements for what the software is supposed to do are expressed in a digital twin of the software you’re testing. But the model can’t be built on nothing — it needs requirements, and it’s difficult to describe those requirements to an AI or machine learning platform when you’re working with abstract concepts.
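A minimal sketch of the model-based idea: the expected behavior is a small state machine (the "digital twin"), and candidate test sequences are derived by walking it. The states and transitions below describe a hypothetical login screen and are invented for illustration.

```python
# Sketch: model-based testing with a tiny state machine as the model.
# States and transitions are hypothetical, for illustration only.
from itertools import product

# Expected behavior of an imagined login screen:
# (current_state, action) -> next_state
MODEL = {
    ("logged_out", "enter_valid_credentials"): "logged_in",
    ("logged_out", "enter_bad_credentials"): "logged_out",
    ("logged_in", "log_out"): "logged_out",
}

def walk(start, actions):
    """Replay a sequence of actions against the model."""
    state = start
    for action in actions:
        state = MODEL[(state, action)]  # KeyError = path not allowed
    return state

# Derive simple test cases: every two-action path the model allows,
# paired with the state the software is expected to end in.
paths = []
actions = {action for (_, action) in MODEL}
for a, b in product(actions, repeat=2):
    try:
        end = walk("logged_out", [a, b])
    except KeyError:
        continue  # the model forbids this sequence; skip it
    paths.append(((a, b), end))
```

Each entry in `paths` is a test the real software should pass: perform the actions, then check that it landed in the state the model predicts.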

Solving this problem will require a great deal of research and innovation, but I firmly believe it will be the future of software testing.
