You’d be forgiven for believing that AI debugging already exists, because so many companies are claiming to have artificial intelligence powering their monitoring or observability products. In this, those monitoring companies are no different from the thousands of technology vendors making somewhat dubious claims about AI.
All that said, there genuinely has been an explosion of something that feels a bit more like AI: underlying technologies like deep learning are driving innovation in computer vision, object identification, natural language processing, voice recognition and image generation. These capabilities power healthcare solutions that can spot cancers or bleeds in patient scans, autonomous vehicles, virtual assistants and chatbots. But can they be used to find and fix problems in code?
So what can “AI” in its various forms do today when it comes to debugging? The most common form of “AI” in various monitoring and exception-management products is exception and anomaly detection: basically, noticing when something weird happens.
This isn’t just setting alerts when the CPU is suddenly at 99% utilization for half an hour. To even minimally qualify as AI, the system has to be able to learn what normal usage looks like and then make decisions about when something isn’t normal. For example, a Machine Learning-powered system can learn that a typical database request takes five milliseconds and that occasionally a complicated query can take 500ms; if the system starts to see that the database is taking 300ms for simple queries on a regular basis, it can alert people that the system isn’t working as expected.
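As a rough sketch of that idea (the sample latencies and the three-standard-deviations threshold are illustrative, not any vendor’s actual algorithm), a learn-the-baseline-then-flag-deviations check might look like:

```python
import statistics

def learn_baseline(latencies_ms):
    """Learn what 'normal' looks like from historical latency samples."""
    return statistics.mean(latencies_ms), statistics.pstdev(latencies_ms)

def is_anomalous(sample_ms, mean, stdev, threshold=3.0):
    """Flag a sample more than `threshold` standard deviations from the
    learned mean -- a crude stand-in for a learned model."""
    if stdev == 0:
        return sample_ms != mean
    return abs(sample_ms - mean) / stdev > threshold

# Historical simple-query latencies cluster around 5ms.
history = [5, 6, 4, 5, 5, 7, 5, 4, 6, 5]
mean, stdev = learn_baseline(history)

# A steady stream of 300ms simple queries should now raise an alert.
print(is_anomalous(300, mean, stdev))  # -> True
print(is_anomalous(6, mean, stdev))    # -> False
```

A real system would learn per-query-type baselines that shift over time, so the occasional legitimate 500ms complex query doesn’t trip the alarm; the point here is only the learn-then-compare shape of the logic.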
The particular value Machine Learning could add here is in spotting patterns in the anomalies. Perhaps the system crashes exactly two hours after the memory usage is at exactly 72%, but only on Wednesdays. Humans would struggle to notice that pattern, but the right sort of AI tool could make the connection.
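A crude illustration of that kind of correlation hunting, over entirely hypothetical telemetry: rank the conditions that precede crashes by how often they co-occur with one.

```python
from collections import Counter

# Hypothetical telemetry: (weekday, memory % two hours earlier, crashed?)
events = [
    ("Wed", 72, True), ("Wed", 72, True), ("Wed", 55, False),
    ("Mon", 72, False), ("Tue", 72, False), ("Wed", 72, True),
    ("Thu", 72, False), ("Fri", 60, False), ("Wed", 72, True),
]

crashes = Counter((day, mem) for day, mem, crashed in events if crashed)
totals = Counter((day, mem) for day, mem, _ in events)

# Rank conditions by crash rate; a human scanning raw logs would struggle
# to spot that ("Wed", 72) accounts for every crash.
suspects = sorted(
    ((crashes[key] / totals[key], key) for key in totals),
    reverse=True,
)
print(suspects[0])  # -> (1.0, ('Wed', 72))
```

A genuine ML tool would search a vastly larger space of feature combinations and time lags rather than a hand-picked pair of fields, but the underlying question it answers is the same: which conditions reliably precede the failure?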
Finding anomalies and predicting exceptions are key parts of resolving issues in modern distributed systems, but often when we talk about “debugging” we really mean fixing mistakes in the code itself — and beyond the limited syntax detection that some IDEs already provide.
Machine Learning could be a path to an AI debugger, given enough training data. If a model were trained on a large enough sample of code with marked, identified bugs and their fixes, it might learn to highlight suspected buggy code from past experience, particularly for common bugs.
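A heavily simplified sketch of that idea, with a hypothetical four-snippet training set and token counting as a stand-in for a real model:

```python
from collections import Counter

# Hypothetical training set: code snippets labeled buggy or clean. A real
# system would train on millions of fix commits; this toy model just
# learns which tokens show up more often in buggy examples.
training = [
    ("if x = 1: return x", "buggy"),        # assignment instead of comparison
    ("while True: pass", "buggy"),          # potential infinite loop
    ("if x == 1: return x", "clean"),
    ("for i in range(10): total += i", "clean"),
]

counts = {"buggy": Counter(), "clean": Counter()}
for snippet, label in training:
    counts[label].update(snippet.split())

def suspicion_score(snippet):
    """Count how many tokens in a snippet were seen more often in buggy
    training examples than in clean ones."""
    return sum(
        1 for tok in snippet.split()
        if counts["buggy"][tok] > counts["clean"][tok]
    )

# The lone '=' token matches the buggy pattern learned above.
print(suspicion_score("if x = 1: print(x)"))   # -> 1
print(suspicion_score("if x == 1: return x"))  # -> 0
```

Real research systems in this vein learn from abstract syntax trees and commit histories rather than raw token counts, which is exactly why they do best on common, frequently-fixed bug classes.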
Bugs, though, are inherently weird. There will always be uncommon bugs too. Innovative but unusual code shouldn’t be flagged as bad, buggy code. To do all of this unaided, a future AI debugger would have to independently understand the code itself.
There’s a debate about whether any of today’s machine learning systems are real “AI.” Some theorists argue that deep learning is solving a very narrow class of problems, but that it will never be a path to Artificial General Intelligence, the sort of AI that can show reason, thought and independent learning.
To solve bugs like a human, an AI will need to have a human’s general reasoning, creativity and insight. In other words, finding and fixing the uncommon bugs will take Artificial General Intelligence or something very close to it.
Imagine that a programmer explained to an AI system exactly what some code should be doing in every possible case. The AI could then automatically test all possible inputs, understand when something is considered to be an unwanted behavior, and use evolutionary algorithms to modify the code until it (hopefully) behaves as expected every time.
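As a minimal sketch of that loop, with a hypothetical spec and a “program” reduced to two mutable constants, a mutate-test-select cycle might look like:

```python
def spec(x):
    """The programmer's description of what the code should do."""
    return 2 * x + 3

def run(program, x):
    """A 'program' reduced to two constants a and b in a*x + b."""
    a, b = program
    return a * x + b

def error(program, inputs):
    """Total deviation from the spec across all test inputs."""
    return sum(abs(run(program, x) - spec(x)) for x in inputs)

def evolve(seed, inputs, max_generations=500):
    """Greedy mutate-and-select: nudge the constants, keep the fittest."""
    best = seed
    for _ in range(max_generations):
        if error(best, inputs) == 0:
            return best  # behaves as expected on every input
        a, b = best
        offspring = [(a + da, b + db)
                     for da in (-1, 0, 1) for db in (-1, 0, 1)]
        candidate = min(offspring, key=lambda p: error(p, inputs))
        if error(candidate, inputs) >= error(best, inputs):
            return best  # stuck: no mutation improves the program
        best = candidate
    return best

# Start from a "buggy" program (5*x - 2) and evolve toward the spec.
fixed = evolve((5, -2), inputs=range(-5, 6))
print(fixed)  # -> (2, 3)
```

Genuine evolutionary program repair mutates syntax trees across randomized populations rather than two numbers; this toy version only shows the test-mutate-select cycle the paragraph describes.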
However, if the programmer has to tell the AI how the code should work in every possible instance, our “AI debugger” becomes some sort of meta-compiler and the programmer’s descriptions become just a higher-level form of coding.
Today’s debugging AI is mostly anomaly detection, focused on system performance rather than on understanding the code itself. GitHub’s new AI-driven recommendation of which issues should be fixed first in a repository, and IDEs that are beginning to incorporate whole-line completion, are other examples of machine learning creeping into developers’ workflows.
Tomorrow’s AI should be able to go further, making a best guess at the root cause of common bugs from masses of training data and real-world experience of both the code and live systems.
But when we reach the point that Artificial General Intelligence can truly debug your arbitrary code, then AI will probably be writing all code in the first place.
Feature image via Pixabay.
The New Stack is a wholly owned subsidiary of Insight Partners. TNS owner Insight Partners is an investor in the following companies: Real, Bit.