Defining AI: Add Machine Learning into Your Production Environment
Over the past several years, there’s been a great deal of discussion and debate about Artificial Intelligence (AI). Decades of research have led to the commercialization of products that incorporate some form of AI, but the purpose, utility and quality of the AI used varies greatly. AI-washing is the practice of selling products or services as containing AI when all they really have are inflexible, “fixed intelligence” algorithms. So, what separates a mere computer algorithm from AI?
Beyond ‘Just a Computer’
In the real world, we want AI to perform tasks that traditionally require human judgment to accomplish. This is fundamentally the same reason we invented computers and then wrote simple algorithms for computers, such as parsing spreadsheet inputs. AI solves far more complex problems that require coping with the unknown.
If an algorithm can only cope with a fixed number of pre-programmed, expected cases — essentially, it is nothing more than a glorified series of “if-then” statements — then it probably doesn’t meet the modern bar to be considered AI. If, however, the algorithm has at least some limited capacity to make (potentially) optimal decisions based on unknown and new information, then we’re probably safe to call it AI.
Fifteen years ago, autonomous robots were fantasy. Today, you can buy robots with onboard computers packed with various algorithms that sense the environment, make complicated decisions about what to do with that information and then act on that information.
This same ability to cope with the unknown and unexpected is the major focus of most types of AI, from finance to information security. These AI systems are no longer confined to the lab (or to robots) and are increasingly found assisting IT operations more generally.
For example, machine learning can be used as part of malware detonation and examination in a sandbox environment. Traditional approaches can only determine that an executable is a risk if it attempts to do something already known to be damaging. Machine learning can help determine whether previously unencountered actions are likely to be a threat.
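As a toy illustration of that idea, a model trained on the behavior of known samples can place a never-before-seen behavior pattern near the cluster it most resembles. The behavior features, labels and 1-nearest-neighbor model below are all hypothetical stand-ins for what a real sandbox pipeline would use:

```python
import math

# Hypothetical per-sample behavior features observed in a sandbox:
# (files_written, registry_edits, outbound_connections), plus a label.
labeled = [
    ((2, 0, 1), "benign"),
    ((3, 1, 0), "benign"),
    ((40, 25, 12), "malicious"),
    ((55, 30, 9), "malicious"),
]

def classify(sample):
    """1-nearest-neighbor: label a new sample by its closest known sample."""
    _, label = min(labeled, key=lambda item: math.dist(item[0], sample))
    return label

# A behavior pattern never seen before still lands near the malicious cluster.
print(classify((48, 20, 15)))  # → malicious
```

Real systems use far richer feature sets and models, but the principle is the same: the verdict comes from learned similarity to past behavior, not from a hand-written rule for this exact executable.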
AI-Enhanced IT Operations Through Anomaly Detection
Anomaly detection lives on the border between simpler technologies and “proper” AI. Anomaly detection involves a computer looking at a dataset for patterns — often a dataset far too large for any human to manually review. Some kinds of anomaly detection can be done without AI, but incorporating machine learning into anomaly detection is increasingly common.
Simple anomaly detection can be built with a bash script. Complicated anomaly detection involves machine learning and may involve other computer science approaches to AI, as well. Anomaly detection is absolutely critical to today’s IT operations because it is impossible for any human to process the telemetry (for example: log files, user behavior, binary executables) that comes off of even a single computer in real-time, let alone data from organizations with thousands of employees.
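The simple end of that spectrum needs nothing more than basic statistics. Here is a minimal z-score detector over response-time samples, sketched in Python; the latency values and threshold are illustrative, and a real deployment would tune both:

```python
import statistics

def find_anomalies(samples, threshold=2.5):
    """Flag values more than `threshold` standard deviations from the mean."""
    mean = statistics.mean(samples)
    stdev = statistics.stdev(samples)
    if stdev == 0:
        return []
    return [x for x in samples if abs(x - mean) / stdev > threshold]

# Mostly steady response times (ms), with one obvious outlier.
latencies = [102, 98, 105, 99, 101, 97, 103, 100, 96, 104, 910]
print(find_anomalies(latencies))  # → [910]
```

A detector like this has no learned component at all, which is exactly why it breaks down on telemetry whose “normal” shifts over time — the gap machine learning-based anomaly detection is meant to close.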
This brings us to machine learning. Machine learning is typically the point at which even people who dislike the term AI will reluctantly agree that the algorithm in question fits today’s buzzword interpretation of “AI.”
AI-enhanced IT Operations (AIOps) is an emerging discipline that aims to do exactly this. AIOps uses machine learning to process the input from one or more anomaly detection algorithms in order to quickly determine the root cause of problems with IT infrastructure. The same technologies used for AIOps are useful to security teams as well, as machine learning can comb through data lakes of infrastructure and firewall logs to identify threats that humans would miss.
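A toy version of that correlation step can be sketched as follows, assuming anomaly events arrive as (timestamp, source) pairs; the window size and source names are illustrative:

```python
from collections import defaultdict

def correlate_anomalies(events, window=60):
    """Bucket anomaly events into fixed time windows and rank the windows
    by how many distinct telemetry sources reported anomalies in them."""
    buckets = defaultdict(set)
    for timestamp, source in events:
        buckets[timestamp // window].add(source)
    # Windows where several independent sources fired at once are the
    # most promising starting points for root-cause analysis.
    return sorted(buckets.items(), key=lambda kv: len(kv[1]), reverse=True)

# Hypothetical anomaly events: (unix_timestamp, source).
events = [
    (1000, "firewall"), (1005, "app-log"), (1010, "cpu-metrics"),
    (1300, "app-log"),
    (2000, "firewall"),
]
top_window, sources = correlate_anomalies(events)[0]
print(top_window, sorted(sources))  # → 16 ['app-log', 'cpu-metrics', 'firewall']
```

Production AIOps platforms replace this fixed-window counting with learned models of event co-occurrence, but the goal is the same: surface the small cluster of related anomalies a human should look at first.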
Mind the AI Gap
So, how do you go about adding AI to your infrastructure? Given the AI skills shortage, “build your own AI to run your infrastructure” isn’t feasible for most organizations. So how do we judge the value of the AI component of a product or service on its own?
A good way to judge the value that AI brings to a product is to ask yourself, “what kind of skills and time would be needed for my organization to develop this capability on our own?” If you pulled that AI out of the product, how much would its value decrease, and could you build — and train with an equivalent dataset — that capability with only the people you have today?
There’s nothing magical about AI. Even by a modern, reasonably prescriptive definition, AI is a real, practical tool that is being used everywhere today. It is an increasingly critical tool in information security, if for no other reason than that many threat actors are quite good at staying hidden, and AI can help uncover those threat actors by finding anomaly correlations in quantities of data too vast for any human to parse. AI isn’t magic, but it is a useful tool.
Feature image via Pixabay.