IBM: If the Customer Thinks It’s AI, Then It Is
A few decades ago, I established a baseline definition for “artificial intelligence” that I presumed could scale through the ages: any set of tools or resources capable of producing results or behaviors that would appear to require intelligence. I thought about this very carefully. I originally had “programs” instead of “tools or resources,” until I realized that some things that express AI behaviors might not be produced by people. I had “a result,” until I realized that for folks to believe something appears intelligent, it needs to display multiple results. And then I added “or behaviors,” since I realized not every product of AI needs to appear mathematically derived.
My fear was that I would end up with such a generalized definition that it would end up sounding about as simultaneously irrational and correct as Justice Potter Stewart’s famous definition of pornography, as something he would “not attempt further to define … but I know it when I see it.”
Does a result that requires deep calculation qualify as a product of intelligence? Our willingness as a species to accept anything that calculates or processes data quickly as an “electronic brain” dates back to the earliest days of the desktop calculator. Today, it would be hard for anyone to correlate a device with less than 2K of ROM with a center of intelligence. But then again, we don’t have much trouble awarding a device that defeats human contestants in a trivia game the grand prize of a personal pronoun.
I’ll Take Analytics for $1000, Alex
Watson has been IBM’s brand for business services that appear to require intelligence, named after the set of AI programs that ran on IBM’s supercomputers. Earlier this month, IBM added Tradeoff Analytics to its Watson portfolio — a service enabling developers to build applications that provide decision support.
IBM could have named it “Bluemix Tradeoff Analytics,” because the service is actually hosted on its Bluemix public cloud. But it chose otherwise.
“Tradeoff analytics very much fits the world of traditional analytics, in the sense that you provide us a semi-structured data format when you’re placing it inside the service,” explained Watson Platform Director Vince Padua, in an interview with The New Stack. “It can then provide you JSON or text-style responses, as well as a widget to provide visualization around it.
“But the element that brings it into the world of Watson and cognitive,” he continued, “is not so much the algorithms that it provides and that it uses. It’s more of this idea of discovering non-obvious facts. In analytics, you’re typically looking for an answer. In the world of cognitive and machine learning (ML) and AI, you’re looking more for insights and discoveries of things you didn’t even know or see before.”
The typical analytics problem, said Padua, can be likened to seeking a needle in a haystack. Where the problem ceases to be a calculator task and enters the realm of the “electronic brain” is when the analysis of the problem yields more results, and thus more discoveries about the nature of the problem at hand, than just the end solution.
Actors and Their Roles
In IBM’s case, a tradeoff analytics problem is presented as a kind of spreadsheet where several attributes may be expressed as values along a linear scale. In an optimum world, certain of these attributes would be maximized, and certain others avoided. Say, for instance, you operated a regional wireless provider and you had a limited number of cells-on-wheels (COWs) to cover an outdoor event that spans several acres. You might have parameters on signal penetration for given points in the territory, a few of which may be optimum. But you’d also have estimates of how much time it would take for transmitters to be deployed there, how much electricity would be required to run them, and the difficulty of the terrain.
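The COW scenario can be expressed in roughly the shape the service accepted: a list of columns naming the attributes (each with a goal of maximizing or minimizing), and a list of options holding the candidate deployments. The field names below follow the problem format IBM documented for Tradeoff Analytics at the time; the site names and values are invented for illustration.

```python
# A hypothetical "dilemma" for the COW deployment scenario. Columns
# declare each attribute and whether it should be maximized or
# minimized; options are the candidate deployment sites with their
# values for every attribute.
problem = {
    "subject": "COW deployment sites",
    "columns": [
        {"key": "signal", "type": "numeric", "goal": "max", "is_objective": True},
        {"key": "setup_hours", "type": "numeric", "goal": "min", "is_objective": True},
        {"key": "power_kw", "type": "numeric", "goal": "min", "is_objective": True},
        {"key": "terrain_difficulty", "type": "numeric", "goal": "min", "is_objective": True},
    ],
    "options": [
        {"key": "north_ridge",
         "values": {"signal": 0.92, "setup_hours": 6, "power_kw": 18, "terrain_difficulty": 8}},
        {"key": "parking_lot",
         "values": {"signal": 0.71, "setup_hours": 2, "power_kw": 12, "terrain_difficulty": 2}},
        {"key": "east_field",
         "values": {"signal": 0.83, "setup_hours": 3, "power_kw": 14, "terrain_difficulty": 4}},
    ],
}

# Sanity check: every objective attribute appears in every option.
objectives = [c["key"] for c in problem["columns"] if c["is_objective"]]
assert all(set(objectives) <= set(o["values"]) for o in problem["options"])
```

A client would POST a document like this to the service and receive back a JSON resolution ranking the options, along with data feeding the visualization widget.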
The “tradeoffs” would come by way of visualizing how much you may be willing to sacrifice in one area to gain ground towards one or more of your stated objectives. IBM’s Watson Tradeoff Analytics service includes a visualization widget, which you can test for yourself with simulated problem data. Padua described it as a graphical plot representing potential solutions to a problem, and where they fall relative to the constraints of the problem. The constraints can be tuned, and the widget responds by automatically re-aligning the nodes representing solutions.
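IBM does not disclose the algorithms behind the widget, but the core operation in any tradeoff analysis of this kind is filtering out dominated options: a candidate is dominated when some other candidate is at least as good on every objective and strictly better on at least one. What survives is the set of genuine tradeoffs. A minimal sketch of that filtering, with invented option names and values:

```python
def dominates(a, b, goals):
    """True if option a is at least as good as b on every objective
    and strictly better on at least one. goals maps each attribute
    to 'max' or 'min'."""
    at_least_as_good = all(
        a[k] >= b[k] if g == "max" else a[k] <= b[k]
        for k, g in goals.items()
    )
    strictly_better = any(
        a[k] > b[k] if g == "max" else a[k] < b[k]
        for k, g in goals.items()
    )
    return at_least_as_good and strictly_better

def pareto_front(options, goals):
    """Keep only the options that no other option dominates."""
    return {
        name: vals for name, vals in options.items()
        if not any(dominates(other, vals, goals)
                   for o, other in options.items() if o != name)
    }

goals = {"signal": "max", "setup_hours": "min"}
options = {
    "north_ridge": {"signal": 0.92, "setup_hours": 6},
    "parking_lot": {"signal": 0.71, "setup_hours": 2},
    "east_field":  {"signal": 0.71, "setup_hours": 3},  # dominated by parking_lot
}
front = pareto_front(options, goals)
# east_field drops out; north_ridge and parking_lot remain as tradeoffs:
# better signal versus faster setup.
```

Tightening a constraint in the widget amounts to re-running this kind of filter over a narrower set of candidates, which is why the solution nodes re-align as the constraints are tuned.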
It’s not the first tradeoff analytics service ever made available by a long shot. But when competitors offer it, they don’t characterize it as AI.
“Traditional analytics, in many ways, has focused on structured data, very large volumes, and relatively simple algorithms to gather insights from that information,” said Padua. “In the world of AI and ML and cognitive, maybe it’s a smaller subset of data, but it’s typically unstructured, and you apply more complex algorithms to it. From my point of view, we’re seeing an evolution towards a blending of those two things — the worlds of structured and unstructured data. Whether you call something ‘analytics’ or whether you call something ‘machine learning’ or ‘artificial intelligence,’ the reality is, our clients are not really looking at them as two separate things. They’re looking at their data as though it’s data — whether it is structured or unstructured. So with Tradeoff Analytics and the branding of Watson, we look at this world now as bringing together the elements of structured and unstructured information.”
It’s a plea from IBM for you to look at the world the way you once did, when mechanical devices seemed to do intelligent things. If you think it’s a bit of a stretch, ask yourself, is there any other way for IBM or anyone else to create competitive advantage around math as a service?
Feature image: “Jeopardy – Watson vs Lotuspherians – Lotusphere 2011” by Paul Hudson is licensed under CC BY 2.0.