If you’re wondering about the headline, then you may not yet be a fan of Douglas Adams’ “Hitchhiker’s Guide to the Galaxy.” At one point, Marvin, the Paranoid Android, was being coerced into doing yet another task he believed he was far too wise and all-knowing to bother wasting his precious time with. One fellow promised that a whole new, exciting life awaited him, to which he responded, “Oh, no, not another one.”
Humans have a problem, and perhaps you’ve noticed it. When anything falls outside their realm of comprehension, such as the nature of the universe or the lyrics of a Steely Dan song, they pretend it falls inside and they go from there. The nature of animals, the order of the universe, the afterlife, the production of bachelor/bachelorette dating shows — humans anthropomorphize these inhuman activities so that they can face down what they have problems understanding.
Robots are a product of humans’ egocentric need to make machines address them and interact with them using a human frame of reference. When humans imagine a future of machines that are totally under human control, they project robots as humanoids. And when humans perceive machines as having gone completely out of control, humans project them as humanoids.
A Different Imitation Game
At first, it might not seem that robots belong in the same category as philosophical states of being or the search for a unified field theory, since robots are, by definition, human creations. But like any philosophy, the ideal of the robot is not only a product of the human mind but a full-time resident there. When humans speak of robots in idealistic terms, or when puppets masquerading as robots are paraded before audiences at electronics shows, the machines they have in mind are not the armatures that weld parts onto automobiles or the prosthetic legs that let wounded veterans walk upright. Inevitably, somebody sticks happy faces or even sad faces on them, with blinking eyes and false eyelashes, in hopes that future machines can pretend to emote at least as well as people do.
Historically, the parts of robots that have been most successful are the ones least resembling the brain. Artificial intelligence is the endeavor to create technology that produces results that would otherwise have required human intelligence. Of course, humans’ standards for intelligence change. A computer that predicts the weather would have seemed omniscient in the 1970s; today when a storm passes over a town and people are hurt, computers are blamed for poor predictions.
The studies of AI and robotics are actually separate disciplines, the latter concentrating on automating tasks that require motor skills, whether or not the products end up resembling humans or even anything organic. The most successful robots in modern industry may have been inspired by nature, but they aren’t beholden to it. Something similar may be said of AI: the best machine learning concepts may have been inspired by human reason, but they end up not mimicking it at all.
That doesn’t stop humans from thinking the product of these two endeavors — that have strayed so far from their nests in the name of progress — should be bonded together again, and forced to move back home and live in the basement.
IBM’s researchers have accomplished magnificent things with the Watson project. But the company could only afford to sink $1 billion into its Watson business unit after it had successfully productized it by winning “Jeopardy.” Although the system has a promising future in diagnosing cancer, the way Watson became legitimized in most people’s minds was by beating Ken Jennings.
Last February, IBM entered into an agreement with Japan-based SoftBank Telecom to develop consumer applications for Watson in Japanese markets. The partners signaled their mutual commitment by demonstrating the pairing of SoftBank’s robot, called Pepper (above), with Watson’s cloud-based services. Theoretically, it’s a relocation of the robot’s “brain” onto a cloud platform, which at one level makes perfect sense. There’s no practical reason why a robot must wholly contain its own central operating system.
Take that fact to its logical conclusion, though: if the functionality of the Watson system can be accessed from anywhere, what practical purpose is served by accessing it through a robot?
There remains a deep-seated need within the human psyche to be the center of the universe, or at least the center of something of great importance besides another awards show. That need manifests itself in the hope that humankind’s greatest technological achievement will bear some resemblance to its creator.
And yet it doesn’t, for reasons that are eventually to humans’ great credit. In the 1960s, the space program gave rise to automated systems that enabled humans to travel between planetary bodies — at the time, their greatest technological achievement. In the 2000s, the space program required a way to compress information technology into smaller spaces, yet distribute it more broadly across the entire planet. The product of that endeavor was cloud computing — arguably, the greatest technological achievement, at least of that decade.
Producing functional programs for cloud platforms requires a radical rethinking of the nature of information. Virtualization gave rise to the idea that an application can be represented on a platform that’s larger than a single computer. Once applications are broken free from the binding of the processor, a newer concept — microservices — enables humans for the first time to consider individual functions as virtual computers in and of themselves.
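That idea of a function as a virtual computer in its own right can be made concrete with a minimal sketch: a single function placed behind its own HTTP endpoint, independently startable and addressable. Everything here — the `greet_service` function, its route, the port selection — is illustrative, not drawn from any real platform.

```python
# A minimal sketch of the "function as its own virtual computer" idea:
# the entire "application" is one function, deployed behind its own
# HTTP endpoint, independent of any particular machine.
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

def greet_service(name: str) -> dict:
    """The whole microservice: a single function."""
    return {"greeting": f"Hello, {name}"}

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        # A path like /greet/Marvin calls the function and returns JSON.
        name = self.path.rsplit("/", 1)[-1]
        body = json.dumps(greet_service(name)).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # keep the demo quiet

# Port 0 asks the OS for any free port; the service runs in a thread.
server = HTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()

url = f"http://127.0.0.1:{server.server_port}/greet/Marvin"
reply = json.loads(urllib.request.urlopen(url).read())
print(reply["greeting"])  # -> Hello, Marvin
server.shutdown()
```

The point of the sketch is the unit of deployment: nothing above depends on the function living inside a larger application, which is precisely what lets such functions be monitored, scaled, and replaced individually.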
Last week here in The New Stack, my friend and colleague Alex Williams suggested that the complexities of managing microservices will be too much for humans to be able to follow and analyze. Perhaps, he speculated, robots would be commissioned to perform the task of microservices monitoring.
I’m hoping Alex won’t kick me off the site for disagreeing so blatantly with his suggestion that the complexities of more modern systems will eventually be masked by the efficiency of robots. As a daredevil way of spreading the risk, I’ll bring this up: over at my other gig, the question is being asked whether human work has finally become menial enough for robots to take over most of it. And the suggestion is being made that the best way for people to avoid being rendered obsolete by robots is to become specialists in what they do — so unique as to become irreplaceable.
(Uniqueness is a trend that has been attributed to me several times throughout my career, although it hasn’t stopped me from being replaceable yet. Just not by robots.)
On one side of the equation, Alex supposes that the depth of abstraction plumbed by microservices-based systems may be too great for humans to keep up with, and that eventually robots would assume that burden. On the other side, Erika Morphy suggests that unskilled labor may become so trivial that robots will take that over, too. Should both suggestions end up being accurate, eventually the only thing left for human beings to do will be blogging.
To believe that abstraction eventually renders systems too complex for human comprehension is to overlook the great achievement of mathematics itself: the radical simplification of the most complex systematic constructs into systems that may, at some level, be intelligible. Abstraction is the way smart people tackle big problems: first by breaking them down into functional units, second by isolating those units into discrete roles, and third by pooling the resources available to those roles.
If such a system resembles anything in nature at all, it’s not so much humans as termites, who are capable of forming extraordinarily advanced societies with apparently none of the chaos of politics, and none of the unnecessary extravagance of awards shows.
It’s Life, Jim, But Not as We Know It
Problems that require artificial intelligence or machine learning to solve in automated fashion are not bound to robotics by contract to produce some corporeal entity with which to speak or whine or blink or weep. Science fiction requires robots because it’s difficult to award Oscars for roles like, “Disembodied Voice #3.” But the type of function for which Watson is best prepared does require a ribcage and a pair of arms and a head that tips to one side and pouts when it’s sad, for one reason only: to ensure that bloggers write about it.
Humans have always been distracted by good acting.
The systems of microservices that humans are creating only appear complex to those who expected them to appear human. Science fiction producers may think humans have gone off-script, as far as their ability to predict the future is concerned. Yet the intelligence that has emerged from humans thus far looks less like a human than it does a cloud.
The next task on humans’ busy agenda will be to reconcile themselves with the fact that this is a good thing.