
AI Is Best Supporting Human Decision-Making — Not Replacing It

A key lesson about AI development: Systems are only as good as their training.
Feb 4th, 2022 10:00am by Ken Seier
Feature image via Pixabay.

Ken Seier
Ken Seier, national practice lead for data & AI at Insight Enterprises, the global integrator of Insight Intelligent Technology Solutions for organizations of all sizes, is a veteran data and AI leader with a proven record of driving analytics success. He is passionate about aligning innovation with measurable outcomes and real business value, and about building products, programs and teams that generate visible wins and shorten time to value. He and his team are responsible for billions of dollars of revenue and savings through analytics initiatives and programs.

Artificial intelligence (AI) has become a buzzword in popular culture and in business, a catchall for smart computers and machinery. The term alone evokes a range of imagery, from Siri to Skynet. While AI isn’t synonymous with human-machine competition (Skynet notwithstanding), there have been notable examples of AI-enabled machines schooling humans in arenas ranging from trivia to video games. It may sting our collective ego to dwell on past defeats, but we can learn important lessons about the future of AI development by revisiting one of our first losses to machines: at chess.

Of course, these AI-enabled machines are not as “smart” as humans, even if they did beat us. Even today, we don’t have the capability to create “strong” AI: a broad, human-like intelligence capable of making detailed inferences and connections independently. All our current AI systems are “weak” or “narrow” AI, meaning they are focused on performing a specific task and rely on human intervention to define learning and training parameters.

Don’t be fooled by the label — “weak” AI is still incredibly complex and agile. Everything from self-driving cars to Amazon’s recommended purchases can be classified as “weak” AI. Plus, it beat us at our own games, so it can’t be that weak! It is because of this strength that we benefit from looking back at this series of chess wins and losses during the early days of modern AI.

Checkmate

Long before Watson was crushing the competition in Jeopardy!, a team at IBM was focused on winning the Fredkin Prize: $100,000 offered through Carnegie Mellon University, to be awarded the first time a computer beat a reigning world chess champion.

To do this, IBM set its sights on Garry Kasparov, chess grandmaster and World Chess Champion from 1985 to 2000, challenging him to three highly publicized matches against IBM computers. Kasparov won the first two matches easily, against IBM’s “Deep Thought” in 1989 and the first iteration of IBM’s “Deep Blue” in 1996. But in a shocking upset, he lost the crucial tie-breaking game, and with it the match, against an upgraded Deep Blue in 1997.

AI is at its best when it’s supporting human decision-making — not replacing it.

During the match, Kasparov believed he was exploiting a known fact about computers at the time: they would not sacrifice a piece (i.e., exchange a piece of higher value for a better position). Yet surprisingly, Deep Blue sacrificed a knight, throwing Kasparov’s plan of attack into question. A visibly rattled Kasparov resigned after only 19 moves, losing the game and, with it, the match.

Kasparov later learned that earlier on the day of the match, the team of chess grandmasters working with IBM had programmed this sacrificial move into Deep Blue’s “opening book” (a library of known, theoretically sound moves for beginning a game). The core of this “man vs. machine” story is really “man vs. man.” On its own, Deep Blue would never have made this sacrifice. Only after intervention from its human “teachers” would it consider the positional compensation worth the material sacrifice.
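Conceptually, an opening book works like a lookup table the engine consults before running any search: if the current position is in the book, the pre-approved move is played outright. The sketch below is purely illustrative (the position keys and moves are hypothetical), not IBM’s actual implementation.

```python
# Illustrative sketch of an "opening book": a table mapping known board
# positions to pre-approved moves, consulted before the engine searches.
# Keys and moves here are hypothetical placeholders, not real Deep Blue data.
OPENING_BOOK = {
    "start": "e2e4",
    "after:e2e4,c7c6": "d2d4",  # a Caro-Kann line, echoing game 6
}

def choose_move(position_key, search_engine=None):
    """Play from the book if the position is known; otherwise fall back
    to the engine's own search/evaluation."""
    if position_key in OPENING_BOOK:
        return OPENING_BOOK[position_key]
    return search_engine(position_key) if search_engine else None

print(choose_move("start"))  # -> "e2e4"
```

The key point of the anecdote is visible here: whatever the humans place in the table overrides the machine’s own judgment for those positions.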

What Does This Mean for AI?

In that single knight sacrifice lies a key lesson about AI development: Systems are only as good as their training. The IBM team knew they had to beat a grandmaster at the top of his game and focused Deep Blue’s training on countering him. Like all AI, the effectiveness of Deep Blue’s model was heavily dependent on its engineers and developers.

This small anecdote also illuminates another important aspect of responsible AI development: always keep a human in the loop. I have written in the past that AI is at its best when it’s supporting human decision-making — not replacing it. The algorithms that drive AI are incredibly powerful and complex but are missing fundamental human cognitive patterns that give us an edge. That’s why Deep Blue could beat Kasparov at chess, but only after a nudge in the right direction from its trainers. As AI continues to develop into more complex systems, we must ensure that humans play a regular role in the decision-making process.

When accepting the Fredkin Prize, one of Deep Blue’s principal designers, Feng-hsiung Hsu, said: “Some people are apprehensive about what the future can bring. But it’s important to remember that a computer is a tool. The fact that a computer won is not bad.”

I agree with his sentiment, and I imagine the many people and businesses that rely on AI for greater security, efficiency and convenience would also agree. The simple fact that your smartphone could obliterate Deep Blue at chess is a testament to how far AI has advanced and integrated into our everyday lives.

As we continue to develop our AI capabilities, we must remember that it’s a deeply human endeavor. Modern machines are only as powerful as the human ingenuity behind them. It’s up to us to identify AI’s weaknesses and blind spots, and to provide the guiding hand that helps it achieve its full potential (and our own).
