
DeepMind’s New Milestones on the Road to Artificial General Intelligence

Dec 16th, 2018 6:00am

This month researchers revealed that a computer had taught itself how to beat everyone else in chess — in less than 24 hours. As computing power continues to improve, technology watchers are asking: are we on the cusp of major breakthroughs in artificial intelligence?

DeepMind headquarters in London

This month the world’s eyes turned to London-based AI company DeepMind Technologies (acquired by Google for half a billion dollars in 2014). They’re already the creators of the program that beat the world’s best player in Go. (The company likes to point out that Go is “one of the most complex and intuitive games ever devised, with more positions than there are atoms in the universe.”) But DeepMind’s ambitions go much, much further. DeepMind has said that its goal is to “solve intelligence” (and then use the findings to improve the world in crucial areas like health and energy). And its ultimate goal seems to be Artificial General Intelligence, or “strong AI” — the ability to perform as well as a human at a wide variety of tasks. “We’re on a scientific mission to push the boundaries of AI, developing programs that can learn to solve any complex problem without needing to be taught how.

“If we’re successful, we believe this will be one of the most important and widely beneficial scientific advances ever made.” They believe AI could eventually offer “a multiplier for human ingenuity” — and toward this grand goal, our simple human games are considered “a useful training ground.”

Behind this push for artificial intelligence are some powerful human brains. One of the company’s co-founders is 42-year-old Demis Hassabis, who was a child chess prodigy by the age of 13. He went on to work in the video game industry, but by 2009 he’d earned a Ph.D. in neuroscience, and in 2010 he co-founded DeepMind to apply what he’d learned. By 2015 the company’s researchers had published a paper about a system that “combines Deep Neural Networks with Reinforcement Learning at scale.” Turned loose on 49 different Atari video games, it mastered them “to superhuman level,” which the company called “the first demonstration of a general-purpose agent that is able to continually adapt its behavior without any human intervention, a major technical step forward in the quest for general AI.”

Advancing By Algorithm

But that’s also what’s remarkable about its newest program, AlphaZero: The program developed its game-playing expertise after only a few hours of playing games against itself. (The name AlphaZero refers to the unsettling fact that it uses zero human knowledge as input.) A new video released this month describes junking human data — millions of games played by human experts — for a new algorithm with a “much more elegant approach” which “stripped out all of the human knowledge, and just started completely from scratch.” Hassabis calls it an experiment in how little knowledge can be put into the systems, “and how quickly and how efficiently can they learn.”
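To make “starting completely from scratch” concrete, here is a minimal, hypothetical sketch of a self-play learning loop in Python. It is a deliberately tiny toy: a tabular value function for tic-tac-toe rather than a deep network with tree search, and every name in it (choose_move, self_play_game, the EPSILON and ALPHA constants) is invented for illustration, not taken from DeepMind’s code. The shape of the loop is the point: the program is told only the rules, plays against itself, and backs each game’s result up into its own evaluations.

```python
# Hypothetical toy sketch of learning purely from self-play: a tabular
# value function for tic-tac-toe, updated only from games the program
# plays against itself. Only the rules of the game are supplied.
import random
from collections import defaultdict

LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8), (0, 3, 6),
         (1, 4, 7), (2, 5, 8), (0, 4, 8), (2, 4, 6)]

def winner(board):
    for a, b, c in LINES:
        if board[a] != "." and board[a] == board[b] == board[c]:
            return board[a]
    return None

def legal_moves(board):
    return [i for i, sq in enumerate(board) if sq == "."]

# values[pos] = estimated outcome for the player who just moved to reach pos
# (+1 win, 0 draw, -1 loss), learned entirely from self-play.
values = defaultdict(float)
EPSILON, ALPHA = 0.1, 0.5   # exploration rate, learning rate (both made up)

def choose_move(board, player):
    moves = legal_moves(board)
    if random.random() < EPSILON:            # explore occasionally
        return random.choice(moves)
    # Exploit: pick the move leading to the position we currently value most.
    return max(moves, key=lambda m: values[board[:m] + player + board[m + 1:]])

def self_play_game():
    board, player, history = "." * 9, "X", []
    while winner(board) is None and legal_moves(board):
        m = choose_move(board, player)
        board = board[:m] + player + board[m + 1:]
        history.append((board, player))
        player = "O" if player == "X" else "X"
    result = winner(board)
    # Back the final result up through every position the game visited.
    for pos, mover in history:
        target = 0.0 if result is None else (1.0 if mover == result else -1.0)
        values[pos] += ALPHA * (target - values[pos])
    return result

if __name__ == "__main__":
    outcomes = defaultdict(int)
    for _ in range(20_000):
        outcomes[self_play_game() or "draw"] += 1
    print(dict(outcomes))   # result counts over 20,000 games of self-play
```

AlphaZero replaces the lookup table with a deep neural network and the greedy move choice with a guided tree search, but the training signal is the same: the outcomes of its own games.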

So what happened? ChessBase notes that after three days of self-play, AlphaGo Zero (the first program built this way) was better at playing Go than DeepMind’s earlier Go program — which had trained for over a year — while using fewer processors. And in a 100-game match against that earlier program, its record was 100-0, according to Professor David Silver, its lead researcher.

“People tend to assume that machine learning is all about big data and massive amounts of computation. But actually what we saw in AlphaGo Zero is that algorithms matter much more…”

And Go wasn’t the only game it tackled…

“Imagine this: you tell a computer system how the pieces move — nothing more,” explains the chess news site ChessBase. “Then you tell it to learn to play the game. And a day later — yes, just 24 hours — it has figured it out to the level that beats the strongest programs in the world convincingly!” DeepMind’s program went up against Stockfish, the free and open source chess engine that’s far stronger than Deep Blue, the IBM program which had famously defeated world chess champion Garry Kasparov back in 1997.

Chess.com writes that “According to DeepMind, it took the new AlphaZero just four hours of training to surpass Stockfish; by nine hours it was far ahead of the world-champion engine.”

Or, as Hassabis puts it in DeepMind’s video, “AlphaZero could start in the morning playing completely randomly, and then by tea be superhuman level. By dinner it would be the strongest chess entity there’s ever been.”

In 1,000 games against Stockfish, AlphaZero lost only 6 — and won 155 (with the other 839 games ending in a draw).

But what’s fascinating is it’s not like watching a normal chess game, according to DeepMind researchers. “In several games, AlphaZero sacrificed pieces for long-term strategic advantage, suggesting that it has a more fluid, context-dependent positional evaluation than the rule-based evaluations used by previous chess programs.”

Grandmaster Robert Hess called the games “immensely complicated,” notes Chess.com. To study the games Hassabis even brought in two old friends from his days as a teenage chess champion — Grandmaster Matthew Sadler and Natasha Regan.

“When I first started looking through the games, I started thinking ‘Oh, that’s quite interesting, that’s quite interesting,’” remembers Sadler. “And there’s just a couple of games that went bang… It’s like this young kid from deepest Russia is sort of arriving and then suddenly beating everyone.

“It doesn’t have an engine-like style,” he tells Chess.com. “It plays like a human on fire.”

Regan was equally impressed. “What I found so interesting is because it taught itself, it might play the game in a completely different way from the way that we play it. It’s like a check on everything that we’ve taught ourselves since chess was devised, really. And it feels like it’s got a lot of potential to do other things.”

Grandmaster Peter Heine Nielsen says: “After reading the paper but especially seeing the games I thought, well, I always wondered how it would be if a superior species landed on earth and showed us how they play chess. I feel now I know.”

While Stockfish looks at 70 million positions every second, AlphaZero examines just 80,000 per second. ChessBase points out that this is nearly 900 times fewer, offering this explanation from the research team’s paper: “AlphaZero compensates for the lower number of evaluations by using its deep neural network to focus much more selectively on the most promising variations — arguably a more ‘human-like’ approach to search…”

“In other words,” argues ChessBase, “instead of a hybrid brute-force approach, which has been the core of chess engines today, it went in a completely different direction, opting for an extremely selective search that emulates how humans think.”

“This is a game-changer…” writes ChessBase. “There is no other way of describing it…”
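As a rough illustration of that selective search, here is a hypothetical Python sketch of the PUCT-style selection rule described in the AlphaZero paper: each candidate move is scored by combining the average value it has returned so far with a prior probability supplied by the policy network, with the bonus shrinking as the move accumulates visits. The Edge class, the c_puct constant and the numbers in the example are all invented for illustration. (The arithmetic behind “nearly 900 times”: 70,000,000 / 80,000 = 875.)

```python
# Hypothetical sketch of PUCT-style move selection: exploit moves with a
# high average value (Q), but give a bonus to moves the policy network
# rates highly (high prior P) that have not been explored much yet.
import math
from dataclasses import dataclass

@dataclass
class Edge:
    move: str
    prior: float            # P(s, a): policy network's prior for this move
    visits: int = 0         # N(s, a): how often search has tried this move
    value_sum: float = 0.0  # sum of evaluations returned from below

    @property
    def q(self) -> float:   # Q(s, a): mean evaluation of this move so far
        return self.value_sum / self.visits if self.visits else 0.0

def select_move(edges, c_puct=1.5):
    """Pick the move to explore next on this search iteration."""
    total_visits = sum(e.visits for e in edges)
    def puct(e):
        bonus = c_puct * e.prior * math.sqrt(total_visits + 1) / (1 + e.visits)
        return e.q + bonus
    return max(edges, key=puct)

if __name__ == "__main__":
    # Three candidate moves; the network strongly prefers the sacrifice.
    # All numbers are made up for illustration.
    edges = [
        Edge("d4d5 (sacrifice)", prior=0.70, visits=40, value_sum=22.0),
        Edge("h2h3 (quiet)",     prior=0.05, visits=5,  value_sum=1.0),
        Edge("a2a4 (quiet)",     prior=0.25, visits=10, value_sum=4.0),
    ]
    print(select_move(edges).move)   # the high-prior line keeps getting visits
```

Because high-prior, high-value moves soak up almost all of the visit budget, the search tree stays narrow and deep instead of expanding every legal move the way a brute-force engine does.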

Beyond Chess
Chess.com also reports AlphaZero won an earlier closed-door, 100-game match against Stockfish by winning 28 games and losing zero — with the other 72 games ending in a draw.

“What do you do if you are a thing that never tires and you just mastered a 1,400-year-old game? You conquer another one,” writes Chess.com, noting that after beating Stockfish, AlphaZero beat the world’s best computer at playing shogi, a complicated Japanese variant of chess. In a way, AlphaZero is like Hassabis himself, who learned how to play chess at age four, “then beat his dad three weeks later.” But it’s important to note that the company has also expanded into more practical work, like predicting how proteins fold.

“With a strongly interdisciplinary approach to our work, DeepMind has brought together experts from the fields of structural biology, physics, and machine learning to apply cutting-edge techniques to predict the 3D structure of a protein based solely on its genetic sequence,” explains the company’s website.

And they’re also touting a joint research partnership with Moorfields Eye Hospital showing that “our AI system can quickly interpret eye scans from routine clinical practice with unprecedented accuracy. It can correctly recommend how patients should be referred for treatment for over 50 sight-threatening eye diseases as accurately as world-leading expert doctors… In the long term, we hope this will help doctors quickly prioritize patients who need urgent treatment — which could ultimately save sight.” They’ve optimized the system to provide “an easily interpretable representation” for clinicians to follow up on — and for compatibility with a wide variety of different eye scanners. “This initial research would need to be turned into a product and then undergo rigorous clinical trials and regulatory approval before being used in practice. But we’re confident that, in time, this system could transform the diagnosis, treatment and management of eye disease.”

Other DeepMind systems are already hard at work on problems like “learning how to use vastly less energy in Google’s data centers.”

So is Hassabis worried about a super-intelligent machine that conquers humanity?

The Guardian suggests Hassabis is more worried about confronting our future without the benefit of AI. “Ask yourself, if we didn’t have something like AI coming down the line how would we solve these problems? Either we are going to need an exponential improvement in human behavior, so we become more collaborative and less selfish and short-term, or we have got to have an exponential improvement in technology to solve the big problems we are creating for ourselves.

“I don’t see much evidence for the former.”

