Why Can’t AI Beat Humans at Angry Birds?

For seven years, AI researchers have been struggling with an unusual challenge: shooting cartoon birds at cartoon pigs. An annual competition tests their ability to craft an AI agent that can play the popular video game Angry Birds.
And then the best AI agents are pitted against human competitors…
This month two researchers posted a paper on arXiv.org describing their journey, and what they’d learned along the way. It’s an example of the kind of weird obstacles that all AI researchers face as they attempt to adapt cutting-edge technologies to some very human endeavors. Teams around the world are tackling much more sophisticated problems, persevering to overcome the obstacles on the path to our shiny technology-enhanced future. But in a world where we’re asked to trust software that drives us down highways, can we even get it to successfully play a video game?
The certificate has arrived! #aibirds #gameai #ai #gamedev #angrybirds pic.twitter.com/5Z5IDjkk2t
— Andrea Tucci (@AndreaTux) August 14, 2015
It’s trickier than it looks. One of the paper’s authors, Ekaterina Nikonova, currently a PhD candidate at the Australian National University, tells me that in chess, for example, there’s a much smaller number of choices on each turn, and the outcome of each move is knowable in advance, making it easier to plan ahead. The same is true for Go, but the cartoon worlds of Angry Birds are far less predictable.
So Nikonova teamed up with a researcher on another continent — Jakub Gemrot, a lecturer on game development at Charles University in Prague, in the Czech Republic. Together they lined up some funding from the Czech Science Foundation and then spent the next six months collaborating. Their goal was to participate in the annual “AIBirds” competition, in which competitors create their own autonomous Angry Birds-playing agents using Java, C, C++ or Python. (For beginners, the organizers even provide a basic Java game-playing framework to help participants get started.) In their paper, the two intrepid researchers describe how they carefully represented the state of the playing field and the available actions — and then incorporated a system of rewards, based on the scores achieved.
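Their paper doesn’t include code, but the formulation they describe maps onto a few simple pieces. Here’s a minimal sketch in Python of what that kind of observation-and-reward setup might look like; the class and function names are illustrative assumptions, not anything from the paper or the competition framework.

```python
# Illustrative sketch only -- not the authors' code or the AIBirds framework.
# It frames the game the way the paper describes it: an observation of the
# playing field, plus a reward derived from the points each shot earns.
from dataclasses import dataclass
import numpy as np

@dataclass
class Observation:
    screenshot: np.ndarray    # raw pixels of the current level
    birds_remaining: int      # birds still waiting on the slingshot

def shot_reward(score_before: int, score_after: int) -> float:
    """Reward a shot with the in-game points it earned (an assumption
    about the reward shaping, based on the paper's description)."""
    return float(score_after - score_before)
```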
Slingshot Effects
Then they applied the tried-and-true tactic of deep reinforcement learning, using an architecture based on Google DeepMind’s Deep Q-network, which had earned renown for its performance on several Atari games. Each action is a number between one and 90, representing the angle at which the angry bird gets launched. Yes, that’s 90 possible angles to test, but as the paper points out, “Even one degree can make a huge difference.” They’d actually observed shots at 49 degrees delivering mediocre results while shots at 50 degrees delivered a new high score. At one point the experimenters tried adjusting the angle in larger two-degree increments, according to their paper, but it “dramatically reduced the overall agent’s performance.”
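As a rough illustration of what a Deep Q-network with a 90-way action space looks like, here’s a short sketch using PyTorch and the classic Atari-style convolutional layout. The layer sizes, the 84×84 grayscale input, and the epsilon-greedy helper are assumptions made for the example, not the architecture reported in the paper.

```python
# A minimal DQN-style sketch: a convolutional network that maps a screenshot
# to 90 Q-values, one per launch angle. Illustrative only -- the real
# DQ-Birds architecture and hyperparameters are not shown in the article.
import random
import torch
import torch.nn as nn

class DQBirdsNet(nn.Module):
    def __init__(self, n_actions: int = 90):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=8, stride=4), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=4, stride=2), nn.ReLU(),
            nn.Conv2d(64, 64, kernel_size=3, stride=1), nn.ReLU(),
            nn.Flatten(),
        )
        self.q_values = nn.Sequential(
            nn.Linear(64 * 7 * 7, 512), nn.ReLU(),
            nn.Linear(512, n_actions),   # one Q-value per launch angle
        )

    def forward(self, screenshot: torch.Tensor) -> torch.Tensor:
        # screenshot: a (batch, 1, 84, 84) grayscale image (assumed size)
        return self.q_values(self.features(screenshot))

def choose_angle(net: DQBirdsNet, screenshot: torch.Tensor, epsilon: float) -> int:
    """Epsilon-greedy selection: usually take the angle with the highest
    Q-value, occasionally explore a random one."""
    if random.random() < epsilon:
        return random.randint(1, 90)
    with torch.no_grad():
        return int(net(screenshot).argmax(dim=1).item()) + 1  # angles run 1-90
```

The detail the paper stresses shows up in the last layer: each of the 90 outputs corresponds to a single degree of launch angle, so the network has to distinguish a 49-degree shot from a 50-degree one, which the authors found can be the difference between a mediocre result and a new high score.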

The calculation that aims the birds at the pigs
They also had to calculate a release point after the slingshot’s rubber band had been pulled back, since that also affects the cartoon bird’s trajectory. And they avoided altogether the complicated levels in which a white bird drops a cartoon bomb, which requires a separate tap (and thus an entirely different kind of calculation). The hardest part turned out to be simply finding enough training data, since the competition confronts AI agents with entirely new levels that they’ve never seen before. The paper notes that the competition levels are specifically designed to eliminate AI agents that are solving levels using “brute force,” favoring instead those that have come up with some kind of artificial logic for choosing their targets.
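The release-point geometry they mention comes down to basic trigonometry plus a projectile-motion estimate. Here’s a hedged sketch in Python; the band-stretch length, gravity constant, screen-coordinate convention, and drag-free parabola are all simplifying assumptions, since the game’s real physics engine behaves differently.

```python
# Simplified sketch, not the competition framework's actual aiming code.
# Assumes screen coordinates (y grows downward), a fixed band stretch,
# and drag-free projectile motion.
import math

STRETCH = 1.0    # how far back the band is pulled (assumed scene units)
GRAVITY = 9.8    # assumed constant downward acceleration

def release_point(sling_x: float, sling_y: float, angle_deg: float):
    """Pull the band back opposite to the intended launch direction."""
    theta = math.radians(angle_deg)
    return (sling_x - STRETCH * math.cos(theta),
            sling_y + STRETCH * math.sin(theta))

def height_gained(x: float, angle_deg: float, speed: float) -> float:
    """Textbook parabola y = x*tan(theta) - g*x^2 / (2*v^2*cos^2(theta)):
    the bird's height after travelling horizontal distance x from launch."""
    theta = math.radians(angle_deg)
    return x * math.tan(theta) - GRAVITY * x * x / (2.0 * speed ** 2 * math.cos(theta) ** 2)
```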
In the course of their research, they’d collected over 115,000 screenshots of their AI playing the first 21 levels of the original Angry Birds game. And the results looked promising. On level 14 their agent was even able to consistently achieve a higher score than the top two AI agents had ever achieved.
“At first we thought it was a mistake,” their paper reports, “or a lucky outcome….” With high hopes, they then tested their AI against some human volunteers with varying levels of experience playing Angry Birds. The paper notes the AI has to compete against human players, “for whom reasoning and planning in the complex physical 2D world appear to be easy.” They compared the grand total of their agent’s scores on all 21 levels to those of its human opponents. The results?
In three out of four cases, “it lost to humanity…”
It must’ve been a big disappointment. Remembering the experience later, Nikonova tells me that at least their agent lost to the humans by “only a relatively small amount of points.” Even on level 14, where the agent had achieved a score higher than any of the top AIs had ever achieved before, it still lost to a human player. The paper ends by optimistically noting that there are still ways they could try to improve its performance.
Then they took their AI to the 2018 competition.
Nikonova remembers when the big day finally arrived. “It was a very nervous moment for us, to see our agent playing live against other agents and on new game levels that were designed to be particularly difficult for the AI.” In a world where the AI competitors have daunting names like BamBirds and AngryHex, they unveiled their own creation: “DQ-Birds.” Ultimately their AI agent placed sixth in the quarter-finals round. “Despite the fact that our agent was able to master 21 levels from a training set and was able to solve previously unseen levels of a greater difficulty from a validation set, it still had a problem to solve all eight levels during the competition,” their paper reports.
It might’ve helped to have more levels to train on, but they weren’t able to get their hands on an official Angry Birds level generator. Their paper also notes that another team had used a clone of the Angry Birds game to generate over 100 levels for training — only to discover that its physics engine was slightly different than the one used in Angry Birds, leading them to a ninth-place finish in the quarter-finals round.
Their paper presents its definitive conclusion. “The Angry Birds game still remains a difficult task for artificially intelligent agents.” But Nikonova still describes the whole six months as “a really interesting and fun experience.”
But that was 2018. So what happened in this year’s competition?
@matthew_stephe presenting results for the AIBirds competition! #CoG2019 pic.twitter.com/ygxcEoQJK9
— IEEE Conference on Games (@ieee_cog) August 21, 2019
The big showdown came in August as one of the events during the International Joint Conference on Artificial Intelligence in Macao, China. “This year, there was a noticeable jump in performance of the participating AI agents,” explains a post on the competition’s official site. They joked on another page that “This might be the last chance to beat AI and to become quite possibly the last human to win this challenge.”
And the two finalists “showed a very convincing performance with some remarkable shots.”
The blog post jokes that the AI faces a tough challenge because the AI researchers “are by now very experienced at playing Angry Birds.” On the four levels, the top human scorer was Nathan Sturtevant from the University of Alberta, with a total of 228,270 points.
Humanity’s Last Stand
According to the competition’s blog, of all the teams the BamBirds AI executed “the best shot of the whole competition. It was incredible and enabled a solution of the level with only three birds.” Every other player who’d solved the level required an extra bird. But then, to use a human term, the AI choked. “Only two more simple and direct shots were required to finish the level that even a beginner would now be able to complete. But BamBirds failed. They fired their remaining birds at some imaginary pigs, and didn’t solve the level.”
“We couldn’t believe it… It was as if the agents were nervous to play in front of the large crowd of spectators we had.”
In fact, the blog post calls the performance of the game-playing agents “a disaster.” Faced with four entirely new levels created specifically for the competition, only one of the top four AIs was able to complete any of the levels, and it solved exactly one. “What a disappointment after an amazing competition.”
Their conclusion? “Humans still beat AI at Angry Birds and it seems AI is not getting any closer. AI still has a long way to go to master this very hard problem that is much closer to real-world problems than seemingly difficult games like Chess or Go.”
Nikonova tells me that the AI agents are improving every year, and “I strongly believe that AI will outperform human players in Angry Birds in a very near future. And I am not alone.” She cites a paper in the “Journal of Artificial Intelligence Research,” which reported the results of a large survey of machine learning researchers. “Researchers believe there is a 50% chance of AI outperforming humans in all tasks in 45 years,” the paper reports, “and of automating all human jobs in 120 years.” The researchers surveyed also predict that AI will write a best-selling book by the year 2049 and will outperform humans in surgery by 2053, while besting humans in truck-driving abilities by the year 2027. There’s even a table where they predict when AI will beat humans at various tasks.
And yes, there’s one task that’s expected to happen next, sooner than all the others. Beating humans at Angry Birds.
But it hasn’t happened yet.
WebReduce
- TED conferences release an educational sci-fi cartoon to teach students to “Think Like a Coder.”
- 64 Dallas schools introduce a new extracurricular program: competitive video gaming.
- In this year’s JS13K competition, developers again tried to write tiny JavaScript games under 13K.
- Archaeologist wants to scan planet earth with a laser to preserve a detailed open-source 3D map.
- How laser scans found 27 lost Maya temples and prehistoric sites in Scotland.
- Bell Labs will reunite UNIX pioneers on Tuesday for a 50th-anniversary event.
Feature image by M. Maggs from Pixabay.