Artificial intelligence (AI) can fail, sometimes spectacularly, and sometimes in situations where human lives are at risk, as in the case of self-driving cars. That's why it's critical to ensure that AI is sufficiently trained to respond properly in complex, real-world scenarios.
To do this, experts often put their AI models to the test in simulations: virtual environments that more or less approximate what might actually happen in the real world. But designing a suitable learning environment to train AI isn't as easy as it seems. If a simulated environment is too simple and predictable, the AI doesn't learn much; make it too complex, and the AI will require massive computational resources to operate in it.
So what strikes the right balance between simplicity and complexity when constructing a virtual learning environment for AI? Apparently, it can be found in soccer (or, as it's known in the rest of the world, football). At least, that's what experts at UK-based AI research lab DeepMind have determined. Having developed AI that achieved superhuman mastery of traditional games like Go and chess, simpler diversions such as Pong, and strategic, real-time multiplayer games like StarCraft II, the Google subsidiary is now turning to soccer simulations as a way to train AI in a challenging, multiplayer environment that nevertheless has a certain level of predictability.
As they outline in their paper, the team developed their own soccer simulation, dubbed the "Google Research Football Environment." The research focuses on reinforcement learning (RL), a machine learning technique that trains AI agents using a dynamic system of rewards and penalties.
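The reward-and-penalty dynamic at the heart of RL can be illustrated with a classic tabular Q-learning agent; this is a toy sketch on a five-cell corridor, not DeepMind's code or the Football Environment, and the environment, hyperparameters, and reward values are all invented for illustration.

```python
import random

# Toy RL illustration: a tabular Q-learning agent learns to walk a
# 5-cell corridor. Reaching the rightmost cell earns a reward of +1;
# every other step costs a small penalty of -0.1.
N_STATES = 5          # cells 0..4; the goal is cell 4
ACTIONS = [-1, +1]    # move left or move right
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1  # learning rate, discount, exploration

q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def step(state, action):
    """Apply an action; reward +1 at the goal, -0.1 otherwise."""
    nxt = max(0, min(N_STATES - 1, state + action))
    reward = 1.0 if nxt == N_STATES - 1 else -0.1
    return nxt, reward

random.seed(0)
for episode in range(200):
    s = 0
    while s != N_STATES - 1:
        # Epsilon-greedy: mostly exploit the best-known action, sometimes explore.
        if random.random() < EPSILON:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: q[(s, act)])
        nxt, r = step(s, a)
        # Q-learning update: nudge the estimate toward reward + discounted future value.
        q[(s, a)] += ALPHA * (r + GAMMA * max(q[(nxt, b)] for b in ACTIONS) - q[(s, a)])
        s = nxt

# After training, the greedy policy moves right (+1) from every cell.
policy = {s: max(ACTIONS, key=lambda act: q[(s, act)]) for s in range(N_STATES - 1)}
print(policy)  # → {0: 1, 1: 1, 2: 1, 3: 1}
```

The same learn-from-reward loop scales up, with far richer observations and action spaces, to agents that learn to pass, dribble, and score.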
“Modeled after popular football video games, the Football Environment provides a physics-based 3D football simulation where agents control either one or all football players on their team, learn how to pass between them, and manage to overcome their opponent’s defense in order to score goals,” writes the team. “The Football Environment provides several crucial components: a highly-optimized game engine, a demanding set of research problems called Football Benchmarks, as well as the Football Academy, a set of progressively harder RL scenarios.”
The team's simulation provides scenarios of differing complexity, with the AI agent learning to perform tasks like running, passing, shooting and scoring goals, and to handle typical soccer rules like fouls, cards and penalty kicks, in addition to devising successful team strategies. The simulation is designed so that an agent can play against itself, against other machines, or against humans.
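At its core, any such training setup is an observation-action-reward loop between agent and environment. The sketch below shows that loop with a deliberately trivial stand-in environment; the `ToyPitch` class, its methods, and the "always press forward" policy are invented for illustration and are not the actual Football Environment API.

```python
# Illustrative agent-environment loop (a toy stand-in, not the real
# Football Environment API): each cycle, the agent receives an
# observation, picks an action, and the environment returns the next
# observation, a reward, and a done flag.
class ToyPitch:
    """Dummy environment: the 'ball' starts 10 steps from the goal."""
    def reset(self):
        self.distance = 10
        return self.distance            # the observation

    def step(self, action):
        # action 1 = advance toward the goal, 0 = hold position
        self.distance -= action
        done = self.distance == 0
        reward = 1.0 if done else 0.0   # goal scored
        return self.distance, reward, done

env = ToyPitch()
obs = env.reset()
total_reward, done = 0.0, False
while not done:
    action = 1                          # a trivial policy: always press forward
    obs, reward, done = env.step(action)
    total_reward += reward
print(total_reward)  # → 1.0
```

In the real simulation, the observation would encode player and ball positions, the action space would cover moves, passes, and shots, and the opposing side could be the agent itself, a scripted bot, or a human.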
The team's aim is to address a number of issues with existing RL environments, some of which are too easily solved by state-of-the-art algorithms. For instance, a virtual environment that is too structured and deterministic makes it too easy for AI agents to predict what will occur, and is not random enough to reflect the ever-changing dynamics of the real world. On the other hand, more complex environments demand more computational resources than are typically available to the average researcher, so the team deliberately designed their system to run on off-the-shelf machines and released it under an open-source license. In addition, rather than being a single-player environment, the simulation emphasizes interactions between multiple players and agents, which can either compete or collaborate, resulting in challenges that better reflect real-world situations.
The team's preliminary experiments suggest that the Football Environment will offer researchers considerable flexibility in training AI agents in a dynamic and relatively complex learning environment. And if previous results in chess and Go are anything to go by, AI is likely to come up with innovative, never-before-seen strategies that will surprise us all.
Images: Jannes Glas on Unsplash; Google DeepMind.