Evolutionary Game Theory Could Predict Dangerous AI

There isn’t a day that goes by without hearing about some fascinating development in artificial intelligence research, whether it’s an AI that can process and produce language in a human-like way, unlock the mysteries folded up within a protein, or make scientific discoveries automatically.
But in the headlong rush to find the next breakthrough, there are legitimate concerns that the competitive nature of the “AI race” might mean that things like safety and ethics are being inadvertently overlooked, resulting in phenomena like algorithmic bias, or an escalating AI arms race between rival military powers to build lethal autonomous weapons.
All of these recent developments point to a need for better regulations when it comes to engineering and implementing AI. Of course, too much regulation might stifle innovation, but too little might invite a preventable disaster. As an international research team from Teesside University, Universidade Nova de Lisboa, and Université Libre de Bruxelles now suggests, AI can also be used to navigate this delicate balance by determining which types of AI research projects might need more regulation than others.
“Whether real or not, the belief in such a race for domain supremacy through AI, can make it real simply from its consequences,” wrote the team in a paper that was published in the Journal of Artificial Intelligence Research. “These consequences may be negative, as racing for technological supremacy creates a complex ecology of choices that could push stakeholders to underestimate or even ignore ethical and safety procedures.”
Using Evolutionary Game Theory
To find out which AI races should be prioritized for regulatory oversight, the researchers created an AI model that simulated various hypothetical race scenarios. Because the work had to capture the potentially complex choices competitors face when there is no single, predetermined path, the model integrated a variety of concepts gleaned from biology and mathematics, such as evolution, nonlinear dynamics, and game theory.
“The model itself was based on evolutionary game theory, which has been used in the past to understand how behaviors evolve on the scale of societies, people, or even our genes,” explained the team in a post. “The model assumes that winners in a particular game — in our case an AI race — take all the benefits, as biologists argue happens in evolution. By introducing regulations into our simulation — sanctioning unsafe behavior and rewarding safe behavior — we could then observe which regulations were successful in maximizing benefits, and which ended up stifling innovation.”
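The paper’s own model is considerably more detailed, but a minimal Python sketch can show the basic shape of such a winner-take-all race payoff with an optional regulator that sanctions unsafe behavior and rewards safe behavior. Every name and number below (the PRIZE, the SANCTION, the “SAFE” and “UNSAFE” speeds, the per-round failure risk) is an illustrative assumption, not a parameter taken from the study.

```python
import random

# Illustrative parameters only -- none of these values come from the paper.
PRIZE = 100.0                   # winner-take-all benefit for finishing first
SAFETY_COST = 0.5               # per-round cost of following safety procedures
DISASTER_PROB_PER_ROUND = 0.05  # chance per round that a rushed product fails
SANCTION = 50.0                 # regulator's fine for unsafe conduct
REWARD = 10.0                   # regulator's bonus for safe conduct

def race_payoffs(race_length, strategy_a, strategy_b, regulated=False):
    """Return (payoff_a, payoff_b) for one two-player, winner-take-all race."""
    def rounds_needed(strategy):
        speed = 1 if strategy == "SAFE" else 2   # skipping safety checks is faster
        return -(-race_length // speed)          # ceiling division

    def payoff(strategy, rounds, prize_share):
        value = PRIZE * prize_share
        if strategy == "SAFE":
            value -= SAFETY_COST * rounds        # safety work takes time and money
            if regulated:
                value += REWARD                  # reward safe behavior
        else:
            # The risk of a rushed, untested product failing compounds
            # the longer the unsafe development goes on.
            if random.random() < 1 - (1 - DISASTER_PROB_PER_ROUND) ** rounds:
                value = 0.0
            if regulated:
                value -= SANCTION                # sanction unsafe behavior
        return value

    t_a, t_b = rounds_needed(strategy_a), rounds_needed(strategy_b)
    if t_a < t_b:
        return payoff(strategy_a, t_a, 1.0), 0.0   # losers' costs ignored here
    if t_b < t_a:
        return 0.0, payoff(strategy_b, t_b, 1.0)
    return payoff(strategy_a, t_a, 0.5), payoff(strategy_b, t_b, 0.5)
```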
The team ran these simulations hundreds of times, modifying the variables between runs so they could see how outcomes changed as time went on. The model included a variety of virtual agents acting as competitors in these simulated AI races. Each virtual agent was randomly assigned behaviors that might occur in real-world situations: some agents were more cautious and careful, while others tended toward taking more risks, rushing to produce AI products that were not properly tested, or that were susceptible to hacking and data leaks.
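A similarly hedged sketch of the evolutionary loop itself, reusing the race_payoffs function above, shows how a population of randomly assigned strategies can be evolved by imitation: agents who earn more tend to be copied, which is the standard pairwise-comparison update in evolutionary game theory. The population size, number of generations, and selection strength here are arbitrary choices for illustration, not the authors’ settings.

```python
import math
import random

def evolve_population(pop_size=100, generations=200, race_length=10,
                      regulated=False, selection_strength=0.1):
    """Toy evolutionary loop; returns the final share of 'SAFE' agents."""
    # Start from a population with randomly assigned behaviors.
    population = [random.choice(["SAFE", "UNSAFE"]) for _ in range(pop_size)]

    for _ in range(generations):
        # Each agent plays one race against a randomly chosen opponent.
        payoffs = [0.0] * pop_size
        for i in range(pop_size):
            j = random.choice([k for k in range(pop_size) if k != i])
            pa, pb = race_payoffs(race_length, population[i], population[j],
                                  regulated)
            payoffs[i] += pa
            payoffs[j] += pb

        # Imitation: each agent may copy a random peer, with a probability
        # that grows with the payoff difference (the Fermi update rule).
        new_population = list(population)
        for i in range(pop_size):
            j = random.randrange(pop_size)
            diff = payoffs[j] - payoffs[i]
            if random.random() < 1.0 / (1.0 + math.exp(-selection_strength * diff)):
                new_population[i] = population[j]
        population = new_population

    return population.count("SAFE") / pop_size
```

Repeating this loop many times with different settings, as the researchers did with their own far richer model, then amounts to averaging the returned share of safe agents over many independent runs.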
In particular, the researchers discovered that the most important variable was the “length of the [AI] race” — in other words, the time it took for a simulated race to achieve the goal of producing a functional AI product.
“When AI races reached their objective quickly, we found that competitors who we’d coded to always overlook safety precautions always won,” said the team. “In these quick AI races, or ‘AI sprints’, the competitive advantage is gained by being speedy, and those who pause to consider safety and ethics always lose out. It would make sense to regulate these ‘AI sprints’, so that the AI products they conclude with are safe and ethical.”
In contrast, the research team’s work suggests that longer-term AI initiatives — or “AI marathons” — likely would not require as much regulatory oversight, as these projects tended to prioritize safety and ethical concerns. In addition, the team says that regulating “AI marathons” might actually stifle innovation, and that regulations should be “smart, flexible” and tailored to the type of project, in order to prevent the emergence of unethical AI, while also encouraging the development of beneficial AI.
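In the toy model above, that “sprint versus marathon” contrast can be probed by sweeping the race-length parameter and averaging over repeated runs, with and without the simulated regulator. The lengths and run counts below are arbitrary, and any numbers the sketch prints are properties of the sketch, not results from the paper.

```python
# Sketch: compare short "sprints" with long "marathons", with and without
# regulation, using the illustrative functions defined earlier.
RUNS = 50
for race_length in (4, 10, 40, 100):
    no_reg = sum(evolve_population(race_length=race_length)
                 for _ in range(RUNS)) / RUNS
    with_reg = sum(evolve_population(race_length=race_length, regulated=True)
                   for _ in range(RUNS)) / RUNS
    print(f"length={race_length:3d}  safe share: "
          f"unregulated={no_reg:.2f}  regulated={with_reg:.2f}")
```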
“Given these findings, it’ll be important for regulators to establish how long different AI races are likely to last, applying different regulations based on their expected timescales,” said the team. “Our findings suggest that one rule for all AI races — from sprints to marathons — will lead to some outcomes that are far from ideal. But such regulations may be urgent: our simulation suggests that those AI races that are due to end the soonest will be the most important to regulate.”