
AI Algorithm with ‘Social Skills’ Cooperates Better Than Humans

Mar 22nd, 2018 11:00am

It’s looking quite likely that our future will be awash with artificially intelligent systems of all kinds: driverless cars, robotic medical assistants, autonomous trading algorithms, and collaborative robots working alongside humans in the office. The interesting question is how we get all these various systems to cooperate with other intelligent systems, as well as with humans, even though they might not have the same goals in mind.

So how might these disparate and potentially competing players work together? You build an algorithm for it. That is exactly what an international team of researchers has done: they created an algorithm equipped with an artificial set of “social skills” that allows machines to collaborate both with other kinds of machines and with humans.

In their paper published in Nature Communications, the team described how they tested their S# (pronounced “S-sharp”) algorithm in human-machine and machine-machine interactions, comparing the results against human-human interactions in a series of two-player games. Running thousands of play-throughs of different types of games, the team evaluated how S# and other well-known algorithms fared at building beneficial relationships with a range of different partners.
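As a rough illustration of that experimental setup (not the paper’s actual code), the sketch below runs a round-robin tournament of repeated two-player games in Python, using the iterated prisoner’s dilemma as a stand-in for the paper’s game suite and a few classic strategies as hypothetical players:

```python
import itertools
import random

# Payoff matrix for the iterated prisoner's dilemma, used here as a
# stand-in for the paper's broader game suite: (my_payoff, their_payoff).
PAYOFFS = {
    ("C", "C"): (3, 3),
    ("C", "D"): (0, 5),
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),
}

def tit_for_tat(history):
    """Cooperate first, then mirror the partner's previous move."""
    return "C" if not history else history[-1][1]

def always_defect(history):
    return "D"

def random_player(history):
    return random.choice(["C", "D"])

def play_match(strategy_a, strategy_b, rounds=100):
    """Run one repeated game and return each player's total payoff."""
    history_a, history_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = strategy_a(history_a)
        move_b = strategy_b(history_b)
        pay_a, pay_b = PAYOFFS[(move_a, move_b)]
        score_a += pay_a
        score_b += pay_b
        # Each player's history stores (own move, partner's move).
        history_a.append((move_a, move_b))
        history_b.append((move_b, move_a))
    return score_a, score_b

# Round-robin pairing, loosely mirroring the paper's machine-machine matches.
strategies = {"tit_for_tat": tit_for_tat,
              "always_defect": always_defect,
              "random": random_player}
for (name_a, s_a), (name_b, s_b) in itertools.combinations(strategies.items(), 2):
    a, b = play_match(s_a, s_b)
    print(f"{name_a} vs {name_b}: {a} vs {b}")
```

In the paper’s version of this setup, the pairings also include human participants, and the games vary in structure rather than repeating a single payoff matrix.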

Surprisingly, the team found that machines running the S# algorithm outperformed both other algorithms and their human counterparts at finding mutually beneficial compromises and at cultivating continued cooperation during gameplay.

“Two humans, if they were honest with each other and loyal, would have done as well as two machines,” the paper’s lead author, computer science professor Jacob Crandall of Brigham Young University, told ScienceDaily. “As it is, about half of the humans lied at some point. So essentially, this particular algorithm is learning that moral characteristics are better [since it’s programmed to not lie] and it also learns to maintain cooperation once it emerges.”

Cheap Talk Works

One of the more interesting findings of the paper was that the use of colloquial phrases, or “cheap talk,” doubled the incidence of cooperation. Such “costless, non-binding signals” (as the researchers called them) have been shown to help establish cooperative relationships in repeated games, and in this case they did the same for machines dealing with humans. For instance, the interjection of phrases such as “Sweet. We are getting rich!” or “We can both do better than this” from a machine during its interaction with a human made it much easier to encourage long-term cooperation. Even trash-talking phrases like “Curse you!”, “You will pay for that!” or “In your face!” after an in-game betrayal increased overall cooperation.
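The paper treats these phrases as signals layered on top of the game moves themselves. Here is a minimal sketch of how such costless messages can bootstrap cooperation; the strategy names and the hard-coded trust rule are invented for illustration, not taken from the paper’s signaling protocol:

```python
# Cheap-talk sketch: each move is paired with a costless, non-binding
# message. Messages never enter the payoff matrix; they only give the
# partner a hint that can be trusted or ignored.

PAYOFFS = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
           ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def talker(history, inbox):
    """Proposes cooperation every round, then mirrors the partner's last move."""
    move = "C" if not history else history[-1][1]
    return move, "We can both do better than this"

def wary_listener(history, inbox):
    """Defects by default, but cooperates after hearing a cooperative proposal."""
    if inbox and "better" in inbox[-1]:
        return "C", "Sweet. We are getting rich!"
    return "D", ""

def play_with_talk(strategy_a, strategy_b, rounds=50):
    history_a, history_b, inbox_a, inbox_b = [], [], [], []
    score_a = score_b = 0
    for _ in range(rounds):
        move_a, msg_a = strategy_a(history_a, inbox_a)
        move_b, msg_b = strategy_b(history_b, inbox_b)
        pay_a, pay_b = PAYOFFS[(move_a, move_b)]
        score_a, score_b = score_a + pay_a, score_b + pay_b
        history_a.append((move_a, move_b))
        history_b.append((move_b, move_a))
        inbox_a.append(msg_b)  # messages cross over to the partner
        inbox_b.append(msg_a)
    return score_a, score_b

print(play_with_talk(talker, wary_listener))
```

Notice that the messages never touch the payoff table: the listener cooperates only because it chooses to trust the proposal, which is exactly what makes the talk “cheap.”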

Such a collaborative human-machine approach is a departure from the historical norm, where adversarial machines like Deep Blue and AlphaGo were designed to pit themselves against humans, and beat them, in games such as chess or Go. But those are relatively narrow domains, and such adversarial models would struggle in broader future applications that require long-term cooperation.

“Many scenarios in which AI must interact with people and other machines are neither zero-sum nor common-interest interactions,” wrote the team. “As such, AI must also have the ability to cooperate even in the midst of conflicting interests and threats of being exploited.”
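To make that distinction concrete, compare three textbook two-by-two payoff matrices (illustrative examples, not taken from the paper). In a zero-sum game there is nothing to cooperate over, in a common-interest game cooperation is trivial, and it is the general-sum middle ground, where interests partly align and partly conflict, that S# targets:

```python
# Three hypothetical 2x2 games, keyed by (row_move, col_move) -> (row_payoff, col_payoff).

ZERO_SUM = {("A", "A"): (1, -1), ("A", "B"): (-1, 1),       # matching pennies:
            ("B", "A"): (-1, 1), ("B", "B"): (1, -1)}       # one side's gain is the other's loss

COMMON_INTEREST = {("A", "A"): (2, 2), ("A", "B"): (0, 0),  # pure coordination:
                   ("B", "A"): (0, 0), ("B", "B"): (1, 1)}  # payoffs coincide exactly

GENERAL_SUM = {("C", "C"): (3, 3), ("C", "D"): (0, 5),      # prisoner's dilemma:
               ("D", "C"): (5, 0), ("D", "D"): (1, 1)}      # mutual cooperation helps both,
                                                            # but each side is tempted to defect

# Zero-sum means every cell's payoffs cancel out:
assert all(sum(v) == 0 for v in ZERO_SUM.values())
```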

General and Flexible

So what does it take for a cooperative machine algorithm to be successful? According to the research, it needs a general ability to perform well in a wide variety of scenarios. The algorithm would also need to be flexible, so that it can forge beneficial relationships with humans and machines without any prior knowledge of their tendencies. In addition, it would need to learn how to get distrustful entities to cooperate, while deflecting any exploitative behavior from the other participants. And it would be critical for the algorithm to learn, reason and adapt quickly in order to find mutually constructive solutions.
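Those requirements suggest a meta-strategy: maintain a portfolio of candidate behaviors and shift weight toward whichever one is actually paying off against the current partner. The sketch below is a generic expert-selection loop in that spirit, with made-up expert strategies and a simple epsilon-greedy rule; it illustrates the idea, not the paper’s actual S# mechanism:

```python
import random

def generous_tft(history):
    """Tit-for-tat that forgives a defection 10% of the time."""
    if not history or history[-1][1] == "C":
        return "C"
    return "C" if random.random() < 0.1 else "D"

def grim_trigger(history):
    """Cooperates until the partner defects once, then defects forever."""
    return "D" if any(their == "D" for _, their in history) else "C"

EXPERTS = {"generous_tft": generous_tft,
           "grim_trigger": grim_trigger,
           "always_defect": lambda history: "D"}

class ExpertSelector:
    """Epsilon-greedy choice among expert strategies, re-evaluated every phase."""

    def __init__(self, experts, phase_len=10, epsilon=0.1):
        self.experts = experts
        self.phase_len = phase_len
        self.epsilon = epsilon
        self.avg_payoff = {name: 0.0 for name in experts}
        self.plays = {name: 0 for name in experts}
        self.current = random.choice(list(experts))
        self.phase_payoff = 0.0
        self.rounds_in_phase = 0

    def act(self, history):
        # Delegate the actual move to whichever expert is currently trusted.
        return self.experts[self.current](history)

    def observe(self, payoff):
        # Accumulate payoff; at the end of each phase, re-pick an expert.
        self.phase_payoff += payoff
        self.rounds_in_phase += 1
        if self.rounds_in_phase == self.phase_len:
            name = self.current
            self.plays[name] += 1
            self.avg_payoff[name] += (self.phase_payoff / self.phase_len
                                      - self.avg_payoff[name]) / self.plays[name]
            if random.random() < self.epsilon:
                self.current = random.choice(list(self.experts))  # explore
            else:
                self.current = max(self.avg_payoff, key=self.avg_payoff.get)  # exploit
            self.phase_payoff, self.rounds_in_phase = 0.0, 0

# Usage inside a game loop (hypothetical): call selector.act(history) for a
# move, then selector.observe(payoff) after each round's payoff arrives.
selector = ExpertSelector(EXPERTS)
```

The portfolio supplies the generality (different experts suit different partners), while the phase-by-phase re-evaluation supplies the quick adaptation the researchers describe.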

Technically, that’s a tall order to fulfill, but these findings bring us one step closer to the day that machines and humans can cooperate more seamlessly. “The end goal is that we understand the mathematics behind cooperation with people and what attributes artificial intelligence needs to develop social skills,” said Crandall. “AI needs to be able to respond to us and articulate what it’s doing. It has to be able to interact with other people.”

But perhaps the most remarkable takeaway here is that machines can potentially cooperate much better with each other than humans ever could — and there may be a lesson there. “In society, relationships break down all the time,” Crandall noted. “People that were friends for years all of a sudden become enemies. Because the machine is often actually better at reaching these compromises than we are, it can potentially teach us how to do this better.”

Images: Pixabay, Nature Communications.
