With news of AI besting humans at complex games like Go and poker and developing the capacity to “dream,” “reason” and teach itself, it seems as if recent research into artificial intelligence is evolving at a breakneck pace. Language is no exception. In recently released findings, researchers at the non-profit lab OpenAI show that AI may be capable of developing a language of its own, one that’s tied to experience and to its perception of the world, rather than spewing out something that’s been force-fed to it.
Artificially intelligent assistants like Siri already communicate with human users by processing natural (that is, human) languages, but as anyone who’s ever grappled with these digital assistants knows, even with mounds of training data, they still feel unnatural to communicate with.
The problem is, we aren’t sure whether machines actually understand the words they use, at least in the experiential sense. For example, when we use the word “sun,” we associate it with a visual image of a distant, flaming ball of hydrogen that lights up the skies. We may also mentally conjure up the memory and experience of the heat of sunlight, but is this what a machine understands when it uses the same word?
To tackle this problem, OpenAI researchers trained the bots to communicate in a specific way that encouraged the emergence of “grounded” and “compositional” language — grounded in the speaker’s experience of the real world, and compositional in the sense that speakers can string multiple words into a sentence to convey an idea or intent.
The researchers describe in a post how they placed artificially intelligent agents in a simple, two-dimensional world and assigned them simple tasks: look at or move to a specified landmark within the white space, or get another agent to complete a task. Through trial and error within the training framework of reinforcement learning, the bots were let loose on their missions. Collaboration between the agents was encouraged through a system of shared rewards, which increased for each agent whenever it helped out a fellow agent. The bots’ emergent language was actually expressed in numbers, which the researchers had to label with English words, but it’s nevertheless a fascinating AI petri dish of sorts.
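The shared-reward idea can be sketched in a few lines. This is a hypothetical toy, not OpenAI’s actual code: the grid, the movement rule and the reward function are all invented for illustration. The key point it demonstrates is that each agent’s reward depends on *every* agent’s distance to its goal, so helping a teammate pays off for the whole group.

```python
def step_toward(pos, goal):
    """Move one unit along each axis toward a goal landmark."""
    x, y = pos
    gx, gy = goal
    x += (gx > x) - (gx < x)
    y += (gy > y) - (gy < y)
    return (x, y)

def shared_reward(agents, goals):
    """A single team reward: the negative total distance of all agents
    to their goals. Every agent receives this same number, so reducing
    a teammate's distance raises everyone's return."""
    total = 0.0
    for pos, goal in zip(agents, goals):
        total -= abs(pos[0] - goal[0]) + abs(pos[1] - goal[1])
    return total

# Two agents, two landmark goals on a small 2D plane (toy values).
agents = [(0, 0), (4, 4)]
goals = [(2, 2), (0, 0)]
for _ in range(5):
    agents = [step_toward(p, g) for p, g in zip(agents, goals)]
print(shared_reward(agents, goals))  # 0.0 once both agents reach their goals
```

In the real experiment the agents *learn* their movement and messaging policies by maximizing this kind of shared return, rather than following a hand-coded rule like `step_toward`.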
The agents can move around, look at objects in their world, or communicate with the other bots around them. What’s most intriguing here is that the language for this communication was not pre-programmed into the bots by the researchers, but rather appears to have emerged on its own. The more agents were deployed into the scene, the more complex the agents’ “sentences” became. By trying different ways of completing its tasks, each bot could keep track of its failures, successes and tally of rewards through its own recurrent neural network, which acts as a kind of memory storage.
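The “memory storage” role of a recurrent network comes from its hidden state, which is updated at every time step and carried forward. The sketch below is deliberately minimal and uses fixed toy weights (`w_h`, `w_x` are invented values; a real agent learns them): old memory is blended with each new observation, so a running trace of past rewards survives into later steps.

```python
def rnn_step(hidden, observation, w_h=0.5, w_x=1.0):
    """One toy recurrent update: decay the old hidden state and mix in
    the new observation. Real RNN cells add a nonlinearity and learned
    weight matrices; this keeps only the carry-forward structure."""
    return w_h * hidden + w_x * observation

hidden = 0.0
for reward in [1.0, 0.0, 1.0, 1.0]:  # per-step rewards the agent observes
    hidden = rnn_step(hidden, reward)
print(hidden)
```

Because `hidden` after the loop still reflects the early rewards (at diminishing weight), the agent can condition its next action and next utterance on what happened before, which is exactly the “tally” the article describes.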
During the experiment, the team ran into some challenges. One of the first was the AI’s tendency to create an indecipherable “Morse code language” that was non-compositional. Another hurdle involved the agents compressing the meaning of whole sentences into a single “word,” which may be practical but wouldn’t be a language easily interpreted by humans.
In each case, the team modified the system of rewards accordingly to encourage the bots to cooperate better and develop an understandable, compositional language. Even more surprising is that the team also observed forms of spontaneous “non-verbal communication” between the bots, such as pointing and guiding, when verbal communication was not possible.
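Reward shaping of this kind can be sketched as a penalty on the failure modes described above. Everything here is hypothetical and illustrative (the function name, thresholds and coefficients are invented, not OpenAI’s): the idea is simply to subtract from the raw task reward when a message crams a whole sentence into one “word,” or when the agents sprawl into a huge non-compositional private vocabulary.

```python
def shaped_reward(task_reward, message, vocab_size,
                  min_len=2, max_vocab=10,
                  short_penalty=0.5, vocab_penalty=0.1):
    """Toy reward shaping: start from the task reward, then penalize
    one-token 'sentence compression' and oversized vocabularies."""
    r = task_reward
    if len(message) < min_len:        # whole idea crammed into one token
        r -= short_penalty
    if vocab_size > max_vocab:        # sprawling private "Morse code"
        r -= vocab_penalty * (vocab_size - max_vocab)
    return r

# A one-token message from an agent using 12 distinct tokens is
# penalized relative to a multi-token message from a small vocabulary.
print(round(shaped_reward(1.0, ["goto"], vocab_size=12), 2))
print(round(shaped_reward(1.0, ["go", "to", "red"], vocab_size=5), 2))
```

Gradient-based agents trained against a shaped reward like this are nudged toward reusing a small set of tokens in multi-word combinations, which is one plausible route to the compositional language the team was after.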
The team’s next step is to continue exploring how to develop machines capable of communicating in their own language, one grounded in their perception of the world. To help make that emergent language comprehensible to humans, the team plans to work with researchers at the University of California, Berkeley, setting up these bots to communicate with English-speaking agents as a way to translate this newfound AI language.
“We think that if we slowly increase the complexity of their environment and the range of actions the agents themselves are allowed to take, it’s possible they’ll create an expressive language which contains concepts beyond the basic verbs and nouns that evolved here,” said the researchers.
In a way, the experiment seems to imitate the human experience very well. Faced with many tasks needed for survival, humans had to learn how to cooperate and had to develop some kind of common language to make it possible. Now it appears machines evolving their own languages aren’t far behind.