Social Intelligence Is the Next Big Step for AI
What can’t artificial intelligence do nowadays? With the emergence of large language models (LLMs) like ChatGPT, and generative AI models that make eerily good-looking art, it seems that the capabilities of AI in some areas have become almost human-like.
Yet, AI isn’t completely infallible; we’ve seen cases where LLMs have failed spectacularly in instances where they were supposed to provide healthcare resources, and generative AI models still have trouble accurately producing images of human hands and teeth (with sometimes hilarious results).
But as some experts believe, the next big step is developing AI’s social intelligence, which would improve how it interacts with humans. When it comes to deciphering nonverbal cues like body language or facial expressions, AI still lacks many of the social skills that most of us humans take for granted. To help AI develop those skills, new work from Chinese researchers suggests that a multidisciplinary approach will be needed: one that adapts what we know from cognitive science and uses computational modeling to better identify the disparities between the social intelligence of machine learning models and that of their human counterparts.
“[Artificial social intelligence or ASI] is distinct and challenging compared to our physical understanding of the world; it is highly context-dependent,” said first author Lifeng Fan of the Beijing Institute for General Artificial Intelligence (BIGAI) in a statement. “Here, context could be as large as culture and common sense, or as little as two friends’ shared experience. This unique challenge prohibits standard algorithms from tackling ASI problems in real-world environments, which are frequently complex, ambiguous, dynamic, stochastic, partially observable and multi-agent.”
Multidisciplinary Approach Needed
While physical intelligence is relatively easy to measure — such as learning how to throw a ball or solve an abstract problem — the team’s work points out that much of the knowledge relevant to expanding artificial social intelligence is found in subfields that are studied in their own silos.
“In contrast to the mechanical and abstract nature of physical intelligence, ASI involves many subfields that are currently studied separately, such as social perception, theory of mind (ToM), and social interaction, with varying emphasis on perception, cognitive components, behavior, and even psychometric methods to measure social skills,” wrote the team.
However, Fan believes that the unknown factors surrounding artificial social intelligence might be better unraveled by using a more comprehensive strategy that focuses on what are currently considered the main features of social intelligence.
“Multidisciplinary research informs and inspires the study of ASI: studying human social intelligence provides insight into the foundation, curriculum, points of comparison and benchmarks required to develop ASI with human-like characteristics,” said Fan. “We concentrate on the three most important and inextricably linked aspects of social intelligence: social perception, Theory of Mind and social interaction, because they are grounded in well-established cognitive science theories and are readily available tools for developing computational models in these areas.”
In the field of cognitive science, social perception refers to the study of how people form impressions of and make inferences about themselves, other individuals, and groups, using social cues to evaluate social roles, rules, and relationships. Social perception forms the basis for theory of mind, which describes a person’s capacity to understand other people by attributing mental states to them, thus allowing one to make an informed judgment about what the other person might be feeling or thinking. Most experts believe that developing social perception and theory of mind in tandem is crucial for successful social interactions.
According to the team, such social instincts might actually be hardwired in humans. For instance, as the team describes in one related study, human participants were able to quickly make social judgments about varying arrangements of moving shapes shown on a display.
“These mental states combine to form a narrative-like description of the display, such as a hero rescuing a victim from a bully. This interpretation of simple moving shapes as animated agents is a remarkable demonstration of how the human visual system can infer complex social relationships and mental states from simple motion cues with minimal visual characteristics. Even though they involve impressions typically associated with higher-level cognitive processing, such interpretations appear to be predominately perceptual in nature, i.e., relatively rapid, automatic, irresistible, and highly stimulus-driven.”
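To make the idea concrete, here is a minimal toy sketch (not the researchers' model, and far simpler than what human vision does) of how a social label might be inferred from nothing but motion cues: two trajectories of moving points are classified as "chasing" or "independent" using a hand-written heuristic on the distance between them.

```python
import math

def infer_interaction(track_a, track_b):
    """Label two 2-D trajectories as 'chasing' or 'independent'.

    track_a, track_b: lists of (x, y) positions at successive time steps.
    Heuristic: if the distance between the agents shrinks on most steps,
    interpret agent A as chasing agent B.
    """
    approach_steps = 0
    for t in range(1, len(track_a)):
        d_prev = math.dist(track_a[t - 1], track_b[t - 1])
        d_now = math.dist(track_a[t], track_b[t])
        if d_now < d_prev:
            approach_steps += 1
    ratio = approach_steps / (len(track_a) - 1)
    return "chasing" if ratio > 0.8 else "independent"

# Agent A moves straight toward a stationary agent B,
# so the distance shrinks at every step.
a = [(0, 0), (1, 0), (2, 0), (3, 0)]
b = [(5, 0)] * 4
print(infer_interaction(a, b))  # chasing
```

The gap between this sketch and real social perception is exactly the paper's point: a hard-coded rule covers one cue in one context, whereas humans fluidly combine such cues into rich, narrative-like interpretations.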
Generally, such complex social judgments are still difficult for machines to make, though some prior work suggests that if AI were indeed able to master such skills, it would likely do just as well as humans, especially in situations where compromise and mutual cooperation are required. Nevertheless, the team’s work underscores how critical lessons from neuroscience and cognitive science may be in eventually developing emotionally intelligent machines that can accurately judge how humans think and feel.
“To accelerate the future progress of ASI, we recommend taking a more holistic approach just as humans do, to utilize different learning methods such as lifelong learning, multitask learning, one- and few-shot learning, meta-learning, etc.,” said Fan. “We need to define new problems, create new environments and datasets, set up new evaluation protocols, and build new computational models. The ultimate goal is to equip AI with high-level ASI, and lift human well-being with the help of artificial social intelligence.”