Identifying Emergent Behaviors of Complex Systems — In Nature and Computers
Complex systems are found not only in ant colonies and bee hives but also in human-made systems like transportation networks and online communities. Identifying the unexpected, emergent behaviors that arise out of complex systems is important — whether in nature or in a distributed computing system — because it paves the way for better adaptation and overall resilience, argued data scientist Jane Adams of Two Sigma. Adams delivered a keynote talk about complex systems at the USENIX LISA conference in Boston last December.
From imitating swarm intelligence to artificial synapses engineered to compute like the human brain, many of our technological leaps are inspired by behaviors and systems created in nature. Complex systems are particularly fascinating, as they are all around us, and can affect our lives in unforeseen ways. One pertinent example is the earth’s global climate: It is a conglomeration of interdependent sub-systems, and once one component is thrown out of whack, the overall system is affected, whether manifesting as prolonged drought in one place or as excessive flooding in another.
So what exactly is emergence in complex systems? Adams, who researches emergence in complex systems, first explains that complex systems are composed of many individual component parts that interact with each other and with their local environment.
Simple Ants, Complex Colony
Adams uses ants as an analogy: Ants are relatively simple components in the complex system of the ant colony. More specifically, each individual ant’s behavior is relatively simple compared to what the overall system is doing. An ant colony as a whole is capable of engaging in complex behaviors like building nests, foraging for food, raising aphid “livestock,” waging war with other colonies and burying their dead. In contrast, no single ant has the impulse or knowledge to undertake such collective tasks on its own. It’s these collective behaviors, arising unexpectedly from simple components, that are called “emergent” behaviors.
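This kind of trail-forming can be sketched with a toy simulation (a hypothetical illustration, not code from the talk): each ant follows just two local rules — step to a neighboring cell, preferring cells with more pheromone, and deposit pheromone wherever it walks. No ant knows anything about “trails,” yet the pheromone concentrates along shared paths.

```python
import random

# Toy ant-trail model (illustrative assumption, not from Adams's talk).
SIZE, ANTS, STEPS = 20, 30, 500
random.seed(42)

pheromone = [[0.0] * SIZE for _ in range(SIZE)]
ants = [(random.randrange(SIZE), random.randrange(SIZE)) for _ in range(ANTS)]

def neighbors(x, y):
    # Four adjacent cells on a wrap-around grid.
    return [((x + dx) % SIZE, (y + dy) % SIZE)
            for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))]

for _ in range(STEPS):
    moved = []
    for x, y in ants:
        opts = neighbors(x, y)
        # Mostly random movement, but biased toward pheromone-rich cells.
        moved.append(random.choices(
            opts, weights=[1.0 + pheromone[nx][ny] for nx, ny in opts])[0])
    ants = moved
    for x, y in ants:
        pheromone[x][y] += 1.0  # reinforce the path just taken

cells = [c for row in pheromone for c in row]
total = sum(cells)
# Compare the busiest cell to the average: values well above 1 mean
# the deposits clustered into trails rather than spreading uniformly.
print(max(cells) / (total / len(cells)))
```

The interesting ingredient is the positive feedback loop: pheromone attracts ants, and ants lay more pheromone, so small random fluctuations get amplified into structure no individual ant planned.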
Adams notes that ultimately the “specifics [of behaviors of individual components] are irrelevant” — one can still describe the characteristics of the whole system without being hampered by the details of the individual ant.
On the other hand, it’s also difficult to predict how a complex system will evolve, because doing so requires an “irreducibly large” computation. Adams invokes the research of computer scientist and physicist Stephen Wolfram here, and his principle of computational irreducibility, which states that the only way to determine what a complex system will do is to carry out essentially as many computational steps as the system’s own evolution. In other words, there’s no shortcut around the problem: You have to run the program itself.
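Wolfram’s favorite minimal example, the Rule 30 cellular automaton, makes the point concrete (a standard illustration, not code shown in the talk): the entire update rule fits in one byte, yet no known formula predicts the center column of cells without simulating every intervening step.

```python
# Rule 30: each cell's next state depends only on itself and its two
# neighbors. The number 30 encodes the whole rule -- bit i of 30 is the
# output for a neighborhood whose three cells spell out i in binary.
RULE = 30

def step(cells):
    n = len(cells)
    return [
        (RULE >> (cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

cells = [0] * 31
cells[15] = 1  # start from a single "on" cell in the middle

# Record the center cell at each step; there is no shortcut to this
# sequence other than running the automaton itself.
center_column = []
for _ in range(16):
    center_column.append(cells[15])
    cells = step(cells)

print(center_column)
```

Despite the trivial rule, the center column behaves so unpredictably that it has been used as a pseudorandom generator — a small-scale picture of why complex systems resist being forecast analytically.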
This is why it’s much more efficient to describe a complex system as a phenomenon in its own right, rather than through its individual components. This certainly makes research much easier, but this resistance to simplification is also a fundamental feature of complex systems.
“Collective behavior is irreducible to individual behavior,” emphasized Adams. Put another way: If a system’s behavior can be fully described by the features of its component parts, then what we are seeing is not complexity.
Emergence vs. Complexity
Emergence and complexity are also different. “Complexity describes the behavior — it captures the available information, sensory capabilities, interaction dynamics and the range of possible actions a system can take,” explained Adams. “Complexity captures these degrees of freedom and the information available, [while] emergent phenomena are the actual behaviors, the occurrence or the appearance of those behaviors.”
So emergence seems to happen when the system has evolved to some critical point. “In self-organized systems, critical states act as a kind of attractor,” said Adams. Once it reaches that critical state, the system seems to “flip a switch” and become resilient to future disruptions — the same disruptions that drove it to criticality in the first place. A collective then emerges whose behavior as a whole is no longer correlated with the behavior of individual components. In this way, the system maintains its decentralized character, yet can act as a single entity. Tying these concepts back to computational systems, the algorithms governing the individual components must necessarily be simple, distributed and scalable.
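The classic illustration of a critical state acting as an attractor is the Bak–Tang–Wiesenfeld sandpile model (a textbook example, not one Adams demonstrated): drop grains one at a time, and any cell holding four grains topples one grain onto each neighbor. The pile organizes itself to the edge of stability, where a single grain can set off an avalanche of almost any size.

```python
import random

# Bak-Tang-Wiesenfeld sandpile: a standard toy model of self-organized
# criticality (illustrative, not from the talk).
SIZE = 11
grid = [[0] * SIZE for _ in range(SIZE)]
random.seed(0)

def topple(grid):
    """Relax the pile; return how many topplings the avalanche contained."""
    avalanche = 0
    unstable = [(x, y) for x in range(SIZE) for y in range(SIZE)
                if grid[x][y] >= 4]
    while unstable:
        x, y = unstable.pop()
        if grid[x][y] < 4:
            continue
        grid[x][y] -= 4
        avalanche += 1
        if grid[x][y] >= 4:          # still overloaded: topple again later
            unstable.append((x, y))
        for nx, ny in ((x+1, y), (x-1, y), (x, y+1), (x, y-1)):
            if 0 <= nx < SIZE and 0 <= ny < SIZE:  # edge grains fall off
                grid[nx][ny] += 1
                if grid[nx][ny] >= 4:
                    unstable.append((nx, ny))
    return avalanche

avalanches = []
for _ in range(5000):
    x, y = random.randrange(SIZE), random.randrange(SIZE)
    grid[x][y] += 1                  # one grain at a time
    avalanches.append(topple(grid))

print(max(avalanches))  # largest avalanche triggered by a single grain
```

The same single-grain perturbation that does nothing early on can, once the pile has self-organized to criticality, cascade across the whole grid — the “flip a switch” character Adams describes.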
While we have many biological analogs of computational problems, Adams cautions that one cannot apply the solutions from biological systems to computers, or even more abstract problems like artificial intelligence, without a full understanding of the environmental pressures that prompted those biological solutions in the first place.
“There’s also a problem of representation,” notes Adams, as accurately specifying the relevant aspects and their components and how they interact in a model can be a huge challenge in itself.
So while a system may begin with only a simple set of components, under the right conditions those components can nevertheless generate a diverse range of differently scaled systems, whether in nature or computing. “And that’s difficult, from an operational perspective, to handle,” said Adams. “But simplicity and abstraction is something we should strive for in our software and our systems.”