Ben Gonshaw: Digital Media Theorist & Game Design Consultant
INTRODUCTION TO COMPLEXITY
EMERGENCE: Gaming's Saviour, or False Hope?
Introduction to Complexity Theory
Complex systems are made from very simple individuals that respond to their environment and to each other. Let's take a look at what makes the emergence of such systems special and what conditions are necessary for it to happen. Then we can go on to see the different kinds of system that can emerge; they need not be simple ones: spontaneous group intelligence is more than plausible, it is a reality.
What is Emergence?
Emergence is a by-product of complex interactions. If enough simple things are together in a small enough space, their interactions cause the agents to spontaneously organise themselves into a system. Self-organisation is one of the key signs of complexity, but it only happens under specific conditions. If there are not enough agents taking part, or if they are too far away from each other, they will not interact often enough to produce anything interesting. As a whole the system stagnates and nothing happens. On the flipside, if there are too many agents packed into the space, they will have a massive number of interactions in a short span of time, causing chaos. However, there is a narrow margin between these two extremes where the density of agents is just right, resulting in a frequency of interaction that is neither too sparse nor too frequent. In this sweet spot the disparate agents suddenly come together in what appears to be an organised way. On this knife-edge on the brink of chaos, the agents spontaneously form a dynamic system of patterns.
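To make the density argument concrete, here is a toy sketch (mine, not from the article) that scatters agents at random and counts how many pairs land within interaction range. The grid size, radius and agent counts are arbitrary illustrative values: too few agents and almost nothing happens; too many and the number of interactions explodes.

```python
import random

def interaction_count(n_agents, grid_size, radius=1.0, trials=50):
    """Scatter n_agents uniformly in a grid_size x grid_size box and
    count pairs closer than `radius` (close enough to interact),
    averaged over several random scatters."""
    total = 0
    for _ in range(trials):
        pts = [(random.uniform(0, grid_size), random.uniform(0, grid_size))
               for _ in range(n_agents)]
        for i in range(len(pts)):
            for j in range(i + 1, len(pts)):
                dx = pts[i][0] - pts[j][0]
                dy = pts[i][1] - pts[j][1]
                if dx * dx + dy * dy < radius * radius:
                    total += 1
    return total / trials

# Too sparse: interactions are vanishingly rare.
sparse = interaction_count(5, 50)
# Too dense: a flood of interactions in every instant.
dense = interaction_count(200, 50)
```

The interesting regimes sit between these extremes, where interactions are frequent enough to build structure but not so frequent as to swamp it.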
An example of the two extremes, and of the magic zone in between where complexity emerges, is the behaviour of molecules in water. Below zero degrees Celsius water is frozen; because water expands when it freezes, the individual molecules sit farther apart than normal and there is little activity, resulting in uniformity. Above zero, when the water is liquid, the molecules are too close together, bumping into each other and moving in all directions: chaos. At exactly zero degrees, however, the water is partly frozen and partly liquid. The molecules are at just the right density to interact with each other and form small crystals. These crystals are not static like ice; they constantly change shape, merging into one another and breaking apart. One moment a molecule is part of one structure, the next it is free, and then it is part of a different one. Ordered complex structures form and re-form all the time in a hypnotic dance. In that extremely narrow temperature range, just before the chaos of liquid, emergent structures are clearly visible. This boundary between states is what physicists call a phase transition.
If you were to examine the movements of the water in this state, watching crystals form and break up, you would be able to discern some pattern. With patient observation and sophisticated equations you could encapsulate the rules of that crystal symphony (the dynamics of the system) in a set of precise terms. However, unless you could observe the cold water for yourself and take notes, you would not be able to derive those same equations just from knowing the possible bonding permutations of H2O molecules.
The intrigue does not stop there: if you get enough systems in the same place at the correct density, even more complicated systems can emerge. Emergence can happen in layers. For example, it is the properties of hydrogen and oxygen that allow them to bond into a molecule. The form of that molecule determines how it can attach to other water molecules. The density of those molecules, as a function of pressure and temperature, can give rise to complex ice structures. H2O molecules in small quantities allow one class of behaviours at the nano-scale, but these feed forward to create the waves associated with large-scale, Navier-Stokes-type fluid dynamics. In other words, each layer of scale has its own system, with its own unique set of rules.
This Wiki contains a good summary of the properties of a complex system.
On the Other Hand...
Emergence is not a magical occurrence, nor is it inherently impossible to predict. Such an inference is based only on the inability of our intellect to comprehend it, of our maths to describe it, and of our supercomputers to keep up with it.
The term emergent property is a short way of saying that the interactions are so complex that, although the rules of the system allow for such behaviour, it is impossible to foresee every permutation of the system.
Emergence and Adaptation
Another startling property of complex systems is that very often the product of the system is adaptive. Without any extra rules, and without needing a hierarchy of command, the system created from the individuals can change its behaviour as a whole to cope with new environmental stimuli. The classic example is a flock of birds. No single bird knows what shape the flock should be, yet when enough birds fly together a flock happens and the individuals act as one. The flock flows around obstacles, moves towards prey and flees from predators as a single organism.
Craig Reynolds aped flocking in his seminal Boids program. From just three basic rules, his boids formed into flocks that behaved as real flocks would. The same principle operates in swarms of flies, herds of wildebeest and shoals of fish. Watch Finding Nemo for some great virtual shoaling, and The Lion King for the wildebeest variety. Even Peter Jackson harnessed it with MASSIVE, the engine behind the battles in The Lord of the Rings.
This is where things become a boon for programmers. In theory it is much harder to encapsulate the rules of the whole flock and cover all eventualities than it is to code the rules for one boid and let the flock take care of itself. Top-down code (for an entire flock) is bound to miss a scenario and leave the flock doing the wrong thing. Program the flock bottom-up, by creating many instances of the same bird, and your flock will adapt to remain a flock in every situation (aside from too many birds dying: then there won't be a flock any more!).
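To show how little bottom-up code is needed, here is a minimal boids-style sketch in Python. The three rules (separation, alignment, cohesion) follow Reynolds' idea, but the rule weights and ranges are arbitrary values chosen for this illustration, not his originals:

```python
import random

class Boid:
    def __init__(self):
        self.x, self.y = random.uniform(0, 100), random.uniform(0, 100)
        self.vx, self.vy = random.uniform(-1, 1), random.uniform(-1, 1)

def step(boids, view=20.0, sep=2.0):
    """One tick of the three per-boid rules; the flock is never coded."""
    for b in boids:
        near = [o for o in boids if o is not b
                and (o.x - b.x) ** 2 + (o.y - b.y) ** 2 < view ** 2]
        if not near:
            continue
        n = len(near)
        # Cohesion: steer gently toward the local centre of mass.
        b.vx += (sum(o.x for o in near) / n - b.x) * 0.01
        b.vy += (sum(o.y for o in near) / n - b.y) * 0.01
        # Alignment: match the average heading of neighbours.
        b.vx += (sum(o.vx for o in near) / n - b.vx) * 0.05
        b.vy += (sum(o.vy for o in near) / n - b.vy) * 0.05
        # Separation: move away from boids that are too close.
        for o in near:
            if (o.x - b.x) ** 2 + (o.y - b.y) ** 2 < sep ** 2:
                b.vx -= (o.x - b.x) * 0.05
                b.vy -= (o.y - b.y) * 0.05
    for b in boids:
        b.x += b.vx
        b.y += b.vy

flock = [Boid() for _ in range(30)]
for _ in range(100):
    step(flock)
```

Nothing in `step` mentions a flock; flocking is what thirty copies of the same rules produce together.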
A more detailed summary of adaptation can be found here
Emergence, Life & Intelligence
Just as single-celled organisms were the first to exist in nature, so the same is true in artificial life. John Conway created the Game of Life, a program whose grid of on/off cells, rather like a colony of bacteria, can produce some interesting colonisation patterns. The principle behind his creation is known as the cellular automaton.
If a collection of on/off cells representing bacteria can produce interesting patterns, then what about the potential of a group of living creatures? A large enough group can go beyond being simply adaptive and responsive to its environment: it can have goal-oriented intelligence.
A good example of emergent distributed intelligence is termites. These simple insects cooperate to build enormous towers and maintain a complex social structure. Put too few termites in a vivarium and their behaviour is largely unconnected; place too many and the results are erratic; but at just the right density, the complexity of the hive emerges.
In the density sweet spot, each individual's responses to its environment and to other termites create the behaviour necessary to make these huge constructions. Not only is the physical hive structure complex, but the termites respond intelligently to the world around them as a collective. A lone insect is far from clever, having just a few neurons. A whole hive of termites, however, has a great many neurons between them; the combined processing power of the organised system is on the order of a human brain's. As a group they have some serious computational power with which to make decisions.
In just one scenario, an attack by other insects, the hive sends soldiers against the predators, moves eggs away from the attackers and protects the queen. This is an ordered response to one external pressure, but the hive can also respond to changes in temperature, to the location and type of food sources, and to the number of its own members.
In these cases there is a collaborative level of intelligence where the decision making is an emergent property of the collection of individuals. Christopher Langton made a program with one ant in it, showing how a basic rule can create the complex foraging path of an ant. Here is a program that simulates ant foraging, and a paper on a nest’s organisation. Some experiments have been done to simulate whole ant nests. MIT actually built many small robots embedded with sensors and ant rules that behaved collectively as an intelligent agent. Current projects involve using arrays of many agents to act as one larger intelligent body. Proposed projects see the swarms being used for the exploration of hostile environments, such as Mars.
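Langton's one-ant program mentioned above is tiny enough to reproduce here. This is a sketch of the standard Langton's Ant rule set: on a white square turn right, on a black square turn left, flip the square's colour, then step forward.

```python
def langtons_ant(steps):
    """Run Langton's Ant for `steps` moves on an unbounded grid.
    Returns the set of squares that finished black."""
    black = set()            # coordinates of black squares
    x, y = 0, 0
    dx, dy = 0, 1            # start facing 'up'
    for _ in range(steps):
        if (x, y) in black:
            dx, dy = -dy, dx      # black square: turn left
            black.discard((x, y)) # ...and flip it back to white
        else:
            dx, dy = dy, -dx      # white square: turn right
            black.add((x, y))     # ...and flip it to black
        x, y = x + dx, y + dy     # step forward
    return black

# The path looks chaotic at first, but after roughly 10,000 steps the
# ant settles into a repeating 'highway' that marches off forever.
trail = langtons_ant(11000)
```

One fixed rule, no randomness, and yet the long-run structure (the highway) is not obvious from the rule itself; that is the emergent flavour the article is describing.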
Concept Ad Absurdum
Given the nature of higher-order systems, if one can argue that the termite nest as a whole is intelligent, then it may be permissible to say that the interactions and organisation between the proteins of the 'primordial soup' created a distributed collaborative entity. This loosely associated being later came to be encapsulated in single-celled organisms, whose community represented a larger, more complex distributed organism, which was encoded into multi-celled organisms, and so on until we humans arrived. If the pattern continues, there is potential for a single-entity incarnation of mankind.
Emergence and Computation
In any field you will always find reductionists. If you break down hive intelligence, there must be some kind of number crunching going on: taking the inputs from many termites, working out what is happening and sending out a response. Out of emergent intelligence comes the concept of emergent computation, a fascinating field both discovered and explored by complexity theorists. The idea is that you can take simple components and create an emergent system that is computationally complete.
Computationally complete means that the system can process any computable function, based on its ability to perform a few key basic operations, such as AND and NOT (or a single universal gate like NAND or NOR). This means that in theory the system could compute exactly the same things as the machine you're reading this on. Have a look at this version of the Game of Life where cellular automata have been leveraged for computation. This holds a lot of wonder for theorists and potential for programmers. Don't be misled, though: just because a computationally complete system could compute anything doesn't mean the system is actually computing anything at all, so don't imagine the Earth to be as Douglas Adams imagined it, as enticing as that might be. However, even if that is not happening without intervention, if we put our minds to it we could harness such phenomena for computation, like Terry Pratchett's imaginary ant computer in his book Interesting Times.
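The claim that a few basic gates suffice is easy to demonstrate. Here is a sketch showing that NAND alone is functionally complete: every other Boolean gate can be built by wiring NANDs together.

```python
def NAND(a, b):
    """The universal gate: false only when both inputs are true."""
    return not (a and b)

# Every other gate, built purely from NAND.
def NOT(a):    return NAND(a, a)
def AND(a, b): return NOT(NAND(a, b))
def OR(a, b):  return NAND(NOT(a), NOT(b))
def XOR(a, b): return AND(OR(a, b), NAND(a, b))

# Sanity-check the derived gates against Python's own operators.
for a in (False, True):
    for b in (False, True):
        assert AND(a, b) == (a and b)
        assert OR(a, b) == (a or b)
        assert XOR(a, b) == (a != b)
```

So any system that can realise one such gate, and wire copies of it together, can in principle compute any computable function; this is exactly what the Game of Life construction linked above does with gliders.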
Summary
A density of simple agents that can take inputs from their environment can, under the right conditions, beget extremely complex and intelligent behaviours. Each layer of complexity has its own rules; each is a unique and separate system whose behaviour cannot be predicted from the rules of the system beneath it.
The possible applications of this phenomenon are staggeringly vast, from self-assembling drug compounds, to coordinated search and rescue robots and massively parallel computers.
A good jumping off point for more info is Cosma R. Shalizi’s ramblings. Have a good poke around in there and you’ll find a host of information about complexity and its uses.
©2004-5 Ben Gonshaw. All images copyright of their respective holders, including (but not limited to) Sammy/SNK, Capcom, Marvel.