Tuesday, November 25, 2008

Progression of Intelligent Processes

The purpose of this article is to discuss the possible roots of intelligence in the universe. As with most fuzzy concepts like "intelligence," we must begin by producing several basic formal definitions which we can then use as tools to build more powerful concepts. We attempt to use these definitions to classify intelligent processes and their advancements over time. The result of this thought process is the idea that an intelligent process tends to produce other intelligent processes with goals that also fulfill the goals of the original intelligent process.

Definitions

System: any collection of interacting components.

State: a complete description of all components of a system.

Process: a sequence of changes to the state of a system.

What's the difference between a "random" and a "non-random" process? It depends on the level of detail of the analysis. If we analyze a complex process operating on a large system at a crude level of detail, representing it with too few variables, then we lose the ability to model all of its predictable effects. We must then quantify the results with a certain degree of uncertainty/randomness simply because our model lacks sufficient detail. Conversely, given unlimited resources for analysis, any process becomes non-random.

Stability: a property of the state of a system, proportional to its ability to remain relatively unchanged over time when perturbed only by a completely random process.

A process can change a system from a stable state to an unstable one, or vice versa. It can move a system through a trajectory of stable states. It can even, in a way, give birth to other processes: it can lead a system to a state where other "child" processes are able to act on it as well.

Goal: a target state of a system.

Intelligence: a property of a process, proportional to its ability to move a system closer to a pre-defined goal state.

Goals are defined by outside observers. For any particular analysis of any particular process, we can define whatever goals we want and determine the degree of intelligence present. Thus, intelligence is in a sense arbitrarily defined, but it can be a useful measurement as long as the goals are well-specified. We can say that a process has zero intelligence if: 1) it has no goals defined, or 2) it is completely unable to move a system towards a defined goal.

Now let's take a step back and think about our universe as a whole. The universe is defined by its initial conditions and its fundamental forces: the initial conditions specify the initial state of all systems, and the fundamental forces constrain the possible processes that can act on those systems. Interestingly, there seems to be a universal process which tends to favor systems in stable states. Imagine the state of any system as a point on a surface (or landscape), where the lowest points are more stable than the peaks. This universal process forces all systems down the slopes of their stability surfaces towards the (locally) minimum points. If the minimum is bowl-shaped, the system will stop changing when it reaches that minimum. (The minimum might be a valley, though, so there can be room for the system to wander around its state space within the valley and remain stable. This is a key requirement for a process to birth another process: when the parent process succeeds in reaching the minimum, the child process can begin to explore the valley.) Systems might go through unstable states temporarily, but they will tend towards the most stable. So what is this universal process which favors stability?
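
As a toy illustration of this "descent toward stability," here is a minimal Python sketch. The one-dimensional landscape and the noise/tolerance values are invented for the example; the only point is that random perturbations plus a bias toward stability are enough to settle a system into a local valley, where it can still jitter around without leaving.

```python
import random

def stability_landscape(x):
    # Hypothetical 1-D "stability surface": lower points are more stable states.
    return (x ** 2 - 1) ** 2 + 0.1 * x   # two valleys, one slightly deeper

def settle(x, steps=10_000, noise=0.05, tolerance=1e-4):
    """Noisy descent: random perturbations are kept only if they do not make the
    state noticeably less stable, so the system slides down into a local valley
    and then wanders around its floor."""
    for _ in range(steps):
        candidate = x + random.uniform(-noise, noise)
        if stability_landscape(candidate) <= stability_landscape(x) + tolerance:
            x = candidate
    return x

print(round(settle(2.0), 2))   # settles near the closest valley floor (x close to 1)
```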

Evolution: a process which tends to change the state of a system to increase its stability, i.e. an intelligent process whose pre-defined goal state is the one with maximum stability.

The universal process described above seems to select the most stable states, allowing them to last, while discarding the unstable ones. We can view it as a type of evolution, "physical evolution," the first intelligent process within the universe.

Brain: a system which supports a specific variety of intelligent process. This process represents a transformation from input space (sensory) to output space (motor). Brains are produced by some other intelligent process which specifies their goal states.

Note that evolutionary and brain-based intelligence are processes, and remember that processes can give birth to other processes. In general, processes with a high degree of intelligence tend to lead to the existence of other intelligent processes. A single parent intelligence can even produce an entire family tree of intelligent processes. Furthermore, the child processes tend to be given goal states from a subset of the parent's goal states. A child process whose goals violate those of the parent will not last.

Progression

The following list describes the progression of intelligent processes in the universe. Each stage represents the result of some combination of events which produces a kind of intelligence phase change. There can be multiple intelligent processes at work simultaneously (e.g., the original physical evolution, its child "genetic evolution," etc.). We intermix the more significant advances of each process as they appear. The list is, of course, not exhaustive; it represents a very small sample of all the complex processes in existence. Since we must choose a small subset, we choose to focus on those events that are most interesting to humans, i.e. those involved in the generation of our own existence.

Note that the goals of each child intelligent process cannot oppose the goals of the parent process without being destroyed. Also note that a key ingredient to many forms of intelligence, including evolutionary processes and more advanced brain-based intelligence, is the random exposure to new situations which enables trial-and-error-based decision making.

Physical Evolution Level 0: Initial state of the universe
The "null" stage. The universe is in its initial state, determined by some unknown process, waiting for the fundamental forces to begin acting.

Physical Evolution Level 1: Clusters
As soon as the fundamental universal forces start acting, physical evolution appears. Gravity produces clusters of particles from the initial state of matter. Larger clusters have more pulling force than smaller ones; the large get larger, and the small get sucked into the larger ones. Eventually physical evolution produces its first result in achieving its goal of stability: the universe becomes a stable collection of clusters of matter separated by empty space.

Physical Evolution Level 2: Stable molecules
Once the cosmic-scale events have settled down, interesting things begin happening at the microscopic level. Atoms are constantly colliding, "trying out" new ideas for molecules. The molecules that last longer are more stable; if they stay around just a little bit longer than others, they will become more common. So we begin to see physical evolution performing a type of selection process on molecules. Those that are most stable proliferate, and the unstable ones disappear. Each stable molecule flourishes in a certain habitat. There can be multiple stable "species" of molecules that coexist, possibly with symbiotic relationships.
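
Here is a minimal sketch of this kind of stability-based selection. The species names, decay rates, and influx are invented; the only mechanism at work is that molecules which break apart less often accumulate.

```python
import random

# Hypothetical molecular "species", each with a per-step probability of breaking apart.
decay_rate = {"A": 0.20, "B": 0.10, "C": 0.05}   # C is the most stable

population = {name: 1000 for name in decay_rate}

for step in range(200):
    for name, rate in decay_rate.items():
        survivors = sum(1 for _ in range(population[name]) if random.random() > rate)
        # Random collisions keep producing a small trickle of every kind of molecule.
        population[name] = survivors + 50

print(population)   # the most stable species ends up the most common
```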

Physical Evolution Level 3: Stable structures
Now that there are stable molecules available, physical evolution can operate on combinations of molecules. Random collisions of molecules produce all kinds of physical structures, some stable, and some not. Physical evolution again selects the more stable structures to proliferate, resulting in a new kind of battle for survival.

Physical Evolution Level 4: The cell wall
A molecular structure is produced that acts as a shield against bombardment: the lipid bilayer. Any collection of molecules with one of these protective shells (i.e. a crude "cell wall") gains a massive advantage over others in terms of mean lifespan. Their existence is still relatively short but much longer than before. In certain hospitable environments, these stable "cells" become common.

Physical Evolution Level 5: Birth of genetic evolution
With the protective cell wall in place, physical evolution can begin to experiment with various modifications to the internal cell structures. As before, any changes that increase the lifespan of the cell produce more long-term stability of that design. The game-changing event at this stage is the appearance of intra-cellular structures which support information-based representations of physical structures (primitive DNA). Physical evolution has given birth to a new intelligent process: genetic evolution.

Genetic Evolution Level 0: Gene expression
The presence of some molecule results in the production of some corresponding physical structure. This is the essence of gene expression. Any cell containing molecule X produces structure Y. Such a procedure is enabled by a combination of structures that acts as a gene expression machine. The details of the actual transformation (from X to Y) are unimportant as long as X reliably produces Y. Cell stability is still fundamentally tied to its physical properties, but with this gene expression machine in place, it is now indirectly tied to the presence of certain genetic molecules. Long-term survival under these new rules depends on having the right combination of these "genes." If having gene X implies having structure Y, and structure Y is related to a stronger cell wall, better repair mechanism, faster acquisition of resources, etc., then cells with gene X will become more common. Essentially, this new intelligent process operates in gene space, which is just a proxy for physical space. Genetic mutations, random events that modify a cell's genetic material, are an essential part of exploring this new gene space. The goals of the new genetic evolution intelligence (proliferation of stable genes) still fall within the constraints of its parent's goals (proliferation of stable structures).
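
A toy sketch of selection operating in gene space. The genome encoding, mutation rate, and the assumption that each expressed gene simply adds to cell stability are invented for illustration; the mechanism shown is random mutation plus stability-based selection on the expressed structures.

```python
import random

GENOME_LENGTH = 8   # hypothetical: each expressed gene adds to cell stability

def stability(genome):
    # Gene expression as a proxy: a 1-bit stands for a gene whose expressed structure
    # (stronger wall, repair machinery, ...) makes the cell last longer.
    return sum(genome)

def mutate(genome, rate=0.05):
    return [1 - g if random.random() < rate else g for g in genome]

# A population of cells, each described only by its genes.
cells = [[random.randint(0, 1) for _ in range(GENOME_LENGTH)] for _ in range(100)]

for generation in range(50):
    # Physical selection still acts on structures, but structures come from genes,
    # so selection effectively operates in gene space.
    cells.sort(key=stability, reverse=True)
    survivors = cells[:50]
    cells = survivors + [mutate(random.choice(survivors)) for _ in range(50)]

print(max(stability(c) for c in cells))   # climbs toward 8, the most stable gene combination
```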

Genetic Evolution Level 1: Cell replication
When cells develop the machinery to copy their own genetic information, they become able to copy their physical structures as well. This produces an explosion in the speed at which genetic evolution can operate. The "fitness" (relative long-term stability) of a given genetic solution is multiplied by the number of instances of that solution: a large population of short-lived physical entities is now more stable than a single long-lived non-replicating entity. The feedback loop (successful genes produce more numerous, stable physical populations, which generate more copies of those genes, and so on) means that systems with replication quickly outnumber non-replicating systems.
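
A back-of-the-envelope simulation of this trade-off. The starting population, death rates, and resource cap are arbitrary; the point is that a fragile design that copies itself ends up far more numerous (and therefore more stable as a lineage) than a sturdy design that does not.

```python
import random

def simulate(replicates, death_rate, steps=100, cap=10_000):
    """Count how many copies of a design remain after `steps` time steps."""
    population = 5
    for _ in range(steps):
        survivors = sum(1 for _ in range(population) if random.random() > death_rate)
        # Replicators double their survivors each step, up to a crude resource limit.
        population = min(survivors * 2 if replicates else survivors, cap)
    return population

print("long-lived but non-replicating:", simulate(replicates=False, death_rate=0.01))
print("fragile but replicating:       ", simulate(replicates=True,  death_rate=0.30))
```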

Genetic Evolution Level 2: Motility
Moving around increases the probability of acquiring resources, resulting in an increased ability to build and repair structural elements. Motility is possible even without feedback-based control: simply moving around quickly without any particular target is much better than sitting still. One possible form of early motility is enabled by a basic type of short-term memory: the charging/discharging cycles of cell depolarization, tied to some physical deformation of the cell, produce a variety of repetitive motions, some of which tend to move the cell around in its environment.

Genetic Evolution Level 3: Brains
Targeted acquisition of resources based on sensation is a more advanced form of motility which supplants simple blind movement. When external events are allowed to affect cell depolarization (which drives motility), a feedback loop is present in the system. This presents a new domain for genetic evolution: the control system between sensation and action - the brain. Brains are defined by genes, just like all other structures in the system, so genetic mutations can act upon the parameters of these controllers. Genetic changes are favored that improve the control system in a way that results in more copies of those genes. We consider the transformation process (inputs to outputs) performed by the brain as a child intelligent process of its parent, genetic evolution. The initial brain's implicit goals involve acquiring energy and materials, but can potentially involve anything needed by genetic evolution. Any changes in the brain's parameters are constrained to help achieve the goals of genetic evolution.

Brain Level 0: Simple feedback control
The simplest brains are basic feedback control mechanisms (e.g., P/PD/PID controllers) which transform some sensory input signal into an output control signal. Initial possible "behaviors" for entities evolved in different environments include chemotaxis (chemical gradients), thermotaxis (temperature gradients), phototaxis (light gradients), etc. These feedback-based behaviors provide far more resources for the entity, increasing its long-term stability relative to entities without such skills.
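
A minimal sketch of such a controller, assuming a made-up one-dimensional chemical field: the output (velocity) is simply proportional to the locally sensed gradient, which is enough to steer the entity to the source.

```python
def concentration(x):
    # Hypothetical chemical field peaking at a food source at x = 10.
    return -(x - 10.0) ** 2

def chemotaxis(x, gain=0.2, dx=0.1, steps=60):
    """Proportional (P) controller: the output velocity is proportional to the
    locally sensed gradient, steering the entity up the concentration slope."""
    for _ in range(steps):
        gradient = (concentration(x + dx) - concentration(x - dx)) / (2 * dx)
        x += gain * gradient
    return x

print(round(chemotaxis(x=0.0), 2))   # converges to ~10.0, the food source
```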

Genetic Evolution Level 4: Sexual reproduction
Targeted motility enables sexual reproduction. The success of a gene may depend upon the presence of other genes. In general, more complex structures must be represented by larger chunks of genetic material. The evolution of an entity's genetic material is a slow process when based on asexual reproduction and mutations alone; furthermore, the emergence of complex structures from mutations of an individual genome is fairly improbable. However, the emergence of genetic crossover, the ability of two physical entities to exchange chunks of genetic information, dramatically increases the probability of producing more complex structures. This procedure acts as a wormhole in gene space through which information can jump. The result is that genetic material present in two separate entities, which might produce simple structures in isolation, can combine synergistically to produce much more complex structures in the offspring entities. Genes are now favored that optimize the control system towards more sexual reproduction, e.g., producing an implicit goal in the physical entities of maximizing intercourse.
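
A minimal sketch of single-point crossover. The genomes are invented so that each parent carries the genes for only one of two structures; some offspring inherit both chunks at once, something mutation alone would rarely produce.

```python
import random

def crossover(genome_a, genome_b):
    """Single-point crossover: each parent contributes a chunk of genetic material."""
    point = random.randint(1, len(genome_a) - 1)
    return genome_a[:point] + genome_b[point:]

# Hypothetical genomes: parent A carries the genes for structure 1,
# parent B carries the genes for structure 2; neither has both.
parent_a = [1, 1, 1, 1, 0, 0, 0, 0]
parent_b = [0, 0, 0, 0, 1, 1, 1, 1]

children = [crossover(parent_a, parent_b) for _ in range(20)]
print(max(sum(child) for child in children))   # some offspring inherit most or all of both chunks
```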

Genetic Evolution Level 5: Multi-celled entities
A collection of cells that functions together in close proximity provides a benefit to each cell involved. By "cooperating" in a sense, many cells can share the overhead costs of staying alive, amounting to a type of microscopic trade agreement. While sharing the burdens of life with nearby cells, they can start to specialize into different roles, increasing the variety of macro-scale multi-celled structures.

Brain Level 1: Discrete switching control
With the appearance of multi-celled entities, the brain can now consist of a network of specialized neural cells which communicate via synaptic transmission. Each neural cell represents a nonlinear transformation of inputs to outputs, and the collective activity of the neural network can be viewed as a dynamical system. Such dynamical systems can have many stable states. Thus, instead of using a single feedback-based controller, the brain has now evolved multiple discrete control systems ("reflexes"). Each one is used in a different situation (e.g., feeding, swimming, mating, fleeing). The "decision" of when to switch is still solely influenced by the current (or very recent) sensory information; when the entity is exposed to a certain situation (i.e. a certain pattern of sensory inputs), its brain switches to a different stable feedback loop (attractor state in the dynamical system).
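
A minimal sketch of discrete switching control. The reflexes, sensor names, and thresholds are invented; the essential feature is that the current sensory pattern alone selects which self-contained feedback behavior is active.

```python
# Hypothetical reflexes: each one stands for a self-contained feedback behavior.
def feed(sensors):    return "move toward food"
def flee(sensors):    return "swim away fast"
def mate(sensors):    return "approach mate"
def wander(sensors):  return "swim slowly"

def switch(sensors):
    """Discrete switching control: the current sensory pattern alone decides
    which reflex (stable attractor of the neural dynamics) is active."""
    if sensors["predator_smell"] > 0.5:
        return flee
    if sensors["food_smell"] > 0.5:
        return feed
    if sensors["mate_signal"] > 0.5:
        return mate
    return wander

sensors = {"predator_smell": 0.1, "food_smell": 0.9, "mate_signal": 0.0}
print(switch(sensors)(sensors))   # -> "move toward food"
```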

Brain Level 2: Language
Language emerges when the brain evolves distinct behaviors based on sensory input patterns caused by other entities. The basic ability to communicate information among individuals has the potential to augment the individual's representation of the state of the world with key information that can improve decision making (e.g., the task of information acquisition can be shared among many individuals). This is a necessary step towards the beginning of cross-generational information transfer, or "culture."

Brain Level 3: Structural learning
Previously, learning was possible based only on very short-term effects (e.g., cell membrane voltage). Now, brains are able to store information indefinitely by making structural changes to themselves based on experiences. This provides the brain with the ability to make decisions (about which action to perform next) based on current sensory information AND past experiences. Individuals can learn from mistakes. This takes some of the burden off genetic evolution; instead of evolving entities whose genes are tuned for very specific environments, it can instead evolve entities whose brains have a certain degree of adaptability, making them successful in a wider variety of environments.

Brain Level 4: Explicit goal representation
Previously, the brain's goals were implicitly determined by genetic evolution (produce behaviors that help proliferate genes). Brains that did not meet these goals were wiped out. Now it is possible to represent goals explicitly in the brain via reward signals (e.g., dopamine). This new brain sub-component is a mapping from certain sensory input patterns ("reward states") to a scalar reward signal. When this reward value is high, the entity's current actions are reinforced. So any situation that increases the brain's reward level will reinforce the recent actions that led to it. This operates on the existing switching control system by adjusting the probabilities of switching to each individual action in any given situation. Interestingly, the specification of which sensory situations produce reward signals is arbitrary and can be defined genetically.
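
A minimal sketch of this kind of reinforcement. The situation, actions, and reward mapping are invented; whichever action happens to be followed by a high reward signal has its switching preference strengthened, so it gradually dominates in that situation.

```python
import random

ACTIONS = ["feed", "flee", "wander"]

# Switching preferences per situation, adjusted by the reward signal.
preferences = {"food_nearby": {a: 1.0 for a in ACTIONS}}

def choose(situation):
    prefs = preferences[situation]
    return random.choices(ACTIONS, weights=[prefs[a] for a in ACTIONS])[0]

def reward_signal(situation, action):
    # Hypothetical genetically specified mapping from sensory outcomes to reward.
    return 1.0 if (situation == "food_nearby" and action == "feed") else 0.0

for _ in range(500):
    action = choose("food_nearby")
    r = reward_signal("food_nearby", action)
    preferences["food_nearby"][action] += r   # a high reward reinforces the recent action

print(max(preferences["food_nearby"], key=preferences["food_nearby"].get))   # -> "feed"
```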

Genetic Evolution Level 6: Genetic determination of brain goal states
Instead of having to make complex, global changes to the brain in order to add new goals, genetic evolution can now simply modify the mapping from sensory state to reward. The brain's explicit goal representation, which is genetically defined, provides a simple unified architecture for adding, deleting, and adjusting its goals.

Brain Level 5: Value learning
Adjusting the action switching system based solely on the current reward signal is not always ideal; sometimes it is important to take a series of actions that produce no reward in order to achieve a larger final reward. It is possible to circumvent this issue by learning an internal representation of the "value" of various situations. This "value function" (in mammals, the striatum/basal ganglia) is a mapping in the brain from the current sensory experience to an estimate of future reward. Now the action switching system can operate on the long-term expectation of rewards, not just immediate rewards, resulting in individuals which are able to achieve their goals much more effectively.
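
A minimal sketch of value learning using a temporal-difference update (one standard way to learn such a value function; the states, rewards, and learning parameters here are invented). Situations that merely lead toward reward acquire value of their own, so the switching system can act on expected future reward rather than immediate reward.

```python
# Hypothetical chain of situations: only the final one delivers reward,
# but earlier situations acquire "value" because they lead toward it.
STATES = ["far", "closer", "near", "food"]
REWARD = {"food": 1.0}

value = {s: 0.0 for s in STATES}
alpha, gamma = 0.1, 0.9   # learning rate, discount factor

for episode in range(500):
    for i in range(len(STATES) - 1):
        s, s_next = STATES[i], STATES[i + 1]
        r = REWARD.get(s_next, 0.0)
        # TD(0) update: move the estimate toward reward + discounted next-state value.
        value[s] += alpha * (r + gamma * value[s_next] - value[s])

print({s: round(v, 2) for s, v in value.items()})
# "near" is worth ~1.0, "closer" ~0.9, "far" ~0.81: expected future reward propagates backward
```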

Brain Level 6: Motor automation
Repetitive motions are often represented as reflexes, but it is also important for an individual to learn novel motions during the course of a lifetime. A special brain structure (in mammals, the cerebellum) enables repetitive motions to be automated. Although the combination of value learning and action switching is a very flexible system, it can be wasteful for these well-learned motion sequences. This new brain structure provides a general-purpose action sequence memory that can offload these sequences, performing them at the appropriate time while allowing the action selection mechanism to focus on novel decisions.

Brain Level 7: Simple context representation
Any brain component that depends upon the state of the environment will perform better given an improved internal representation of the environment. Since the state of the external world cannot be accessed directly by these components, they are only as good as the brain's "mental model" of the world. So a specialized brain structure (in mammals, the archicortex/hippocampus) that can classify the state of the world into one of several distinct categories will help the brain represent value and choose actions more effectively.
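
A minimal sketch of context classification by nearest prototype (the prototypes and sensory dimensions are invented). The continuous sensory state is collapsed into one of a few discrete categories that downstream components, such as value learning and action switching, can key off.

```python
import math

# Hypothetical context prototypes: each is a typical sensory pattern
# ([light level, noise level, smell strength]).
PROTOTYPES = {
    "open field": [0.9, 0.2, 0.1],
    "burrow":     [0.1, 0.1, 0.6],
    "near water": [0.6, 0.5, 0.3],
}

def classify(sensory_state):
    """Simple context representation: pick the closest stored prototype,
    collapsing a messy continuous input into one of a few discrete categories."""
    return min(PROTOTYPES, key=lambda name: math.dist(sensory_state, PROTOTYPES[name]))

print(classify([0.8, 0.3, 0.2]))   # -> "open field"
```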

Brain Level 8: Advanced context representation
Any further improvement of the "mental model" of the world is exceedingly valuable to decision making entities. An outgrowth of the initial simple pattern classifier appears (in mammals, the 6-layered cerebral cortex). This enhanced version extracts information content, computes degrees of belief about the world, and presents a summary in a simplified (linearly separable) form for use by other brain components like the value learning and action switching system.

Brain Level 9: Information-based rewards
A new explicit goal in the brain appears as a reward signal based on the information content provided by the entity's sensory inputs. The entity is thus intrinsically motivated to explore unexplored places and ideas. Before this new motivation, behaviors were mainly focused on survival and reproduction with little need for acquisition of new information. Now there is a strong drive to explore the world, simultaneously training and improving the brain's mental model. (An advanced context representation is useless without the motivation to fill it with information.)
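
A minimal sketch of an information-based reward, here approximated as a novelty bonus that shrinks with repeated visits (the places and the exact 1/count form are invented). An entity that spreads its visits around collects more of this reward than one that stays put, so exploration is reinforced.

```python
from collections import Counter

visit_counts = Counter()

def intrinsic_reward(situation):
    """Information-based reward: rarely seen situations are worth more, so the
    entity is driven to explore and fill in its mental model of the world."""
    visit_counts[situation] += 1
    return 1.0 / visit_counts[situation]

PLACES = ["home", "field", "forest", "river"]

def explore(steps=20):
    total = 0.0
    for _ in range(steps):
        # A curious policy: always visit the least-visited place.
        place = min(PLACES, key=lambda p: visit_counts[p])
        total += intrinsic_reward(place)
    return total

print(round(explore(), 2))   # spreading visits around collects more reward than staying home
```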

Brain Level 10: General purpose working memory
All kinds of decisions can be improved by considering various action sequences before physically executing them. This requires the ability to simulate various states of the world within the internal representation, including sequences of actions and their expected consequences and rewards. This type of simulation is accomplished in a special short-term memory array (in mammals, the prefrontal cortex) that can be written to and read from. The read/write operations are extensions of the old action selection system: now, instead of being limited only to physical actions, the brain has acquired the mental actions "read from memory cell" and "write to memory cell."
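
A minimal sketch of working memory as an extension of the action set (the memory layout and the plan are invented). "Write" and "read" are treated as mental actions alongside physical ones, so information observed earlier can influence later decisions.

```python
# Hypothetical working memory: a small array of cells that mental actions can
# write to and read from, alongside the usual physical actions.
memory = [None, None, None]

def act(action, brain_state):
    kind = action[0]
    if kind == "write":                  # mental action: store part of the mental model
        _, cell, content = action
        memory[cell] = content
    elif kind == "read":                 # mental action: bring a stored item back
        _, cell = action
        brain_state["recalled"] = memory[cell]
    else:                                # ordinary physical action
        print("executing:", action)

brain_state = {}
plan = [
    ("write", 0, "fruit seen behind the big rock"),
    ("walk", "to the river"),
    ("read", 0),
    ("walk", "back to the big rock"),
]
for step in plan:
    act(step, brain_state)

print(brain_state["recalled"])   # the earlier observation is available to later decisions
```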

Technological Evolution Level 0: First appearance
With the advent of working memory, the current combination of brain structures allows the construction and execution of extremely complex action sequences. This includes several unique new abilities. It is possible for individuals to make physical artifacts ("tools") from materials in the environment, enhancing the effectiveness of the body in various ways: extended reach, impact force, and leverage. Simultaneously, tool use provides an extension to the brain itself: the "tool-enhanced brain" has an extended long-term memory because it can record information in the environment rather than relying on brain-based memory. This greatly enhances the accuracy of cross-generational information transfer, which was first enabled by the onset of language. The accumulation of knowledge concerning advanced tool production results in a new intelligent process: technological evolution. The goal of this new evolutionary process is to produce artifacts that are most stable with respect to the parent process's goals (the goals of the human brain), i.e. tools that provide the most benefit to humans.

Technological Evolution Level 1: Computing machines
The continual generation of new knowledge (driven by information-based rewards, collected/recorded/analyzed/organized with physical tools, and shared across multiple generations) enables the creation of increasingly complex physical artifacts. These artifacts are increasingly helpful to humans in achieving their goals (eating, socializing, reproducing, acquiring information, etc.), which support the goals of genetic evolution (proliferation of the most stable genes), which is confined by the simple goal of universal stability-based evolution. The evolution of technology operates at a scale much faster than genetic evolution, so it produces the equivalent of the next addition to the brain before genetic evolution has a chance. This product, the computing machine, is an extension to the most advanced area of the human brain, the prefrontal cortex. It allows the execution of arbitrary algorithms and simulations much more quickly than the prefrontal cortex itself, enabling humans to solve all kinds of symbolic problems more quickly and effectively.

Technological Evolution Level 2: Intelligent computing machines
Technological evolution continues to produce increasingly useful artifacts until a milestone is reached: an artifact with the same degree of intelligence and autonomy as the human, i.e. a human-level artificial intelligence. This artifact boosts the ability of humans to achieve their goals in an exponential way: machines continually design and build the next generation of machines, each better/faster/cheaper than the last. The artifact itself represents the next child intelligent process with goals defined by the parent intelligence (technological evolution), which could include anything that helps (or at least does not harm) its parent process in achieving its goals.

What's Next?
I won't try to speculate here about specifics, but it is expected that, barring some major catastrophe, the same overall process continues: intelligent processes tend to produce other intelligent processes which help achieve the goals of the parent process (or at least don't contradict them).

Lineage

The following is an abridged lineage of the intelligent processes described here, starting from the oldest ancestor:

1. Physical Evolution (goals: physical stability)
2. Genetic Evolution (goals: proliferation of genes)
3. Brains (goals: eat/sleep/avoid pain/socialize/reproduce/acquire information/etc.)
4. Technological Evolution (goals: help humans achieve their goals)
5. Intelligent Computing Machines (goals: arbitrarily defined by tech evolution)

(Not listed here are all kinds of cultural evolution, including language, music, the free market, etc. Each of these represents a separate branch of the intelligence tree, which, like the others, must not violate the goals of the parent intelligent process.)