Monday, February 14, 2011

Practical Solutions to Hard AI Problems

An observation: the practical solutions to several recent hard AI problems seem to favor ensemble-based approaches. For example:
  • Data compression: record-setting PAQ-based compressors are "context mixing" algorithms that combine several context/prediction models.
  • Netflix Prize: the winner used a blend of many different predictors.
  • IBM's Watson Jeopardy player: their DeepQA architecture considers several hypotheses simultaneously and chooses the one with the highest confidence score, or none if the maximum confidence is too low.
In each case the solution is not one really good model but rather a mixture of several independent models. Maybe the reason is simply that inductive learning is an under-determined problem: for any given data set there are many possible explanatory models. Occam's razor (formalized by Kolmogorov complexity) tells us to assume the simplest model/program that could have generated the data. This assumption might be wrong, especially with limited data, but it's the only sane one to make.
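To make the pattern concrete, here is a toy version of prediction mixing (illustrative only; real context mixers like PAQ use gated logistic mixing, and the models here are hypothetical):

```python
# Toy ensemble: mix several bit predictors by weighting each one by how
# accurate it has been. Illustrative only -- not PAQ's actual algorithm.
def mix(predictions, weights):
    """Weighted average of each model's probability that the next bit is 1."""
    return sum(p * w for p, w in zip(predictions, weights)) / sum(weights)

def update_weights(weights, predictions, actual_bit, rate=0.1):
    """Boost predictors that were closer to the observed bit."""
    return [w + rate * (1.0 - abs(p - actual_bit)) for w, p in zip(weights, predictions)]

preds = [0.9, 0.6, 0.2]        # three independent models' estimates
weights = [1.0, 1.0, 1.0]
blended = mix(preds, weights)  # the ensemble's combined estimate
weights = update_weights(weights, preds, actual_bit=1)  # model 1 gains the most
```

The point is only that the combination and the per-model credit assignment are each a couple of lines; the hard part is having diverse, independently wrong models to combine.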

For any data set the space of candidate models is usually so large that we can't just enumerate them and pick the simplest; we have to sample the model space somehow. If we have prior knowledge of good models, we should use it; otherwise we can sample randomly (as in random forests).
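Here is a toy illustration of that sampling idea, combining random search with an Occam-style complexity penalty (an MDL-flavored sketch invented for illustration, not any published algorithm):

```python
import random

# Toy "sample the model space" search: draw random linear models y = s*x + b
# and score each by fit plus a simplicity penalty.
data = [(x, 2 * x) for x in range(10)]  # data actually generated by y = 2x

def score(slope, intercept, lam=0.5):
    sse = sum((y - (slope * x + intercept)) ** 2 for x, y in data)
    size = (slope != 0) + (intercept != 0)  # crude model-complexity measure
    return sse + lam * size

random.seed(0)
samples = [(random.randint(-3, 3), random.randint(-3, 3)) for _ in range(2000)]
best = min(samples, key=lambda m: score(*m))  # simplest exact fit: (2, 0)
```

With enough random samples the search recovers (2, 0), the simplest model that explains the data, without ever enumerating the full model space.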

Matt Mahoney (PAQ inventor): "Modeling is provably not solvable." There is no computable procedure that finds the optimal model for any problem, and there's no way to tell if any model is the best/simplest.

Marvin Minsky: "The basic idea I promote is that you mustn't look for a magic bullet. You mustn't look for one wonderful way to solve all problems. Instead you want to look for 20 or 30 ways to solve different kinds of problems. And to build some kind of higher administrative device that figures out what kind of problem you have and what method to use."

Monday, June 29, 2009

IDSIA Postdoc, iBonsai, PhD Prelim, Sapience Engine, etc.

It's been several months since I posted here, mainly because I've been busy working on my research code and my PhD preliminary proposal/presentation. Here's a quick summary of things since March...

In March I applied for a postdoc position at Juergen Schmidhuber's lab at IDSIA in Lugano, Switzerland. At the AGI 2009 conference I met with Juergen to interview for it. Shortly afterwards I was offered and accepted the position, starting in January 2010! So now I need to hurry and wrap up the PhD before then, which is stressful (because I hate trying to write up work until it's truly ready) but also good (because I'm gonna go crazy if I'm in school much longer). The postdoc will involve research related to artificial curiosity, which is very exciting to me, especially considering that I'll be working with Juergen, one of the most important researchers in the field. I'll be working with the iCub robot (pictured below) at IDSIA for much of the time. (Next month I'll be attending the RobotCub summer school to learn more about the iCub.)

iBonsai (details here) was released for the iPhone/iPod touch on December 28, 2008. My primary motivation for making this was to make money so I can continue my AI research unconstrained. (Despite the fact that I'm in it for the money, I can't stand to put out a sub-par app... I did spend several months of nights and weekends tweaking the algorithm and visual style, and I'm proud of the result.)

After six months of sales I can't say I'm ready to retire, but I'd say it has done better than expected. The picture below shows the top 100 apps list in the iTunes App Store on 6-9-09. Note that on this day iBonsai is right behind Myst and is beating several popular apps like Fieldrunners, Koi Pond, Ocarina, and SimCity. This high rank is mainly due to being featured on the App Store that week (in Staff Picks in the US, and in What's Hot in several other countries). Overall iBonsai has been downloaded 95,000 times (mostly free downloads during short promotional periods to gain exposure).

I would really like to make some more apps, though I don't know when I'll find time to do it. I made a list of possible app ideas last fall and decided to start with something small (iBonsai) just to test the waters. Maybe when I finish the postdoc I'll continue iPhone app development as a source of income. Or maybe I'll find time on the weekends. We'll see. I must say it is much more satisfying to be able to point to a real product and say, "Wanna see my iPhone app?" instead of, "Wanna see this project I've been working on? I posted the source code on my website. You have a C++ compiler, right?"

PhD Preliminary Proposal
I wrote up a research proposal for my PhD in April and May, then presented it to my committee in early June. The title of the proposal (and tentative title of the final thesis) is "Sapience: A Brain Inspired Cognitive Architecture." The proposal is mainly a high-level description of my Sapience Architecture, which represents the bulk of the conceptual work I've been doing over the past several years. This architecture is fully implementable; each component can be (and has been) implemented in software and tested in simulated or robotic bodies. It comprises five main components inspired by the brain's sensorimotor cortex, hippocampus, basal ganglia, cerebellum, and prefrontal cortex. And it's intended to be useful, with behavioral shaping provided by programmer-defined reinforcements.

It's been a little while since I've written up any research, so it was a bit of a struggle to convert my ideas into such a formal document. I feel like I'm good at writing, given enough time (and admittedly, this proposal could have used more time...); it's just very mentally taxing to produce a document worth other people's attention. One thing that helped was to create diagrams. I made a high-level architecture diagram (shown below) and a more detailed one for each of the main components. I think these really help clarify the design much better than words alone.

I'm glad to have written this proposal because it forced me to articulate several of my underlying assumptions. For example, I describe the "purpose" of this architecture in terms of reinforcement learning: the system's objectives include external reinforcements (i.e. programmer-defined goals), which make the system practically useful, and internal curiosity reinforcements, which provide a source of autonomous self-development even without a human teacher. Corresponding to these rewards are two useful metrics for measuring learning progress: external reward intake (goal achievement rate) and world model improvement rate (information gain).
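Those two metrics could be folded into a single learning-progress scalar. The sketch below is my own illustrative formulation, not the proposal's actual equations:

```python
# Illustrative combination of the two metrics: goal achievement rate plus a
# curiosity term proportional to how fast the world model improved.
# (Toy formulation; the symbols and weighting are not from the proposal.)
def learning_progress(external_rewards, model_errors, curiosity_weight=1.0):
    goal_rate = sum(external_rewards) / len(external_rewards)
    # Information-gain proxy: average drop in prediction error per step.
    improvement = (model_errors[0] - model_errors[-1]) / max(len(model_errors) - 1, 1)
    return goal_rate + curiosity_weight * improvement

# An agent earning some reward while its model error falls scores well:
p = learning_progress([0, 1, 0, 1], [0.9, 0.7, 0.5, 0.4])
```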

At this point I have a software implementation of each of the 5 components, along with the necessary code to integrate them into a cohesive "software brain" (described next). Now the primary work involves testing each component in isolation and in combination with the other components. Part of the testing process will involve a simulated human arm, complete with proprioceptive, tactile, and vision sensors and servo motors (shown below).

Sapience Engine
For my PhD thesis I'm calling the conceptual work the Sapience Architecture, which is basically a blueprint for a practical, engineered thinking system. The corresponding software implementation of this architecture is called the Sapience Engine. I plan to use this software as a research platform for my postdoc position at IDSIA, hopefully applying it to the iCub.

I'm pretty excited about this software. When I started grad school in 2003, I originally wanted to build a black box software brain, and that's exactly what the Sapience Engine is. It's shaping up to be a pretty powerful system, scalable to high-dimensional inputs and outputs, with a very general behavioral shaping system. The API is very simple. It's written in C++ and has Python bindings (via ctypes). It has built-in multithreading and runtime performance profiling. It has been structured in a way that can be easily adapted for clustered computers (e.g., via MPI). Several pieces could even be GPU-parallelized.
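The post doesn't show the API, but a "black box software brain" with a simple interface might look roughly like this from the Python-bindings side. Every name below is invented for illustration and is NOT the real Sapience Engine interface:

```python
# Hypothetical usage sketch of a "black box software brain"; all names here
# are invented and do not reflect the actual Sapience Engine API.
class BlackBoxBrain:
    def __init__(self, num_inputs, num_outputs):
        self.num_inputs = num_inputs
        self.outputs = [0.0] * num_outputs

    def step(self, observation, reward):
        """One sense-think-act cycle: sensor data plus a shaping reward in,
        motor commands out. (Learning would happen inside here.)"""
        assert len(observation) == self.num_inputs
        return list(self.outputs)

brain = BlackBoxBrain(num_inputs=3, num_outputs=2)
for t in range(10):
    actions = brain.step(observation=[0.0, 0.5, 1.0], reward=0.0)
```

The appeal of this kind of interface is that the caller never touches the internals: sensations and rewards go in, actions come out, every cycle.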

I also have written a very useful real-time probe tool, designed as a client/server pair: I can link any Sapience Engine-powered program with the Sapience probe server, which provides access to various internal variables, then run a separate Sapience probe client (earlier version pictured below) which displays real-time plots, 2D array visualizations, neural network adaptation, etc. This "brain debugger" is turning out to be a crucial tool in the testing process.

I Don't Want to Do Academic Research
Ok, this statement doesn't really make sense considering that I'm currently finishing a PhD and preparing to start a postdoc position. I guess what I mean is that the idea of being a traditional academic researcher does not appeal to me. I hate the idea of structuring my research in order to generate publications at a fixed rate. That just doesn't make sense to me. (Well, it makes sense in general in that it makes the academic system scalable to lots of researchers, but I don't think it's the best strategy for every project or person.) Also, I think my medium of choice is software development, not publications. I would rather have a list of open source software projects on my website than a list of papers. A well-designed, well-written, timely software release can have an overall impact similar to that of an influential paper.

I would prefer to work on big ideas, to work on them long enough to see them succeed (even if that means several years without publishing), then write up papers after the fact. Some ideas just seem to work better that way. Lots of trial and error and engineering work up front, then scientific analysis later. For example, the automobile, the world wide web, the large hadron collider... It doesn't always make sense to do a lot of scientific work until the engineering work is complete. And the same might be true for general AI.

On a personal level, I think what bothers me most about traditional academic research is the expectations. I work best when I feel unconstrained by external pressures (to make money, to publish papers, to reach certain artificial objectives, etc.). In that sense, I suppose what I'm looking for long-term is complete freedom of expression within the realm of AI research. I expect my work to be very useful, but I just don't want to feel constrained. I want to be an engineer with the motivation of an artist. Over the next several years I intend to find a way to make that happen, ideally by making a big chunk of money up front to support several years of unconstrained creativity.

Monday, March 23, 2009

AGI 2009 Conference

I attended the 2nd AGI conference a few weeks back. The goal of the conference is to help organize the field of artificial general intelligence (AGI).

Juergen Schmidhuber (pictured above) gave the keynote, entitled "The New AI." I was especially interested in the part of his talk on artificial curiosity. Marcus Hutter gave two good paper presentations, "Feature Markov Decision Processes" and "Feature Dynamic Bayesian Networks." John Laird's talk on the SOAR architecture included a helpful definition of cognitive architectures (specific set of fixed mechanisms) vs. frameworks (neural nets, rule-based systems, etc.). There was an interesting mixture of people there (maybe 100 total?) from academia and AGI-based startup companies and organizations.

In order for the AGI field to move forward in a cohesive, organized way, it will be important to define standard evaluation metrics. (The conference session on creating an "AGI preschool" discussed this issue.) This seems to be one of the biggest hurdles in the near term. Communication among researchers is already difficult since nearly everyone uses a different set of terminology, and the lack of standard evaluation and comparison methods makes it even more difficult. Producing solid metrics might even be the most crucial step: once you know what you're measuring, it's much easier to work towards that goal. However, general intelligence is really hard to measure, even in humans.

The best starting point is probably our current definitions of general intelligence, which usually take the form "an agent must be successful at achieving goals/maximizing rewards in a wide variety of environments." So I'm thinking that a good practical approach is the one described in a paper by Thomas Hazy, Michael Frank, & Randall O'Reilly (Towards an Executive Without a Homunculus: Computational Models of the Prefrontal Cortex/Basal Ganglia System): "To the extent that the same basic model can account for a progressively wider range of data, it provides confidence that the model is capturing some critical core elements of cognitive function." So we can build a standardized, ever-growing repository of small tasks, each with a clear measure of success/failure (either binary or scalar). Then we can subject our AGI systems to the entire test set and measure general intelligence performance as the fraction of tests passed. Our confidence in the metric should then be proportional to the number and variety of tasks in the test set. I can't think of a better, simpler way to measure general intelligence.
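The fraction-of-tests-passed metric is trivial to compute (a minimal sketch; the task names are made up):

```python
# Minimal sketch of the proposed metric: fraction of battery tasks passed,
# where scalar results in [0, 1] count partially. Task names are hypothetical.
def general_intelligence_score(results):
    return sum(results.values()) / len(results)

battery = {"navigate_maze": 1, "sort_objects": 0, "track_target": 0.5}
gi = general_intelligence_score(battery)  # (1 + 0 + 0.5) / 3 = 0.5
```

The metric itself is the easy part; the value comes entirely from growing the number and variety of tasks in the battery.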

At the conference I showed a live demo of my sensory cortex model learning from natural images, along with the following poster (full-sized image available from my website):

Friday, February 20, 2009

Practical Mind Control

Effective mind control (for any purpose) is not about making people do things they don't want to do. It's about changing what they want. Then they think they still have free will over their own decisions.

To accomplish this, simply expose them to your idea/product/meme repeatedly. Here's how it works:
  • The more we experience something, the better we can imagine it.
  • The better we can imagine something, the more we choose to think about it.
  • The more we think about something, the more it influences our actions.

The more we experience something, the better we can imagine it.
Our brains build representations of things as they receive data samples of those things; sensations physically change our mental hardware. The more samples from a particular data source (foods, music styles, visual art styles, places), the more accurate our mental representation of that source.

The better we can imagine something, the more we choose to think about it.
Our minds are attracted to some thoughts over others.  The question of which thoughts are most attractive can be answered by the theory of curiosity rewards: our brains produce internal rewards as long as they can improve at predicting new data. The better our mental representation of something, the more we are able to notice, appreciate, predict, and enjoy its complexities (for example, the complex flavor patterns in coffee, wine, chocolate, olives, and cheese). These situations can be intrinsically rewarding as long as we get better at understanding/predicting them. (When we can no longer improve, boredom ensues.)

Thus, we perform a type of unconscious mental rehearsal of a small subset of possible thoughts. Our attention is most focused on those ideas which provide the most rewarding progress towards better prediction, which tend to be the ones about which we have significant experience (and thus mental representation). As we continually gravitate towards those ideas, they become the easiest to recall.

(In some cases there might even be a positive feedback loop here: the better we can imagine something, the more it evokes curiosity rewards, the more we want to experience it, the more we do experience it, the better our mental representation, the better we can imagine it...  The process is bootstrapped by an outside force which provides the initial exposure, but this feedback loop keeps it going.)
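The curiosity account sketched above is easy to simulate: make the internal reward equal to the improvement in prediction, and boredom falls out automatically once the model stops improving (a toy model following the description above):

```python
# Toy curiosity reward: internal reward equals the improvement in prediction.
# When improvement stops, reward (and interest) drops to zero -- boredom.
def curiosity_rewards(prediction_errors):
    return [max(prev - cur, 0.0)
            for prev, cur in zip(prediction_errors, prediction_errors[1:])]

# Early exposures improve the model a lot; later ones barely at all:
errors = [1.0, 0.6, 0.4, 0.3, 0.3, 0.3]
rewards = curiosity_rewards(errors)  # roughly [0.4, 0.2, 0.1, 0.0, 0.0]
```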

The more we think about something, the more it influences our actions.
The burden of making a decision (e.g., which of several products to buy) is lessened by having a short list of options in mind.

If the cost of making a decision is factored in (which is usually the case), we must find a balance between picking the best option and minimizing the time needed to make the decision itself. The weighting of these two factors depends on the cost of the decision outcome vs. the cost of wasting time making the decision; we can afford to spend more time if the decision outcome is more important.

More exposure to/thinking about one option makes it easier to recall vs. others, which shortens the decision process, possibly to the point where the other options are not even worth recalling...

By the transitive property, we tend to choose things we have experienced most. Thus, to make people like/choose your idea/product/art, simply expose them to it repeatedly.

George Costanza utilizes this mind control technique in the Seinfeld episode The Chicken Roaster. Heather: "Alright George, I'll be honest. The first time we went out, I found you very irritating, but after seeing you for a couple of times, you sorta got stuck in my head... Co-stanza!" (to the tune of "By Mennen").

Note that this only works for emotionally/value-neutral things. If a person already has an aversion to something, simple exposure might not be enough to make it attractive. But repeated exposure to an initially neutral thing/place/idea tends to make it more attractive than other still-neutral options.

This is the essence of advertising in general, the idea that "any press is good press," and the rich-get-richer type of driving force behind all kinds of pop culture phenomena. It is practical on a personal level (self mind control) in terms of discovering new tastes. Our dislike for certain things should not be considered an intrinsic property of ourselves, but rather a set of tastes which we have not yet acquired. Expect not to like things at first; everything is an acquired taste which can be enjoyed with a certain amount of practice (although the investment might not always be worth the effort).

Wednesday, December 31, 2008

Brainpower Labs LLC

I decided to go ahead and start a company. I was planning on doing this eventually after I finish my PhD, but it seems like a good idea just to get the ball rolling now. The plan is to commercialize my AI research.

Back in 2004 and 2005 I just assumed that my research would always be open source. I released my master's thesis implementation as the open source project Verve. I posted on this blog in detail about every idea and simulation result. Monetizing this work didn't appeal to me; I was just happy to be able to work on something as grandiose as general AI.

I slowly came to realize that after graduation, I won't be able to sustain my progress without getting a day job. I really want to continue this research full-time, and I really do not want to divide my attention in order to make an income. So what are my options?

I worked on the Netflix Prize for a while (and still do periodically)... a quick $1,000,000 would be a great source of initial funding. (My best result so far is 0.5% better than Netflix's own algorithm, but I need to reach 10% for the win.) Going for such a big prize is a lot of fun, but I'm not betting on it. I still need something a little more predictable.

When the iPhone SDK was announced, I didn't really consider iPhone app development. It seemed like a lot of work and a big distraction from research. But when the iPhone App Store was launched, I started watching the list of top paid apps. Many of them are very simple but seem to sell pretty well. I started thinking more seriously about starting an LLC and building a few simple apps. It would be a good way to get my feet wet in the free market (learning what people want, as Paul Graham would say), plus I would learn the iPhone SDK and be able to use it for other projects. That's not to mention the psychological benefits of making things more concrete... feeling part of the real world and less like a graduate student.

So in October I decided to pull the trigger and formed Brainpower Labs LLC. Our first product, iBonsai, is definitely not AI-related, but it has been a good first app for me to learn the iPhone SDK, OpenGL ES, and the iPhone's performance limitations. (The thing is very impressive for a handheld device, by the way.) It's a little distracting right now switching between AI research and iPhone app development, but I'm hoping the two efforts converge at some point. I'm thinking our next app will be based around some simple AI techniques.

I'm really glad to have started the company now rather than after graduation. Besides having a new source of income (fingers crossed) to ease the transition out of grad school, I now have an immediate outlet for turning research ideas into real applications. As my core intelligence architecture progresses (which I've dubbed the Sapience Engine), I'll be able to use it to produce increasingly interesting applications, which could be iPhone apps, desktop computer software, console video games, or robotics applications.

Monday, December 29, 2008

iBonsai Version 1.0 for iPhone and iPod touch

UPDATE (1/5/09): iBonsai was featured on and

I just finished iBonsai, a new app for the iPhone and iPod touch, which is now available on the App Store:

For more info about this app and my long-term business plans, visit my new company website.

iBonsai Description
Bonsai is the Japanese art of miniaturizing trees by growing them in small pots. Now you can create your own 3D miniature trees right on your iPhone or iPod!

With a tap of your finger, iBonsai's sophisticated generative algorithm begins growing a unique digital tree. No two bonsai trees are the same! After about 30 seconds of growth, your mature bonsai becomes a beautifully rendered image in the sumi-e style of Japanese brush painting.

Enjoy the zen-like relaxing nature of this ancient art form.

- Simple, clean interface.
- Interactive 3D view. Rotate/zoom to see your trees from all angles.
- Many different leaf types: Japanese maple, flowering dogwood, and more (even a rare money tree...).
- Shake your iPhone/iPod to scatter leaves!
- Save images of your favorite trees, then use them for your background.
- Optional gravity-based viewing mode makes the tree appear to float in space.
- Advanced generative algorithm and random number generator give you totally unique results every time. Produce virtually infinite trees!

Tuesday, November 25, 2008

Progression of Intelligent Processes

The purpose of this article is to discuss the possible roots of intelligence in the universe. As with most fuzzy concepts like "intelligence," we must begin by producing several basic formal definitions which we can then use as tools to build more powerful concepts. We attempt to use these definitions to classify intelligent processes and their advancements over time. The result of this thought process is the idea that an intelligent process tends to produce other intelligent processes with goals that also fulfill the goals of the original intelligent process.


System: any collection of interacting components.

State: a complete description of all components of a system.

Process: a sequence of changes to the state of a system.

What's the difference between a "random" and a "non-random" process? It depends on the level of detail of the analysis. If we analyze a complex process operating on a large system at a crude level of detail, representing it with too few variables, then we lose the ability to model all of its predictable effects. We then have to quantify the results with a certain degree of uncertainty/randomness simply because our model lacks detail. However, given infinite resources for analysis, any process becomes non-random.

Stability: a property of the state of a system, proportional to its ability to remain relatively unchanged over time when perturbed only by a completely random process.

A process can change a system from a stable state to an unstable one, or vice versa. It can move a system through a trajectory of stable states. It can even, in a way, give birth to other processes: it can lead a system to a state where other "child" processes are able to act on it as well.

Goal: a target state of a system.

Intelligence: a property of a process, proportional to its ability to move a system closer to a pre-defined goal state.

Goals are defined by outside observers. For any particular analysis of any particular process, we can define whatever goals we want and determine the degree of intelligence present. Thus, intelligence is in a sense arbitrarily defined, but it can be a useful measurement as long as the goals are well-specified. We can say that a process has zero intelligence if: 1) it has no goals defined, or 2) it is completely unable to move a system towards a defined goal.
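That definition can be operationalized as a toy measurement: the rate at which a process closes the distance to the goal state (my own illustrative formalization of the definition above):

```python
# Toy formalization: intelligence of a process = rate at which it moves a
# system toward a pre-defined goal state (zero if no goal is defined, or if
# the process makes no progress).
def intelligence(states, goal, distance):
    if goal is None or len(states) < 2:
        return 0.0
    progress = distance(states[0], goal) - distance(states[-1], goal)
    return max(progress, 0.0) / (len(states) - 1)

d = lambda a, b: abs(a - b)
score = intelligence([10, 7, 4, 1], goal=0, distance=d)  # 9 units closer in 3 steps
```

Note how both zero-intelligence cases from the definition fall out: no goal defined, or no movement toward it.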

Now let's take a step back and think about our universe as a whole. The universe is defined by its initial conditions and its fundamental forces: the initial conditions specify the initial state of all systems, and the fundamental forces constrain the possible processes that can act on those systems. Interestingly, there seems to be a universal process which tends to favor systems in stable states. Imagine the state of any system as a point on a surface (or landscape), where the lowest points are more stable than the peaks. This universal process forces all systems down the slopes of their stability surfaces towards the (locally) minimum points. If the minimum is bowl-shaped, the system will stop changing when it reaches that minimum. (The minimum might be a valley, though, so there can be room for the system to wander around its state space within the valley and remain stable. This is a key requirement for a process to birth another process: when the parent process succeeds in reaching the minimum, the child process can begin to explore the valley.) Systems might go through unstable states temporarily, but they will tend towards the most stable. So what is this universal process which favors stability?

Evolution: a process which tends to change the state of a system to increase its stability, i.e. an intelligent process whose pre-defined goal state is the one with maximum stability.

The universal process described above seems to select the most stable states, allowing them to last, while discarding the unstable ones. We can view it as a type of evolution, "physical evolution," the first intelligent process within the universe.
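The landscape metaphor can be simulated directly: a noisy downhill process settles near a local minimum of the stability surface while still wandering within it (a toy illustration with an invented one-dimensional surface):

```python
import random

# Toy "stability surface" with one bowl-shaped minimum; the system drifts
# downhill under a deterministic pull plus completely random perturbations.
def stability_surface(x):
    return (x - 2.0) ** 2  # most stable state at x = 2

def descend(x, steps=1000, step_size=0.01, noise=0.05, seed=0):
    rng = random.Random(seed)
    for _ in range(steps):
        slope = 2.0 * (x - 2.0)  # gradient of the surface at x
        x += -step_size * slope + rng.uniform(-noise, noise)
    return x

final = descend(x=10.0)  # settles near the minimum, still jittering within it
```

With a flat-bottomed valley instead of a bowl, the same dynamics would leave the state free to wander within the valley while remaining stable, which is the opening a child process can exploit.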

Brain: a system which supports a specific variety of intelligent process. This process represents a transformation from input space (sensory) to output space (motor). Brains are produced by some other intelligent process which specifies their goal states.

Note that evolutionary and brain-based intelligence are processes, and remember that processes can give birth to other processes. In general, processes with a high degree of intelligence tend to lead to the existence of other intelligent processes. A single parent intelligence can even produce an entire family tree of intelligent processes. Furthermore, the child processes tend to be given goal states from a subset of the parent's goal states. A child process whose goals violate those of the parent will not last.


The following list describes the progression of intelligent systems in the universe. Each stage represents the result of some combination of events which produces a kind of intelligence phase change. There can be multiple intelligent processes at work simultaneously (e.g., the original physical evolution, its child "genetic evolution," etc.). We intermix the more significant advances of each process as they appear. The list is, of course, not exhaustive; it represents a very small sample of all the complex processes in existence. Since we must choose a small subset, we focus on those events that are most interesting to humans, i.e. those involved in the generation of our own existence.

Note that the goals of each child intelligent process cannot oppose the goals of the parent process without being destroyed. Also note that a key ingredient to many forms of intelligence, including evolutionary processes and more advanced brain-based intelligence, is the random exposure to new situations which enables trial-and-error-based decision making.

Physical Evolution Level 0: Initial state of the universe
The "null" stage. The universe is in its initial state, determined by some unknown process, waiting for the fundamental forces to begin acting.

Physical Evolution Level 1: Clusters
As soon as the fundamental universal forces start acting, physical evolution appears. Gravity produces clusters of particles from the initial state of matter. Larger clusters have more pulling force than smaller ones; the large get larger, and the small get sucked into the larger ones. Eventually physical evolution produces its first result in achieving its goal of stability: the universe becomes a stable collection of clusters of matter separated by empty space.

Physical Evolution Level 2: Stable molecules
Once the cosmic-scale events have settled down, interesting things begin happening at the microscopic level. Atoms are constantly colliding, "trying out" new ideas for molecules. The molecules that last longer are more stable; if they stay around just a little bit longer than others, they will become more common. So we begin to see physical evolution performing a type of selection process on molecules. Those that are most stable proliferate, and the unstable ones disappear. Each stable molecule flourishes in a certain habitat. There can be multiple stable "species" of molecules that coexist, possibly with symbiotic relationships.

Physical Evolution Level 3: Stable structures
Now that there are stable molecules available, physical evolution can operate on combinations of molecules. Random collisions of molecules produce all kinds of physical structures, some stable, and some not. Physical evolution again selects the more stable structures to proliferate, resulting in a new kind of battle for survival.

Physical Evolution Level 4: The cell wall
A molecular structure is produced that acts as a shield against bombardment: the lipid bilayer. Any collection of molecules with one of these protective shells (i.e. a crude "cell wall") gains a massive advantage over others in terms of mean lifespan. Their existence is still relatively short but much longer than before. In certain hospitable environments, these stable "cells" become common.

Physical Evolution Level 5: Birth of genetic evolution
With the protective cell wall in place, physical evolution can begin to experiment with various modifications to the internal cell structures. As before, any changes that increase the lifespan of the cell produce more long-term stability of that design. The game-changing event at this stage is the appearance of intra-cellular structures which support information-based representations of physical structures (primitive DNA). Physical evolution has given birth to a new intelligent process: genetic evolution.

Genetic Evolution Level 0: Gene expression
The presence of some molecule results in the production of some corresponding physical structure. This is the essence of gene expression. Any cell containing molecule X produces structure Y. Such a procedure is enabled by a combination of structures that acts as a gene expression machine. The details of the actual transformation (from X to Y) are unimportant as long as X reliably produces Y. Cell stability is still fundamentally tied to its physical properties, but with this gene expression machine in place, it is now indirectly tied to the presence of certain genetic molecules. Long-term survival under these new rules depends on having the right combination of these "genes." If having gene X implies having structure Y, and structure Y is related to a stronger cell wall, better repair mechanism, faster acquisition of resources, etc., then cells with gene X will become more common. Essentially, this new intelligent process operates in gene space, which is just a proxy for physical space. Genetic mutations, random events that modify a cell's genetic material, are an essential part of exploring this new gene space. The goals of the new genetic evolution intelligence (proliferation of stable genes) still fall within the constraints of its parent's goals (proliferation of stable structures).

Genetic Evolution Level 1: Cell replication
When cells develop the machinery to copy their own genetic information, they become able to copy their physical structures as well. This produces an explosion in the speed at which genetic evolution can operate. The "fitness" (relative long-term stability) of a given genetic solution is multiplied by the number of instances of that solution: a large population of short-lived physical entities is now more stable than a single long-lived non-replicating entity. The feedback loop (successful genes produce more numerous, stable physical populations, which generate more copies of those genes, and so on) means that systems with replication quickly outnumber non-replicating systems.
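The arithmetic behind this feedback loop can be made concrete with toy numbers (all values here are illustrative, not biological): even when most copies are lost every generation, a replicating lineage dwarfs any single durable individual.

```python
replicators = 1.0        # population of short-lived, self-copying cells
non_replicators = 1.0    # a single durable cell that never multiplies

for generation in range(20):
    replicators *= 2.0   # each surviving member copies itself once
    replicators *= 0.9   # but 10% of the population is lost each generation
# net growth factor 1.8 per generation: the replicating lineage explodes
# while the non-replicating cell, however durable, stays at one
```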

Genetic Evolution Level 2: Motility
Moving around increases the probability of acquiring resources, resulting in an increased ability to build and repair structural elements. Motility is possible even without feedback-based control: simply moving around quickly without any particular target is much better than sitting still. One possible form of early motility is enabled by a basic type of short-term memory: the charging/discharging cycles of cell depolarization, tied to some physical deformation of the cell, produces a variety of repetitive motions, some of which tend to move the cell around in its environment.

Genetic Evolution Level 3: Brains
Targeted acquisition of resources based on sensation is a more advanced form of motility which usurps simple blind movement. When external events are allowed to affect cell depolarization (which drives motility), a feedback loop is present in the system. This presents a new domain for genetic evolution: the control system between sensation and action - the brain. Brains are defined by genes, just like all other structures in the system, so genetic mutations can act upon the parameters of these controllers. Genetic changes are favored that improve the control system in a way that results in more copies of those genes. We consider the transformation process (inputs to outputs) performed by the brain as a child intelligent process of its parent, genetic evolution. The initial brain's implicit goals involve acquiring energy and materials, but can potentially involve anything needed by genetic evolution. Any changes in the brain's parameters are constrained to help achieve the goals of genetic evolution.

Brain Level 0: Simple feedback control
The simplest brains are basic feedback control mechanisms (e.g., P/PD/PID controllers) which transform some sensory input signal into an output control signal. Initial possible "behaviors" for entities evolved in different environments include chemotaxis (chemical gradients), thermotaxis (temperature gradients), phototaxis (light gradients), etc. These feedback-based behaviors provide much more resources for the entity, increasing its long-term stability over those without such skills.
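A proportional controller climbing a chemical gradient captures the essence of chemotaxis in a few lines. This is only a sketch: the quadratic concentration field, the gain, and the step count are invented for illustration.

```python
def concentration(x):
    """Assumed chemical field: concentration peaks at x = 10."""
    return -(x - 10.0) ** 2

def p_step(x, gain=0.05, eps=0.01):
    """Proportional (P) control: move in proportion to the sensed gradient."""
    gradient = (concentration(x + eps) - concentration(x - eps)) / (2 * eps)
    return x + gain * gradient

x = 0.0
for _ in range(200):
    x = p_step(x)
# x has climbed the gradient toward the concentration peak at 10
```

No model of the world is needed; the entity simply moves in whichever direction the locally sensed signal improves, which is exactly the kind of behavior a minimal feedback controller provides.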

Genetic Evolution Level 4: Sexual reproduction
Targeted motility enables sexual reproduction. The success of a gene may depend upon the presence of other genes. In general, more complex structures must be represented by larger chunks of genetic material. The evolution of an entity's genetic material is a slow process when based on asexual reproduction and mutations alone; furthermore, the emergence of complex structures from mutations of an individual genome is fairly improbable. However, the emergence of genetic crossover, the ability of two physical entities to exchange chunks of genetic information, dramatically increases the probability of producing more complex structures. This procedure acts as a wormhole in gene space through which information can warp. The result is that genetic material present in two separate entities, which might produce simple structures in isolation, can combine synergistically to produce much more complex structures in the offspring entities. Genes are now favored that optimize the control system towards more sexual reproduction, e.g., producing an implicit goal in the physical entities of maximizing intercourse.
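Single-point crossover is the simplest version of this gene exchange; a minimal sketch (the genome strings are placeholders, not real genetic encodings):

```python
import random

def crossover(genome_a, genome_b):
    """Single-point crossover: each child gets one chunk from each parent."""
    point = random.randrange(1, len(genome_a))  # cut somewhere inside the genome
    child_1 = genome_a[:point] + genome_b[point:]
    child_2 = genome_b[:point] + genome_a[point:]
    return child_1, child_2

parent_a = "AAAAAAAA"
parent_b = "BBBBBBBB"
child_1, child_2 = crossover(parent_a, parent_b)
# each child combines contiguous genetic material from both parents,
# a jump through gene space that mutation alone would reach only rarely
```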

Genetic Evolution Level 5: Multi-celled entities
A collection of cells that functions together in close proximity provides a benefit to each cell involved. By "cooperating" in a sense, many cells can share the overhead costs of staying alive, amounting to a type of microscopic trade agreement. While sharing the burdens of life with nearby cells, they can start to specialize into different roles, increasing the variety of macro-scale multi-celled structures.

Brain Level 1: Discrete switching control
With the appearance of multi-celled entities, the brain can now consist of a network of specialized neural cells which communicate via synaptic transmission. Each neural cell represents a nonlinear transformation of inputs to outputs, and the collective activity of the neural network can be viewed as a dynamical system. Such dynamical systems can have many stable states. Thus, instead of using a single feedback-based controller, the brain has now evolved multiple discrete control systems ("reflexes"). Each one is used in a different situation (e.g., feeding, swimming, mating, fleeing). The "decision" of when to switch is still solely influenced by the current (or very recent) sensory information; when the entity is exposed to a certain situation (i.e. a certain pattern of sensory inputs), its brain switches to a different stable feedback loop (attractor state in the dynamical system).
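The switching idea can be sketched as a lookup from sensory pattern to discrete behavior. The behaviors, sensor names, and thresholds below are invented stand-ins for the stable attractor states of a real neural network.

```python
def select_behavior(food_scent, predator_scent):
    """Pick one discrete 'reflex' based only on the current sensory pattern."""
    if predator_scent > 0.5:
        return "flee"          # danger preempts everything else
    if food_scent > 0.5:
        return "feed"
    return "swim"              # default: wander

behavior = select_behavior(food_scent=0.9, predator_scent=0.1)
```

Note there is no memory here: the same inputs always produce the same behavior, which is exactly the limitation the later levels (structural learning, value learning) remove.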

Brain Level 2: Language
Language emerges when the brain evolves distinct behaviors based on sensory input patterns caused by other entities. The basic ability to communicate information among individuals has the potential to augment the individual's representation of the state of the world with key information that can improve decision making (e.g., the task of information acquisition can be shared among many individuals). This is a necessary step towards the beginning of cross-generational information transfer, or "culture."

Brain Level 3: Structural learning
Previously, learning was possible based only on very short-term effects (e.g., cell membrane voltage). Now, brains are able to store information indefinitely by making structural changes to themselves based on experiences. This provides the brain with the ability to make decisions (about which action to perform next) based on current sensory information AND past experiences. Individuals can learn from mistakes. This takes some of the burden off genetic evolution; instead of evolving entities whose genes are tuned for very specific environments, it can instead evolve entities whose brains have a certain degree of adaptability, making them successful in a wider variety of environments.

Brain Level 4: Explicit goal representation
Previously, the brain's goals were implicitly determined by genetic evolution (produce behaviors that help proliferate genes). Brains that did not meet these goals were wiped out. Now it is possible to represent goals explicitly in the brain via reward signals (e.g., dopamine). This new brain sub-component is a mapping from certain sensory input patterns ("reward states") to a scalar reward signal. When this reward value is high, the entity's current actions are reinforced. So any situation that increases the brain's reward level will reinforce the recent actions that led to it. This operates on the existing switching control system by adjusting the probabilities of switching to each individual action in any given situation. Interestingly, the specification of which sensory situations produce reward signals is arbitrary and can be defined genetically.
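The reinforcement mechanism can be sketched as adjusting action preferences in proportion to reward. The two actions, the reward rule, and the learning rate are all hypothetical; this is the bandit-style core of the idea, not a model of dopamine.

```python
import random

random.seed(0)  # reproducible toy run
prefs = {"approach": 1.0, "retreat": 1.0}  # hypothetical action preferences

def choose(prefs):
    """Sample an action with probability proportional to its preference."""
    r = random.uniform(0, sum(prefs.values()))
    for action, weight in prefs.items():
        r -= weight
        if r <= 0:
            return action
    return action

def reinforce(prefs, action, reward, rate=0.5):
    """Raise the preference for an action that led to a reward state."""
    prefs[action] += rate * reward

for _ in range(20):
    action = choose(prefs)
    reward = 1.0 if action == "approach" else 0.0  # "approach" hits a reward state
    reinforce(prefs, action, reward)
# "approach" is now preferred, so it will be chosen more often in the future
```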

Genetic Evolution Level 6: Genetic determination of brain goal states
Instead of having to make complex, global changes to the brain in order to add new goals, genetic evolution can now simply modify the mapping from sensory state to reward. The brain's explicit goal representation, which is genetically defined, provides a simple unified architecture for adding, deleting, and adjusting its goals.

Brain Level 5: Value learning
Adjusting the action switching system based solely on the current reward signal is not always ideal; sometimes it is important to take a series of actions that produce no reward in order to achieve a larger final reward. It is possible to circumvent this issue by learning an internal representation of the "value" of various situations. This "value function" (in mammals, the striatum/basal ganglia) is a mapping in the brain from the current sensory experience to an estimate of future reward. Now the action switching system can operate on the long-term expectation of rewards, not just immediate rewards, resulting in individuals which are able to achieve their goals much more effectively.
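The core of value learning can be sketched with a TD(0) update over a three-state chain where the reward arrives only at the end. The states, learning rate, and discount factor are illustrative; this shows only the mechanism by which earlier states acquire value from a delayed reward.

```python
states = ["start", "middle", "goal"]
value = {s: 0.0 for s in states}   # learned estimate of future reward per state
alpha, gamma = 0.5, 0.9            # learning rate, discount factor

def td_update(s, s_next, reward):
    """TD(0): nudge value[s] toward reward plus discounted value of s_next."""
    value[s] += alpha * (reward + gamma * value[s_next] - value[s])

for _ in range(50):
    td_update("start", "middle", 0.0)   # no immediate reward for the first step
    td_update("middle", "goal", 1.0)    # reward only at the final transition
# "middle" now predicts the upcoming reward, and "start" inherits a
# discounted share of it, so unrewarded early steps still look valuable
```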

Brain Level 6: Motor automation
Repetitive motions are often represented as reflexes, but it is also important for an individual to learn novel motions during the course of a lifetime. A special brain structure (in mammals, the cerebellum) enables repetitive motions to be automated. Although the combination of value learning and action switching is a very flexible system, it can be wasteful for these well-learned motion sequences. This new brain structure provides a general-purpose action sequence memory that can offload these sequences, performing them at the appropriate time while allowing the action selection mechanism to focus on novel decisions.

Brain Level 7: Simple context representation
Any brain component that depends upon the state of the environment will perform better given an improved internal representation of the environment. Since the state of the external world cannot be accessed directly by these components, they are only as good as the brain's "mental model" of the world. So a specialized brain structure (in mammals, the archicortex/hippocampus) that can classify the state of the world into one of several distinct categories will help the brain represent value and choose actions more effectively.
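A minimal version of such a classifier assigns the current sensory pattern to the nearest of a few stored context prototypes. The context names, features, and prototype values below are all invented for illustration.

```python
# Hypothetical context prototypes: (light level, food scent)
contexts = {
    "open water": (0.9, 0.1),
    "reef":       (0.5, 0.8),
    "cave":       (0.1, 0.2),
}

def classify(observation):
    """Nearest-centroid classification of the current state of the world."""
    def sq_dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(contexts, key=lambda c: sq_dist(contexts[c], observation))

label = classify((0.45, 0.75))
```

Downstream components (value learning, action switching) can then condition on the discrete label rather than on raw sensory input, which is the benefit this level provides.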

Brain Level 8: Advanced context representation
Any further improvement of the "mental model" of the world is exceedingly valuable to decision making entities. An outgrowth of the initial simple pattern classifier appears (in mammals, the 6-layered cerebral cortex). This enhanced version extracts information content, computes degrees of belief about the world, and presents a summary in a simplified (linearly separable) form for use by other brain components like the value learning and action switching system.

Brain Level 9: Information-based rewards
A new explicit goal in the brain appears as a reward signal based on the information content provided by the entity's sensory inputs. The entity is thus intrinsically motivated to explore unexplored places and ideas. Before this new motivation, behaviors were mainly focused on survival and reproduction with little need for acquisition of new information. Now there is a strong drive to explore the world, simultaneously training and improving the brain's mental model. (An advanced context representation is useless without the motivation to fill it with information.)
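One simple way to realize such a reward is a count-based novelty bonus: the less familiar a situation, the larger the intrinsic reward for visiting it. The situation labels and the 1/n schedule are illustrative choices.

```python
from collections import Counter

visit_counts = Counter()  # how often each situation has been experienced

def information_reward(situation):
    """Intrinsic reward that decays as a situation becomes familiar."""
    visit_counts[situation] += 1
    return 1.0 / visit_counts[situation]

first = information_reward("new cave")
second = information_reward("new cave")
# the second visit is less rewarding than the first, so the entity is
# pushed toward situations it has not yet explored
```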

Brain Level 10: General purpose working memory
All kinds of decisions can be improved by considering various action sequences before physically executing them. This requires the ability to simulate various states of the world within the internal representation, including sequences of actions and their expected consequences and rewards. This type of simulation is accomplished in a special short-term memory array (in mammals, the prefrontal cortex) that can be written to and read from. The read/write operations are extensions of the old action selection system: now, instead of being limited only to physical actions, the brain has acquired the mental actions "read from memory cell" and "write to memory cell."
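The extension of the action set with mental actions can be sketched as follows; the action names, slot addressing, and stored string are hypothetical.

```python
memory = [None] * 4  # a small working-memory array with addressable slots

def act(action, slot=None, value=None):
    """One action interface covering both physical and mental actions."""
    if action == "write":          # mental action: store into a memory cell
        memory[slot] = value
    elif action == "read":         # mental action: recall from a memory cell
        return memory[slot]
    else:
        pass  # physical actions ("move", "grasp", ...) would be handled here

act("write", slot=0, value="plan: approach food")
recalled = act("read", slot=0)
# the same selection machinery that picks physical actions can now
# manipulate internal state, enabling simulated action sequences
```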

Technological Evolution Level 0: First appearance
With the advent of working memory, the current combination of brain structures allows the construction and execution of extremely complex action sequences. This includes several unique new abilities. It is possible for individuals to make physical artifacts ("tools") from materials in the environment, enhancing the effectiveness of the body in various ways: extended reach, impact force, and leverage. Simultaneously, it provides an extension to the brain itself: the "tool-enhanced brain" has an extended long-term memory because it can record information in the environment rather than relying on brain-based memory. This greatly enhances the accuracy of cross-generational information transfer, which was first enabled by the onset of language. The accumulation of knowledge concerning advanced tool production results in a new intelligent process: technological evolution. The goal of this new evolutionary process is to produce artifacts that are the most stable in the space of the parent process's goals (i.e. the goals of the human brain): tools that provide the most benefit to humans.

Technological Evolution Level 1: Computing machines
The continual generation of new knowledge (driven by information-based rewards, collected/recorded/analyzed/organized with physical tools, and shared across multiple generations) enables the creation of increasingly complex physical artifacts. These artifacts are increasingly helpful to humans in achieving their goals (eating, socializing, reproducing, acquiring information, etc.), which support the goals of genetic evolution (proliferation of the most stable genes), which is confined by the simple goal of universal stability-based evolution. The evolution of technology operates at a scale much faster than genetic evolution, so it produces the equivalent of the next addition to the brain before genetic evolution has a chance. This product, the computing machine, is an extension to the most advanced area of the human brain, the prefrontal cortex. It allows the execution of arbitrary algorithms and simulations much more quickly than the prefrontal cortex itself, enabling humans to solve all kinds of symbolic problems more quickly and effectively.

Technological Evolution Level 2: Intelligent computing machines
Technological evolution continues to produce increasingly useful artifacts until a milestone is reached: an artifact with the same degree of intelligence and autonomy as the human, i.e. a human-level artificial intelligence. This artifact boosts the ability of humans to achieve their goals in an exponential way: machines continually design and build the next generation of machines, each better/faster/cheaper than the last. The artifact itself represents the next child intelligent process with goals defined by the parent intelligence (technological evolution), which could include anything that helps (or at least does not harm) its parent process in achieving its goals.

What's Next?
I won't try to speculate here about specifics, but it is expected that, barring some major catastrophe, the same overall process continues: intelligent processes tend to produce other intelligent processes which help achieve the goals of the parent process (or at least don't contradict them).


The following is an abridged lineage of the intelligent processes described here, starting from the oldest ancestor:

1. Physical Evolution (goals: physical stability)
2. Genetic Evolution (goals: proliferation of genes)
3. Brains (goals: eat/sleep/avoid pain/socialize/reproduce/acquire information/etc.)
4. Technological Evolution (goals: help humans achieve their goals)
5. Intelligent Computing Machines (goals: arbitrarily defined by tech evolution)

(Not listed here are all kinds of cultural evolution, including language, music, the free market, etc. Each of these represents a separate branch of the intelligence tree, which, like the others, must not violate the goals of the parent intelligent process.)