Tuesday, October 23, 2007

Video: What We Still Don't Know

This 48-minute video, entitled "What We Still Don't Know", discusses some of the fundamental questions regarding the nature of the universe. It progresses through the following questions:

1. Was the universe designed by an intelligent entity, or did it come about through random interactions constrained by fundamental physical laws?
2. Why do the fundamental physical constants appear to be "tuned" precisely to support life?
3. Are there many universes (a "multiverse")? If so, maybe they all have different physical constants, and ours is just one of the few that supports life.
4. (Coming full circle...) In the same way that we can simulate life-filled universes in computers, is it possible that some superintelligent entity may have created our universe (perhaps with similar motivations to our own)?

Conway's Game of Life is used as an enlightening example of complexity arising from a simulated universe based on simple rules. Toward the end, it includes some commentary by Nick Bostrom, one of my favorite philosophers.
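For anyone unfamiliar with it, the Game of Life's entire "physics" fits in a few lines: each cell on a grid lives or dies based only on its eight neighbors, yet gliders, oscillators, and even self-replicating patterns emerge. Here is a minimal sketch (my own illustration, not from the video):

```python
from collections import Counter

def step(live):
    """Advance one generation. `live` is a set of (x, y) live cells."""
    # Count how many live neighbors every candidate cell has.
    counts = Counter(
        (x + dx, y + dy)
        for (x, y) in live
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    return {
        cell
        for cell, n in counts.items()
        # Birth: a dead cell with exactly 3 live neighbors comes alive.
        # Survival: a live cell with 2 or 3 live neighbors stays alive.
        if n == 3 or (n == 2 and cell in live)
    }

# A "blinker" oscillates with period 2: a horizontal row of three
# cells becomes a vertical column, then flips back.
blinker = {(0, 1), (1, 1), (2, 1)}
after_one = step(blinker)    # vertical column at x = 1
after_two = step(after_one)  # back to the original horizontal row
```

That's the whole rule set; everything interesting in the video's examples follows from iterating it.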

Thursday, October 18, 2007

Simulated Human Test Application Plans

I'm getting to the point in my research where I need to begin testing the larger, integrated intelligent system. Enough of the brain-based components have been implemented and tested in isolation that it's time to hook things together and apply them to some initial tasks. I will probably start without all the components just to make it easier to debug problems at first. For example, I can ignore the cerebellum, prefrontal cortex, and possibly the hippocampus because they aren't strictly necessary for very simple motor control tasks. The sensory cortex, motor cortex, and basal ganglia are necessary, though: at a bare minimum, you must be able to represent perceptions (sensory cortex), represent actions (motor cortex), and choose which actions to perform based on perceptions and a learned value function (basal ganglia).
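That bare-minimum loop (represent a perception, consult a learned value function, emit an action) can be sketched in a few lines. To be clear, every name below is hypothetical, purely for illustration, and is not taken from my actual components:

```python
import random

class MinimalAgent:
    """Toy sketch of a perceive-evaluate-act loop with a learned value
    function. Roughly: state representation stands in for sensory cortex,
    the action repertoire for motor cortex, and epsilon-greedy selection
    over learned values for the basal ganglia's role."""

    def __init__(self, actions, epsilon=0.1, alpha=0.1):
        self.actions = actions   # available motor actions
        self.values = {}         # learned (state, action) -> value table
        self.epsilon = epsilon   # exploration rate
        self.alpha = alpha       # learning rate

    def choose(self, state):
        # Usually pick the highest-valued action; occasionally explore.
        if random.random() < self.epsilon:
            return random.choice(self.actions)
        return max(self.actions,
                   key=lambda a: self.values.get((state, a), 0.0))

    def learn(self, state, action, reward):
        # Nudge the stored value toward the observed reward.
        old = self.values.get((state, action), 0.0)
        self.values[(state, action)] = old + self.alpha * (reward - old)
```

Even this toy version makes the point: with only those three pieces, an agent can already improve at a simple task, which is why the other brain regions can wait.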

For an initial test environment, I plan to create a physically simulated human situated in a simulated room. (I've used this type of setup several times before, so it shouldn't take too long to make.) The simulated human will have tactile, visual, and vestibular sensory inputs, and it will have direct control over the various muscles that move its limbs. The first task will be something like learning to roll over from its back to its stomach. Later testing could include gross motor tasks like standing up and walking around. At first the room will contain only the human, but eventually I would like to add simulated toys in order to provide new learning opportunities.
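The interface between the brain model and the simulated body boils down to a small contract: read the three sensory channels, write muscle activations. A rough sketch of what that contract might look like (again, hypothetical names only; a real version would sit on top of a physics engine):

```python
from dataclasses import dataclass, field

@dataclass
class SensoryInput:
    """One snapshot of the three sensory channels described above."""
    tactile: list = field(default_factory=list)     # per-patch skin pressures
    visual: list = field(default_factory=list)      # pixel intensities
    vestibular: list = field(default_factory=list)  # head orientation/acceleration

class SimulatedHuman:
    """Stub for the simulated body: muscle commands in, sensations out."""

    def __init__(self, num_muscles):
        self.muscle_commands = [0.0] * num_muscles

    def set_muscle(self, index, activation):
        # Clamp to [0, 1]: muscle activation is non-negative and bounded.
        self.muscle_commands[index] = max(0.0, min(1.0, activation))

    def sense(self):
        # A real implementation would query the physics simulation here;
        # this stub just returns empty channels.
        return SensoryInput()
```

Keeping the interface this narrow should make it easy to swap in richer bodies and rooms later without touching the brain-side components.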

So I'm excited to see how this goes. I have a lot of ideas about where to go next, but first I need to work out all the bugs on some basic motor control tasks.