Friday, November 30, 2007

NIPS 2006 Talk, "Bayesian Models of Human Learning and Inference"

This NIPS 2006 talk by Josh Tenenbaum, entitled "Bayesian Models of Human Learning and Inference," is a good overview of how Bayesian techniques are being used to model human intelligence. The NIPS page includes an abstract of the talk, the PowerPoint presentation (170 slides), and a video of the talk (about two hours long). (It looks like the large 900x600 video doesn't include sound, so don't download that one.)

This talk was interesting to me because I am using Bayesian networks (non-parametric, hierarchical) for sensory and motor representations. It's good to see that people are seriously considering Bayesian methods as realistic models of human intelligence.

Tuesday, November 13, 2007

Somatosensory Topographic Map Formation, Part 2

(See Somatosensory Topographic Map Formation, Part 1 for more details.)

I ran some more tests, this time training topographic maps on a set of 3D models, including a hand, a human head, a wooden doll, a sword, and a full human body. Each image below shows a sequence captured while training on one of the models. One interesting detail is that the maps learn to devote more resources to the regions that are sampled most often, which correspond to the parts of the models with the most vertices. For example, the human head model has a lot of vertices in the mouth cavity, so that region becomes especially well-represented in the topographic map.

This is the basic idea behind the homunculus representation of the sensory and motor cortices: the brain regions that represent the body learn to devote more real estate to the areas that need high-resolution representations. Check out this image from the Wikipedia article:

Monday, November 12, 2007

Burlington Hawkeye 52 Faces Article

The Hawkeye, a newspaper published in Burlington, IA (my hometown), runs a special section called 52 Faces every Sunday. Each week features a different person, usually a local resident of the Burlington area. My mother nominated me for this week's edition. The article talks about my research on brain-inspired artificial intelligence and a little about my polyphasic sleep experience.

Here is the article:

Friday, November 02, 2007

Somatosensory Topographic Map Formation

To give my simulated human a sense of touch, I need its relatively complex body surface represented as a simple 2D array. The hard part is that the transformation must maintain topographic relationships (i.e., points near each other on the body surface should be near each other in the final 2D array). Fortunately, I already have a tool to do this: my topographic map code.

The following images show a topographic map learning to cover a simple body surface. In this case the body is just a human torso and arms. It starts out as a flat 2D sheet and learns to wrap itself around the body. After enough training it will cover the entire body. Where does the training data come from? I simulate random data points from the body surface... sort of like having someone constantly poke it in different places. The topographic map responds to these data points by moving part of itself closer to them. One cool thing about my method is that it automatically learns to represent the most-touched regions with higher fidelity; it's like how our brains use more real estate to represent our hands and faces. (In this example, I'm sampling the body surface uniformly, so the effect doesn't show up in the images.)

One way to think of this is a flat sheet of brain (somatosensory cortex) stretching itself in sensory space to represent the entire body surface.
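
For anyone curious about the mechanics, here is a rough sketch in Python of this kind of self-organizing map update. It is not my actual implementation; the grid size, the Gaussian neighborhood function, and the stand-in surface sampler are all simplifying assumptions:

import numpy as np

# A 2D grid of units, each with a learned 3D position in body-surface space.
grid_w, grid_h = 16, 16
units = np.random.rand(grid_w, grid_h, 3)

def sample_body_surface():
    # Stand-in for poking the body mesh at a random point; here we just
    # sample a random point on a unit sphere.
    p = np.random.randn(3)
    return p / np.linalg.norm(p)

def train_step(sample, learning_rate=0.1, radius=2.0):
    # Find the unit whose position is closest to the poked point.
    dists = np.linalg.norm(units - sample, axis=2)
    wx, wy = np.unravel_index(np.argmin(dists), dists.shape)
    # Pull the winner and its grid neighbors toward the sample. Using
    # grid distance (not 3D distance) for the neighborhood weighting is
    # what preserves topographic relationships.
    for x in range(grid_w):
        for y in range(grid_h):
            grid_dist_sq = (x - wx) ** 2 + (y - wy) ** 2
            influence = np.exp(-grid_dist_sq / (2.0 * radius ** 2))
            units[x, y] += learning_rate * influence * (sample - units[x, y])

for i in range(10000):
    train_step(sample_body_surface())

Because frequently sampled regions win more updates, more units end up migrating there, which is the resource-allocation effect described above.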

Tuesday, October 23, 2007

Video: What We Still Don't Know

This 48-minute video, entitled "What We Still Don't Know," discusses some of the fundamental questions regarding the nature of the universe. It progresses through the following questions:

1. Was the universe designed by an intelligent entity, or did it come about through random interactions constrained by fundamental physical laws?
2. Why do the fundamental physical constants appear to be "tuned" precisely to support life?
3. Are there many universes (a "multiverse")? If so, maybe they all have different physical constants, and ours is just one of the few that supports life.
4. (Coming full circle...) In the same way that we can simulate life-filled universes in computers, is it possible that some superintelligent entity may have created our universe (perhaps with similar motivations to our own)?

Conway's Game of Life is used as an enlightening example of complexity arising in a simulated universe based on simple rules. Towards the end it includes some commentary by Nick Bostrom, one of my favorite philosophers.

Thursday, October 18, 2007

Simulated Human Test Application Plans

I'm getting to the point in my research where I need to begin testing the larger, integrated intelligent system. Enough of the brain-based components have been implemented and tested in isolation that it's time to hook things together and apply them to some initial tasks. I will probably start without all the components just to make it easier to debug problems at first. For example, I can ignore the cerebellum, prefrontal cortex, and possibly the hippocampus because they aren't strictly necessary for very simple motor control tasks. The sensory cortex, motor cortex, and basal ganglia are necessary, though: at a bare minimum, the system must be able to represent perceptions (sensory cortex), represent actions (motor cortex), and choose which actions to perform based on perceptions and a learned value function (basal ganglia).
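
Here is a very loose Python sketch of that bare-minimum loop. The discrete perceptions and actions and the tabular value function are illustrative assumptions, not my actual component interfaces:

import random
from collections import defaultdict

# Bare-minimum loop: represent perceptions, represent actions, and pick
# actions using a learned value function (a stand-in for basal ganglia).
actions = ["flex_arm", "extend_arm", "rest"]
value = defaultdict(float)  # (perception, action) -> learned value

def choose_action(perception, epsilon=0.1):
    # Mostly exploit the learned values, occasionally explore.
    if random.random() < epsilon:
        return random.choice(actions)
    return max(actions, key=lambda a: value[(perception, a)])

def learn(perception, action, reward, learning_rate=0.1):
    # Nudge the value estimate toward the observed reward.
    key = (perception, action)
    value[key] += learning_rate * (reward - value[key])

# Usage: in the simulation loop, call choose_action() with the current
# perception, apply the action to the body, then call learn() with the
# resulting reward.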

For an initial test environment, I plan to create a physically simulated human situated in a simulated room. (I've used this type of setup several times before, so it shouldn't take too long to build.) The simulated human will have tactile, visual, and vestibular sensory inputs, and it will have direct control over the muscles that move its limbs. The first task will be something like learning to roll over from its back to its stomach. Later testing could include gross motor tasks like standing up and walking around. At first the room will contain only the human, but eventually I would like to add simulated toys to provide new learning opportunities.

So I'm excited to see how this goes. I have a lot of ideas of where to go next, but first I need to work out all the bugs on some basic motor control tasks.

Wednesday, September 12, 2007

Updated Website

I recently overhauled my website. Now instead of using ugly pages created in Word, I'm using a TiddlyWiki-based system. I think it turned out pretty well. I spent a lot of time polishing up old projects to make them presentable online. This includes lots of new content, like screenshots, videos, and executable downloads (e.g., Win32 executables for Opal Playpen and the Curious Robot Playground). Here are some highlights of the new stuff:

Singularity Summit 2007

I attended the Singularity Summit 2007 this past weekend in San Francisco. Here's a good summary of the sessions. While I was there, I got a chance to speak with Steve Omohundro, Peter Norvig, Barney Pell, and Sam Adams.

The X-Prize Foundation announced their plans to enter the education space. They want to provide an Education X-Prize, and they were looking for suggestions from the audience at the summit on how best to frame the problem.

Here are some thoughts:

There is no general consensus as to when a singularity will happen, whether it is likely to be beneficial or destructive to human life, or whether one will happen at all. There isn't really an agreed-upon definition of what a singularity is. But that's the point of this summit: to define what we're looking for and how to influence it positively. And to raise awareness of a potentially huge issue. I think Ray Kurzweil's writing (like The Singularity is Near) continues to be a good reference, especially for people just joining the singularity discussion.

There seems to be a fair amount of venture capital available for startups in the artificial general intelligence (AGI) space. (Several investment firms were represented at the summit, including Clarium Capital, the John Templeton Foundation, and Draper Fisher Jurvetson.) Also, there are several stealth-mode startups working on AGI technology and products.

Monday, August 27, 2007

Festo's Fluidic Muscles

Check out this very organic-looking artificial arm. There's a good video of it on that site. It uses fluidic muscles, an amazing product from the company Festo. Also check out some other biologically inspired projects here.

Wednesday, August 01, 2007

Concerning "Stuff"

Stuff used to be more difficult and expensive to manufacture, so people would accumulate all they could. Basically, stuff used to be more valuable. Over the past century, stuff has become so mass-produced that it doesn't cost much to fill a house with it. Unfortunately, people haven't adjusted their values accordingly, so they actually do fill houses with it, usually to their own detriment.

As usual, Paul Graham has hit the nail on the head, this time regarding "stuff" - why we overvalue material possessions, collect too many of them, and let them exhaust us mentally:
...unless you're extremely organized, a house full of stuff can be very depressing. A cluttered room saps one's spirits. One reason, obviously, is that there's less room for people in a room full of stuff. But there's more going on than that. I think humans constantly scan their environment to build a mental model of what's around them. And the harder a scene is to parse, the less energy you have left for conscious thoughts. A cluttered room is literally exhausting.

Saturday, July 21, 2007

Research Update

It has been a long time since I last posted about my research. I have been avoiding putting too much online until it's ready (a different approach than my master's research, which was an open source software project). I still have a lot of work ahead, but things have been progressing very well. I seem to be finding answers to all of my big questions at a fairly constant rate.

Again, my goal is to build an intelligent machine based on the structure and computational functions of the mammalian brain. My general approach is to implement models of what I consider the core computational structures of the brain: the posterior cerebral cortex, the motor cortex, the prefrontal cortex, the hippocampus, the basal ganglia, and the cerebellum. I have been studying each of these elements one at a time, attempting to extract its core function and purpose within the brain as a whole. For each one I am going through a series of phases: studying biological evidence, developing a computational model, implementing the model in software, and testing the implementation. Once I complete these phases for each component, I will begin a series of whole-system tests.

Testing these models has become a pretty involved process. You can't step through a hippocampus model implementation in a debugger like most other software; there are just too many variables to watch at once. Instead, I have to design a custom testing program for each component. For the hippocampus model, I made an electronic music program where you can play notes on the computer keyboard. This lets you present songs to the hippocampus, allowing it to learn temporal patterns and later recall whole songs from small fragments of them. The end result is that by using the medium of sound/music, I can study the system's capabilities much more easily than by watching huge arrays of variables in a standard debugger.
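
To illustrate what "learn temporal patterns, then recall from fragments" means, here is a deliberately tiny Python toy. It is a first-order transition table, nothing like the actual hippocampus model, and the note names are made up:

from collections import defaultdict

# Toy sequence learner: count note-to-note transitions from songs, then
# complete a song given a short fragment as a cue.
transitions = defaultdict(lambda: defaultdict(int))

def learn_song(notes):
    for prev, nxt in zip(notes, notes[1:]):
        transitions[prev][nxt] += 1

def recall(fragment, length=8):
    result = list(fragment)
    while len(result) < length:
        options = transitions[result[-1]]
        if not options:
            break
        # Continue with the most frequently observed next note.
        result.append(max(options, key=options.get))
    return result

learn_song(["C", "E", "G", "C", "E", "G"])
print(recall(["C", "E"]))  # -> ['C', 'E', 'G', 'C', 'E', 'G', 'C', 'E']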

The following is a summary of my current status. Obviously it's impossible to quantify these things exactly, but this is my best estimate.

Posterior Cortex
Model: 90%
Implementation: 90%
Testing: 90%

Motor Cortex
Model: 50%
Implementation: 90%
Testing: 60%

Prefrontal Cortex
Model: 50%
Implementation: 90%
Testing: 60%

Hippocampus
Model: 95%
Implementation: 95%
Testing: 95%

Basal Ganglia
Model: 90%
Implementation: 90%
Testing: 50%

Cerebellum
Model: 50%
Implementation: 20%
Testing: 0%

Friday, May 18, 2007

Modified Delta Learning Rule Part 2

Another modification to my delta learning rule is motivated by my preference for working with real time values (t=0.13s, t=0.16s, t=0.19s, ...) as opposed to abstract discrete time values (t=1, t=2, t=3, ...). This distinction is important for interactive real-time applications, like robotics and video games.

I want to be able to specify how much error is reduced per second (e.g., an error reduction rate of 0.3 means that the error is reduced by 30% per second). Here's how I do it:

error_reduction_rate = 1 - exp(-step_size / error_reduction_tc)

"step_size" is the amount of real elapsed time between weight updates (it's best if this is a fixed constant, both for stable learning and to avoid recomputing the error reduction rate constantly). "error_reduction_tc" is a time constant which determines how fast errors are reduced.

For example, an error reduction time constant of 2.0 means that the error will be reduced to about 37% of its original value after 2.0 seconds of updates. If weight updates occur every 0.05 seconds, this yields an error reduction rate of 0.02469. If we change the time constant to 0.1, leaving the step size at 0.05, the error reduction rate jumps up to 0.3935 since the time constant is much faster. Note: it's important to keep the step size smaller than the time constant; otherwise, things can get unstable.
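
As a quick sanity check of those numbers, here is the computation as a standalone Python snippet (not my actual code):

import math

def error_reduction_rate(step_size, error_reduction_tc):
    # Fraction of the remaining error removed per update, derived from
    # exponential decay with the given time constant.
    return 1.0 - math.exp(-step_size / error_reduction_tc)

print(error_reduction_rate(0.05, 2.0))  # ~0.02469
print(error_reduction_rate(0.05, 0.1))  # ~0.3935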

A Delta Learning Rule with a Meaningful Learning Rate

Recently I was looking for a way to train a simple linear neural network using the delta rule in such a way that I could specify exactly how much error is reduced on each weight update. The usual delta learning rule for a single linear neuron is:

delta_w_i = learning_rate * error * x_i

In my opinion, the learning rate parameter lacks an intuitive interpretation. Ideally it would represent exactly how much error is reduced per update, but it doesn't. It is usually just set to some small constant, like 0.01, which is hopefully not too small (slow convergence) or too large (instability). My goal was to find a modified learning rule with a learning rate-like parameter which could reduce the error by a specific amount on each weight update (e.g., a value of 0.2 would reduce the error by 20% per update).

Temporarily assume a network with only one input/weight. Also assume that the goal is to reduce the error by 100% within a single step. (These assumptions will be relaxed later.) The output function is:

y = w * x

The error is:

error = target - y

In order to achieve zero error in one step, we need the following to be true:

y = target = (w + delta_w) * x

So we need to solve for the learning rate which yields zero error after one step:

delta_w = target / x - w
learning_rate * error * x = target / x - w
learning_rate = (target / x - w) / (error * x)
= [(target / x - w) / (error * x)] * (x / x)
= (target - w * x) / (error * x^2)
= (target - w * x) / ((target - w * x) * x^2)
= 1 / (x^2)

So a learning rate of 1 / (x * x) will reduce the error in the system to zero after one step. We can easily make this work for multiple inputs/weights by simply dividing by the size of the input/weight space n:

learning_rate = 1 / (n * x^2)

Of course, we don't usually want to reduce the error completely in one step, so we replace the 1 with an "error reduction rate." This quantity, a value within [0, 1], determines the fraction of the error to be removed on each weight update. It is a rate with a meaningful interpretation: the fraction of the remaining error eliminated per weight update. For example, an error reduction rate of 0.4 will reduce the error by 40% on each update relative to the error value on the previous step. It is equivalent to 1 - error(t+1) / error(t). Now the desired learning rate becomes the following:

learning_rate = error_reduction_rate / (n * x^2)

Now we can modify the delta learning rule by replacing the traditional learning rate:

delta_w_i = error_reduction_rate * error * x_i / (n * x_i * x_i)
= error_reduction_rate * error / (n * x_i)

One problem remains: we can't divide by zero when the input (x_i) values are zero. To remedy this, we only update weights associated with active (non-zero) inputs. Also, instead of dividing by n, we divide by the number of active inputs m:

only for non-zero x_i:
delta_w_i = error_reduction_rate * error / (m * x_i)

This looks very similar to the original delta rule except for three major differences:
  1. The learning rate has been replaced with the error reduction rate, a quantity with a much more meaningful interpretation.
  2. The weight deltas are divided by m, the number of active inputs. This essentially scales the error by the number of weights that contribute to the error.
  3. The weight deltas are divided by their associated input (x_i) values. This is an interesting deviation from the original delta rule which instead multiplies them by their input values.
UPDATE (5/23/07): It turns out that there's a problem with this modified delta rule. My assumption about how the single input/weight case generalizes to multiple inputs/weights was incorrect. Fortunately, only a small change is needed. Instead of dividing by the input value squared, we need to divide by the input vector length squared, which normalizes the weight deltas by the energy of the input vector:

delta_w_i = error_reduction_rate * error * x_i / length(x)^2

length(x) here is the Euclidean norm of the input vector. If length(x) is zero, we simply skip the update, which is valid because all x_i values are zero and would produce weight deltas of zero anyway. We are still able to specify the error reduction rate in an intuitive way, which was the motivation behind my search for a better delta rule.

Interestingly, this turns out to be equivalent to the normalized LMS rule. (LMS, or least-mean-square, is another name for the delta rule). Normalized LMS is described here and in Haykin's Neural Networks book, 2nd edition, p. 152.
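
For completeness, here is a minimal Python sketch of the corrected rule (an illustration, not my production code). Each update should remove exactly the specified fraction of the remaining error:

import numpy as np

def delta_rule_update(w, x, target, error_reduction_rate=0.2):
    # One update of the corrected rule (normalized LMS).
    length_sq = np.dot(x, x)
    if length_sq == 0.0:
        return w  # all inputs are zero, so there is nothing to learn from
    error = target - np.dot(w, x)
    return w + error_reduction_rate * error * x / length_sq

w = np.zeros(3)
x = np.array([0.5, -1.0, 2.0])
for i in range(5):
    print(1.0 - np.dot(w, x))  # error shrinks by 20% per update
    w = delta_rule_update(w, x, 1.0)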

Monday, May 14, 2007

Randall C. O'Reilly's Wiki

This is one of the best things I've read in a while. It's a very detailed, personal account of Randall O'Reilly's belief system, covering the brain, epistemology, religion, physics, politics, and much more. I just finished reading the whole thing, which was very refreshing.

I especially enjoyed the constant focus on developing self-consistent beliefs as a practical goal: "Self-consistency is probably the only useful criterion for establishing the 'truth' of a belief system. It is itself consistent with the fact that we cannot escape our fundamental subjectivity..." Here's another great quote: "You might argue that this is all circular. It is. However, if you make a big enough circle that encompasses all of experience, then I don't see what the problem is."

I think self-consistency is implemented in the brain as a Bayesian network. New pieces of evidence are constantly coming in through sensors and being passed around among nodes to update their beliefs about the world. The whole belief propagation mechanism is designed to maintain self-consistent beliefs as well as possible; complete self-consistency isn't always achieved, but it's at least maximized. Interestingly, we are still able to bias the system by focusing attention on certain details while ignoring others, which can basically keep certain pieces of knowledge from affecting our belief systems.

Monday, May 07, 2007

Polyphasic Sleeping Log

Back in 2005 I tried a polyphasic sleep schedule (specifically, the Uberman schedule) for almost a month. It was loads of fun after the first two weeks were over. I'd say I have a much different perspective on sleep now. During the process I kept a daily log, which I forgot about until now. I thought I'd post it here...
Related links:
http://www.kuro5hin.org/story/2002/4/15/103358/720
http://www.jonnyhenderson.com/secret/sleeping.php?week=1

General comments:
* Take vitamins daily.
* Keep a healthy diet.
* Drink lots of fluids.
* Learn to fall asleep fast. With only 20 minutes to sleep at a time, you'll need to fall asleep as soon as possible. Be able to make yourself extremely bored when you're lying down; immediately forget any thoughts that come up. This takes practice.
* Don't oversleep, and don't miss naps. Having a rigid sleep schedule helps your body adjust faster.
* I found that during the induction period, it's much harder to stay awake at night. Be especially careful during the nightly naps that you don't oversleep.
* Before the evening starts, have a list of projects written down. It's really hard to think of new things to do later.
* If you start to get overpowered by the urge to sleep, do something different: eat, play a video game (much better than watching a movie since it's interactive), go for a walk (I enjoy walking in the morning at sunrise), etc. Don't just sit (or lie) there wishing you weren't so tired.
* If you get heartburn, it's probably from eating within 2 hours before sleep. Lying down lets fluids travel up the esophagus more easily. (I would think the sphincter muscle down there would keep everything in the stomach, but maybe it's not perfect.) Something that helps is sleeping on an incline. Try putting a few wadded-up blankets under the head of your mattress.
* Summary of my induction period:
- Days 1 & 2: Felt like I had missed a lot of sleep.
- Days 3 & 4: Didn't feel terribly sleepy anymore most of the time. Felt generally normal.
- Days 5 & 6: Felt kind of queasy all day. Felt sleepy during the night if I didn't keep busy.
- Days 7 - 10: Felt mostly normal, but still a little sleepy during the night if I didn't keep busy.
- After day 10: Felt mostly normal.

------------------------------------------------------

Day 1: Saturday, 7-23-05
Total sleep: 1:40
* Started at 12 a.m.
* 4 a.m.: I took my first nap. I slept most of the time and woke up easily. After getting up, I didn't feel too much different; I just felt like I was up really late.
* 4:20 a.m. to 6 a.m.: I read a magazine.
* 6:30 a.m. to 7:30 a.m.: I took a walk and listened to the Lord of the Rings BBC radio show.
* 8 a.m. to 8:20 a.m.: I took my second nap. I felt more tired after waking up this time, but I was still able to get up easily.
* 8:20 a.m. to 12 p.m.: I ate breakfast and played video games.
* 12 p.m. to 12:20 p.m.: I took my third nap. I felt rested afterwards, though a little groggy. Still not bad. Once I get up and move around, I feel pretty awake.
* 12:20 p.m. to 4:15 p.m.: Ate lunch. Ran errands around town... but accidentally didn't get home till after 4pm.
* 4:25 p.m. to 4:45 p.m.: Took a nap, but probably only slept for less than half the time. I was really awake from running errands in the 101 degree heat; also, I had just found that I received a scholarship for the next school year, so that made it a little hard to sleep.
* 4:45 p.m. to 8 p.m.: Ate supper with Ken.
* 8 p.m. to 8:20 p.m.: Took a nap at Ken's place. Didn't sleep very much.
* 8:20 p.m. to 11:30 p.m.: Played video games with Ken until I felt pretty tired.
* 11:30 p.m. to 12 a.m.: Drove home from Ken's. Got ready for a nap.

------------------------------------------------------

Day 2: Sunday, 7-24-05
Total sleep: 4:10
* 12 a.m. to 12:20 a.m.: Took a nap.
* 12:20 a.m. to 4 a.m.: When I woke up from my nap, I really wanted to keep sleeping. It took a long time before I started feeling awake. Played computer games and watched TV.
* 4 a.m. to 6:30 a.m.: Tried to take a nap, but accidentally slept way too long.
* 6:30 a.m. to 8 a.m.: Listened to the Lord of the Rings BBC radio show while trying to stay awake.
* 8 a.m. to 8:20 a.m.: Took a nap.
* 8:20 a.m. to 12:15 p.m.: Went to church.
* 12:20 p.m. to 12:40 p.m.: Took a nap.
* 12:40 p.m. to 4 p.m.: Watched TV and listened to the Lord of the Rings BBC radio show.
* 4 p.m. to 4:20 p.m.: Took a nap. I've noticed that it's much easier to get up after a nap during the day than after one at night. Maybe this will change after another week or so.
* 4:20 p.m. to 8 p.m.: Priced computer parts online.
* 8 p.m. to 8:20 p.m.: Took a nap.
* 8:20 p.m. to 12 a.m.: Read Uberman sleep schedule info on the web.

------------------------------------------------------

Day 3: Monday, 7-25-05
Total sleep: 2:10
* 12 a.m. to 12:20 a.m.: Took a nap.
* 12:20 a.m. to 4 a.m.: Read more Uberman sleep schedule experiences.
* 4 a.m. to 4:10 a.m.: Took a nap. Woke up early because I had to use the bathroom. I didn't want to try to go back to sleep for ten minutes.
* 4:30 a.m. to 5:15 a.m.: Took a walk and listened to the Lord of the Rings BBC radio show.
* 5:15 a.m. to 8 a.m.: Read about C++ threading libraries. Took a shower.
* 8 a.m. to 8:20 a.m.: Took a nap.
* 8:20 a.m. to 12 p.m.: Thesis work.
* 12 p.m. to 12:40 p.m.: Took a nap. I woke up easily, but I didn't actually sit up. Instead, I fell asleep for another 20 minutes.
* 12:40 p.m. - 4 p.m.: Thesis work. I'm glad I can actually do something productive at this point. I felt sort of lousy earlier today, but my 8 a.m. nap really helped. I almost feel normal.
* 4 p.m. to 4:20 p.m.: Took a nap.
* 4:20 p.m. to 8:20 p.m.: Went out for supper and bought groceries. Didn't finish the grocery shopping quite on time, though.
* 8:20 p.m. to 8:40 p.m.: Took a nap.
* 8:40 p.m. to 12 a.m.: Played video games, read a book, and read about 'gnuplot'.

------------------------------------------------------

Day 4: Tuesday, 7-26-05
Total sleep: 7:30
* 12 a.m. to 6:30 a.m.: Took a nap. This was really disappointing after doing so well yesterday. I just didn't wake up to my alarm. I decided to skip my 8 a.m. nap and then continue napping like normal.
* 6:30 a.m. to 11:45 a.m.: Ate breakfast. Read literature on threading and OpenMP.
* 11:45 a.m. to 12:05 p.m.: Took a nap.
* 12:05 p.m. to 4 p.m.: Ate lunch. Tried to buy a motherboard at Best Buy, but they didn't have any in stock.
* 4 p.m. to 4:20 p.m.: Took a nap.
* 4:20 p.m. to 8 p.m.: Priced computers online and read about wiki software. Ate supper.
* 8 p.m. to 8:20 p.m.: Took a nap.
* 8:20 p.m. to 12 a.m.: Read more stuff online. Played video games.

------------------------------------------------------

Day 5: Wednesday, 7-27-05
Total sleep: 2:00
* Worked at VRAC throughout the night, which was pretty fun. I felt alert most of the time, but I didn't end up being too productive. Did research throughout the day.

------------------------------------------------------

Day 6: Thursday, 7-28-05
Total sleep: 2:00
* Worked at VRAC throughout the night again. I felt more sleepy this time, though. I must be adjusting still. Worked on research/code throughout the day.

------------------------------------------------------

Day 7: Friday, 7-29-05
Total sleep: 2:00
* Stayed home during the night. It was difficult to stay awake while I wasn't working on the computer. For the past day or so I've felt a weird queasy feeling with stomach cramps. I hope that goes away soon.

------------------------------------------------------

Day 8: Saturday, 7-30-05
Total sleep: 3:20
* Accidentally slept from 12 a.m. to 2:30 a.m. Skipped 4 a.m. nap. Didn't sleep during 12 p.m. nap (on Ken's trampoline outside). Slept 10 minutes during 4 p.m. nap. Still have the weird stomach pains sometimes.

------------------------------------------------------

Day 9: Sunday, 7-31-05
Total sleep: 3:25
* Accidentally slept from 8 a.m. to 9:45 a.m. No other problems.

------------------------------------------------------

Day 10: Monday, 8-1-05
Total sleep: 2:00
* Worked at VRAC 1 a.m. to 6:30 a.m. I still have stomach pains most of the time. I can't tell if it's muscle cramps or heartburn (located just under my breastbone).

------------------------------------------------------

Day 11: Tuesday, 8-2-05
Total sleep: 2:00
* Worked at VRAC 2 a.m. to 7:30 a.m. (replaced the motherboard in my old desktop computer that died about a year ago). I think the stomach pains aren't as bad when I don't eat large meals, so I'll try to spread out my eating more throughout the day.

------------------------------------------------------

Day 12: Wednesday, 8-3-05
Total sleep: 3:00
* Worked at VRAC 2 a.m. to 7:30 a.m. It took a while to wake up after my 12 a.m. nap. I hope that within a few days (the 2 week mark) I won't be as tired during the night. Oops, I accidentally slept from 12 p.m. to 1:20 p.m.

------------------------------------------------------

Day 13: Thursday, 8-4-05
Total sleep: 2:00
* Worked at VRAC 2 a.m. to 7:30 a.m.

------------------------------------------------------

Day 14: Friday, 8-5-05
Total sleep: 3:30
* Stayed home during the morning this time. Overslept from 4 a.m. to 5:50 a.m. Didn't skip my 8 a.m. nap.

------------------------------------------------------

Day 15: Saturday, 8-6-05
Total sleep: 2:00
* Stayed home during the morning. I think that my stomach pains were probably intense heartburn. I read today that eating just before lying down can cause heartburn (because gravity isn't keeping the stomach acid down). So I think I'll try to limit how much I eat within the 2 hours before a nap.

------------------------------------------------------

Day 16: Sunday, 8-7-05
Total sleep: 3:10
* Overslept from 8:20 a.m. to 9:50 a.m.

------------------------------------------------------

Day 17: Monday, 8-8-05
Total sleep: 3:30
* Overslept from 8:20 a.m. to 10:10 a.m.

------------------------------------------------------

Day 18: Tuesday, 8-9-05
Total sleep: 2:00
* Took my 8 p.m. nap at 9 p.m. since we were in Des Moines at that time. I've noticed something weird over the past 3-4 days: periodically I have the sensation of something pressing against my forehead. It feels like someone's fingers. Also, the more I focus on it, the stronger the feeling. If I think about something else, the feeling goes away. It also goes away if I briefly touch my forehead.

------------------------------------------------------

Day 19: Wednesday, 8-10-05
Total sleep: 7:30
* Overslept from 12 a.m. to 6:30 a.m. Wow. I don't know how that happened. I did feel really rested when I woke up, even if I was overwhelmed with guilt. I'm going to skip my 8 a.m. nap and start again at noon.

------------------------------------------------------

Day 20: Thursday, 8-11-05
Total sleep: 2:00
* Today I didn't feel too much different than normal, which was surprising after sleeping so long yesterday. I got a new computer yesterday and spent the whole day today installing software.

------------------------------------------------------

Day 21: Friday, 8-12-05
Total sleep: 2:00
* Went camping Friday through Sunday.

------------------------------------------------------

Day 22: Saturday, 8-13-05
Total sleep: 6:00
* Fell asleep multiple times during the morning.

------------------------------------------------------

Day 23: Sunday, 8-14-05
Total sleep: 3:10
* Overslept 8 a.m. to 9:30 a.m.

------------------------------------------------------

Day 24: Monday, 8-15-05
Total sleep: 2:50
* Overslept 4 p.m. to 5:10 p.m. The sensations on my forehead are pretty constant now. I notice them more when I lie down for a nap.

------------------------------------------------------

Day 25: Tuesday, 8-16-05
Total sleep: 2:30
* Overslept 12 p.m. to 12:50 p.m.

------------------------------------------------------

Day 26: Wednesday, 8-17-05
Total sleep: N/A (I am no longer doing polyphasic sleep.)
* Overslept 12 a.m. to 2:20 a.m.

I'm making the conscious decision (not affected by lack of sleep, as far as I can tell) to go back to monophasic sleeping. My reasons are the following:

- So far I haven't felt up to my normal mental capacity. This is fine in most situations, but not when doing research. (I never succeeded in going a full week without oversleeping, though; maybe if I had, I would have felt normal.)
- Of course, there are still unknown health risks as this hasn't been studied in people for long periods of time (i.e. years).
- Even though I can now mostly avoid heartburn, I still feel a weird lump in my chest underneath my breastbone. I feel it when I lean forward. I have no idea what this is, but it has only developed since I started polyphasic sleep.
- I feel like I've gained a lot of weight since I started. I would guess about 10 pounds (started at 182, now up to about 192). This is strange to me since I thought staying up around the clock would help me burn more calories. I even started exercising (using elliptical machines) for 30 minutes a day, 3 times a week, over the past two weeks. Maybe sleep stages 3 and 4 (which I'm not getting) are involved in regulating metabolism.
- My wife doesn't like sleeping in an empty bed most of the night.
- If I am going to stop, now is a good time since school is starting in a few days (on Monday).

That being said, I'm going to miss polyphasic sleeping. It was great having so much free time that I actively had to think of things to do. Maybe I'll try it again someday. I think it'll be easier to start next time since I know what to expect.

------------------------------------------------------

Day 27: Thursday, 8-18-05
Total sleep: 12 hours! (This includes part of yesterday: 8 p.m. to 8 a.m.)
* I feel good today. I can't quite explain it, but I just feel generally good... and sad. I'm a little groggy after having slept for 12 hours, but that should only last a day or two. The weird lump in my chest is gone. I can't help feeling like I failed, but I don't think that's the right attitude. At least I got a month of experience with polyphasic sleep.

Monday, April 30, 2007

Sumotori Dreams

This is amazing. It's a physically simulated sumo wrestling game. You provide simple high-level control signals (walk forward, push, etc.) in order to defeat your opponent, and the wrestlers try to perform those actions while maintaining their balance. It's as if they have their own reflexive control systems to stand up straight without falling over, and it works fairly well. It's loads of fun to watch and play. And it's incredibly small: collision detection, physics (even with shattering blocks), sound effects, music, graphics code (with shadows), textures, etc. all fit in an 87 KB executable.

HCI Forum/Emerging Technology Conference 2007

My graduate program just had our annual open house event last week (ETC 2007). Speakers included Don Norman, Neal Stephenson, Guy Kawasaki, and Raghu Ramakrishnan.

We unveiled the newly-upgraded C6, a 6-sided VR cave display with 100 million pixels of total resolution (4096 x 4096 per wall)... double that with stereo enabled. For this event we've been working on a tropical island demo. Here's a picture of Tom Batkiewicz running the app:

Monday, April 16, 2007

Do Schools Kill Creativity?

This is a great talk by Sir Ken Robinson on how we are educated out of creativity. He says we are taught to be afraid to make mistakes, which I think is very true and very frustrating.

"Our education system has mined our minds in the way that we've strip-mine the earth for a particular commodity."

Why I Love The Matrix

I love The Matrix, and it's not because of the action scenes or special effects. (If those were the movie's best attributes, I probably would have watched it only a couple of times.) I love The Matrix because it represents very advanced forms of three crucial ideas: virtual reality, artificial intelligence, and education.

Wednesday, March 28, 2007

Bayesian Model of Curiosity

As I continue to implement and experiment with a context representation based on Bayesian inference, I have considered various ways to implement curiosity. The main component I need is a prediction error based on the difference between the system's predictions and reality. Given the tools available in the Bayesian framework, the most obvious method would be to use the prior and posterior distributions. The prior represents the system's predictions, and the posterior (approximately) represents reality. So the prediction error would be the difference (possibly the KL divergence) between the prior and posterior distributions. (One implication is that if the incoming evidence (likelihood) contains no new information, the prior and posterior will be equal, resulting in zero prediction error.) The prediction error can then be used to generate a curiosity reward, either for "surprising" events or for those that yield high learning progress (a reduction in surprise over time).
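
As a concrete illustration (the numbers are hypothetical, and this is just one way to realize the idea), here is the KL divergence between a discrete posterior and prior in Python:

import numpy as np

def kl_divergence(posterior, prior):
    # KL(posterior || prior): how far the evidence moved our beliefs.
    posterior, prior = np.asarray(posterior), np.asarray(prior)
    mask = posterior > 0  # terms with zero posterior contribute nothing
    return np.sum(posterior[mask] * np.log(posterior[mask] / prior[mask]))

prior = [0.25, 0.25, 0.25, 0.25]

# Evidence that merely confirms the prior produces zero prediction error...
print(kl_divergence([0.25, 0.25, 0.25, 0.25], prior))  # 0.0

# ...while evidence that sharply shifts belief produces a large one.
print(kl_divergence([0.85, 0.05, 0.05, 0.05], prior))  # ~0.80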

After I went through all the trouble of figuring this out, I came across this mathematical model of surprise based on Bayesian statistics (in which surprise is measured in units of "wow").

Thursday, March 22, 2007

QuickProf 0.9.0 Released

Just a quick note to say that I just released version 0.9.0 of QuickProf, a very small (one header file) CPU profiling API for C++ applications on all platforms. It's easy to add to your projects, and the timing measurements it provides are very useful for optimizing performance. For example, it can give you an overall timing summary when your app finishes, or it can generate an output file for graphing your app's runtime performance.

For example, I use QuickProf to profile the major components of my context representation code (inference, propagation, and adaptation). I have it print out an overall summary at the end, which usually gives me something like this:

adaptation: 23.2914 %
inference: 11.5562 %
propagation: 0.200819 %

...and I have it output a timing data file, which I then plot with the Python library Matplotlib:

Monday, March 12, 2007

The Universe as a Computer Simulation, Quantum Mechanics

I had a thought last week. Say you wanted to build a simulated universe, and your simulation would contain intelligent beings. To conserve computational resources, you would want to avoid simulating aspects of the universe that are not observed by its inhabitants. It would be like a colossal view frustum culling process. For example, events that are very small or very far away from an observer might not need to be computed.

If our universe is a computer simulation, maybe that's why we see quantum mechanical effects. The detailed positions of individual particles remain in indeterminate states until they are observed, at which point their wave functions collapse (i.e., their states are computed in more detail).

Interestingly, this entire thought process assumes that the observer has a very important role in the simulation, as if the universe were designed for intelligent beings.

GDC 2007

I just got back from the Game Developers Conference 2007 in San Francisco. I hung out with Ken Kopecky, Andres Reinot, and Josh Larson most of the week.

Here is a very terse summary of each day at the conference:

Monday
I attended Creativity Boot Camp, a full-day tutorial by Paul Schuytema. The morning was more lecture-style, covering a set of practical tips on how to live a lifestyle that fosters creativity. In the afternoon the audience formed groups to work on several exercises. One was a problem-solving task (deciding which objects to bring along when trying to escape a mall full of zombies). Another was a board game design task based on a random set of pieces, randomizers, and board layouts. Alexey Pajitnov happened to be there observing the groups as they worked.

Tuesday
I attended the day-long Independent Games Summit, which included sessions on prototyping, bootstrapping an indie game dev project, retail distribution, indie development logistics, the casual game market (presented by Eric Zimmerman of gamelab), the story behind Cloud, marketing for indie games, physics games (presented by Matthew Wegner, creator of Fun-Motion), and a panel discussion on the future of indie games.

Wednesday
At the Sony keynote they presented a "home" feature for the PS3. It looks like a Second Life kind of thing with much better graphics. They also introduced a great-looking game called LittleBigPlanet. I went to a round table discussion on non-profit games, moderated by Martin de Ronde of OneBigGame. I also went to a round table on security and privacy in games, which touched on the legal and marketing issues related to storing user data. Finally, in the evening we went to the Independent Games Festival awards ceremony and the Game Developers Choice Awards.

Thursday
Shigeru Miyamoto gave the Nintendo keynote, discussing his vision and how it relates to that of Nintendo. One interesting point was that his goal for the original Zelda for NES was to introduce a new form of communication: instead of playing the game in isolation, gamers were forced to talk to each other about how to solve puzzles. Miyamoto also showed a video of Super Mario Galaxy.

At the annual Game Design Challenge, three designers were presented with the task of creating a video game using a needle and thread interface. Alexey Pajitnov won. I then attended a game design talk by Clint Hocking entitled Exploration: From Systems to Spaces to Self. The talk was based on the idea that humans have a need to explore things and that every game is an exploration game. "Exploration" was defined in several ways, including system exploration (the general idea of exploring the properties of some system, like the mechanics of a video game) and spatial exploration (the special case where the system in question is defined spatially, e.g., a physics simulation). A third type, rarely seen in video games, is self-exploration, which refers to the process of putting ourselves in situations that lead to us learning something about ourselves.

Chris Hecker gave a talk on "how to animate a character you've never seen before," referring to the challenge of animating the user-designed creatures in SPORE. In the evening we went to the Programmer's Challenge, a game show with six of the industry's leading programmers.

Friday
Tom Leonard from Valve gave a talk on making the Source engine scalable to multiple cores. It sounds like they built a custom framework to support several different concurrency models (coarse-grained and fine-grained parallelism) for different situations. Chaim Gingold gave a talk entitled SPORE's Magic Crayons, which was about limiting the parameters available for user-generated content in order to push the space of probable designs into the space of desirable designs. In other words, for SPORE it was important to avoid giving users knobs that tend to yield ugly results.

We went to the Video Games Live concert in the evening, which included performances by Koji Kondo and Martin Leung (aka the Video Game Pianist).

Wednesday, January 24, 2007

Planning, a.k.a. Time Travel in the Brain

The January 29th issue of Time focuses on the brain (The Brain: A User's Guide). One interesting article is Time Travel in the Brain. It explains mental "time travel" as our ability to remember past situations and to simulate future scenarios, jumping through time at arbitrary speeds.

The purpose of this function, as explained in the article, is basically the same as that of planning in reinforcement learning:
Moving around in the world exposes organisms to danger, so as a rule they should have as few experiences as possible and learn as much from each as they can. Although some of life's lessons are learned in the moment ("Don't touch a hot stove"), others become apparent only after the fact ("Now I see why she was upset. I should have said something about her new dress"). Time travel allows us to pay for an experience once and then have it again and again at no additional charge, learning new lessons with each repetition.

Concerning how often we switch into "planning mode":
Perhaps the most startling fact [... is ...] how often it does it. Neuroscientists refer to it as the brain's default mode, which is to say that we spend more of our time away from the present than in it.

I agree. And that's how agents in the Verve library are designed to work when in planning mode: for each "real" time step, the agent simulates many time steps (default=50, if I remember correctly) through the use of its learned predictive model.
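
That scheme is essentially Dyna-style planning. Here is a minimal tabular Python sketch of the idea; it is illustrative only, not Verve's actual code, and the states and actions are made up:

import random
from collections import defaultdict

# After each real experience, replay many simulated experiences drawn
# from a learned model of the world (Dyna-style planning).
actions = ["left", "right", "wait"]
q = defaultdict(float)  # (state, action) -> value
model = {}              # (state, action) -> (reward, next_state)

def update_q(s, a, r, s2, alpha=0.1, gamma=0.9):
    best_next = max(q[(s2, a2)] for a2 in actions)
    q[(s, a)] += alpha * (r + gamma * best_next - q[(s, a)])

def real_step(s, a, r, s2, planning_steps=50):
    # Learn from the real experience and remember it in the model.
    update_q(s, a, r, s2)
    model[(s, a)] = (r, s2)
    # "Time travel": re-experience remembered transitions many times
    # for every single real time step.
    for i in range(planning_steps):
        ps, pa = random.choice(list(model))
        pr, ps2 = model[(ps, pa)]
        update_q(ps, pa, pr, ps2)

This is exactly the "pay for an experience once and have it again and again" idea from the article: each stored transition keeps teaching the agent long after it happened.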