Friday, November 30, 2007
This NIPS 2006 talk by Josh Tenenbaum, entitled "Bayesian Models of Human Learning and Inference," is a good overview of how Bayesian techniques are being used to model human intelligence. The NIPS page includes an abstract of the talk, the PowerPoint presentation (170 slides), and a video of the talk (about two hours long). (It looks like the large 900x600 video doesn't include sound, so don't download that one.)
This talk was interesting to me because I am using Bayesian networks (non-parametric, hierarchical) for sensory and motor representations. It's good to see that people are seriously considering Bayesian methods as realistic models of human intelligence.
Tuesday, November 13, 2007
Somatosensory Topographic Map Formation, Part 2
(See Somatosensory Topographic Map Formation, Part 1 for more details.)
I ran some more tests, this time training topographic maps on a set of 3D models: a hand, a human head, a wooden doll, a sword, and a full human body. Each image below is a sequence captured during the training process for each model. One interesting detail is that the maps learn to devote more resources to the regions that are sampled most often, which correspond to the parts of the models with lots of vertices. For example, the human head model has a lot of vertices in the mouth cavity, so it becomes well represented in the topographic map.
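To make the resource-allocation effect concrete, here's a minimal, self-contained sketch (my illustration for this post, not the actual code behind the images) of a 1D self-organizing map trained on nonuniformly sampled data. All of the sizes and rates are made-up values:

```python
import numpy as np

# Hypothetical sketch of the density effect: a 1D self-organizing map
# trained on data drawn mostly from a small interval ends up devoting
# most of its nodes to that interval, like the vertex-dense mouth
# cavity winning extra map area. All parameters are made-up values.

rng = np.random.default_rng(0)
num_nodes = 50
nodes = np.linspace(0.0, 1.0, num_nodes)  # 1D map, evenly spaced at start
idx = np.arange(num_nodes)

for _ in range(20000):
    # 80% of samples come from [0, 0.2], mimicking a densely sampled region.
    x = rng.uniform(0.0, 0.2) if rng.random() < 0.8 else rng.uniform(0.2, 1.0)
    bmu = np.argmin(np.abs(nodes - x))               # best-matching unit
    h = np.exp(-((idx - bmu) ** 2) / (2 * 2.0**2))   # Gaussian neighborhood
    nodes += 0.05 * h * (x - nodes)                  # pull BMU and neighbors

# Most nodes end up crowded into the oversampled fifth of the input space.
print("fraction of nodes in [0, 0.2]:", (nodes < 0.2).mean())
```

The map never sees the sampling distribution directly; the node density just ends up tracking it, which is exactly the magnification effect visible in the head-model images.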
This is the basic idea behind the homunculus representation of the sensory and motor cortices: the brain regions that represent the body learn to devote more real estate to those areas that need high-resolution representations. Check out this image from the Wikipedia article:
Monday, November 12, 2007
Burlington Hawkeye 52 Faces Article
The Hawkeye newspaper, published in Burlington, IA (my hometown), runs a special section called 52 Faces every Sunday. Each week focuses on a different person, usually a local resident in the Burlington area. My mother nominated me for this week's edition. The article talks about my research on brain-inspired artificial intelligence and a little about my polyphasic sleep experience.
Here is the article:
Friday, November 02, 2007
Somatosensory Topographic Map Formation
To give my simulated human touch sensors, I need its relatively complex body surface transformed into a simple 2D array. The hard part is that the transformation must maintain topographic relationships (i.e., points near each other on the body surface should be near each other in the final 2D array). Fortunately, I already have a tool for this: my topographic map code.
The following images show a topographic map learning to cover a simple body surface. In this case the body is just a human torso and arms. It starts out as a flat 2D sheet and learns to wrap itself around the body. After enough training it will cover the entire body. Where does the training data come from? I simulate random data points from the body surface... sort of like having someone constantly poke it in different places. The topographic map responds to these data points by moving part of itself closer to them. One cool thing about my method is that it automatically learns to represent the most-touched regions with higher fidelity; it's like how our brains use more real estate to represent our hands and faces. (In this example, I'm sampling the body surface uniformly, so the effect doesn't show up in the images.)
One way to think of this is a flat sheet of brain (somatosensory cortex) stretching itself in sensory space to represent the entire body surface.
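For anyone curious about the mechanics, here's a minimal sketch of the kind of update involved: a standard Kohonen self-organizing-map step, which captures the idea even if it isn't the exact algorithm in my code. The unit sphere stands in for the body mesh, and all the parameters are made-up values:

```python
import numpy as np

# Minimal sketch (a standard Kohonen SOM step, not necessarily my exact
# algorithm): a flat 2D sheet of nodes, each with a 3D position, wraps
# itself around a surface by repeatedly moving toward random "pokes".
# A unit sphere stands in for the body mesh; all parameters are made up.

rng = np.random.default_rng(0)
W, H = 20, 20
grid = np.stack(np.meshgrid(np.arange(W), np.arange(H)), -1).reshape(-1, 2)
nodes = rng.uniform(-0.1, 0.1, size=(W * H, 3))  # sheet starts near origin

def sample_surface():
    # Stand-in body surface: a uniform random point on the unit sphere.
    p = rng.normal(size=3)
    return p / np.linalg.norm(p)

def train_step(sample, lr=0.1, sigma=2.0):
    # Best-matching unit: the node currently closest to the poked point.
    bmu = np.argmin(np.sum((nodes - sample) ** 2, axis=1))
    # Measuring the neighborhood in *sheet* coordinates is what keeps
    # the map topographic: nodes adjacent on the sheet move together.
    d2 = np.sum((grid - grid[bmu]) ** 2, axis=1)
    h = np.exp(-d2 / (2 * sigma**2))
    nodes[:] += lr * h[:, None] * (sample - nodes)

for _ in range(20000):
    train_step(sample_surface())
```

In practice the learning rate and neighborhood width would be annealed over training, so the sheet first unfolds globally and then refines its fit to the surface.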