Friday, November 02, 2007

Somatosensory Topographic Map Formation

In order to give my simulated human touch sensors, I need a representation of its relatively complex body surface transformed into a simple 2D array. The hard part is that the transformation must preserve topographic relationships (i.e., points that are near each other on the body surface should end up near each other in the final 2D array). Fortunately, I already have a tool to do this: my topographic map code.

The following images show a topographic map learning to cover a simple body surface. In this case the body is just a human torso and arms. The map starts out as a flat 2D sheet and learns to wrap itself around the body; after enough training it covers the entire body. Where does the training data come from? I simulate random data points from the body surface... sort of like having someone constantly poke it in different places. The topographic map responds to each data point by moving part of itself closer to it. One cool thing about my method is that it automatically learns to represent the most-touched regions with higher fidelity, much like how our brains devote more cortical real estate to our hands and faces. (In this example, I'm sampling the body surface uniformly, so the effect doesn't show up in the images.)

One way to think of this is as a flat sheet of brain (somatosensory cortex) stretching itself in sensory space to represent the entire body surface.

[Images: the map sheet progressively wrapping itself around the torso and arms over the course of training]
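The actual map code isn't included in the post, so here's a rough sketch of the general idea in Python, assuming a standard Kohonen-style self-organizing map. Everything here is hypothetical stand-in material: the names (sample_surface, train_som), the cylinder used in place of a torso-and-arms body, and all the parameters. It only illustrates the poke-and-pull training loop described above, not the real implementation.

```python
import numpy as np

def sample_surface(rng):
    """Draw one random 3D point from a stand-in 'body surface'.
    Here: a cylinder of radius 1 and height 4 (a crude torso)."""
    theta = rng.uniform(0, 2 * np.pi)
    z = rng.uniform(0, 4)
    return np.array([np.cos(theta), np.sin(theta), z])

def train_som(grid_h=20, grid_w=20, steps=20000, seed=0):
    rng = np.random.default_rng(seed)
    # Map units start as a flat 2D sheet embedded in 3D space.
    ys, xs = np.meshgrid(np.linspace(0, 4, grid_h),
                         np.linspace(-1, 1, grid_w), indexing="ij")
    weights = np.stack([xs, np.zeros_like(xs), ys], axis=-1)  # (H, W, 3)
    # Grid coordinates, used to measure neighborhood distance on the sheet.
    grid = np.stack(np.meshgrid(np.arange(grid_h), np.arange(grid_w),
                                indexing="ij"), axis=-1).astype(float)
    for t in range(steps):
        frac = t / steps
        lr = 0.5 * (1 - frac) + 0.01 * frac      # learning rate decays
        sigma = 5.0 * (1 - frac) + 0.5 * frac    # neighborhood shrinks
        x = sample_surface(rng)                  # one random "poke"
        # Best-matching unit: the map point currently closest to the poke.
        d2 = np.sum((weights - x) ** 2, axis=-1)
        bmu = np.unravel_index(np.argmin(d2), d2.shape)
        # Units near the BMU on the sheet move toward the poke, so the
        # sheet stays topographically ordered while wrapping the surface.
        g2 = np.sum((grid - np.array(bmu, dtype=float)) ** 2, axis=-1)
        h = np.exp(-g2 / (2 * sigma ** 2))[..., None]
        weights += lr * h * (x - weights)
    return weights

if __name__ == "__main__":
    w = train_som()
    print("final map shape:", w.shape)  # a (20, 20) sheet of 3D points
```

Because the winning unit and its neighbors get pulled toward wherever the pokes land, regions that are sampled more often end up claiming more of the sheet, which is the magnification effect mentioned above; with uniform sampling, as in the images, the coverage comes out roughly even.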
1 comment:

Johnnyburn said...

I love when "poking" is one of the steps in an experiment.

Rock on, Tyler.
-JB