At GDC last week I met Leslie Ikemoto at a tutorial on embodied agents in video games. During the tutorial she brought up the game NERO, which uses evolved neural networks (specifically, the NEAT algorithm, invented by Ken Stanley) to control soldiers in a real-time strategy game. Since I had used NEAT before and know Ken Stanley's work, I talked to her briefly during a break.
The next day we talked again, this time about our own research. She showed me a project in which she had used reinforcement learning to train an animated character to run around a platform, seeking goals and avoiding barriers. The character's trajectory was determined by the animation sequence, so the RL component learned to choose the best animation to play in any given state. Overall it worked pretty well, but she was a little unhappy that the character sometimes wandered around idly even when there was a clear path to the goal. My best guess was that the state space needed higher resolution; I also suggested using a clustering algorithm to group the sensor readings into the most important regions of the state space.
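To make the idea concrete, here is a minimal sketch of how such a system might look: tabular Q-learning where the actions are animation clips and the state is a coarse discretization of the character's sensors. The clip names, state encoding, and hyperparameters are all my own hypothetical choices, not details of Leslie's actual system.

```python
import random
from collections import defaultdict

# Hypothetical animation clips the character can choose between;
# playing a clip displaces the character through the world.
ANIMATIONS = ["walk_forward", "turn_left", "turn_right", "idle"]

class AnimationPolicy:
    """Tabular Q-learning over discretized sensor states.

    The state here is an illustrative (goal_direction, obstacle_ahead)
    pair; a finer discretization, or clustering of raw sensor vectors,
    is one way to attack the idle-wandering problem mentioned above.
    """

    def __init__(self, alpha=0.1, gamma=0.9, epsilon=0.1):
        self.q = defaultdict(float)   # (state, action) -> estimated value
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon

    def choose(self, state):
        # Epsilon-greedy: usually play the best-valued animation,
        # occasionally explore a random one.
        if random.random() < self.epsilon:
            return random.choice(ANIMATIONS)
        return max(ANIMATIONS, key=lambda a: self.q[(state, a)])

    def update(self, state, action, reward, next_state):
        # Standard one-step Q-learning backup.
        best_next = max(self.q[(next_state, a)] for a in ANIMATIONS)
        target = reward + self.gamma * best_next
        self.q[(state, action)] += self.alpha * (target - self.q[(state, action)])
```

In a game loop, the agent would call `choose` each time the current clip finishes, play the chosen animation, then call `update` with a reward for progress toward the goal (and a penalty for hitting barriers).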
It was very cool to see RL applied to character animation like this. I think it is a powerful approach, especially where characters must adapt to changing environments. Hopefully we'll see more of it in the future.
Leslie's colleague, Okan Arikan, whom I also met at the GDC, has done some pretty cool computer graphics research. Check out these explosions!