I'm getting to the point in my research where I need to begin testing the larger, integrated intelligent system. Enough of the brain-based components have been implemented and tested in isolation that it's time to hook things together and apply them to some initial tasks. I will probably start with only a subset of the components to make early problems easier to debug. For example, I can ignore the cerebellum, prefrontal cortex, and possibly the hippocampus because they aren't strictly necessary for very simple motor control tasks. The sensory cortex, motor cortex, and basal ganglia are necessary, though: at a bare minimum, the system must be able to represent perceptions (sensory cortex), represent actions (motor cortex), and choose which actions to perform based on those perceptions and a learned value function (basal ganglia).
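To make the minimal three-component loop concrete, here is a rough sketch of what "perceive, select by learned value, act" could look like. This is not my actual implementation; all names (`perceive`, `select_action`, `update_value`) and the tabular Q-learning update are illustrative stand-ins for the sensory cortex, basal ganglia, and value-learning roles described above.

```python
import numpy as np

rng = np.random.default_rng(0)
N_STATES, N_ACTIONS = 8, 4

# Learned value function (tabular stand-in for what the basal ganglia
# would learn in the full system).
Q = np.zeros((N_STATES, N_ACTIONS))

def perceive(raw_obs):
    """Sensory-cortex stand-in: map raw input to a discrete state index."""
    return int(raw_obs) % N_STATES

def select_action(state, epsilon=0.1):
    """Basal-ganglia stand-in: epsilon-greedy choice over learned values."""
    if rng.random() < epsilon:
        return int(rng.integers(N_ACTIONS))
    return int(np.argmax(Q[state]))

def update_value(state, action, reward, next_state, alpha=0.5, gamma=0.9):
    """One-step Q-learning update of the value function."""
    td_target = reward + gamma * np.max(Q[next_state])
    Q[state, action] += alpha * (td_target - Q[state, action])
```

The real components are of course far richer than a lookup table, but even this toy loop exercises the division of labor: representation, selection, and value learning are separate pieces that only do something useful once hooked together.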
For an initial test environment, I plan to create a physically simulated human situated in a simulated room. (I've used this type of setup several times before, so it shouldn't take too long to make.) The simulated human will have tactile, visual, and vestibular sensory inputs, and it will have direct control over the various muscles that move its limbs. The first task will be something like learning to roll over from its back to its stomach. Later testing could include gross motor tasks like standing up and walking around. At first the room will contain only the human, but eventually I would like to add simulated toys in order to provide new learning opportunities.
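The interface between the simulated body and the brain-based controller can be sketched as a sense/actuate pair. Everything here is hypothetical scaffolding, not the actual simulator: the class name, channel sizes, and the trivial stand-in dynamics are placeholders for whatever physics engine ends up underneath.

```python
from dataclasses import dataclass, field
import numpy as np

@dataclass
class SimulatedBody:
    """Hypothetical skeleton for the simulated human: sensor channels in,
    muscle activations out. Real physics would replace the stub dynamics."""
    n_muscles: int = 12
    joint_angles: np.ndarray = field(default=None)

    def __post_init__(self):
        if self.joint_angles is None:
            self.joint_angles = np.zeros(self.n_muscles)

    def sense(self):
        # The three sensory modalities, stubbed with fixed-size arrays.
        return {
            "tactile": np.zeros(16),          # skin contact sensors
            "visual": np.zeros((8, 8)),        # low-resolution retina
            "vestibular": self.joint_angles[:3].copy(),  # head orientation proxy
        }

    def actuate(self, muscle_activations, dt=0.01):
        # Placeholder dynamics: activations in [0, 1] nudge joint angles.
        a = np.clip(np.asarray(muscle_activations, dtype=float), 0.0, 1.0)
        self.joint_angles += dt * (a - 0.5)
        return self.sense()
```

The rolling-over task would then amount to a reward signal defined over this sensory dictionary (e.g. tactile contact shifting from back to front), with the controller closing the loop between `sense()` and `actuate()`.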
So I'm excited to see how this goes. I have a lot of ideas of where to go next, but first I need to work out all the bugs on some basic motor control tasks.