In October I wrote about developing a simulated human test application. I have yet to begin implementing it because I'm still working out some issues with the basal ganglia component, which performs the essential task of action selection. It seems to me that action selection is the core function required of any intelligent system (see the Wikipedia entry here), so it has to work correctly.
I have written several test applications that focus squarely on learning proper action selection, including a 2D arm (1 degree of freedom) that must learn to reach a goal angle, a basic n-armed bandit test, and a 2D mobile robot that must seek out reward objects. Once I'm confident that my system can solve these tasks robustly, I will continue down the path toward a simulated human test. Although I really want to start experimenting with a human-like test platform now, I have to take things one step at a time... If I just threw together an integrated control system without first testing each part in isolation, it would be impossible to debug any problems that arose.
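For concreteness, here is a rough sketch of what the n-armed bandit test amounts to. This is illustrative Python, not my actual code, and it uses a plain epsilon-greedy rule purely as a stand-in for whatever the basal ganglia component ends up doing:

import random

class NArmedBandit:
    """Stationary n-armed bandit: each arm pays out with a fixed hidden probability."""
    def __init__(self, payout_probs):
        self.payout_probs = payout_probs

    def pull(self, arm):
        # Reward is 1 with the arm's hidden probability, else 0.
        return 1.0 if random.random() < self.payout_probs[arm] else 0.0

class EpsilonGreedyAgent:
    """Keeps a running value estimate per arm and usually picks the best one."""
    def __init__(self, n_arms, epsilon=0.1):
        self.epsilon = epsilon
        self.values = [0.0] * n_arms
        self.counts = [0] * n_arms

    def select_action(self):
        if random.random() < self.epsilon:
            return random.randrange(len(self.values))  # explore
        return max(range(len(self.values)), key=lambda a: self.values[a])  # exploit

    def update(self, arm, reward):
        self.counts[arm] += 1
        # Incremental average of the rewards observed for this arm.
        self.values[arm] += (reward - self.values[arm]) / self.counts[arm]

if __name__ == "__main__":
    bandit = NArmedBandit([0.2, 0.5, 0.8])
    agent = EpsilonGreedyAgent(n_arms=3)
    total = 0.0
    for _ in range(1000):
        arm = agent.select_action()
        reward = bandit.pull(arm)
        agent.update(arm, reward)
        total += reward
    print("total reward:", total, "value estimates:", agent.values)

The appeal of a test this small is that the correct behavior (converging on the best arm) is trivial to verify, so any failure points directly at the action-selection logic rather than at the rest of the system.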
2 comments:
I assume that the action selection will be somewhat narrow, without actions like "Revolt against humanity? Y/N".
Can't wait to see you press the "GO" button on the whole system.
Ha ha...
Actions = {"Revolt against humanity", "Create human battery farms", "Bake a cake"}
No, I don't plan to hard-code any actions, especially destructive ones. :) The set of available actions will develop from experience... starting from very minimal actions (e.g., individual desired joint angles) with the ability to construct more complex action sequences (gesture 1, gesture 2, gesture 3...), roughly in the spirit of the sketch at the end of this comment.
The real moral issue, in my opinion, is whether an intelligent system is given the *motivation* to, for example, revolt against humanity.
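To make the earlier point about constructing action sequences a bit more concrete, here is a toy sketch (illustrative names only, not the actual design) of primitive actions as desired joint angles and "gestures" as sequences built from them:

from dataclasses import dataclass
from typing import List

@dataclass
class Primitive:
    joint: str
    target_angle: float  # radians

@dataclass
class Gesture:
    name: str
    steps: List[Primitive]

    def execute(self, set_joint_angle):
        # set_joint_angle stands in for whatever low-level controller the robot exposes.
        for step in self.steps:
            set_joint_angle(step.joint, step.target_angle)

wave = Gesture("wave", [
    Primitive("shoulder", 1.2),
    Primitive("elbow", 0.4),
    Primitive("elbow", 1.0),
    Primitive("elbow", 0.4),
])

# Gestures can themselves be chained into longer behaviors.
greeting = [wave, Gesture("nod", [Primitive("neck", 0.3), Primitive("neck", 0.0)])]

def fake_set_joint_angle(joint, angle):
    print(f"set {joint} -> {angle:.2f} rad")

for gesture in greeting:
    gesture.execute(fake_set_joint_angle)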