I attended the 2nd AGI conference a few weeks back. The goal of the conference is to help organize the field of artificial general intelligence (AGI).
Juergen Schmidhuber (pictured above) gave the keynote, entitled "The New AI." I was especially interested in the part of his talk on artificial curiosity. Marcus Hutter gave two good paper presentations, "Feature Markov Decision Processes" and "Feature Dynamic Bayesian Networks." John Laird's talk on the SOAR architecture included a helpful distinction between cognitive architectures (a specific set of fixed mechanisms) and frameworks (neural nets, rule-based systems, etc.). There was an interesting mixture of people there (maybe 100 total?) from academia and from AGI-focused startup companies and organizations.
For the AGI field to move forward in a cohesive, organized way, it will be important to define standard evaluation metrics. (The conference session on creating an "AGI preschool" discussed this issue.) This seems to be one of the biggest hurdles in the near term. Communication among researchers is already difficult because nearly everyone uses a different set of terminology, and the lack of standard evaluation and comparison methods makes it even harder. Producing solid metrics might even be the most crucial step here... once you know what you're measuring, it's much easier to work towards that goal.

However, general intelligence is really hard to measure, even in humans. The best starting point is probably our current definitions of general intelligence, which usually take the form of "an agent must be successful at achieving goals/maximizing rewards in a wide variety of environments." So I'm thinking a good practical approach follows the spirit of a paper by Thomas Hazy, Michael Frank, & Randall O'Reilly (Towards an Executive Without a Homunculus: Computational Models of the Prefrontal Cortex/Basal Ganglia System): "To the extent that the same basic model can account for a progressively wider range of data, it provides confidence that the model is capturing some critical core elements of cognitive function."

So we can build a standardized, ever-growing repository of small tasks, each with a clear measure of success or failure (either binary or scalar). Then we can subject our AGI systems to the entire test set and measure general intelligence performance as the fraction of tests passed. Our confidence that the metric is useful should be proportional to the number and variety of tasks in the test set. I can't think of a better, simpler way to measure general intelligence than this.
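To make the idea concrete, here's a minimal sketch (my own illustration, not anything presented at the conference) of what such a test harness could look like in Python. The task names, the Task/evaluate structure, and the toy agent are all hypothetical; the only point is that each task reports a binary or scalar score and the overall metric is the fraction of total possible score achieved.

```python
# Minimal sketch of a task-repository evaluation harness (illustrative only).
# Each task exposes a run() callable that evaluates an agent and returns either
# a boolean (pass/fail) or a float in [0, 1] for partial credit.

from dataclasses import dataclass
from typing import Callable, List, Union

Score = Union[bool, float]

@dataclass
class Task:
    name: str
    run: Callable[[object], Score]  # takes an agent, returns its score on this task

def evaluate(agent, tasks: List[Task]) -> float:
    """Return the fraction of total possible score the agent achieves across all tasks."""
    if not tasks:
        return 0.0
    total = 0.0
    for task in tasks:
        total += float(task.run(agent))  # True/False become 1.0/0.0; scalars pass through
    return total / len(tasks)

if __name__ == "__main__":
    # Purely illustrative agent and tasks.
    class ToyAgent:
        def act(self, observation):
            return 0

    tasks = [
        Task("trivial_pass", run=lambda agent: True),
        Task("partial_credit", run=lambda agent: 0.5),
    ]
    print(f"General-intelligence score: {evaluate(ToyAgent(), tasks):.2f}")
```

The key design choice is that the repository can keep growing: adding new tasks never changes the interface, it just makes the resulting score a more convincing measure.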
At the conference I showed a live demo of my sensory cortex model learning from natural images, along with the following poster (full-sized image available from my website):
3 comments:
Nice poster!
Nice poster. Glad to see the Friston paper was useful.
Thanks guys. Jesse, thanks again for suggesting the Friston paper. It seems to provide a great theoretical framework for all kinds of cortical models.