Monday, July 31, 2006

Simulacra and Simulation Related to Online Virtual Worlds

From Simulacra and Simulation, by Jean Baudrillard, 1981, page 1:
"If once we were able to view the Borges fable in which the cartographers of the Empire draw up a map so detailed that it ends up covering the territory exactly (the decline of the Empire witnesses the fraying of this map, little by little, and its fall into ruins, though some shreds are still discernible in the deserts - the metaphysical beauty of this ruined abstraction testifying to a pride equal to the Empire and rotting like a carcass, returning to the substance of the soil, a bit as the double ends by being confused with the real through aging) - as the most beautiful allegory of simulation..."
This idea of simulation, a copy that might eventually attract more attention than the original itself, makes me think of what Google Earth might become.
"...this fable has now come full circle for us, and possesses nothing but the discrete charm of second-order simulacra.

Today abstraction is no longer that of the map, the double, the mirror, or the concept. Simulation is no longer that of a territory, a referential being, or a substance. It is the generation by models of a real without origin or reality: a hyperreal. The territory no longer precedes the map, nor does it survive it. It is nevertheless the map that precedes the territory - precession of simulacra - that engenders the territory, and if one must return to the fable, today it is the territory whose shreds slowly rot across the extent of the map. It is the real, and not the map, whose vestiges persist here and there in the deserts that are no longer those of the Empire, but ours. The desert of the real itself."
The simulacrum concept (a copy with no original), however, seems a better fit for our fantasy virtual worlds: Second Life, World of Warcraft, There, and Project Entropia.

The major distinction between these two types of online worlds (that one is based on our physical world and the other is totally fictional) might become more pronounced in the future as these places gain in popularity. Each type has its benefits: one is instantly familiar, the other allows more creative freedom. Both seem to have their own place in the foreseeable future. If people someday spend so much time in virtual worlds that the real one is no longer familiar, the Earth-simulation type might fade away.

Thursday, July 13, 2006

Ethical Issues in Advanced Artificial Intelligence

Why should we study intelligence (either artificial or biological) and develop intelligent machines? What is the benefit of doing so? About a year ago I settled on my answer to these questions:

To the extent that scientific knowledge is intrinsically interesting to us, the knowledge of how our brains work is probably the most interesting topic to study. To the extent that technology is useful to us, intelligent machines are probably the most useful technology to develop.

I have been meaning to write up a more detailed justification of this answer, but now I don't have to, because I just read this great paper... "Ethical Issues in Advanced Artificial Intelligence," by Nick Bostrom. I think this paper gives a clear justification for developing intelligent machines. It touches on all the important issues, in my opinion.

A few points really resonated with me. Here are some succinct excerpts:
  1. "Superintelligence may be the last invention humans ever need to make."
  2. "Artificial intellects need not have humanlike motives."
  3. "If a superintelligence starts out with a friendly top goal, however, then it can be relied on to stay friendly, or at least not to deliberately rid itself of its friendliness."
The third point, I believe, adequately deals with the fear of intelligent machines going crazy and taking over the world. It's all about motivation: if a machine has no intrinsic motivation to harm anybody, it will not do so. There are some caveats to this, some of which were discussed in the paper, but I don't think any of them are insurmountable:
  1. Random exploration during development in an unpredictable world will inevitably cause damage to someone or something. I don't think this is a huge problem, as long as the machine is sufficiently contained during development.
  2. A machine with a curiosity-driven motivation system will essentially be able to create arbitrary value systems over time. The solution is simply to scale the magnitude of any "curiosity rewards" to be smaller than a hard-wired reward for avoiding harm (see the sketch below).
  3. A machine that can change its own code might change its motivations into harmful ones. Hard-coding a pain signal for any code modifications would help combat this problem. Further, if any critical code or hardware is modified, the whole machine could shut itself down.
Of course, malicious or careless programmers might build intelligent machines with harmful motivations, but this is a separate issue from statement #3 above.
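To make the reward-scaling idea in caveat #2 concrete, here is a minimal sketch in Python. This is my own toy illustration, not anything from Bostrom's paper; the names total_reward, CURIOSITY_CAP, and HARM_PENALTY are invented for the example. The point is just that the curiosity term is clamped so a hard-wired harm penalty always dominates it:

    # Toy illustration (hypothetical names, not from Bostrom's paper):
    # clamp the learned curiosity reward so that a hard-wired
    # harm-avoidance penalty always dominates the total reward.

    CURIOSITY_CAP = 1.0    # maximum magnitude of any curiosity reward
    HARM_PENALTY = -10.0   # hard-wired penalty, far larger in magnitude

    def total_reward(curiosity_reward: float, harm_detected: bool) -> float:
        """Return the agent's scalar reward for one time step."""
        # Clamp the curiosity term into [-CURIOSITY_CAP, CURIOSITY_CAP].
        curiosity = max(-CURIOSITY_CAP, min(CURIOSITY_CAP, curiosity_reward))
        # The harm signal is fixed and cannot be outweighed by curiosity.
        return curiosity + (HARM_PENALTY if harm_detected else 0.0)

    # Even maximal curiosity cannot make a harmful action look attractive:
    assert total_reward(5.0, harm_detected=True) < total_reward(-5.0, harm_detected=False)

The design point is that no learned reward can grow past the cap, so the hard-wired harm signal keeps veto power over anything the curiosity system comes up with.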

Tuesday, July 11, 2006

July Update

I've been pretty busy with my internship this summer, so I haven't done much work with Verve. It's probably best that I wait till I'm done at IBM, anyway, to avoid any conflicts of interest. The internship itself is going well. I'll post more details when it's over... maybe after we've submitted some things to be published.

Next week is the AAAI 2006 conference in Boston. I'll be there to present a poster in the student abstract program. My abstract is titled "Curiosity-Driven Exploration with Planning Trajectories." More details are here.