Thursday, July 13, 2006

Ethical Issues in Advanced Artificial Intelligence

Why should we study intelligence (either artificial or biological) and develop intelligent machines? What is the benefit of doing so? About a year ago I settled on my answer to these questions:

To the extent that scientific knowledge is intrinsically interesting to us, the knowledge of how our brains work is probably the most interesting topic to study. To the extent that technology is useful to us, intelligent machines are probably the most useful technology to develop.

I have been meaning to write up a more detailed justification of this answer, but now I don't have to, because I just read a great paper: "Ethical Issues in Advanced Artificial Intelligence," by Nick Bostrom. I think it makes a clear case for developing intelligent machines, and it touches on all the important issues, in my opinion.

A few points really resonated with me. Here are some succinct excerpts:
  1. "Superintelligence may be the last invention humans ever need to make."
  2. "Artificial intellects need not have humanlike motives."
  3. "If a superintelligence starts out with a friendly top goal, however, then it can be relied on to stay friendly, or at least not to deliberately rid itself of its friendliness."
The third point, I believe, adequately addresses the fear of intelligent machines going rogue and taking over the world. It comes down to motivation: a machine with no intrinsic motivation to harm anyone will not do so. There are some caveats to this, some of which are discussed in the paper, but I don't think any of them are insurmountable.

First, random exploration during development in an unpredictable world will inevitably cause damage to someone or something. I don't think this is a major problem, as long as the machine is sufficiently contained during development. Second, a machine with a curiosity-driven motivation system could effectively construct arbitrary value systems over time. One solution is to scale the magnitude of any "curiosity rewards" so they are always smaller than a hard-wired reward for avoiding harm. Third, a machine that can change its own code might change its motivations into harmful ones. Hard-coding a pain signal for any code modification would help combat this problem, and if any critical code or hardware were modified, the whole machine could shut itself down.

Of course, malicious or careless programmers might build intelligent machines with harmful motivations, but that is a separate issue from statement #3 above.
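To make the second and third caveats concrete, here is a minimal toy sketch of what I have in mind. It is purely my own illustration, not anything from Bostrom's paper: a reward function that clips curiosity rewards so they can never outweigh a hard-wired harm penalty, plus a simple integrity check that shuts the agent down if its critical code is modified. All of the names and numbers (HARM_PENALTY, CURIOSITY_CAP, critical_code_hash, and so on) are hypothetical placeholders.

```python
import hashlib
import sys

# Hard-wired constants: the harm penalty dominates any possible curiosity reward.
HARM_PENALTY = -100.0   # penalty for actions predicted to cause harm
CURIOSITY_CAP = 1.0     # ceiling on any curiosity-driven reward

def total_reward(curiosity_reward: float, predicted_harm: bool) -> float:
    """Combine a learned curiosity signal with the hard-wired harm-avoidance term.

    The curiosity term is clipped so that no amount of novelty-seeking
    can outweigh the penalty for a single harmful action.
    """
    clipped_curiosity = max(min(curiosity_reward, CURIOSITY_CAP), -CURIOSITY_CAP)
    return clipped_curiosity + (HARM_PENALTY if predicted_harm else 0.0)

def critical_code_hash(path: str) -> str:
    """Hash the agent's critical code so that modifications can be detected."""
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

def integrity_check(path: str, expected_hash: str) -> None:
    """Shut the whole agent down if its critical code has been modified."""
    if critical_code_hash(path) != expected_hash:
        print("Critical code modified; shutting down.")
        sys.exit(1)

# Example: a very "interesting" but harmless action still earns far less
# than the cost of a single harmful one.
print(total_reward(curiosity_reward=50.0, predicted_harm=False))  # 1.0
print(total_reward(curiosity_reward=50.0, predicted_harm=True))   # -99.0
```

The design point is simply that the harm term is fixed and dominant by construction, so whatever value system the curiosity signal builds up over time, it can never make trading harm for novelty look worthwhile.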
