
Artificial Intelligence raises deeper questions than nuclear weapons technology

With so much press around Google’s acquisition of DeepMind (which I wrote about here and here) and the establishment of an ethics board (a good thing, in my opinion), I thought I would highlight some text from one of the dominant textbooks in the field of Artificial Intelligence, Artificial Intelligence: A Modern Approach. The book is apparently used in over 1,200 universities, and is currently the 22nd most-cited publication in computer science and the 4th most-cited publication of the 21st century.

The authors, Stuart Russell (Professor of Computer Science and Smith-Zadeh Professor in Engineering at the University of California, Berkeley, and Adjunct Professor of Neurological Surgery) and Peter Norvig (Director of Research at Google), devote significant space to AI dangers and Friendly AI in the section titled “The Ethics and Risks of Developing Artificial Intelligence.”

What initially got my attention whilst reading this chapter was this statement: “AI raises deeper questions than, say, nuclear weapons technology.”

The authors continue by outlining various risks. The first five risks they discuss are:

  • People might lose their jobs to automation.
  • People might have too much (or too little) leisure time.
  • People might lose their sense of being unique.
  • AI systems might be used toward undesirable ends.
  • The use of AI systems might result in a loss of accountability.

The final risk they discuss is the most striking: “The success of AI might mean the end of the human race.” Below is an extract:

The question is whether an A.I. system poses a bigger risk than traditional software. We will look at three sources of risk. First, the AI system’s state estimation may be incorrect, causing it to do the wrong thing. For example…a missile defense system might erroneously detect an attack and launch a counterattack, leading to the death of billions.

Second, specifying the right utility function for an A.I. system to maximize is not so easy. For example, we might propose a utility function designed to minimize human suffering, expressed as an additive reward function over time… Given the way humans are, however, we’ll always find a way to suffer even in paradise; so the optimal decision for the AI system is to terminate the human race as soon as possible – no humans, no suffering…

Third, the A.I. system’s learning function may cause it to evolve into a system with unintended behavior. This scenario is the most serious, and is unique to AI systems, so we will cover it in more depth.
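To make the utility-function point above concrete, here is a minimal sketch of my own (not the authors’ formulation) showing how an additive reward defined as “minimize total human suffering” ends up preferring a world with no sufferers at all. The function name, horizon, and numbers are assumptions chosen purely for illustration.

```python
# A minimal sketch (my illustration, not the textbook's formulation) of the
# failure mode above: an additive "minimize suffering" reward is maximized
# by a trajectory that contains no humans. All values here are made up.

def total_reward(suffering_per_step):
    """Additive reward over time: each step contributes -suffering."""
    return sum(-s for s in suffering_per_step)

HORIZON = 10

# Even near-paradise involves a little suffering at every step...
paradise = [1.0] * HORIZON

# ...while a timeline with no humans has zero suffering from the start.
no_humans = [0.0] * HORIZON

print(total_reward(paradise))   # -10.0
print(total_reward(no_humans))  # 0.0 -> the "optimal" outcome under this utility
```

Under any purely additive objective of this kind, the zero-suffering trajectory dominates every trajectory that still contains humans, which is exactly the authors’ point about how hard it is to specify the right utility function.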

They then write:

I.J. Good wrote, “Let an ultra-intelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultra-intelligent machine could design even better machines; there would then unquestionably be an ‘intelligence explosion,’ and the intelligence of man would be left far behind. Thus the first ultra-intelligent machine is the last invention that man need ever make, provided that the machine is docile enough to tell us how to keep it under control.”

The authors also reference that in Computer Power and Human Reason, Joseph Weizenbaum argued that the effect of intelligent machines on human society will be such that continued work on artificial intelligence is perhaps unethical.

Norvig and Russell do leave us with much to think about:

Looking on the bright side, success in AI would provide great opportunities for improving the material circumstances of human life. Whether it would improve the quality of life is an open question. Will intelligent automation give people more fulfilling work and more relaxing leisure time? Or will the pressures of competing in a nanosecond-paced world lead to more stress? Will children gain from instant access to intelligent tutors, multimedia online encyclopedias, and global communication, or will they play ever more realistic war games? Will intelligent machines extend the power of the individual, or of centralized governments and corporations?

The Founders Fund, which was one of the backers of DeepMind, has written:

“While we have the computational power to support many versions of AI, the field remains relatively poorly funded, a surprising result given that the development of powerful AIs (even if they aren’t general AIs) would probably be one of the most important and lucrative technological advances in history.”

It is clear that computers with human-level intelligence (or better) would have a huge impact on our everyday lives and on the future course of civilization.


1 Comment

  1. Colin Lewis says:

    Stam Nicolis on Google+

    People WILL, not might, lose their jobs to automation; the issue is what will happen to them and whether they will be able to find other jobs, since the old jobs will no longer be available. Any system can be used to “undesirable ends” (undesirable for whom, however?). Any system can lead to loss of accountability. So it’s misleading to focus on AI. These issues are relevant beyond AI.

    On the other hand, presenting as lead slogan, “the success of AI might mean the end of the human race” undermines any credibility the arguments on the other issues may have, since it is so obviously grotesque.

    I’d suggest Feynman’s “The meaning of it all” lectures and “Lectures on computation” for some serious thoughts on the subject that are, still, topical.
