
A.I. and Bounded Optimality – a driving force for technological development

Most people, most of the time, make decisions with little awareness of what they are doing: driving on autopilot, brushing our teeth, and so on. We are often not ‘mindful’ in such circumstances, yet most of our judgments and actions are appropriate, most of the time. But not always!

While we meander along on autopilot, researchers in Artificial Intelligence seek to create human-level intelligence in their machines. Some even speak of human-level consciousness as the goal for A.I. machines, while others consider machines still as mindless as toothpicks. Professor Stuart Russell argues that his own motivation for the study of A.I., and that of researchers in the field, should be:

“To create and understand intelligence as a general property of systems, rather than as a specific attribute of humans. I believe this to be an appropriate goal for the field as a whole.”

Professor Russell, co-author with Peter Norvig of the seminal textbook Artificial Intelligence: A Modern Approach, has released a new paper, Rationality and Intelligence: A Brief Update, which sets out his ‘informal conception of intelligence’, ‘reduces the gap between theory and practice’, and describes ‘promising recent developments.’

Setting the A.I. scene

In his paper, Russell characterizes the goal of early A.I. researchers as follows:

“The standard (early) conception of an AI system was as a sort of consultant: something that could be fed information and could then answer questions. The output of answers was not thought of as an action about which the AI system had a choice, any more than a calculator has a choice about what numbers to display on its screen given the sequence of keys pressed.”

To some extent, a recent paper by Facebook Artificial Intelligence Research scientists Jason Weston, Sumit Chopra and Antoine Bordes, entitled “Memory Networks”, demonstrates the concept:

Memory Networks use a kind of associative memory to store and retrieve internal representations of observations. An interesting aspect of Memory Networks is that they can learn simple forms of “common sense” by “observing” descriptions of events in a simulated world. The system is trained to answer questions about the state of the world after having been told a sequence of events happening in that world. It automatically learns simple regularities in the world, such as: asked “Antoine picks up the bottle and walks into the kitchen with it; where does he take the bottle?”, the answer would be “the bottle will be in the kitchen.”

Here is an example of what the system can do. After having been trained, it was fed the following short story containing key events from J.R.R. Tolkien’s Lord of the Rings:

Bilbo travelled to the cave.

Gollum dropped the ring there.

Bilbo took the ring.

Bilbo went back to the Shire.

Bilbo left the ring there.

Frodo got the ring.

Frodo journeyed to Mount-Doom.

Frodo dropped the ring there.

Sauron died.

Frodo went back to the Shire.

Bilbo travelled to the Grey-havens.

The End.

After seeing this text, the system was asked a few questions, to which it provided the following answers:

Q: Where is the ring?

A: Mount-Doom

Q: Where is Bilbo now?

A: Grey-havens

Q: Where is Frodo now?

A: Shire
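
The real Memory Networks model learns this behavior from data; to make the task itself concrete, here is a toy Python sketch that hard-codes the kind of state tracking the network must learn. The sentence patterns and bookkeeping below are invented for this post and bear no relation to the actual architecture:

```python
import re

def answer_questions(story, questions):
    """Hand-coded stand-in for what a trained Memory Network infers.

    The regex patterns below cover only this toy story."""
    person_loc = {}    # person -> current place
    object_loc = {}    # object -> place it was last put down
    carrying = {}      # object -> person currently holding it
    last_place = None  # referent for "there"

    for line in story:
        if m := re.match(r"(\w+) (?:travelled|went back|journeyed) to (?:the )?([\w-]+)", line):
            person_loc[m.group(1)] = last_place = m.group(2)
        elif m := re.match(r"(\w+) (?:took|got) the (\w+)", line):
            carrying[m.group(2)] = m.group(1)
            object_loc.pop(m.group(2), None)
        elif m := re.match(r"(\w+) (?:dropped|left) the (\w+)", line):
            person, obj = m.groups()
            object_loc[obj] = person_loc.get(person, last_place)
            carrying.pop(obj, None)

    answers = []
    for q in questions:
        name = re.match(r"Where is (?:the )?([\w-]+)", q).group(1)
        if name in carrying:                  # object still in someone's hands
            answers.append(person_loc[carrying[name]])
        elif name in object_loc:              # object at rest somewhere
            answers.append(object_loc[name])
        else:                                 # a person
            answers.append(person_loc.get(name))
    return answers

story = [
    "Bilbo travelled to the cave.", "Gollum dropped the ring there.",
    "Bilbo took the ring.", "Bilbo went back to the Shire.",
    "Bilbo left the ring there.", "Frodo got the ring.",
    "Frodo journeyed to Mount-Doom.", "Frodo dropped the ring there.",
    "Sauron died.", "Frodo went back to the Shire.",
    "Bilbo travelled to the Grey-havens.",
]
print(answer_questions(story, ["Where is the ring?",
                               "Where is Bilbo now?",
                               "Where is Frodo now?"]))
# -> ['Mount-Doom', 'Grey-havens', 'Shire']
```

The point of the learned model, of course, is that it discovers regularities like “a dropped object stays where the dropper is” from examples, rather than from hand-written rules like these.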

Another example of a neural network with memory is the recent Google DeepMind paper, “Neural Turing Machines.” It is quite a bit more complicated than Memory Networks and has not been demonstrated (at least not in public) on tasks such as question answering, but it is fair to assume this is one of Google’s goals, given their desire to create the Star Trek computer.

Beyond the Turing Test

Setting out his informal conception of intelligence and a definition of artificial intelligence, Russell explains that:

“A definition of intelligence needs to be formal—a property of the system’s input, structure, and output—so that it can support analysis and synthesis. The Turing test does not meet this requirement.”

He further lays out the steps the A.I. research community has taken towards defining what machine intelligence is (and, by implication, is not).

Russell then updates the four candidate frameworks for rationality he previously outlined as the basis for creating artificial intelligence (Russell 1997).

Although he previously gave credit to bounded rationality, Russell now omits it in favor of what he calls metalevel rationality. He previously described Herb Simon’s work on bounded rationality as follows:

Bounded rationality. “Herbert Simon rejected the notion of perfect (or even approximately perfect) rationality and replaced it with bounded rationality, a descriptive theory of decision making by real agents.” Simon wrote:

The capacity of the human mind for formulating and solving complex problems is very small compared with the size of the problems whose solution is required for objectively rational behavior in the real world, or even for a reasonable approximation to such objective rationality.

Simon suggested that bounded rationality works primarily by satisficing — that is, deliberating only long enough to come up with an answer that is “good enough.”

Herb Simon won the Nobel Prize in Economics for this work, and it appears to be a useful model of human behavior in many cases. Russell argues, however, that it is not a formal specification for intelligent agents, because the definition of ‘good enough’ is not given by the theory. Furthermore, satisficing seems to be just one of a large range of methods used to cope with bounded resources.
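
As a minimal sketch of the idea in Python (the candidate stream and the threshold below are invented for illustration, and the threshold is exactly the part Russell notes the theory leaves unspecified):

```python
import random

def satisfice(candidates, quality, good_enough):
    """Return the first candidate whose quality clears the bar,
    instead of examining every candidate to find the maximum."""
    for option in candidates:
        if quality(option) >= good_enough:
            return option          # stop deliberating: it's good enough
    return None                    # nothing acceptable was found

# Example: accept the first price quote at or under budget, rather than
# collecting all quotes to find the cheapest. We negate the price so that
# "higher quality is better"; good_enough=-80 means "at most 80".
random.seed(1)
quotes = (random.uniform(50, 150) for _ in range(20))   # lazy stream
choice = satisfice(quotes, quality=lambda price: -price, good_enough=-80)
print(f"accepted quote: {choice:.2f}" if choice else "no quote under 80")
```

Because the quotes are generated lazily, the satisficer deliberates only as long as it takes to find an acceptable option, which is the essence of Simon’s proposal.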

The four areas Russell outlines in his new paper are:

  1. Perfect rationality. A perfectly rational agent acts at every instant in such a way as to maximize its expected utility, given the information it has acquired from the environment. He says that the calculations necessary to achieve perfect rationality in most environments are too time-consuming, so perfect rationality is not a realistic goal.
  2. Calculative rationality. Russell writes that a “calculatively rational agent eventually returns what would have been the rational choice… at the beginning of its deliberation.” This is an interesting property for a system to exhibit, but in most environments, the right answer at the wrong time is of no value. He explains that in practice, “A.I. system designers are forced to compromise on decision quality to obtain reasonable overall performance; unfortunately, the theoretical basis of calculative rationality does not provide a well-founded way to make such compromises.”
  3. Metalevel rationality (also called Type II rationality by I. J. Good, Alan Turing’s long-term collaborator): the capacity to select the optimal combination of computation-sequence-plus-action, under the constraint that the action must be selected by the computation.
  4. Bounded optimality. Russell writes: “A bounded optimal agent behaves as well as possible, given its computational resources. That is, the expected utility of the agent program for a bounded optimal agent is at least as high as the expected utility of any other agent program running on the same machine.” (A formal sketch of this definition follows below.)
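
Russell’s verbal definition in item 4 can be written compactly. The following is a sketch only; the notation loosely follows the shape of Russell and Subramanian’s formulation, with symbols chosen for this post rather than taken verbatim from their paper:

```latex
% Bounded optimality selects the best *program*, not the best action.
% Notation is illustrative, not verbatim from Russell & Subramanian (1995).
\[
  \ell_{\mathrm{opt}} \;=\; \operatorname*{arg\,max}_{\ell \in \mathcal{L}_M}
    V\bigl(\mathrm{Agent}(\ell, M),\ \mathbf{E},\ U\bigr)
\]
```

Here \(\mathcal{L}_M\) is the set of all programs that can run on machine \(M\), \(\mathbf{E}\) is the class of environments, \(U\) is the performance measure, and \(V\) is the expected utility of running agent program \(\ell\) on \(M\) in \(\mathbf{E}\). Because \(\mathcal{L}_M\) is a fixed, non-empty set, the maximum always exists.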

Of these four possibilities, Russell says, “bounded optimality seems to offer the best hope for a strong theoretical foundation for A.I.” It has the advantage of being achievable: there is always at least one best program, something that perfect rationality lacks. Bounded optimal agents are actually useful in the real world, whereas calculatively rational agents usually are not, and satisficing agents may or may not be, depending on how ambitious they are. Russell writes that if a true science of intelligent agent design is to emerge, it will have to operate within the framework of bounded optimality:

“My work with Devika Subramanian placed the general idea of bounded optimality in a formal setting and derived the first rigorous results on bounded optimal programs (Russell and Subramanian, 1995). This required setting up completely specified relationships among agents, programs, machines, environments, and time. We found this to be a very valuable exercise in itself. For example, the informal notions of “real-time environments” and “deadlines” ended up with definitions rather different than those we had initially imagined. From this foundation, a very simple machine architecture was investigated in which the program consists of a collection of decision procedures with fixed execution time and decision quality.”
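
To make that last architecture concrete, here is a toy Python sketch; it is not Russell and Subramanian’s actual construction, and all runtimes, qualities, and the deadline distribution are invented. Each candidate agent program is a single decision procedure with a fixed execution time and a fixed decision quality, a decision delivered after a stochastic deadline is worth nothing, and the bounded optimal program is simply the procedure with the highest expected utility:

```python
import random

# Candidate agent programs: (name, execution time in seconds, decision
# quality). The numbers are invented for illustration.
procedures = [
    ("fast_heuristic",   0.1, 0.60),
    ("careful_search",   1.0, 0.85),
    ("exhaustive_solve", 5.0, 0.99),
]

def expected_utility(runtime, quality, deadlines):
    """Quality counts only when the decision arrives before the deadline."""
    on_time = sum(runtime <= d for d in deadlines) / len(deadlines)
    return quality * on_time

random.seed(0)
# Sampled deadlines from an (assumed) exponential distribution, mean 5 s.
deadlines = [random.expovariate(1 / 5.0) for _ in range(100_000)]

for name, rt, q in procedures:
    print(f"{name:16s} expected utility = {expected_utility(rt, q, deadlines):.3f}")

best = max(procedures, key=lambda p: expected_utility(p[1], p[2], deadlines))
print("bounded optimal program:", best[0])
```

With these numbers, neither the fastest nor the highest-quality procedure wins; the bounded optimal program is the one that best balances decision quality against the risk of missing the deadline.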

Professor Russell’s paper offers a very detailed analysis of A.I. work to date and of the options in the near future. In a reminder to the A.I. community about the controls we will need to maintain over machines, Russell proposes bounded optimality as a formal task for artificial intelligence research that is both well-defined and feasible. Bounded optimality specifies optimal programs rather than optimal actions. Actions are generated by programs, and it is over programs that designers have control – for now!
