28 January, 2003
AI text
Having pointed out what he sees as flaws in current methods of developing AI, Brooks goes on to explain how he thinks it should be done. He calls this concept "incremental intelligence".
When attempting to develop artificial intelligence, it's a good idea to look at natural intelligence and see how it developed. The important thing to note about the way natural intelligence developed is the timescales involved. It took evolution three billion years to go from single-celled organisms to the first fish: that's a significant increase in complexity, including movement, vision, and a whole bunch of survival instincts. It took only a quarter of a billion years -- roughly a twelfth of that time -- to go from the first fish to the first beings who were biologically human. That involved refinement of all our senses, as well as more advanced limbs (including our hands), bipedal movement, and stereoscopic vision, or depth perception. From there, it took us 2 million years to invent agriculture, and once we'd done that it took us only 20,000 years to invent everything else we've ever come up with. Those last two jumps each took approximately one one-hundredth of the time the previous jump took. So given that things like vision, movement, and survival instincts took several orders of magnitude longer to develop than the ability to play chess, why is it that AI researchers regard chess as the hard problem?
The concept behind incremental intelligence is that intelligent behaviour can be produced by the combination of simpler systems. Human beings can be viewed as an "existence proof" that this method can be successful, as we are the result of small improvements slowly piled on top of each other through evolution. The structure of our brains loosely reflects this, with the ancient brainstem at the center overlaid by the limbic system (including the hypothalamus), and that in turn overlaid by the cortex. The idea is to develop fairly simple systems that perform simple tasks, but extremely well, in a realistic and complex real-world environment, and then to slowly combine these systems into a single unit, called a "Creature", such that the simple systems constructively influence each other to produce behaviour that is more intelligent than any of them could produce alone. Brooks calls these simple systems layers.
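To make that concrete, here's a tiny sketch in Python. It's purely my own illustration -- the names (WanderLayer, AvoidLayer) and the dictionary of sensor readings are made up, not anything Brooks built -- but it shows two trivially simple layers whose combination gives behaviour that neither produces on its own: one wanders at random, the other vetoes any command that would drive into an obstacle.

    import random

    class WanderLayer:
        """Proposes a random heading; on its own the Creature just drifts about."""
        def act(self, sensors):
            return {"heading": random.uniform(0, 360), "speed": 0.5}

    class AvoidLayer:
        """Corrects any command that would steer into a sensed obstacle."""
        def act(self, sensors, command):
            if sensors.get("obstacle_ahead"):
                command["heading"] = (command["heading"] + 180) % 360  # turn away
                command["speed"] = 0.1                                 # and slow down
            return command

    class Creature:
        """The combination: wander proposes, avoid corrects, every tick."""
        def __init__(self):
            self.wander, self.avoid = WanderLayer(), AvoidLayer()
        def step(self, sensors):
            return self.avoid.act(sensors, self.wander.act(sensors))

    # One tick of the combined system: exploratory, but obstacle-shy.
    print(Creature().step({"obstacle_ahead": True}))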
Brooks places a lot of emphasis on the ability of the Creatures to function in the real world. This fits in with the evolutionary idea: if a creature could not survive independently as a simple system, it would never have survived long enough to develop a new layer of complexity. The requirements he places on the Creatures he develops are strict (a rough sketch of what they might mean in code follows the list):
- It must be sane: the Creature must cope appropriately with changes in the world around it.
- It must be effectively real-time: the Creature must react in a timely fashion to changes going on around it.
- It must be robust: no small change, or cumulative series of small changes, should "wreck" the Creature. It should simply become progressively less effective, or modify its behaviour to maintain its efficiency, as a real creature would.
- It must be able to maintain multiple goals, and it should be able to adapt to its environment: it should be able to take advantage of circumstances which favour one goal over another, and pursue that goal.
- Finally, the Creature must do something: it should have some kind of purpose.
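Read as code, those requirements might look something like the loop below. This is only a sketch under my own assumptions (the fixed tick length, and the idea that a misbehaving layer is simply skipped for that tick); Brooks built his Creatures out of hardware, not a Python loop. The point is that the loop runs at a steady rate so reactions stay timely, that one broken layer degrades the Creature rather than wrecking it, and that every surviving layer keeps pursuing its own goal.

    import time

    class DummyLayer:
        """Stand-in: a real layer would pursue some goal of its own here."""
        def act(self, sensors):
            pass

    def run(layers, read_sensors, tick=0.05, ticks=5):
        for _ in range(ticks):
            started = time.monotonic()
            sensors = read_sensors()
            for layer in layers:
                try:
                    layer.act(sensors)   # each layer chases its own goal, every tick
                except Exception:
                    continue             # a failing layer is skipped, not fatal
            # sleep off whatever is left of the tick so reactions stay timely
            time.sleep(max(0.0, tick - (time.monotonic() - started)))

    run([DummyLayer(), DummyLayer()], lambda: {"obstacle_ahead": False})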
Brooks then considers how one would go about engineering something like a Creature. Like any engineering project, it must be split into smaller tasks -- but there are two ways to divide this one. The first is by function. This is the way traditional AI has attempted it: create a central module which handles all processing -- all "thought" -- and then connect it to some input and output devices, which in general AI tends to abstract away, effectively saying "we'll get to that later". The second method, the one used by incremental intelligence, is to divide by activity. Incremental intelligence produces layers, each of which performs a single basic function, and then combines them to operate in parallel, competing with and influencing each other to produce the Creature's overall behaviour.
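The two decompositions are easiest to see side by side. Again a sketch, with invented details: the functional version funnels everything through one central model and one planner, while the activity version gives each behaviour its own complete sense-to-action path and lets the results compete.

    # Decomposition by function: one central pipeline; all "thought" passes through it.
    def functional_agent(sensors):
        model = {"blocked": sensors["obstacle_ahead"]}    # build the central world model
        plan = "turn" if model["blocked"] else "forward"  # one planner reasons over it
        return plan                                       # one executor acts on the plan

    # Decomposition by activity: each layer is a complete sense-act behaviour of its own.
    def avoid_layer(sensors):
        return "turn" if sensors["obstacle_ahead"] else None  # only speaks up when relevant

    def wander_layer(sensors):
        return "forward"                                       # always has something to do

    def activity_agent(sensors):
        for layer in (avoid_layer, wander_layer):   # layers are independent; here the first
            command = layer(sensors)                # one with something to say wins this tick
            if command is not None:
                return command

    sensors = {"obstacle_ahead": False}
    assert functional_agent(sensors) == activity_agent(sensors) == "forward"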
Again, Brooks stresses the importance of real-world testing. The Creatures must be built in the real world and tested continuously in the real world throughout development. Any simplification made during development can lead to a subtle dependence on that simplification, and that dependence, even in a single layer, will then be picked up by every other layer it influences, which can severely affect the performance of a Creature.
This is a radically different approach from the traditional one built around a central processor. That method, the functional method, is very focussed on representation: the world is broken down into representations, rules, choices, objects and symbols, which are then applied and processed to make decisions and suggest actions to be taken. In the incremental intelligence method, the method of breaking down by activity, there is no central representation of any kind, and no explicit representation of goals. There are multiple parallel layers, each of which decides the appropriateness of its own goals and follows them selfishly, relying on its interactions with the other layers to produce behaviour which, overall, can be regarded as intelligent. The low-level systems provide quick and basic instincts, just as in the human body, while more complex interactions, which are slower, can produce more reasoned behaviour.
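A sketch of that division of labour, with the tick rates and numbers made up for illustration: the reflex layer keeps no model of the world at all and reacts every tick, while a slower layer only runs now and then and does nothing but nudge the threshold the reflex layer reacts to. Neither consults any shared representation.

    class ReflexLayer:
        """Fast instinct: keeps no state, just reacts to what it sees this tick."""
        def act(self, distance, caution):
            return "retreat" if distance < caution else "advance"

    class CautiousLayer:
        """Slower influence: its only memory is how nervous recent scares have made it."""
        def __init__(self):
            self.caution = 1.0
        def act(self, distance):
            if distance < 1.0:
                self.caution = min(3.0, self.caution + 0.5)  # a scare raises the threshold
            else:
                self.caution = max(1.0, self.caution - 0.1)  # calm slowly restores it
            return self.caution

    reflex, cautious = ReflexLayer(), CautiousLayer()
    caution = 1.0
    for tick, distance in enumerate([0.5, 2.0, 2.0, 2.0, 2.0]):
        if tick % 2 == 0:                     # the slower layer only runs every other tick
            caution = cautious.act(distance)
        print(reflex.act(distance, caution))  # the reflex runs every single tick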
Again, this layered approach is very like the way biology does things, and it has a number of advantages. The first is stability. Multiple parallel systems are inherently more stable than a single central system: if conditions arise that send one layer berserk, that will not necessarily bring down the whole Creature. The second is performance. In a central system, adding more rules or more goals inevitably complicates the processing of even the simplest tasks, slowing down the system as a whole, eventually to an unusable degree. With multiple parallel systems, adding another goal is simply a matter of adding more hardware, and since it runs in parallel it does not affect the performance of the other layers in any way, even if it does modify their inputs or outputs, as we will see in the examination of some of the Creatures Brooks actually built.
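That last claim is easy to illustrate in the same toy style -- sequentially here, for simplicity, though the whole point of Brooks' approach is that real layers each get their own hardware and run in parallel. Adding a new goal means adding a new layer; the existing layers' code is untouched, even though the newcomer may overwrite what they produce.

    def wander(sensors, command):
        command["speed"] = 1.0                 # original goal: keep moving
        return command

    def avoid(sensors, command):
        if sensors.get("obstacle"):
            command["speed"] = 0.0             # original goal: don't crash
        return command

    layers = [wander, avoid]

    # A new goal arrives later: conserve power. Nothing above changes; we only add
    # a layer that sits after the others and may rewrite their output.
    def conserve_power(sensors, command):
        if sensors.get("battery", 1.0) < 0.2:
            command["speed"] = min(command["speed"], 0.3)
        return command

    layers.append(conserve_power)

    def step(sensors):
        command = {}
        for layer in layers:                   # every layer still runs exactly as before
            command = layer(sensors, command)
        return command

    print(step({"obstacle": False, "battery": 0.1}))   # -> {'speed': 0.3}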
Above all, there is no explicit representation of the world or of the intentions of the system. Each layer has its own minor goal, and the opportunistic combination of many goals suited to different situations produces behaviour that is appropriate to any situation. Brooks and his team think that complex behaviour may not be the result of complex creatures, but the result of quite simple creatures interacting with a complex world.