The answer has to do with architecture. Intelligence is able to effect conscious control of events on short time scales because it confers the ability to plan ahead, making predictions of future events based on information already received and thus overcoming its own inherent neurobiological speed limits. Very roughly speaking, consciousness can stay one or more steps ahead of the situation by using the predictive machinery of intelligence and the storehouse of memory to set goals, and then modify or extend those goals based on what is actually happening.
This is most probably achieved through some biological analogue of an architectural property known to neural network enthusiasts as autoassociation, in which the full retrieval of a stored informational pattern (in this case a neurological representation) is triggered by the input of a partial but matching subset of the full pattern, given an appropriately configured network. This allows the complete perception of a learned pattern before it has finished unfolding in time or space, and it is why you can recognize partially occluded objects or anticipate the progress of familiar sequences. The cascading of this predictive effect upwards through the layers of representations (with each layer adding its own predictions) results in more and more sophisticated predictions spanning longer and longer time frames, ultimately pushing the operation of entire subsets of successful lower-level predictive representations below the threshold of awareness.
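To make the idea of autoassociation a bit more concrete, here is a minimal sketch in Python of a classic Hopfield-style autoassociative memory. It is an illustration of the general principle, not a model of any biological circuit, and the pattern and names are invented for the example: a stored +1/-1 pattern is recovered in full from a cue in which half of the entries are wrong.

import numpy as np

# Minimal sketch of autoassociative (Hopfield-style) pattern completion.
# One +1/-1 pattern is stored with a Hebbian outer-product rule; a cue
# with several wrong entries is then completed back to the full pattern.

def train(patterns):
    # Hebbian weights: sum of outer products, no self-connections.
    n = patterns.shape[1]
    w = np.zeros((n, n))
    for p in patterns:
        w += np.outer(p, p)
    np.fill_diagonal(w, 0)
    return w / len(patterns)

def recall(w, cue, steps=10):
    # Repeatedly update every unit until the network settles.
    state = cue.copy()
    for _ in range(steps):
        state = np.sign(w @ state).astype(int)
        state[state == 0] = 1
    return state

stored = np.array([[1, -1, 1, 1, -1, -1, 1, -1]])
weights = train(stored)

cue = stored[0].copy()
cue[4:] = 1                                # a partial, partly wrong version of the pattern
print("cue:     ", cue)
print("recalled:", recall(weights, cue))   # matches the stored pattern
print("stored:  ", stored[0])

The same principle scales up: with many stored patterns and an appropriately configured network, whichever stored pattern best matches the partial input is the one retrieved, which is exactly the property the paragraph above leans on.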
On a fundamental level, prediction is central to intelligence.
The clearest articulation of this predictive principle I have yet come across is the "memory-prediction" framework presented by Jeff Hawkins (video link courtesy zenpundit) in his 2004 book "On Intelligence," written with Sandra Blakeslee. "On Intelligence" provides a valuable overview of a number of topics related to the general problem of intelligence, including chapters on Artificial Intelligence, Neural Networks, and the Human Brain, in order to better argue the key roles of Memory and Prediction in what intelligence does, all on its way to expressing a more specific theory, grounded in cortical architecture, of how intelligence might actually work. While I am not yet well equipped to critique the specifics of his proposed theory, I find the resonances between his argument for intelligence-as-prediction and my extensive experience teaching, practicing, and performing as a professional cellist to be compelling.
From chapter 5 of "On Intelligence":
...[Y]our brain makes low-level sensory predictions about what it expects to see, hear, and feel at every given moment, and it does so in parallel. All regions of your neocortex are simultaneously trying to predict what their next experience will be. Visual areas make predictions about edges, shapes, objects, locations, and motions. Auditory areas make predictions about tones, direction to source, and patterns of sound. Somatosensory areas make predictions about touch, texture, contour, and temperature.
"Prediction" means that the neurons involved in sensing your door become active in advance of them actually receiving sensory input. When the sensory input does arrive, it is compared with what was expected. As you approach the door, your cortex is forming a slew of predictions based on past experience. As you reach out, it predicts what you will feel on your fingers, when you will feel the door, and at what angle your joints will be when you actually touch the door. As you start to push the door open, your cortex predicts how much resistance the door will offer and how it will sound. When your predictions are all met, you'll walk through the door without consciously knowing these predictions were verified. But if your expectations about the door are violated, the error will cause you to take notice. Correct predictions result in understanding. The door is normal. Incorrect predictions result in confusion and prompt you to pay attention...We are making continuous low-level predictions in parallel across all our senses.
The above excerpt emphasizes the way parallel predictions across sensory modalities can be combined into rich representations. Let me offer two loosely paraphrased examples to illustrate the way Hawkins believes serial combinations might work.
1) When you learned to read, you started (like all of us) by learning to recognize letters. Once your letter recognition models became sufficiently reliable, your brain was able to use their output as the basis for constructing new predictive models that could recognize entire words (a toy sketch of this layering follows the second example below). This reliability is why it is no longer necessary for you to consciously process every single letter when you read, although you can, if you focus your attention appropriately. Extending the example, we can say that further modeling of phrase units, sentence structure, and the rules of grammar and composition is the work of yet higher layers. Your memory-predictive mastery of lower-level tasks such as letter and word recognition frees up enough processing power to employ higher-level representations, and your resulting knowledge of vocabulary and grammar actually allows you to predict what you will read before you finish each word or senten (see?)
2) The expression "muscle memory" is often used to describe deeply learned complex movements, especially those of musicians and other performing athletes. The speed and relaxed precision of their movements are due to the layers of highly accurate predictive neurological representations of body structure and function (I expect my elbow to be here when I do this), integrated with equally well developed predictive models of the execution of the task at hand, whether it's meeting a ball with a diving catch or playing a cello concerto from memory. The unconscious ease of a physical talent at work bespeaks entire subsets of accurate models pushed below the threshold of consciousness, whether they were discovered quickly and "naturally" (a great definition of intuition!) or learned and refined more slowly and painfully through extended trial and error.
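To make the first example slightly more concrete, here is a toy sketch, entirely my own invention rather than anything from the book, of two stacked layers: a lower layer delivers letters one at a time, and a higher "word" layer, holding a small vocabulary, keeps predicting the completed word and stops attending to individual letters once only one candidate remains.

# Toy two-layer sketch of the reading example: a word layer predicts the
# rest of a word before the letter layer has finished delivering it.
# The vocabulary and all names here are invented for illustration.

VOCAB = ["sentence", "sentiment", "cello", "cellar", "prediction"]

def read_word(letters):
    prefix = ""
    for ch in letters:
        prefix += ch
        candidates = [w for w in VOCAB if w.startswith(prefix)]
        if len(candidates) == 1:
            # Prediction is now unambiguous: the remaining letters no longer
            # need conscious, letter-by-letter processing.
            print(f"after '{prefix}': predicting '{candidates[0]}'")
            return candidates[0]
        print(f"after '{prefix}': still ambiguous -> {candidates}")
    return prefix

read_word("senten")   # settles on "sentence" before the word is complete (see?)

A real hierarchy would of course be learned and probabilistic rather than a hard-coded word list, but the division of labor is the point: the lower layer handles letters while the higher layer commits to the word early and frees attention for larger structures.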
By now it should be clear that the notion of intelligence as prediction is powerfully illuminating. Before turning to the equally powerful role that analogy plays in dramatically extending the reach of the predictive representations of intelligence, I'd like to add more of my own thoughts about how it is that representations become reliable enough to build into layers in the first place. These "internalized concepts" will be the subject of the next post.