Bluebird, robin, woodpecker, hummingbird, quail... Sure, they're all of the avian persuasion with many hundreds of similar features, but if you had to tell me which bird has the red breast, all that knowledge about feet, shapes of beaks and wing size is more of a hindrance than a help. Indeed, to output the correct answer, all that other information must, at some level, be repressed, or inhibited.
An Interactive Activation and Competition (IAC) neural network seeks to model this kind of memory retrieval. It mimics a situation in which a great deal of overlapping information about some subject is known, and we want to extract a particular case, or instance, from that information by repressing all the other available information. IAC models attempt to demonstrate that a specific class of neural networks not only does this quite well but also errs in ways similar to human performance. What we are to infer from this conclusion is obvious, if not, strictly speaking, particularly logical (see The case for and against the IAC).
Formally, then, an IAC model is a network of neurons containing competitive pools of mutually inhibitory units (see figure 1: the IAC Model). One effect of this is that activating a neuron in any pool inhibits, or decreases, the activation levels of the other neurons in that pool. In this manner, units within any particular pool are said to be in competition with each other. This is similar to a class graded on a curve, or a tennis match: the success of any one comes at the expense of the other(s).
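The competition within a single pool can be sketched in a few lines of code. This is a simplified stand-in for the handbook's actual update equations; the function name, constants, and parameter values here are illustrative choices, not taken from the text. Each unit whose activation is above zero inhibits its poolmates, external input props up the unit being "asked about", and everything drifts back toward a resting level.

```python
# A minimal sketch of one competitive pool (names and constants are
# illustrative, simplified from the IAC update rule).

def step_pool(acts, ext, inhibit=0.1, decay=0.1, rest=-0.1, rate=0.1):
    """One synchronous update step for a pool of mutually inhibitory units."""
    new = []
    for i, a in enumerate(acts):
        # Net input: external drive minus inhibition from other *active* units.
        net = ext[i] - inhibit * sum(max(b, 0.0)
                                     for j, b in enumerate(acts) if j != i)
        if net > 0:
            delta = net * (1.0 - a)        # excitation pushes toward max (+1)
        else:
            delta = net * (a + 1.0)        # inhibition pushes toward min (-1)
        delta -= decay * (a - rest)        # everything drifts back to rest
        new.append(a + rate * delta)
    return new

acts = [0.0, 0.0, 0.0]
for _ in range(100):
    acts = step_pool(acts, ext=[0.4, 0.0, 0.0])   # drive only the first unit
print(acts)   # the driven unit settles high; its rivals sink below rest
```

Note the asymmetry of the outcome: the losers are not merely left behind but actively pushed below their resting level, which is the "repression" the opening paragraph describes.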
Perhaps the finer points of the IAC model are best illustrated by an example.
Suppose we have two gangs, the Sharks and the Jets, à la West Side Story. The members of these gangs have names and particular traits (age, level of education, marital status, and occupation; e.g., Art: Jets: 40's: Junior High: single: pusher, or Phil: Sharks: 30's: College: married: pusher). Now we create an IAC network which corresponds to this. Within that network, the pools of mutually inhibitory units correspond to the traits. For example, one pool would contain all the different possibilities for marital status: single, married, or divorced. Obviously, being any one of these precludes being one of the others: if you're married, you're not single or divorced (at least in our simple model), and the pool reflects this by making the activation of any one unit inhibit the others. The connections between the pools are regulated by one central pool of instance units. In this pool, there is a unit for every combination of traits the network has encountered. For example, one instance unit has strong excitatory connections to the units for the name Art, the gang name Jets, the age 40's, and so on.
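The wiring scheme just described can be made concrete for the two members named above. In this sketch the weight values are illustrative (the handbook's actual parameters differ), and the instance units are given their own pool, since only one instance should win at a time: within-pool connections are negative (competition), instance-to-trait connections are positive (association), and everything else is unconnected.

```python
# Illustrative wiring for two gang members; weights are made-up values.

pools = {
    "name":       ["Art", "Phil"],
    "gang":       ["Jets", "Sharks"],
    "age":        ["40s", "30s"],
    "education":  ["JuniorHigh", "College"],
    "marital":    ["single", "married"],
    "occupation": ["pusher"],
    "instance":   ["instance_Art", "instance_Phil"],  # instances compete too
}

traits = {
    "instance_Art":  ["Art", "Jets", "40s", "JuniorHigh", "single", "pusher"],
    "instance_Phil": ["Phil", "Sharks", "30s", "College", "married", "pusher"],
}

def weight(u, v, excite=0.1, inhibit=-0.1):
    """Symmetric connection weight between two units."""
    for inst, ts in traits.items():
        if (u == inst and v in ts) or (v == inst and u in ts):
            return excite                  # instance <-> its own traits
    for members in pools.values():
        if u in members and v in members and u != v:
            return inhibit                 # rivals within one pool
    return 0.0                             # otherwise unconnected

print(weight("instance_Art", "Jets"))      # 0.1: excitatory
print(weight("single", "married"))         # -0.1: mutual inhibition
print(weight("Art", "Sharks"))             # 0.0: no direct link between pools
```

Notice that a name is connected to a gang only indirectly, through an instance unit; that indirection is what lets the instance pool "regulate" the flow between trait pools.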
The output of such a network is maddeningly close to human. One can input any one (or more) of the characteristics, and the network will output the gang member or members who have those characteristics, just as people can come up with examples of individuals who share certain traits. Likewise, one can input the name of a gang member and the network will output the characteristics of said member, and these will generally be correct, just as people can describe a person when given a name.
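This retrieval behavior can be shown end to end in a self-contained miniature. The sketch below holds just two hypothetical members and made-up constants (the handbook's model has a few dozen members and tuned parameters); the point is only the mechanism: cue one trait, let the network settle, and read the winner out of the name pool.

```python
# A self-contained miniature retrieval run; all values are illustrative.

POOLS = [
    ["Art", "Phil"],                      # names
    ["Jets", "Sharks"],                   # gangs
    ["40s", "30s"],                       # ages
    ["JuniorHigh", "College"],            # education
    ["single", "married"],                # marital status
    ["instance_Art", "instance_Phil"],    # instance units compete too
]
TRAITS = {
    "instance_Art":  ["Art", "Jets", "40s", "JuniorHigh", "single"],
    "instance_Phil": ["Phil", "Sharks", "30s", "College", "married"],
}
UNITS = [u for pool in POOLS for u in pool]

def weight(u, v):
    for inst, ts in TRAITS.items():
        if (u == inst and v in ts) or (v == inst and u in ts):
            return 0.1                     # excitatory: instance <-> trait
    for pool in POOLS:
        if u in pool and v in pool and u != v:
            return -0.1                    # inhibitory: rivals in one pool
    return 0.0

def retrieve(cue, cycles=150, rest=-0.1, decay=0.1, ext=0.4, rate=0.1):
    """Clamp external input onto the cue units and let the net settle."""
    a = {u: rest for u in UNITS}
    for _ in range(cycles):
        new = {}
        for u in UNITS:
            net = (ext if u in cue else 0.0)
            net += sum(weight(u, v) * max(a[v], 0.0)
                       for v in UNITS if v != u)
            delta = net * (1.0 - a[u]) if net > 0 else net * (a[u] + 1.0)
            delta -= decay * (a[u] - rest)
            new[u] = a[u] + rate * delta
        a = new
    return a

a = retrieve(["Sharks"])                     # ask: who is the Shark?
name = max(["Art", "Phil"], key=lambda n: a[n])
print(name)                                  # "Phil": the only Shark here
```

Activating "Sharks" excites instance_Phil, which in turn excites Phil's name and traits while suppressing instance_Art, so the answer emerges from the settling process rather than from any lookup table.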
Interestingly, there are cases in which the network makes mistakes. In one such case, a gang member has traits virtually identical to those of a host of other members, with one noticeable exception. In this situation, unless one really emphasizes that difference, the network will tend to "assume" that the gang member is just like the rest of his cohorts. This is similar to the kinds of stereotypes known to pervade human interaction. So even in its errors, the performance of the IAC model is hauntingly similar to our own.
For a more detailed description of this and other examples of an IAC model in action, consult Chapter 2 of Explorations in Parallel Distributed Processing: A Handbook of Models, Programs, and Exercises by James L. McClelland and David E. Rumelhart.
Unfortunately, one of the inherent problems with any such model is its lack of generalizability to any known biological phenomenon. Although there is a demonstrably large number of inhibitory neurons within the brain, no conclusive evidence ties these neurons to the kinds of structures seen in the IAC model. Additionally, the similarity of the IAC model to actual human thinking does not mean that nature actually solves her retrieval problems in the same manner. Indeed, a central tenet of the artificial intelligence movement is that the wiring is irrelevant to the software involved. While this can be disputed, the fact remains that, at present, very little biological evidence exists for the IAC model. While this will almost certainly change in the coming years, the best anyone can say at the moment is that the structure of neurons within our nervous system does seem to change with use, and synaptic efficacy does appear to increase accordingly. What true effects this has on the complex neural networks of the brain are, at present, unknown, just as the actual connections among the neurons are, for the most part, mysteries to be solved by future generations.
On the plus side, the IAC model does provide a possible explanation for how inhibitory effects could be involved in certain types of data retrieval and memory encoding. The IAC model does not contradict any currently known biological data or theories, and its performance is close enough to human performance to warrant further investigation.