Abstract
A neural model for one- or few-trial irreversible behavior learning, such as occurs in imprinting, is introduced. It is assumed that synaptic connections in the relevant parts of the central nervous system are initially set up in a largely, but not totally, random fashion, as a result, for instance, of differential cell–cell adhesion. At this stage the behavior to be learned is sometimes exhibited, but not in a reproducible, mature way. During early neural activity, however, active postsynaptic neurons may deliver a putative retrograde trophic factor to some of their afferent synaptic boutons. This is taken to occur according to a Hebb-type rule. At a later stage, only those synapses that have accumulated enough trophic factor are selectively stabilized. We show explicitly how this process may lead to a perfectly wired circuit. The calculations indicate that if the connections were relatively well defined from the beginning, then random pulses at the inputs suffice for this refinement process to take place. This is analogous to the maturation of neural circuits under spontaneous electrical activity (unsupervised learning). If the initial connections are “fuzzy,” however, well-defined patterns of activation are needed at the inputs so that selective stabilization leads to a correct functional system (the model now behaves in an instructionist mode). Experiments suggested by the model are discussed; they involve manipulating the afferent inputs, the initial synapse distribution, or the stabilization phase.
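
The following minimal sketch illustrates the two regimes summarized above: Hebb-type accumulation of a retrograde trophic factor during an activity episode, followed by threshold-based selective stabilization of boutons. All network sizes, contact strengths, firing rules, input statistics, and threshold values here are illustrative assumptions, not quantities taken from the paper.

```python
# Illustrative sketch, not the paper's model: Hebb-type trophic-factor
# accumulation followed by selective stabilization of synaptic boutons.
import numpy as np

rng = np.random.default_rng(1)

N_POST, GROUP = 4, 3                   # 4 postsynaptic cells, 3 correct inputs each
N_PRE = N_POST * GROUP
target = np.zeros((N_POST, N_PRE))
for i in range(N_POST):
    target[i, i * GROUP:(i + 1) * GROUP] = 1.0   # the "perfectly wired" circuit

def initial_wiring(spurious_strength, n_spurious=2):
    """Correct contacts plus a few misplaced ones.

    Weak spurious contacts stand in for 'relatively well defined' initial
    wiring; full-strength spurious contacts stand in for 'fuzzy' wiring.
    """
    w = target.copy()
    for i in range(N_POST):
        outside = np.flatnonzero(target[i] == 0)
        w[i, rng.choice(outside, n_spurious, replace=False)] = spurious_strength
    return w

def stabilize(w, make_input, steps=4000, trophic_threshold=0.08):
    """One trophic episode, then selective stabilization.

    Each step: presynaptic activity x drives the postsynaptic cells; a cell
    fires if its input reaches half of its total synaptic strength; firing
    cells deliver trophic factor to their currently active boutons, in
    proportion to contact strength (a Hebb-type rule). Afterwards only
    boutons whose mean accumulated factor exceeds the threshold are kept.
    """
    factor = np.zeros_like(w)
    half_strength = 0.5 * w.sum(axis=1)
    for _ in range(steps):
        x = make_input()
        y = (w @ x) >= half_strength           # postsynaptic firing
        factor += np.outer(y, x) * w           # factor to co-active boutons
    return np.where(factor / steps > trophic_threshold, w, 0.0)

def random_pulses():
    """Spontaneous activity: each input fires independently."""
    return (rng.random(N_PRE) < 0.3).astype(float)

def patterned_input():
    """Instructive activity: one correct input group fires as a unit."""
    return target[rng.integers(N_POST)].copy()

def correct(w):
    return np.array_equal(w > 0, target > 0)

# Regime 1: well-defined initial wiring -- random pulses suffice.
w1 = stabilize(initial_wiring(spurious_strength=0.4), random_pulses)
# Regime 2: fuzzy initial wiring -- random pulses cannot separate the
# misplaced contacts from the correct ones ...
w2 = stabilize(initial_wiring(spurious_strength=1.0), random_pulses)
# ... but patterned ("instructive") input rescues the refinement.
w3 = stabilize(initial_wiring(spurious_strength=1.0), patterned_input)
print(correct(w1), correct(w2), correct(w3))   # expected: True False True
```

The design choice doing the work is that under independent random pulses all contacts of equal strength are statistically interchangeable, so pruning can only discriminate when correct contacts start out stronger (regime 1); correlated input patterns break that symmetry, letting co-activation alone single out the correct contacts (regime 3).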