The mechanisms of eye-movement control are among the best understood in motor neurophysiology. Detailed anatomical and physiological data have paved the way for theoretical models that have unified existing knowledge and suggested further experiments. These models have generally taken the form of black-box diagrams (for example, Robinson 1981) representing the flow of hypothetical signals between idealized signal-processing blocks. They approximate overall oculomotor behavior but indicate little about how real eye-movement signals would be carried and processed by real neural networks. Neurons that combine and transmit oculomotor signals, such as those in the vestibular nucleus (VN), actually do so in a diverse, seemingly random way that would be impossible to predict from a block diagram. The purpose of this study is to use a neural-network learning scheme (Rumelhart et al. 1986) to construct parallel, distributed models of the vestibulo-oculomotor system that simulate the diversity of responses recorded experimentally from VN neurons.
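The learning scheme of Rumelhart et al. (1986) referenced above is error backpropagation: a multilayer network of sigmoidal units adjusts its connection weights by gradient descent on output error, with the error signal propagated backward through the layers. The following is a minimal, self-contained sketch of that algorithm on a toy nonlinear task (XOR). The network size, learning rate, and task are arbitrary illustrations, not the architecture or signals of the vestibulo-oculomotor models developed in this study.

```python
import math
import random

random.seed(1)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Tiny 2-3-1 feedforward network trained by backpropagation
# (Rumelhart et al. 1986). Sizes and task are illustrative only.
n_in, n_hid = 2, 3
W1 = [[random.uniform(-1, 1) for _ in range(n_in + 1)]  # hidden weights (+ bias)
      for _ in range(n_hid)]
W2 = [random.uniform(-1, 1) for _ in range(n_hid + 1)]  # output weights (+ bias)

# XOR: a stand-in for a nonlinear input-output mapping
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

def forward(x):
    h = [sigmoid(sum(w * xi for w, xi in zip(row, list(x) + [1.0])))
         for row in W1]
    y = sigmoid(sum(w * hi for w, hi in zip(W2, h + [1.0])))
    return h, y

def epoch(lr=0.5):
    """One pass over the data; returns summed squared error."""
    total = 0.0
    for x, t in data:
        h, y = forward(x)
        total += (t - y) ** 2
        # output-unit delta (sigmoid derivative is y * (1 - y))
        d_out = (t - y) * y * (1 - y)
        # hidden deltas: error backpropagated through W2
        d_hid = [d_out * W2[j] * h[j] * (1 - h[j]) for j in range(n_hid)]
        # gradient-descent weight updates
        for j in range(n_hid):
            W2[j] += lr * d_out * h[j]
        W2[n_hid] += lr * d_out  # output bias
        for j in range(n_hid):
            for i in range(n_in):
                W1[j][i] += lr * d_hid[j] * x[i]
            W1[j][n_in] += lr * d_hid[j]  # hidden bias
    return total

loss0 = epoch()
for _ in range(4000):
    loss = epoch()
```

After training, the error falls well below its initial value. The point relevant to the present study is that the hidden units are free to develop any weight pattern that solves the task, so repeated training runs yield diverse internal representations, analogous to the diversity of VN neuron responses the models are meant to simulate.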