A key issue in the neurophysiology of cognition is the problem of sequential learning. Sequential learning refers to the ability to encode and represent the temporal order of discrete elements occurring in a sequence. We show that short-term memory for a sequence of items can be implemented in an autoassociation neural network, in which each item is one of the attractor states of the network. The autoassociation network is implemented at the level of integrate-and-fire neurons so that the contributions of different biophysical mechanisms to sequence learning can be investigated. It is shown that if the synapses or neurons that support each attractor state adapt, then every time the network is made quiescent (e.g., by inhibition), the attractor state that emerges next is the next item in the sequence. With numerical simulations, we implement this mechanism using (1) a sodium-inactivation-based spike-frequency-adaptation mechanism, (2) a Ca2+-activated K+ current, and (3) short-term synaptic depression, for sequences of up to three items. The network does not need repeated training on a particular sequence and will repeat the items in the order in which they were last presented. The time between the items in a sequence is not fixed, allowing the items to be read out as required over a period of up to many seconds. The network thus uses adaptation rather than associative synaptic modification to recall the order of the items in a recently presented sequence.
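As a reduced illustration of the quench-and-re-emerge mechanism described above (not the paper's integrate-and-fire implementation), one can sketch a Hopfield-style autoassociation network with a per-neuron fatigue variable standing in for any of the three adaptation mechanisms. Presenting an item fatigues its assembly; fatigue then decays, so after each quench the attractor whose neurons have recovered most, i.e., the next item in the presented order, wins the competition. All parameter values and the disjoint-assembly patterns below are illustrative assumptions.

```python
import numpy as np

N, P, K = 120, 3, 40             # neurons, items, neurons per item (assumed sizes)
patterns = np.zeros((P, N))
for p in range(P):               # one disjoint cell assembly per item
    patterns[p, p * K:(p + 1) * K] = 1.0

W = patterns.T @ patterns / N    # Hebbian autoassociative weights
np.fill_diagonal(W, 0.0)         # no self-connections

g, theta = 0.03, 0.01            # adaptation gain and firing threshold (illustrative)
decay = np.exp(-1 / 5)           # fatigue recovery per quench cycle
a = np.zeros(N)                  # per-neuron fatigue (stand-in for adaptation)

def adapt(r):
    """Fatigue the active assembly; earlier fatigue recovers."""
    global a
    a = a * decay + r

def recall():
    """Quench the network, then let the least-adapted attractor emerge."""
    r = np.full(N, 0.1)          # weak unspecific drive after the quench
    for _ in range(5):           # simple threshold dynamics with fatigue bias
        r = (W @ r - g * a > theta).astype(float)
    adapt(r)                     # the recalled item fatigues in turn
    return int(np.argmax(patterns @ r))   # decode which item emerged

for p in range(P):               # present the sequence once: items 0, 1, 2
    adapt(patterns[p])

order = [recall() for _ in range(P)]
print(order)                     # -> [0, 1, 2]
```

No repeated training on the sequence is needed: a single presentation pass sets the fatigue gradient, and each recall-then-adapt cycle hands the competition to the following item, mirroring the abstract's claim that order is carried by adaptation rather than by associative synaptic modification.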